[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 760 - Still Failing

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/760/

6 tests failed.
FAILED:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:20657//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: https://127.0.0.1:20657//collection1
at __randomizedtesting.SeedInfo.seed([3B901651DA539072:B3C4298B74AFFD8A]:0)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:568)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:309)
at org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:538)
at org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:586)
at org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:568)
at org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:547)
at org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Commented] (SOLR-7099) bin/solr -cloud mode should launch a local ZK in its own process using zkcli's runzk option (instead of embedded in the first Solr process)

2015-02-11 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316341#comment-14316341
 ] 

Steve Rowe commented on SOLR-7099:
--

+1

Maybe also a {{bin/solr zk}} command?  It would be nice to have a Solr-based 
facility for setting up a ZK ensemble.

 bin/solr -cloud mode should launch a local ZK in its own process using 
 zkcli's runzk option (instead of embedded in the first Solr process)
 ---

 Key: SOLR-7099
 URL: https://issues.apache.org/jira/browse/SOLR-7099
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Timothy Potter

 Embedded ZK is great for unit testing and quick examples, but as soon as 
 someone wants to restart their cluster, embedded mode causes a lot of issues, 
 esp. if you restart the node that embeds ZK. Of course we don't want users to 
 have to install ZooKeeper just to get started with Solr either. 
 Thankfully, ZkCLI already includes a way to launch ZooKeeper in its own 
 process but still within the Solr directory structure. We can hide the 
 details and complexity of working with ZK in the bin/solr script. The 
 solution to this should still make it very clear that this is for getting 
 started / examples and not to be used in production.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316344#comment-14316344
 ] 

Michael McCandless commented on LUCENE-6239:


+1 to stop using Unsafe and switch to the dynamic HotSpot bean check instead, or 
even a simpler/naive heuristic.

If there are places in Lucene where estimating the pointer size as 8 bytes when it 
was really 4 bytes makes a practical difference (do we have big Object[] 
anywhere?), then that's bad; but most of Lucene's RAM-heavy structures are 
also compact (they don't use many pointers).
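For context, the "HotSpot bean check" mentioned here can be sketched roughly as follows. This is an illustrative stand-in, not Lucene's actual RamUsageEstimator code; the class name and the fallback heuristic are made up:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class RefSizeEstimate {
    /** Estimated reference size in bytes (hypothetical helper, not Lucene code). */
    public static int referenceSize() {
        try {
            // On HotSpot, the diagnostic bean tells us whether compressed oops are on.
            HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            boolean compressed =
                Boolean.parseBoolean(bean.getVMOption("UseCompressedOops").getValue());
            return compressed ? 4 : 8;
        } catch (RuntimeException e) {
            // Non-HotSpot JVM (e.g. J9 without the bean): fall back to a
            // naive heuristic based on pointer width alone.
            String arch = System.getProperty("sun.arch.data.model", "64");
            return "32".equals(arch) ? 4 : 8;
        }
    }

    public static void main(String[] args) {
        System.out.println("estimated reference size: " + referenceSize() + " bytes");
    }
}
```

No Unsafe involved: on a JVM without the bean, the code degrades to the "simpler/naive heuristic" rather than failing.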

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.






[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316377#comment-14316377
 ] 

Uwe Schindler commented on LUCENE-6239:
---

bq. I would love it if we avoided unsafe usage and replaced it with something 
safer like that.

I can take care of that. Do you want me to make a patch removing Unsafe from 
trunk?







[jira] [Comment Edited] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316391#comment-14316391
 ] 

Uwe Schindler edited comment on LUCENE-6239 at 2/11/15 3:50 PM:


It is cleaner with Java 8 and Java 9. The reference size in Java 7 was just 
horrible to detect, because IBM J9 did not have the HotSpot bean, so the Unsafe 
approach was cleaner at that time. The remaining constants would be just simple 
static values, dependent only on bitness.


was (Author: thetaphi):
It is cleaner with Java 8 and Java 9. The reference size in Java 7 was just 
horrible to detect, because IBM J9 did not have hotspot bean. So the Unsafe 
approach was cleander. The remaining constants would be just simple static 
values, only dependend on bitness.







[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316391#comment-14316391
 ] 

Uwe Schindler commented on LUCENE-6239:
---

It is cleaner with Java 8 and Java 9. The reference size in Java 7 was just 
horrible to detect, because IBM J9 did not have the HotSpot bean, so the Unsafe 
approach was cleaner. The remaining constants would be just simple static 
values, dependent only on bitness.







Re: [VOTE] 5.0.0 RC2

2015-02-11 Thread david.w.smi...@gmail.com
Thanks for the clarifications on these two issues, Shalin, Ryan, and Uwe.

I got it to pass when my CWD is 5x and current JAVA_HOME is Java 7, with
--test-java8 pointing to my Java 8.

SUCCESS! [1:24:57.743374]

+1 to Ship!

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Wed, Feb 11, 2015 at 10:36 AM, Uwe Schindler u...@thetaphi.de wrote:

 I think the problem is the inverse:



 RuntimeError: JAR file /private/tmp/smoke_lucene_5.0.0_1658469_1/unpack/lucene-5.0.0/analysis/common/lucene-analyzers-common-5.0.0.jar is missing X-Compile-Source-JDK: 1.8 inside its META-INF/MANIFEST.MF



 The problem: the smoke tester expects to find Java 1.8 in the JAR file’s
 metadata. The cause: Shalin said he runs trunk’s smoke tester on the 5.0
 branch. This breaks here, because trunk’s smoke tester expects Lucene
 compiled with Java 8.
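The check the smoke tester performs on each JAR boils down to something like the following sketch. The class and method names are invented; the real check lives in dev-tools/scripts/smokeTestRelease.py and, per the explanation above, trunk's version expects 1.8 while the 5.x jars presumably carry 1.7:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

/** Toy model of the smoke tester's manifest check (illustrative only). */
public class ManifestCheck {
    static void checkSourceJdk(Manifest mf, String expected) {
        Attributes attrs = mf.getMainAttributes();
        String actual = attrs.getValue("X-Compile-Source-JDK");
        if (!expected.equals(actual)) {
            // Mirrors the smoke tester's failure mode for a mismatched branch.
            throw new RuntimeException("missing or wrong X-Compile-Source-JDK: expected "
                + expected + " but found " + actual);
        }
    }

    public static void main(String[] args) throws Exception {
        // A manifest as a 5.x build would write it (hypothetical content).
        Manifest mf = new Manifest(new ByteArrayInputStream(
            "Manifest-Version: 1.0\nX-Compile-Source-JDK: 1.7\n\n"
                .getBytes(StandardCharsets.UTF_8)));
        checkSourceJdk(mf, "1.7");  // a 5.x jar checked by a 5.x smoke tester passes
        System.out.println("manifest ok");
    }
}
```

Running trunk's check (expected "1.8") against the same manifest throws, which is exactly the mismatch described in this thread.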



 Uwe

 -

 Uwe Schindler

 H.-H.-Meier-Allee 63, D-28213 Bremen

 http://www.thetaphi.de

 eMail: u...@thetaphi.de



 *From:* Ryan Ernst [mailto:r...@iernst.net]
 *Sent:* Wednesday, February 11, 2015 3:27 PM
 *To:* dev@lucene.apache.org
 *Subject:* Re: [VOTE] 5.0.0 RC2



 And I got this:
 Java 1.8
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home



 Did you change your JAVA_HOME to point to Java 8 as well (that's what it
 looks like, since only that JDK is listed in the output)? --test-java8 is meant
 to take the Java 8 home, but your regular JAVA_HOME should stay Java 7.



 On Wed, Feb 11, 2015 at 6:13 AM, david.w.smi...@gmail.com 
 david.w.smi...@gmail.com wrote:

 I found two problems, and I’m not sure what to make of them.



 First, perhaps the simplest.  I ran it with Java 8 with this at the
 command-line (copied from Uwe’s email, inserting my environment variable):



 python3 -u dev-tools/scripts/smokeTestRelease.py --test-java8 $JAVA8_HOME
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469



 And I got this:



 Java 1.8
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home

 NOTE: output encoding is UTF-8



 Load release URL 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
 ...

   unshortened:
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469/



 Test Lucene...

   test basics...

   get KEYS

 0.1 MB in 0.69 sec (0.2 MB/sec)

   check changes HTML...

   download lucene-5.0.0-src.tgz...

 27.9 MB in 129.06 sec (0.2 MB/sec)

 verify md5/sha1 digests

 verify sig

 verify trust

   GPG: gpg: WARNING: This key is not certified with a trusted
 signature!

   download lucene-5.0.0.tgz...

 64.0 MB in 154.61 sec (0.4 MB/sec)

 verify md5/sha1 digests

 verify sig

 verify trust

   GPG: gpg: WARNING: This key is not certified with a trusted
 signature!

   download lucene-5.0.0.zip...

 73.5 MB in 223.35 sec (0.3 MB/sec)

 verify md5/sha1 digests

 verify sig

 verify trust

   GPG: gpg: WARNING: This key is not certified with a trusted
 signature!

   unpack lucene-5.0.0.tgz...

 verify JAR metadata/identity/no javax.* or java.* classes...

 Traceback (most recent call last):
   File "dev-tools/scripts/smokeTestRelease.py", line 1486, in <module>
     main()
   File "dev-tools/scripts/smokeTestRelease.py", line 1431, in main
     smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' '.join(c.test_args))
   File "dev-tools/scripts/smokeTestRelease.py", line 1468, in smokeTest
     unpackAndVerify(java, 'lucene', tmpDir, artifact, svnRevision, version, testArgs, baseURL)
   File "dev-tools/scripts/smokeTestRelease.py", line 616, in unpackAndVerify
     verifyUnpacked(java, project, artifact, unpackPath, svnRevision, version, testArgs, tmpDir, baseURL)
   File "dev-tools/scripts/smokeTestRelease.py", line 737, in verifyUnpacked
     checkAllJARs(os.getcwd(), project, svnRevision, version, tmpDir, baseURL)
   File "dev-tools/scripts/smokeTestRelease.py", line 257, in checkAllJARs
     checkJARMetaData('JAR file %s' % fullPath, fullPath, svnRevision, version)
   File "dev-tools/scripts/smokeTestRelease.py", line 185, in checkJARMetaData
     (desc, verify))

 RuntimeError: JAR file /private/tmp/smoke_lucene_5.0.0_1658469_1/unpack/lucene-5.0.0/analysis/common/lucene-analyzers-common-5.0.0.jar is missing X-Compile-Source-JDK: 1.8 inside its META-INF/MANIFEST.MF



 When I executed the above command, my CWD was a trunk checkout. Should
 that matter?  It seems unlikely; the specific error references the unpacked
 location, not CWD.







 I also executed with Java 7; I did this first, actually.  This time, my
 JAVA_HOME is set to Java 7 and I ran this from my 5x checkout.  When the
 Solr tests ran, I got a particular test failure.  It reproduces, but only
 on the 5.0 checkout — not my 5x checkout:



 ant test  -Dtestcase=SaslZkACLProviderTest
 -Dtests.method=testSaslZkACLProvider 

[jira] [Created] (SOLR-7100) SpellCheckComponent should throw error if queryAnalyzerFieldType provided doesn't exist

2015-02-11 Thread David Smiley (JIRA)
David Smiley created SOLR-7100:
--

 Summary: SpellCheckComponent should throw error if 
queryAnalyzerFieldType provided doesn't exist
 Key: SOLR-7100
 URL: https://issues.apache.org/jira/browse/SOLR-7100
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 4.10.2
Reporter: David Smiley
Priority: Minor


If you typo or otherwise mess up the queryAnalyzerFieldType setting in 
solrconfig.xml for the spellcheck component, you will not get an error.  
Instead, the code falls back to the default (WhitespaceTokenizer).  This should 
really be an error.
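The fix being requested amounts to a fail-fast lookup along these lines; the class and method names here are hypothetical, not Solr's actual SpellCheckComponent code:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the proposed fail-fast behavior: resolve the configured
 *  analyzer field type and throw instead of silently using a default.
 *  Names are illustrative, not Solr's real classes. */
public class FieldTypeLookup {
    static String resolveFieldType(Map<String, String> schemaTypes, String configured) {
        String type = schemaTypes.get(configured);
        if (type == null) {
            // Current Solr behavior silently falls back to WhitespaceTokenizer;
            // the issue asks for the misconfiguration to be surfaced instead.
            throw new IllegalArgumentException(
                "Unknown queryAnalyzerFieldType: " + configured);
        }
        return type;
    }

    public static void main(String[] args) {
        Map<String, String> schema = new HashMap<>();
        schema.put("text_general", "TextField");
        System.out.println(resolveFieldType(schema, "text_general"));
    }
}
```

A typo like "text_generl" would then fail at startup with a clear message rather than silently changing analysis behavior.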






[jira] [Commented] (LUCENE-6198) two phase intersection

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316568#comment-14316568
 ] 

Robert Muir commented on LUCENE-6198:
-

I am +1 for this API because it solves my major complaint with the first stab I 
took: invasive methods being added to very low-level APIs.

But I think, on the implementation, we should support approximations of 
conjunctions like the first patch. I think it's important because this way 
nested conjunctions/filters work and there is not so much performance pressure 
for users to flatten things. If we later fix scorers like DisjunctionScorer 
too, then it starts to have bigger benefits, because users can e.g. put 
proximity queries or slow filters that should be checked last anywhere 
arbitrarily in the query, and we always do the right thing. 
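The two-phase idea described here — a cheap approximation first, with the expensive confirmation run only on surviving candidates — can be modeled in miniature like this. The names and the toy clauses are invented for illustration and are not Lucene's actual API:

```java
import java.util.Arrays;
import java.util.List;

public class TwoPhaseSketch {
    /** A clause with a cheap approximate test and an expensive exact test. */
    interface TwoPhase {
        boolean approximateMatch(int doc); // cheap superset of the real matches
        boolean confirm(int doc);          // exact but expensive (e.g. positions)
    }

    /** Toy clause: approximated by divisibility by approxMod, confirmed by confirmMod. */
    static TwoPhase modClause(int approxMod, int confirmMod) {
        return new TwoPhase() {
            public boolean approximateMatch(int doc) { return doc % approxMod == 0; }
            public boolean confirm(int doc) { return doc % confirmMod == 0; }
        };
    }

    static boolean conjunctionMatches(List<TwoPhase> clauses, int doc) {
        // Phase 1: every cheap approximation must accept the doc.
        for (TwoPhase c : clauses) if (!c.approximateMatch(doc)) return false;
        // Phase 2: only now pay for the exact checks.
        for (TwoPhase c : clauses) if (!c.confirm(doc)) return false;
        return true;
    }

    public static void main(String[] args) {
        // "phrase" clause: approximation = both terms present (even docs);
        // confirmation = positions adjacent (modeled here as divisible by 4).
        TwoPhase phrase = modClause(2, 4);
        TwoPhase term = modClause(3, 3);
        for (int doc = 0; doc <= 12; doc++)
            if (conjunctionMatches(Arrays.asList(phrase, term), doc))
                System.out.println("match: " + doc); // prints 0 and 12
    }
}
```

The point of the comment is that phase 1 can itself be a nested two-phase conjunction, so filters and conjunctions compose without forcing users to flatten their queries.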


 two phase intersection
 --

 Key: LUCENE-6198
 URL: https://issues.apache.org/jira/browse/LUCENE-6198
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6198.patch, LUCENE-6198.patch


 Currently some scorers have to do a lot of per-document work to determine if 
 a document is a match. The simplest example is a phrase scorer, but there are 
 others (spans, sloppy phrase, geospatial, etc).
 Imagine a conjunction with two MUST clauses, one that is a term that matches 
 all odd documents, another that is a phrase matching all even documents. 
 Today this conjunction will be very expensive, because the zig-zag 
 intersection is reading a ton of useless positions.
 The same problem happens with filteredQuery and anything else that acts like 
 a conjunction.






[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316584#comment-14316584
 ] 

Robert Muir commented on LUCENE-6239:
-

+1

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6239.patch


 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_31) - Build # 4479 - Still Failing!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4479/
Java: 64bit/jdk1.8.0_31 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.lucene.facet.TestRandomSamplingFacetsCollector.testRandomSampling

Error Message:
83

Stack Trace:
java.lang.ArrayIndexOutOfBoundsException: 83
at __randomizedtesting.SeedInfo.seed([A3C2779ADAF9AB15:49DA1E876BAE5CA2]:0)
at org.apache.lucene.facet.TestRandomSamplingFacetsCollector.testRandomSampling(TestRandomSamplingFacetsCollector.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 6167 lines...]
   [junit4] Suite: org.apache.lucene.facet.TestRandomSamplingFacetsCollector
   [junit4]   2 NOTE: reproduce with: ant test -Dtestcase=TestRandomSamplingFacetsCollector -Dtests.method=testRandomSampling -Dtests.seed=A3C2779ADAF9AB15 -Dtests.slow=true -Dtests.locale=el_GR -Dtests.timezone=America/Guatemala -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.82s | TestRandomSamplingFacetsCollector.testRandomSampling 
   [junit4] Throwable #1: java.lang.ArrayIndexOutOfBoundsException: 83
   [junit4]at 

[jira] [Updated] (LUCENE-1518) Merge Query and Filter classes

2015-02-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-1518:
-
Attachment: LUCENE-1518.patch

I'd like to revisit this issue now that queries can be configured not to 
produce scores and boolean queries accept filter clauses. Here is a new 
patch. Like Uwe's patch, it makes Filter extend Query and removes the 
ConstantScoreQuery(Filter) constructor. So Filter is now mostly a helper class 
for building queries that do not produce scores (Scorer.score() always 
returns 0). I also added changes to Filter in order not to break existing 
Filter implementations (this is why I override equals() and hashCode() in 
Filter to go back to the way they are implemented in Object).

[~thetaphi] what do you think?
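Reduced to a toy model outside Lucene's real class hierarchy (all names illustrative, not the actual patch), the core idea looks like this:

```java
import java.util.Set;

/** Toy model of "Filter extends Query": a filter is a query whose scorer
 *  always returns 0, so it can be used anywhere a query can. */
public class FilterAsQuerySketch {
    abstract static class Query {
        abstract boolean matches(int doc);
        float score(int doc) { return 1.0f; } // scoring queries return real scores
    }

    abstract static class Filter extends Query {
        abstract Set<Integer> docIdSet();     // the classic Filter contract
        @Override boolean matches(int doc) { return docIdSet().contains(doc); }
        @Override float score(int doc) { return 0f; } // filters never contribute score
    }

    public static void main(String[] args) {
        Filter even = new Filter() {
            @Override Set<Integer> docIdSet() { return Set.of(0, 2, 4); }
        };
        // The filter participates in matching like any query, but scores 0.
        System.out.println(even.matches(2) + " score=" + even.score(2)); // true score=0.0
    }
}
```

Existing Filter subclasses only implement docIdSet(); the Query-side behavior comes for free, which is the backward-compatibility property the comment describes.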

 Merge Query and Filter classes
 --

 Key: LUCENE-1518
 URL: https://issues.apache.org/jira/browse/LUCENE-1518
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 2.4
Reporter: Uwe Schindler
 Fix For: 4.9, Trunk

 Attachments: LUCENE-1518.patch, LUCENE-1518.patch


 This issue presents a patch that merges Queries and Filters in such a way that 
 the new Filter class extends Query. This would make it possible to use every 
 filter as a query.
 The new abstract Filter class would contain all methods of 
 ConstantScoreQuery, deprecating ConstantScoreQuery itself. If somebody implements 
 the Filter's getDocIdSet()/bits() methods, he has nothing more to do; he can 
 just use the filter as a normal query.
 I do not want to completely convert Filters to ConstantScoreQueries. The idea 
 is to combine Queries and Filters in such a way that every Filter can 
 automatically be used everywhere a Query can be used (e.g. also 
 alone as a search query without any other constraint). For that, the abstract 
 Query methods must be implemented and return a default weight for Filters, 
 which is the current ConstantScore logic. If the filter is used as a real 
 filter (where the API wants a Filter), the getDocIdSet part can be used 
 directly and the weight is useless (as it is currently, too). The constant-score 
 default implementation is only used when the Filter is used as a Query (e.g. 
 as a direct parameter to Searcher.search()). For the special case of 
 BooleanQueries combining Filters and Queries, the idea is to optimize the 
 BooleanQuery logic in such a way that it detects whether a BooleanClause is a 
 Filter (using instanceof) and then directly uses the Filter API, without taking 
 on the burden of the ConstantScoreQuery (see LUCENE-1345).
 Here are some ideas on how to implement Searcher.search() with Query and Filter:
 - The user runs Searcher.search() using a Filter as the only parameter. As every 
 Filter is also a ConstantScoreQuery, the query can be executed and returns 
 score 1.0 for all matching documents.
 - The user runs Searcher.search() using a Query as the only parameter: no change, 
 everything is the same as before.
 - The user runs Searcher.search() using a BooleanQuery as the parameter: if the 
 BooleanQuery does not contain a Query that is a subclass of Filter (the new 
 Filter), everything is as usual. If the BooleanQuery contains exactly one 
 Filter and nothing else, the Filter is used as a constant-score query. If the 
 BooleanQuery contains clauses with both Queries and Filters, the new algorithm 
 can be used: the queries are executed and the results filtered with the 
 filters.
 For the user this has the main advantage that he can construct his query 
 using a simplified API without thinking about Filters or Queries; you can 
 just combine clauses together. The scorer/weight logic then identifies the 
 cases where to use the filter or the query weight API. Just like the query 
 optimizer of an RDBMS.






Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1348: POMs out of sync

2015-02-11 Thread Steve Rowe
[javadoc] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/java/org/apache/solr/core/RequestHandlers.java:250: warning: empty <p> tag
  [javadoc]    * <p>
  [javadoc]      ^
  [javadoc] Generating /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build/docs/solr-core/org/apache/solr/util/package-summary.html...
  [javadoc] Copying file /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/java/org/apache/solr/util/doc-files/min-should-match.html to directory /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build/docs/solr-core/org/apache/solr/util/doc-files...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build/docs/solr-core/help-doc.html...
  [javadoc] 1 warning


 On Feb 11, 2015, at 11:43 AM, Apache Jenkins Server 
 jenk...@builds.apache.org wrote:
 
 Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1348/
 
 No tests ran.
 
 Build Log:
 [...truncated 17949 lines...]
 BUILD FAILED
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:535:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:185:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:61:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:58:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build.xml:453:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:276:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/build.xml:49:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:298:
  The following error occurred while executing this line:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:2054:
  Javadocs warnings were found!
 
 Total time: 9 minutes 20 seconds
 Build step 'Invoke Ant' marked build as failure
 Email was triggered for: Failure
 Sending email for trigger: Failure
 
 
 





[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_31) - Build # 11780 - Failure!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11780/
Java: 32bit/jdk1.8.0_31 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom {#3 seed=[6B7EE18F8044BF08:1263454538DCD1B5]}

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:1 but was:0
at 
__randomizedtesting.SeedInfo.seed([6B7EE18F8044BF08:1263454538DCD1B5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:221)
at 
org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:188)
at 
org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:201)
at 
org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316493#comment-14316493
 ] 

Hoss Man commented on SOLR-6311:


bq.  It should have been done this way to begin with. I consider it a bug that 
distributed requests were apparently hard-coded to use /select

Definitely not a bug.

You have to remember the context of how distributed search was added -- prior 
to SolrCloud, you had to specify a shards param listing all of the cores you 
wanted to do a distributed search over, and the primary convenience mechanism 
for doing that was to register a handler like this...

{noformat}
<requestHandler name="/my_handler" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards">foo:8983/solr,bar:8983/solr</str>
    <int name="rows">100</int>
  </lst>
</requestHandler>
{noformat}

...so the choice to have shards.qt default to /select instead of the 
current qt was _extremely_ important to make distributed search function 
correctly for most users for multiple reasons:

1) so that the shards param wouldn't cause infinite recursion
2) so that the defaults wouldn't be automatically inherited by the per-shard 
requests

But now is not then -- the default behavior of shards.qt should change to make 
the most sense given the features and best practice currently available in 
Solr.  SolrCloud solves #1, and IIUC useParams solves #2, so we can move 
forward.
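
For illustration, the same defaults mechanism already lets a handler opt out of 
the /select default explicitly (hypothetical handler name; a sketch, not a 
recommended production config):

```xml
<!-- Hypothetical handler: setting shards.qt in the defaults makes the
     per-shard requests come back through /my_handler instead of /select. -->
<requestHandler name="/my_handler" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards.qt">/my_handler</str>
  </lst>
</requestHandler>
```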


 SearchHandler should use path when no qt or shard.qt parameter is specified
 ---

 Key: SOLR-6311
 URL: https://issues.apache.org/jira/browse/SOLR-6311
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Molloy
Assignee: Timothy Potter
 Attachments: SOLR-6311.patch


 When performing distributed searches, you have to specify shards.qt unless 
 you're on the default /select path for your handler. As this is configurable, 
 even the default search handler could be on another path. The shard requests 
 should thus default to the path if no shards.qt was specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6239:
--
Attachment: LUCENE-6239.patch

Patch removing Unsafe.

I also found out that Constants.java also used Unsafe for the bitness. Now it 
solely uses the sun.misc.data.model sysprop. I will investigate whether we can 
get the information another way.

[~dweiss]: Can you look at the array header value? The previous one looked 
strange to me; now the constant does the same as the comment says. I am not 
sure where the comment is documented -- I assume you wrote it.

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6239.patch


 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2627 - Still Failing

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2627/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([12765C5D892F585B:9A22638727D335A3]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)

[jira] [Updated] (SOLR-7097) Update other Document in DocTransformer

2015-02-11 Thread yuanyun.cn (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanyun.cn updated SOLR-7097:
-
Description: 
Solr DocTransformer is good, but it only allows us to change the current 
document: adding, removing, or updating fields.

It would be great if we could update another document (especially the previous 
one), or better, delete a document (especially useful during tests) or add a 
document in DocTransformer.

Use case:
We can use flat group mode (group.main=true) to put parent and child close to 
each other (parent first), then use a DocTransformer to update the parent 
document when accessing its child document.

Some thoughts about the implementation:
In org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
ResultContext, ReturnFields), when cachMode=true, we can store each transformed 
SolrDocument in a list inside the for loop and write them all at the end.

cachMode = req.getParams().getBool("cachMode", false);
SolrDocument[] cachedDocs = new SolrDocument[sz];
for (int i = 0; i < sz; i++) {
  SolrDocument sdoc = toSolrDocument(doc);
  if (transformer != null) {
    transformer.transform(sdoc, id);
  }
  if (cachMode) {
    cachedDocs[i] = sdoc;
  } else {
    writeSolrDocument(null, sdoc, returnFields, i);
  }
}
if (transformer != null) {
  transformer.setContext(null);
}
if (cachMode) {
  for (int i = 0; i < sz; i++) {
    writeSolrDocument(null, cachedDocs[i], returnFields, i);
  }
}
writeEndDocumentList();

  was:
Solr DocTransformer is good, but it only allows us to change the current 
document: adding, removing, or updating fields.

It would be great if we could update another document (especially the previous 
one), or better, delete a document (especially useful during tests) or add a 
document in DocTransformer.

Use case:
We can use flat group mode (group.main=true) to put parent and child close to 
each other (parent first), then use a DocTransformer to update the parent 
document when accessing its child document.

Some thoughts about the implementation:
In org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
ResultContext, ReturnFields), when cachMode=true, we can store each transformed 
SolrDocument in a list inside the for loop and write them all at the end.

cachMode = req.getParams().getBool("cachMode", false);
SolrDocument[] cachedDocs = new SolrDocument[sz];
for (int i = 0; i < sz; i++) {
  SolrDocument sdoc = toSolrDocument(doc);
  if (transformer != null) {
    transformer.transform(sdoc, id);
  }
  if (cachMode) {
    cachedDocs[i] = sdoc;
  } else {
    writeSolrDocument(null, sdoc, returnFields, i);
  }
}
if (transformer != null) {
  transformer.setContext(null);
}
if (cachMode) {
  for (int i = 0; i < sz; i++) {
    writeSolrDocument(null, cachedDocs[i], returnFields, i);
  }
}
writeEndDocumentList();


 Update other Document in DocTransformer
 ---

 Key: SOLR-7097
 URL: https://issues.apache.org/jira/browse/SOLR-7097
 Project: Solr
  Issue Type: Improvement
Reporter: yuanyun.cn
Priority: Minor
  Labels: searcher, transformers

 Solr DocTransformer is good, but it only allows us to change the current 
 document: adding, removing, or updating fields.
 It would be great if we could update another document (especially the 
 previous one), or better, delete a document (especially useful during tests) 
 or add a document in DocTransformer.
 Use case:
 We can use flat group mode (group.main=true) to put parent and child close to 
 each other (parent first), then use a DocTransformer to update the parent 
 document when accessing its child document.
 Some thoughts about the implementation:
 In org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
 ResultContext, ReturnFields), when cachMode=true, we can store each 
 transformed SolrDocument in a list inside the for loop and write them all at 
 the end.
 cachMode = req.getParams().getBool("cachMode", false);
 SolrDocument[] cachedDocs = new SolrDocument[sz];
 for (int i = 0; i < sz; i++) {
   SolrDocument sdoc = toSolrDocument(doc);
   if (transformer != null) {
     transformer.transform(sdoc, id);
   }
   if (cachMode) {
     cachedDocs[i] = sdoc;
   } else {
     writeSolrDocument(null, sdoc, returnFields, i);
   }
 }
 if (transformer != null) {
   transformer.setContext(null);
 }
 if (cachMode) {
   for (int i = 0; i < sz; i++) {
     writeSolrDocument(null, cachedDocs[i], returnFields, i);
   }
 }
 writeEndDocumentList();
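
A runnable, simplified sketch of the buffer-then-flush pattern proposed above 
(illustrative names, not Solr's actual API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for the proposed "cachMode": buffer transformed docs,
// then write them all at the end instead of streaming each one as it is
// transformed.
public class BufferedWriteSketch {

    // Stand-in for DocTransformer.transform
    public static String transform(String doc) {
        return doc.toUpperCase();
    }

    // Stand-in for TextResponseWriter.writeDocuments: returns what was written
    public static List<String> writeDocuments(List<String> docs, boolean cachMode) {
        List<String> written = new ArrayList<>();  // stand-in for the response stream
        List<String> cached = new ArrayList<>();
        for (String doc : docs) {
            String sdoc = transform(doc);
            if (cachMode) {
                cached.add(sdoc);   // defer writing until all docs are transformed
            } else {
                written.add(sdoc);  // stream immediately (current behavior)
            }
        }
        if (cachMode) {
            written.addAll(cached); // flush the buffered docs at the end
        }
        return written;
    }

    public static void main(String[] args) {
        System.out.println(writeDocuments(Arrays.asList("parent", "child"), true));
    }
}
```

Buffering trades memory for the ability to revisit (and mutate) earlier 
documents before anything is written.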



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7097) Update other Document in DocTransformer

2015-02-11 Thread yuanyun.cn (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuanyun.cn updated SOLR-7097:
-
Description: 
Solr DocTransformer is good, but it only allows us to change the current 
document: adding, removing, or updating fields.

It would be great if we could update another document (especially the previous 
one), or better, delete a document (especially useful during tests) or add a 
document in DocTransformer.

Use case:
We can use flat group mode (group.main=true) to put parent and child close to 
each other (parent first), then use a DocTransformer to update the parent 
document when accessing its child document.

Some thoughts about the implementation:
In org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
ResultContext, ReturnFields), when cachMode=true, we can store each transformed 
SolrDocument in a list inside the for loop and write them all at the end.

cachMode = req.getParams().getBool("cachMode", false);
SolrDocument[] cachedDocs = new SolrDocument[sz];
for (int i = 0; i < sz; i++) {
  SolrDocument sdoc = toSolrDocument(doc);
  if (transformer != null) {
    transformer.transform(sdoc, id);
  }
  if (cachMode) {
    cachedDocs[i] = sdoc;
  } else {
    writeSolrDocument(null, sdoc, returnFields, i);
  }
}
if (transformer != null) {
  transformer.setContext(null);
}
if (cachMode) {
  for (int i = 0; i < sz; i++) {
    writeSolrDocument(null, cachedDocs[i], returnFields, i);
  }
}
writeEndDocumentList();

  was:
Solr DocTransformer is good, but it only allows us to change the current 
document: adding, removing, or updating fields.

It would be great if we could update another document (especially the previous 
one).

Use case:
We can use flat group mode (group.main=true) to put parent and child close to 
each other (parent first), then use a DocTransformer to update the parent 
document when accessing its child document.

Some thoughts about the implementation:
In org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
ResultContext, ReturnFields), when cachMode=true, we can store each transformed 
SolrDocument in a list inside the for loop and write them all at the end.

cachMode = req.getParams().getBool("cachMode", false);
SolrDocument[] cachedDocs = new SolrDocument[sz];
for (int i = 0; i < sz; i++) {
  SolrDocument sdoc = toSolrDocument(doc);
  if (transformer != null) {
    transformer.transform(sdoc, id);
  }
  if (cachMode) {
    cachedDocs[i] = sdoc;
  } else {
    writeSolrDocument(null, sdoc, returnFields, i);
  }
}
if (transformer != null) {
  transformer.setContext(null);
}
if (cachMode) {
  for (int i = 0; i < sz; i++) {
    writeSolrDocument(null, cachedDocs[i], returnFields, i);
  }
}
writeEndDocumentList();


 Update other Document in DocTransformer
 ---

 Key: SOLR-7097
 URL: https://issues.apache.org/jira/browse/SOLR-7097
 Project: Solr
  Issue Type: Improvement
Reporter: yuanyun.cn
Priority: Minor
  Labels: searcher, transformers

 Solr DocTransformer is good, but it only allows us to change the current 
 document: adding, removing, or updating fields.
 It would be great if we could update another document (especially the 
 previous one), or better, delete a document (especially useful during tests) 
 or add a document in DocTransformer.
 Use case:
 We can use flat group mode (group.main=true) to put parent and child close to 
 each other (parent first), then use a DocTransformer to update the parent 
 document when accessing its child document.
 Some thoughts about the implementation:
 In org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
 ResultContext, ReturnFields), when cachMode=true, we can store each 
 transformed SolrDocument in a list inside the for loop and write them all at 
 the end.
 cachMode = req.getParams().getBool("cachMode", false);
 SolrDocument[] cachedDocs = new SolrDocument[sz];
 for (int i = 0; i < sz; i++) {
   SolrDocument sdoc = toSolrDocument(doc);
   if (transformer != null) {
     transformer.transform(sdoc, id);
   }
   if (cachMode) {
     cachedDocs[i] = sdoc;
   } else {
     writeSolrDocument(null, sdoc, returnFields, i);
   }
 }
 if (transformer != null) {
   transformer.setContext(null);
 }
 if (cachMode) {
   for (int i = 0; i < sz; i++) {
     writeSolrDocument(null, cachedDocs[i], returnFields, i);
   }
 }
 writeEndDocumentList();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1348: POMs out of sync

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1348/

No tests ran.

Build Log:
[...truncated 17949 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:535:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:185:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:61:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:58:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/build.xml:453:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:276:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/build.xml:49:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/common-build.xml:298:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:2054:
 Javadocs warnings were found!

Total time: 9 minutes 20 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7099) bin/solr -cloud mode should launch a local ZK in its own process using zkcli's runzk option (instead of embedded in the first Solr process)

2015-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316499#comment-14316499
 ] 

Hoss Man commented on SOLR-7099:


I've mentioned this in the past: ideally the _example_ mode of bin/solr will 
launch a single-node ZK server for you as needed, but will do so using a script 
and echo out the script command it ran (similar to how it echoes out 
collection creation / health check commands).

When you run Solr in (non-example) cloud mode, it should expect ZK to already 
be running; by that point you should either already know what you need to set 
up a ZK quorum, or you will remember that bin/solr has a command-line option 
to launch Solr that you saw when you were running the examples.

 bin/solr -cloud mode should launch a local ZK in its own process using 
 zkcli's runzk option (instead of embedded in the first Solr process)
 ---

 Key: SOLR-7099
 URL: https://issues.apache.org/jira/browse/SOLR-7099
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Timothy Potter

 Embedded ZK is great for unit testing and quick examples, but as soon as 
 someone wants to restart their cluster, embedded mode causes a lot of issues, 
 esp. if you restart the node that embeds ZK. Of course we don't want users to 
 have to install ZooKeeper just to get started with Solr either. 
 Thankfully, ZkCLI already includes a way to launch ZooKeeper in its own 
 process but still within the Solr directory structure. We can hide the 
 details and complexity of working with ZK in the bin/solr script. The 
 solution to this should still make it very clear that this is for getting 
 started / examples and not to be used in production.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.0-Linux (32bit/jdk1.8.0_31) - Build # 127 - Failure!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.0-Linux/127/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

Error Message:
org.apache.directory.api.ldap.model.exception.LdapOtherException: 
org.apache.directory.api.ldap.model.exception.LdapOtherException: 
ERR_04447_CANNOT_NORMALIZE_VALUE Cannot normalize the wrapped value 
ERR_04473_NOT_VALID_VALUE Not a valid value '20090818022733Z' for the 
AttributeType 'ATTRIBUTE_TYPE ( 1.3.6.1.4.1.18060.0.4.1.2.35  NAME 
'schemaModifyTimestamp'  DESC time which schema was modified  SUP 
modifyTimestamp  EQUALITY generalizedTimeMatch  ORDERING 
generalizedTimeOrderingMatch  SYNTAX 1.3.6.1.4.1.1466.115.121.1.24  USAGE 
directoryOperation  ) '

Stack Trace:
java.lang.RuntimeException: 
org.apache.directory.api.ldap.model.exception.LdapOtherException: 
org.apache.directory.api.ldap.model.exception.LdapOtherException: 
ERR_04447_CANNOT_NORMALIZE_VALUE Cannot normalize the wrapped value 
ERR_04473_NOT_VALID_VALUE Not a valid value '20090818022733Z' for the 
AttributeType 'ATTRIBUTE_TYPE ( 1.3.6.1.4.1.18060.0.4.1.2.35
 NAME 'schemaModifyTimestamp'
 DESC time which schema was modified
 SUP modifyTimestamp
 EQUALITY generalizedTimeMatch
 ORDERING generalizedTimeOrderingMatch
 SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
 USAGE directoryOperation
 )
'
at 
org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:204)
at 
org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:74)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:861)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (LUCENE-4524) Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1431#comment-1431
 ] 

ASF subversion and git services commented on LUCENE-4524:
-

Commit 1659021 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659021 ]

LUCENE-4524: remove fixed @Seed

 Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum
 -

 Key: LUCENE-4524
 URL: https://issues.apache.org/jira/browse/LUCENE-4524
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/codecs, core/index, core/search
Affects Versions: 4.0
Reporter: Simon Willnauer
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch, 
 LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch


 spinoff from http://www.gossamer-threads.com/lists/lucene/java-dev/172261
 {noformat}
 hey folks, 
 I have spend a hell lot of time on the positions branch to make 
 positions and offsets working on all queries if needed. The one thing 
 that bugged me the most is the distinction between DocsEnum and 
 DocsAndPositionsEnum. Really when you look at it closer DocsEnum is a 
 DocsAndFreqsEnum and if we omit Freqs we should return a DocIdSetIter. 
 Same is true for 
 DocsAndPostionsAndPayloadsAndOffsets*YourFancyFeatureHere*Enum. I 
 don't really see the benefits from this. We should rather make the 
 interface simple and call it something like PostingsEnum where you 
 have to specify flags on the TermsIterator and if we can't provide the 
 sufficient enum we throw an exception? 
 I just want to bring up the idea here since it might simplify a lot 
 for users as well for us when improving our positions / offset etc. 
 support. 
 thoughts? Ideas? 
 simon 
 {noformat}
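
The flags idea in the quoted mail can be sketched without any Lucene types (the names below are illustrative, not the API that was eventually committed): the caller requests optional postings features via flags, and a provider can satisfy the request only if it supports every requested flag.

```java
public class PostingsFlags {
    // Hypothetical flag constants; the real PostingsEnum uses different names/values.
    public static final int FREQS = 1, POSITIONS = 2, OFFSETS = 4, PAYLOADS = 8;

    // A terms iterator advertising a set of supported features can serve a
    // request only when every requested flag is inside the supported set.
    public static boolean canProvide(int supported, int requested) {
        return (supported & requested) == requested;
    }

    public static void main(String[] args) {
        // A codec with freqs+positions can serve a positions-only request...
        System.out.println(canProvide(FREQS | POSITIONS, POSITIONS)); // true
        // ...but a docs+freqs-only codec cannot supply positions.
        System.out.println(canProvide(FREQS, POSITIONS)); // false
    }
}
```

In the proposal, a `false` here is where the single enum would throw instead of the API forcing a separate DocsAndPositionsEnum type.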



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7101) JmxMonitoredMap can throw an exception in clear when queryNames fails.

2015-02-11 Thread Mark Miller (JIRA)
Mark Miller created SOLR-7101:
-

 Summary: JmxMonitoredMap can throw an exception in clear when 
queryNames fails.
 Key: SOLR-7101
 URL: https://issues.apache.org/jira/browse/SOLR-7101
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.1


This was added in SOLR-2927 - we should be lenient about failures here, like we are
in other parts of this class.






[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316743#comment-14316743
 ] 

Uwe Schindler commented on LUCENE-6239:
---

Hi,
I just found out: with Java 1.7+, all the Unsafe constants are exposed as 
public static final variables, so we don't need to directly access the Unsafe 
instance.

By that it would be possible to get the REFERENCE_SIZE without the hotspot bean, 
just by getting a static final int constant... The same applies for the JVM 
bitness.

Would this be a valid use? In fact, nothing can break; it could just be that 
our code cannot see those constants, but that's no different from the 
HotspotBean.

We just did not use that in RAMUsageEstimator before, because in Java 6, those 
constants were not there! On the other hand, in Java 9, Unsafe is likely to 
disappear, so I think we should really work without Unsafe.
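
A minimal sketch of this approach, using `sun.misc.Unsafe.ADDRESS_SIZE` (a public static int field on Java 7+) as a stand-in for the reference-size constant Uwe mentions; compressed oops would still need separate handling. The constant is read purely via reflection, so no Unsafe instance is ever obtained:

```java
import java.lang.reflect.Field;

public class RefSizeProbe {
    // Read a public static constant from sun.misc.Unsafe via reflection.
    // No Unsafe instance is involved; if the class or field is not visible,
    // fall back to 8 bytes as a conservative default (the same failure mode
    // as the HotSpot bean path).
    public static int addressSize() {
        try {
            Field f = Class.forName("sun.misc.Unsafe").getDeclaredField("ADDRESS_SIZE");
            return f.getInt(null); // static field: null receiver is fine
        } catch (ReflectiveOperationException | SecurityException e) {
            return 8; // conservative default when the constant cannot be seen
        }
    }

    public static void main(String[] args) {
        System.out.println("address size: " + addressSize());
    }
}
```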

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6239.patch


 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.






[jira] [Comment Edited] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316743#comment-14316743
 ] 

Uwe Schindler edited comment on LUCENE-6239 at 2/11/15 6:46 PM:


Hi,
I just found out: with Java 1.7+, all the Unsafe constants are exposed as 
public static final variables, so we don't need to directly access the Unsafe 
instance object.

By that it would be possible to get the REFERENCE_SIZE without the hotspot bean, 
just by getting a static final int constant... The same applies for the JVM 
bitness.

Would this be a valid use? In fact, nothing can break; it could just be that 
our code cannot see those constants, but that's no different from the 
HotspotBean. We are just reading a public static constant from Unsafe (via 
reflection).

We just did not use that in RAMUsageEstimator before, because in Java 6, those 
constants were not there! On the other hand, in Java 9, Unsafe is likely to 
disappear, so I think we should really work without Unsafe.


was (Author: thetaphi):
Hi,
I just found out: with Java 1.7+, all the Unsafe constants are exposed as 
public static final variables, so we don't need to directly access the Unsafe 
constant.

By that it would be possible to get the REFERENCE_SIZE without the hotspot bean, 
just by getting a static final int constant... The same applies for the JVM 
bitness.

Would this be a valid use? In fact, nothing can break; it could just be that 
our code cannot see those constants, but that's no different from the 
HotspotBean. We are just reading a public static constant from Unsafe (via 
reflection).

We just did not use that in RAMUsageEstimator before, because in Java 6, those 
constants were not there! On the other hand, in Java 9, Unsafe is likely to 
disappear, so I think we should really work without Unsafe.

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6239.patch


 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.






[jira] [Commented] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316751#comment-14316751
 ] 

Timothy Potter commented on SOLR-6311:
--

I'm going with [~hossman]'s suggestion of using the LUCENE_MATCH_VERSION and am 
targeting this fix for the 5.1 release. So my first inclination was to do:

{code}
if (req.getCore().getSolrConfig().luceneMatchVersion.onOrAfter(Version.LUCENE_5_1_0)) {
  ...
}
{code}

But Version.LUCENE_5_1_0 is deprecated, so do I do this instead? 

{code}
if (req.getCore().getSolrConfig().luceneMatchVersion.onOrAfter(Version.LATEST)) {
  ...
}
{code}

I guess it's the deprecated thing that's throwing me off.

 SearchHandler should use path when no qt or shard.qt parameter is specified
 ---

 Key: SOLR-6311
 URL: https://issues.apache.org/jira/browse/SOLR-6311
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Molloy
Assignee: Timothy Potter
 Attachments: SOLR-6311.patch


 When performing distributed searches, you have to specify shards.qt unless 
 you're on the default /select path for your handler. As this is configurable, 
 even the default search handler could be on another path. The shard requests 
 should thus default to the path if no shards.qt was specified.






[jira] [Commented] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316767#comment-14316767
 ] 

Timothy Potter commented on SOLR-6311:
--

nm! If I look at branch_5x, my question is answered ;-) Sometimes you have to 
look outside of trunk to see clearly!

 SearchHandler should use path when no qt or shard.qt parameter is specified
 ---

 Key: SOLR-6311
 URL: https://issues.apache.org/jira/browse/SOLR-6311
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Molloy
Assignee: Timothy Potter
 Attachments: SOLR-6311.patch


 When performing distributed searches, you have to specify shards.qt unless 
 you're on the default /select path for your handler. As this is configurable, 
 even the default search handler could be on another path. The shard requests 
 should thus default to the path if no shards.qt was specified.






[jira] [Commented] (SOLR-6832) Queries be served locally rather than being forwarded to another replica

2015-02-11 Thread Sachin Goyal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316779#comment-14316779
 ] 

Sachin Goyal commented on SOLR-6832:


Thank you [~thelabdude].
Please let me know how we can get this committed into the trunk and I can edit 
the Solr reference guide.
I would also like to back-port this into the 5x branch.

 Queries be served locally rather than being forwarded to another replica
 

 Key: SOLR-6832
 URL: https://issues.apache.org/jira/browse/SOLR-6832
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.2
Reporter: Sachin Goyal
Assignee: Timothy Potter
 Attachments: SOLR-6832.patch, SOLR-6832.patch, SOLR-6832.patch, 
 SOLR-6832.patch


 Currently, I see that the code flow for a query in SolrCloud is as follows:
 For a distributed query:
 SolrCore -> SearchHandler.handleRequestBody() -> HttpShardHandler.submit()
 For a non-distributed query:
 SolrCore -> SearchHandler.handleRequestBody() -> QueryComponent.process()
 \\
 \\
 \\
 For a distributed query, the request is always sent to all the shards even if 
 the originating SolrCore (handling the original distributed query) is a 
 replica of one of the shards.
 If the originating SolrCore can check itself before sending HTTP requests for 
 any shard, we can probably save some network hops and gain some 
 performance.
 \\
 \\
 We can change SearchHandler.handleRequestBody() or HttpShardHandler.submit() 
 to fix this behavior (most likely the former and not the latter).
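
The proposed behavior can be sketched in a few lines (the names here are illustrative, not Solr's actual classes): when the coordinating node also hosts a replica of a shard, route that shard's sub-request to the local replica instead of a randomly chosen remote one.

```java
import java.util.List;

public class PreferLocalShards {
    // Pick a replica URL for one shard. If any replica lives on the node
    // handling the distributed request (identified by its base URL), use it
    // and skip a network hop; otherwise fall back to any remote replica
    // (in practice this fallback choice would be randomized for load balancing).
    public static String pickReplica(List<String> replicaUrls, String localBaseUrl) {
        for (String url : replicaUrls) {
            if (url.startsWith(localBaseUrl)) {
                return url; // serve this shard's sub-request locally
            }
        }
        return replicaUrls.get(0);
    }

    public static void main(String[] args) {
        List<String> replicas = List.of(
            "http://remote1:8983/solr/collection1",
            "http://localhost:8983/solr/collection1");
        System.out.println(pickReplica(replicas, "http://localhost:8983"));
    }
}
```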






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316778#comment-14316778
 ] 

Ryan Ernst commented on LUCENE-6240:


+1

 ban @Seed in tests.
 ---

 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6240.patch


 If someone is debugging, they can easily accidentally commit \@Seed 
 annotation, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6198) two phase intersection

2015-02-11 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316791#comment-14316791
 ] 

Adrien Grand commented on LUCENE-6198:
--

bq. New patch that adds two-phase support to ConjunctionScorer.

By that I mean not only that ConjunctionScorer can take sub-clauses that 
support approximations, but also that, in that case, it will support 
approximations itself.

 two phase intersection
 --

 Key: LUCENE-6198
 URL: https://issues.apache.org/jira/browse/LUCENE-6198
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6198.patch, LUCENE-6198.patch, LUCENE-6198.patch


 Currently some scorers have to do a lot of per-document work to determine if 
 a document is a match. The simplest example is a phrase scorer, but there are 
 others (spans, sloppy phrase, geospatial, etc).
 Imagine a conjunction with two MUST clauses, one that is a term that matches 
 all odd documents, another that is a phrase matching all even documents. 
 Today this conjunction will be very expensive, because the zig-zag 
 intersection is reading a ton of useless positions.
 The same problem happens with filteredQuery and anything else that acts like 
 a conjunction.
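
The split described above can be illustrated with a deliberately simplified, self-contained sketch (not the Lucene API): each clause exposes a cheap, sorted candidate iterator plus an expensive matches() confirmation, and the conjunction pays the confirmation cost only on docs where all the cheap iterators already agree.

```java
import java.util.ArrayList;
import java.util.List;

public class TwoPhaseSketch {
    // A clause with a cheap approximation (sorted candidate doc ids) and an
    // expensive confirmation step (e.g. checking positions for a phrase).
    interface Clause {
        int[] candidates();
        boolean matches(int doc);
    }

    // Zig-zag intersect the cheap approximations first; run matches() only on
    // survivors, so useless positions are never read.
    static List<Integer> intersect(Clause a, Clause b) {
        List<Integer> hits = new ArrayList<>();
        int[] da = a.candidates(), db = b.candidates();
        int i = 0, j = 0;
        while (i < da.length && j < db.length) {
            if (da[i] < db[j]) i++;
            else if (da[i] > db[j]) j++;
            else { // both cheap iterators agree: pay for confirmation now
                if (a.matches(da[i]) && b.matches(da[i])) hits.add(da[i]);
                i++; j++;
            }
        }
        return hits;
    }

    // Toy data: a term clause (every candidate matches) and a "phrase" clause
    // whose positions check only confirms even doc ids.
    static List<Integer> demo() {
        Clause term = new Clause() {
            public int[] candidates() { return new int[]{1, 2, 3, 4}; }
            public boolean matches(int doc) { return true; }
        };
        Clause phrase = new Clause() {
            public int[] candidates() { return new int[]{2, 3, 5}; }
            public boolean matches(int doc) { return doc % 2 == 0; }
        };
        return intersect(term, phrase);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [2]
    }
}
```

Here the expensive check runs for only two candidate docs (2 and 3) rather than for every doc either clause touches.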






[jira] [Commented] (LUCENE-6030) Add norms patched compression which uses table for most common values

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316659#comment-14316659
 ] 

ASF subversion and git services commented on LUCENE-6030:
-

Commit 1659020 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1659020 ]

LUCENE-6030: remove fixed @Seed

 Add norms patched compression which uses table for most common values
 -

 Key: LUCENE-6030
 URL: https://issues.apache.org/jira/browse/LUCENE-6030
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6030.patch


 We have added the PATCHED norms sub-format in Lucene 5.0, which uses a bitset 
 to mark documents that have the most common value (when 97% of the documents 
 have that value).  This works well for fields that have a predominant value 
 length, and then a small number of docs with some other random values.  But 
 another common case is having a handful of very common value lengths, like 
 with a title field.
 We can use a table (see TABLE_COMPRESSION) to store the most common values, 
 and store an ordinal for the other cases, at which point we can look it up in 
 the secondary patch table.
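
A toy version of the table idea (illustrative only, not Lucene's actual PATCHED/TABLE encoding): the handful of common values map to small ordinals, and everything else gets a sentinel ordinal that points into a secondary patch map.

```java
import java.util.HashMap;
import java.util.Map;

public class TableNormsSketch {
    // Encode per-document norm values. Values found in commonValues become
    // their small table ordinal; anything else gets the sentinel ordinal
    // commonValues.length ("look in the patch table") and is recorded in patch.
    public static int[] encode(long[] norms, long[] commonValues, Map<Integer, Long> patch) {
        Map<Long, Integer> ordOf = new HashMap<>();
        for (int i = 0; i < commonValues.length; i++) ordOf.put(commonValues[i], i);
        int[] ords = new int[norms.length];
        for (int doc = 0; doc < norms.length; doc++) {
            Integer ord = ordOf.get(norms[doc]);
            if (ord != null) {
                ords[doc] = ord;
            } else {
                ords[doc] = commonValues.length; // sentinel: exception value
                patch.put(doc, norms[doc]);
            }
        }
        return ords;
    }
}
```

When a few value lengths dominate, almost every document costs only a tiny ordinal, and the patch map stays small.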






RE: [VOTE] 5.0.0 RC2

2015-02-11 Thread Uwe Schindler
Hi,

For me it worked, maybe Europe is a better location for downloads from 
people.a.o. With Java 7 and Java 8 tested, I got the following result:

SUCCESS! [2:33:21.113312]

I also did some manual checks of documentation and Solr artifacts under Windows 
with whitespace in the user name (no adaptation of my Lucene apps - too much work).

Finally,
My vote is:
+1 to release!

Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Steve Rowe [mailto:sar...@gmail.com]
 Sent: Wednesday, February 11, 2015 1:23 AM
 To: dev@lucene.apache.org
 Subject: Re: [VOTE] 5.0.0 RC2
 
 I’ll work on adding multiple retries with a pause between, hopefully that’ll
 help. - Steve
 
  On Feb 10, 2015, at 6:08 PM, Anshum Gupta ans...@anshumgupta.net
 wrote:
 
  Thanks Uwe. I've tried it a few times and it's failed after retrying so I'm 
  just
 sticking to running it after manually downloading.
 
  On Tue, Feb 10, 2015 at 2:17 PM, Uwe Schindler u...@thetaphi.de
 wrote:
  Actually this is how it looked like:
 
 
 
  thetaphi@opteron:~/lucene$ tail -100f nohup.out
 
  Java 1.7 JAVA_HOME=/home/thetaphi/jdk1.7.0_76
 
  Java 1.8 JAVA_HOME=/home/thetaphi/jdk1.8.0_31
 
  NOTE: output encoding is UTF-8
 
 
 
  Load release URL http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469...
 
 unshortened: http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469/
 
  Retrying download of url http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469/ after exception: urlopen error [Errno 110] Connection timed out
 
 
 
  Test Lucene...
 
test basics...
 
get KEYS
 
  0.1 MB in 1.42 sec (0.1 MB/sec)
 
check changes HTML...
 
download lucene-5.0.0-src.tgz...
 
  27.9 MB in 8.32 sec (3.4 MB/sec)
 
  verify md5/sha1 digests
 
  verify sig
 
  verify trust
 
GPG: gpg: WARNING: This key is not certified with a trusted signature!
 
download lucene-5.0.0.tgz...
 
  64.0 MB in 15.83 sec (4.0 MB/sec)
 
  verify md5/sha1 digests
 
  verify sig
 
  verify trust
 
GPG: gpg: WARNING: This key is not certified with a trusted signature!
 
download lucene-5.0.0.zip...
 
  73.5 MB in 23.91 sec (3.1 MB/sec)
 
  verify md5/sha1 digests
 
  verify sig
 
  verify trust
 
GPG: gpg: WARNING: This key is not certified with a trusted signature!
 
unpack lucene-5.0.0.tgz...
 
  verify JAR metadata/identity/no javax.* or java.* classes...
 
  test demo with 1.7...
 
got 5647 hits for query lucene
 
  checkindex with 1.7...
 
  test demo with 1.8...
 
got 5647 hits for query lucene
 
  checkindex with 1.8...
 
  check Lucene's javadoc JAR
 
unpack lucene-5.0.0.zip...
 
  verify JAR metadata/identity/no javax.* or java.* classes...
 
  test demo with 1.7...
 
got 5647 hits for query lucene
 
  checkindex with 1.7...
 
  test demo with 1.8...
 
got 5647 hits for query lucene
 
  checkindex with 1.8...
 
  check Lucene's javadoc JAR
 
unpack lucene-5.0.0-src.tgz...
 
  make sure no JARs/WARs in src dist...
 
  run ant validate
 
  run tests w/ Java 7 and testArgs=''...
 
  test demo with 1.7...
 
got 210 hits for query lucene
 
  checkindex with 1.7...
 
  generate javadocs w/ Java 7...
 
 
 
  -
 
  Uwe Schindler
 
  H.-H.-Meier-Allee 63, D-28213 Bremen
 
  http://www.thetaphi.de
 
  eMail: u...@thetaphi.de
 
 
 
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Tuesday, February 10, 2015 11:15 PM
  To: dev@lucene.apache.org
  Subject: RE: [VOTE] 5.0.0 RC2
 
 
 
  It is still running with http. For me it repeated one download because of
 timeout, but it passed through this.
 
 
 
  Uwe
 
 
 
  -
 
  Uwe Schindler
 
  H.-H.-Meier-Allee 63, D-28213 Bremen
 
  http://www.thetaphi.de
 
  eMail: u...@thetaphi.de
 
 
 
  From: Anshum Gupta [mailto:ans...@anshumgupta.net]
  Sent: Tuesday, February 10, 2015 10:59 PM
  To: dev@lucene.apache.org
  Subject: Re: [VOTE] 5.0.0 RC2
 
 
 
  I'm curious to know how many people actually ran it using http vs
 downloading the tgz. Did someone succeed with http?
 
 
 
  On Tue, Feb 10, 2015 at 1:43 PM, Uwe Schindler u...@thetaphi.de
 wrote:
 
  Don’t forget to also test Java 8!
 
 
 
  python3 -u dev-tools/scripts/smokeTestRelease.py --test-java8 /path/to/jdk1.8.0 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
 
 
 
  Uwe
 
 
 
  -
 
  Uwe Schindler
 
  H.-H.-Meier-Allee 63, D-28213 Bremen
 
  http://www.thetaphi.de
 
  eMail: u...@thetaphi.de
 
 
 
  From: Anshum Gupta [mailto:ans...@anshumgupta.net]
  Sent: Tuesday, February 10, 2015 12:17 AM
  To: dev@lucene.apache.org
  Subject: [VOTE] 5.0.0 RC2
 
 
 
  Please vote for the second release candidate for Lucene/Solr 5.0.0.
 
 
 
  The 

[jira] [Comment Edited] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316743#comment-14316743
 ] 

Uwe Schindler edited comment on LUCENE-6239 at 2/11/15 6:46 PM:


Hi,
I just found out: with Java 1.7+, all the Unsafe constants are exposed as 
public static final variables, so we don't need to directly access the Unsafe 
instance.

By that it would be possible to get the REFERENCE_SIZE without the hotspot bean, 
just by getting a static final int constant... The same applies for the JVM 
bitness.

Would this be a valid use? In fact, nothing can break; it could just be that 
our code cannot see those constants, but that's no different from the 
HotspotBean. We are just reading a public static constant from Unsafe (via 
reflection).

We just did not use that in RAMUsageEstimator before, because in Java 6, those 
constants were not there! On the other hand, in Java 9, Unsafe is likely to 
disappear, so I think we should really work without Unsafe.


was (Author: thetaphi):
Hi,
I just found out: with Java 1.7+, all the Unsafe constants are exposed as 
public static final variables, so we don't need to directly access the Unsafe 
instance.

By that it would be possible to get the REFERENCE_SIZE without the hotspot bean, 
just by getting a static final int constant... The same applies for the JVM 
bitness.

Would this be a valid use? In fact, nothing can break; it could just be that 
our code cannot see those constants, but that's no different from the 
HotspotBean.

We just did not use that in RAMUsageEstimator before, because in Java 6, those 
constants were not there! On the other hand, in Java 9, Unsafe is likely to 
disappear, so I think we should really work without Unsafe.

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6239.patch


 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.






[jira] [Updated] (SOLR-7101) JmxMonitoredMap can throw an exception in clear when queryNames fails.

2015-02-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7101:
--
Attachment: SOLR-7101.patch

 JmxMonitoredMap can throw an exception in clear when queryNames fails.
 --

 Key: SOLR-7101
 URL: https://issues.apache.org/jira/browse/SOLR-7101
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: SOLR-7101.patch


 This was added in SOLR-2927 - we should be lenient about failures here, like we 
 are in other parts of this class.






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316768#comment-14316768
 ] 

Alan Woodward commented on LUCENE-6240:
---

+1!  And thanks for fixing.

 ban @Seed in tests.
 ---

 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6240.patch


 If someone is debugging, they can easily accidentally commit \@Seed 
 annotation, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6191) Spatial 2D faceting (heatmaps)

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316796#comment-14316796
 ] 

ASF subversion and git services commented on LUCENE-6191:
-

Commit 1659041 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1659041 ]

LUCENE-6191: fix test bug when given 0-area input

 Spatial 2D faceting (heatmaps)
 --

 Key: LUCENE-6191
 URL: https://issues.apache.org/jira/browse/LUCENE-6191
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.1

 Attachments: LUCENE-6191__Spatial_heatmap.patch, 
 LUCENE-6191__Spatial_heatmap.patch, LUCENE-6191__Spatial_heatmap.patch


 Lucene spatial's PrefixTree (grid) based strategies index data in a way 
 highly amenable to faceting on grid cells to compute a so-called _heatmap_. 
 The underlying code in this patch uses the PrefixTreeFacetCounter utility 
 class, which was recently refactored out of the faceting for NumberRangePrefixTree 
 in LUCENE-5735.  At a low level, the terms (== grid cells) are navigated 
 per-segment, forward only with TermsEnum.seek, so it's pretty quick and 
 furthermore requires no extra caches and no docvalues.  Ideally you should use 
 QuadPrefixTree (or Flex once it comes out) to maximize the number of grid levels, 
 which in turn maximizes the fidelity of choices when you ask for a grid 
 covering a region.  Conveniently, the provided capability returns the data in 
 a 2-D grid of counts, so the caller needn't know a thing about how the data 
 is encoded in the prefix tree.  Well, almost... at this point they need to 
 provide a grid level, but I'll soon provide a means of deriving the grid 
 level based on a min/max cell count.
 I recommend QuadPrefixTree with geo=false so that you can provide square 
 world-bounds (360x360 degrees), which means square grid cells, which are more 
 desirable to display than rectangular cells.
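
The shape of the returned data can be pictured with a tiny self-contained sketch (this is not the Lucene spatial API, just the 2-D grid of counts it describes):

```java
public class HeatmapShape {
    // Count (x, y) points into a rows x cols grid of square cells starting at
    // (minX, minY), each cellSize wide and tall. The caller gets back plain
    // counts and needs to know nothing about how terms encode the cells.
    public static int[][] countPoints(double[][] points, double minX, double minY,
                                      double cellSize, int cols, int rows) {
        int[][] counts = new int[rows][cols];
        for (double[] p : points) {
            int cx = (int) Math.floor((p[0] - minX) / cellSize);
            int cy = (int) Math.floor((p[1] - minY) / cellSize);
            if (cx >= 0 && cx < cols && cy >= 0 && cy < rows) {
                counts[cy][cx]++; // points outside the requested region are skipped
            }
        }
        return counts;
    }
}
```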






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316794#comment-14316794
 ] 

Michael McCandless commented on LUCENE-6240:


+1

 ban @Seed in tests.
 ---

 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6240.patch


 If someone is debugging, they can easily accidentally commit \@Seed 
 annotation, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316834#comment-14316834
 ] 

ASF subversion and git services commented on LUCENE-6240:
-

Commit 1659049 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659049 ]

LUCENE-6240: ban @Seed in tests

 ban @Seed in tests.
 ---

 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6240.patch


 If someone is debugging, they can easily accidentally commit \@Seed 
 annotation, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-4524) Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316651#comment-14316651
 ] 

ASF subversion and git services commented on LUCENE-4524:
-

Commit 1659018 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1659018 ]

LUCENE-4524: remove fixed @Seed

 Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum
 -

 Key: LUCENE-4524
 URL: https://issues.apache.org/jira/browse/LUCENE-4524
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/codecs, core/index, core/search
Affects Versions: 4.0
Reporter: Simon Willnauer
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch, 
 LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch


 spin-off from http://www.gossamer-threads.com/lists/lucene/java-dev/172261
 {noformat}
 hey folks, 
 I have spent a lot of time on the positions branch to make 
 positions and offsets working on all queries if needed. The one thing 
 that bugged me the most is the distinction between DocsEnum and 
 DocsAndPositionsEnum. Really when you look at it closer DocsEnum is a 
 DocsAndFreqsEnum and if we omit Freqs we should return a DocIdSetIter. 
 Same is true for 
 DocsAndPositionsAndPayloadsAndOffsets*YourFancyFeatureHere*Enum. I 
 don't really see the benefits from this. We should rather make the 
 interface simple and call it something like PostingsEnum where you 
 have to specify flags on the TermsIterator and if we can't provide the 
 sufficient enum we throw an exception? 
 I just want to bring up the idea here since it might simplify a lot 
 for users as well for us when improving our positions / offset etc. 
 support. 
 thoughts? Ideas? 
 simon 
 {noformat}






[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_31) - Build # 4375 - Still Failing!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4375/
Java: 64bit/jdk1.8.0_31 -XX:+UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([6029226F8A9430BA:E87D1DB524685D42]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (LUCENE-1518) Merge Query and Filter classes

2015-02-11 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316802#comment-14316802
 ] 

Adrien Grand commented on LUCENE-1518:
--

bq. So this looks fine, makes it easy to use Filters as real queries. There is 
only one thing: the score returned is now always 0. If you want to get the old 
behaviour, where you get the boost as the score, you just have to wrap the 
Filter in a ConstantScoreQuery, like it was before?

Exactly.

bq. One other thing: QueryWrapperFilter is now obsolete, or not?

I didn't want to remove it yet because we still have some APIs that take a 
filter and not a query (e.g. IndexSearcher.search, FilteredQuery). I want to 
remove it eventually, but I think it's still a bit early?

 Merge Query and Filter classes
 --

 Key: LUCENE-1518
 URL: https://issues.apache.org/jira/browse/LUCENE-1518
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 2.4
Reporter: Uwe Schindler
 Fix For: 4.9, Trunk

 Attachments: LUCENE-1518.patch, LUCENE-1518.patch


 This issue presents a patch that merges Queries and Filters in such a way 
 that the new Filter class extends Query. This would make it possible to use 
 every filter as a query.
 The new abstract Filter class would contain all the methods of 
 ConstantScoreQuery and deprecate ConstantScoreQuery. Somebody who implements 
 the Filter's getDocIdSet()/bits() methods has nothing more to do and can 
 just use the filter as a normal query.
 I do not want to completely convert Filters to ConstantScoreQueries. The idea 
 is to combine Queries and Filters in such a way, that every Filter can 
 automatically be used at all places where a Query can be used (e.g. also 
 alone a search query without any other constraint). For that, the abstract 
 Query methods must be implemented and return a default weight for Filters 
 which is the current ConstantScore Logic. If the filter is used as a real 
 filter (where the API wants a Filter), the getDocIdSet part could be directly 
 used, the weight is useless (as it is currently, too). The constant score 
 default implementation is only used when the Filter is used as a Query (e.g. 
 as a direct parameter to Searcher.search()). For the special case of 
 BooleanQueries combining Filters and Queries, the idea is to optimize the 
 BooleanQuery logic so that it detects whether a BooleanClause is a Filter 
 (using instanceof) and then uses the Filter API directly, without taking on 
 the burden of ConstantScoreQuery (see LUCENE-1345).
 Here some ideas how to implement Searcher.search() with Query and Filter:
 - User runs Searcher.search() using a Filter as the only parameter. As every 
 Filter is also a ConstantScoreQuery, the query can be executed and returns 
 score 1.0 for all matching documents.
 - User runs Searcher.search() using a Query as the only parameter: No change, 
 all is the same as before.
 - User runs Searcher.search() using a BooleanQuery as parameter: If the 
 BooleanQuery does not contain a Query that is subclass of Filter (the new 
 Filter) everything as usual. If the BooleanQuery only contains exactly one 
 Filter and nothing else the Filter is used as a constant score query. If 
 BooleanQuery contains clauses with Queries and Filters the new algorithm 
 could be used: The queries are executed and the results filtered with the 
 filters.
 For the user this has the main advantage that he can construct his query 
 using a simplified API without thinking about Filters or Queries; you can 
 just combine clauses together. The scorer/weight logic then identifies the 
 cases in which to use the filter or the query weight API, just like the 
 query optimizer of an RDBMS.
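The proposed hierarchy can be pictured with a toy model in plain Java (illustrative only; these are not Lucene's real classes, and `fromPredicate` is a made-up helper): Filter extends Query, a filter run directly as a query scores 0, and wrapping it in a constant-score wrapper restores the old boost-as-score behaviour.

```java
import java.util.function.IntPredicate;

class FilterAsQueryDemo {
  /** Toy Query: membership plus a per-doc score (not Lucene's real API). */
  abstract static class Query {
    abstract boolean matches(int doc);
    abstract float score(int doc);
  }

  /** A Filter only decides membership; run as a Query it scores 0 by default. */
  abstract static class Filter extends Query {
    @Override
    float score(int doc) { return 0f; }
  }

  /** Wrapping restores the old behaviour: every match scores the boost. */
  static class ConstantScoreQuery extends Query {
    final Query in;
    final float boost;
    ConstantScoreQuery(Query in, float boost) { this.in = in; this.boost = boost; }
    @Override boolean matches(int doc) { return in.matches(doc); }
    @Override float score(int doc) { return boost; }
  }

  /** Hypothetical convenience for building a filter from a doc predicate. */
  static Filter fromPredicate(IntPredicate p) {
    return new Filter() {
      @Override boolean matches(int doc) { return p.test(doc); }
    };
  }

  public static void main(String[] args) {
    Filter evenDocs = fromPredicate(doc -> doc % 2 == 0);
    // Used directly as a query: matching docs score 0.
    System.out.println(evenDocs.matches(4) + " score=" + evenDocs.score(4));
    // Wrapped for the old behaviour: matching docs score the boost.
    System.out.println(new ConstantScoreQuery(evenDocs, 2.0f).score(4));
  }
}
```

The point of the design is the default: membership is the only thing a filter author must implement; scoring falls out for free.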



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6191) Spatial 2D faceting (heatmaps)

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316800#comment-14316800
 ] 

ASF subversion and git services commented on LUCENE-6191:
-

Commit 1659042 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659042 ]

LUCENE-6191: fix test bug when given 0-area input

 Spatial 2D faceting (heatmaps)
 --

 Key: LUCENE-6191
 URL: https://issues.apache.org/jira/browse/LUCENE-6191
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.1

 Attachments: LUCENE-6191__Spatial_heatmap.patch, 
 LUCENE-6191__Spatial_heatmap.patch, LUCENE-6191__Spatial_heatmap.patch


 Lucene spatial's PrefixTree (grid) based strategies index data in a way 
 highly amenable to faceting on grid cells to compute a so-called _heatmap_. 
 The underlying code in this patch uses the PrefixTreeFacetCounter utility 
 class, which was recently refactored out of the faceting for 
 NumberRangePrefixTree (LUCENE-5735).  At a low level, the terms (== grid 
 cells) are navigated per-segment, forward only with TermsEnum.seek, so it's 
 pretty quick and furthermore requires no extra caches and no docvalues.  
 Ideally you should use QuadPrefixTree (or Flex once it comes out) to 
 maximize the number of grid levels, which in turn maximizes the fidelity of 
 choices when you ask for a grid covering a region.  Conveniently, the 
 provided capability returns the data in a 2-D grid of counts, so the caller 
 needn't know a thing about how the data is encoded in the prefix tree.  
 Well, almost... at this point they need to provide a grid level, but I'll 
 soon provide a means of deriving the grid level based on a min/max cell 
 count.
 I recommend QuadPrefixTree with geo=false so that you can provide a square 
 world-bounds (360x360 degrees), which means square grid cells which are more 
 desirable to display than rectangular cells.
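What 2-D grid faceting computes can be sketched as below (a standalone toy that buckets raw points, purely for illustration; the real implementation instead derives the counts by walking indexed grid-cell terms per segment, and the class/method names here are made up):

```java
class HeatmapSketch {
  /**
   * Count points per cell of a rows x cols grid over the square world
   * [min, max) x [min, max). Points are {x, y} pairs inside those bounds.
   */
  static int[][] heatmap(double[][] points, int rows, int cols,
                         double min, double max) {
    int[][] counts = new int[rows][cols];
    double span = max - min;
    for (double[] p : points) {
      // Map the coordinate into a cell index, clamping the max edge.
      int col = (int) Math.min(cols - 1, (p[0] - min) / span * cols);
      int row = (int) Math.min(rows - 1, (p[1] - min) / span * rows);
      counts[row][col]++;
    }
    return counts;
  }

  public static void main(String[] args) {
    double[][] pts = { { -90, -90 }, { 90, 90 }, { 91, 91 } };
    int[][] grid = heatmap(pts, 2, 2, -180, 180);
    System.out.println(grid[0][0] + " " + grid[1][1]);
  }
}
```

The square world-bounds recommendation above is visible here too: with a square domain, each cell of the result grid is itself square.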






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316831#comment-14316831
 ] 

Noble Paul commented on SOLR-6736:
--

[~varunrajput] The syntax followed by your patch is not as specified in the 
description. The syntax is as important as the functionality.

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch, SOLR-6736.patch


 Managing Solr configuration files on ZooKeeper becomes cumbersome when using 
 Solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It would be great to have a request handler that provides an API to manage 
 the configurations, similar to the collections handler, allowing actions 
 like uploading new configurations, linking them to a collection, deleting 
 configurations, etc.
 example : 
 {code}
 # Use the following command to upload a new configset called mynewconf. This 
 # will fail if there is already a conf called 'mynewconf'. The file could be 
 # a jar, zip, or tar file containing all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[jira] [Resolved] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6240.
-
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

 ban @Seed in tests.
 ---

 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6240.patch


 If someone is debugging, they can easily commit the \@Seed annotation by 
 accident, hurting the quality of the test. We should detect this.






[jira] [Updated] (SOLR-6832) Queries be served locally rather than being forwarded to another replica

2015-02-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6832:
-
Attachment: SOLR-6832.patch

[~sachingoyal] It seems like your latest patch was created/tested against 
branch4x rather than trunk? It's better to work against trunk for new features, 
and then we'll back-port the changes as needed. I went ahead and migrated your 
patch to work with trunk and cleaned up a few places in the code. Overall it's 
looking good!

 Queries be served locally rather than being forwarded to another replica
 

 Key: SOLR-6832
 URL: https://issues.apache.org/jira/browse/SOLR-6832
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.2
Reporter: Sachin Goyal
Assignee: Timothy Potter
 Attachments: SOLR-6832.patch, SOLR-6832.patch, SOLR-6832.patch, 
 SOLR-6832.patch


 Currently, I see that the code flow for a query in SolrCloud is as follows:
 For a distributed query:
 SolrCore -> SearchHandler.handleRequestBody() -> HttpShardHandler.submit()
 For a non-distributed query:
 SolrCore -> SearchHandler.handleRequestBody() -> QueryComponent.process()

 For a distributed query, the request is always sent to all the shards, even 
 if the originating SolrCore (handling the original distributed query) is a 
 replica of one of the shards.
 If the originating SolrCore can check itself before sending HTTP requests 
 for any shard, we can probably save some network hopping and gain some 
 performance.

 We can change SearchHandler.handleRequestBody() or HttpShardHandler.submit() 
 to fix this behavior (most likely the former and not the latter).
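The optimization being proposed can be reduced to a small routing decision (a sketch with invented names; the real change would live in SearchHandler/HttpShardHandler and consult live SolrCloud state):

```java
import java.util.*;

class LocalShardRouter {
  private final Set<String> localShards;  // shards this node hosts a replica of

  LocalShardRouter(Set<String> localShards) {
    this.localShards = localShards;
  }

  /**
   * For each shard of a distributed query, prefer serving from the local
   * replica and fall back to an HTTP request to a remote replica. Each
   * locally served shard saves one network round trip.
   */
  List<String> plan(List<String> shards, Map<String, String> remoteReplicaUrl) {
    List<String> result = new ArrayList<>();
    for (String shard : shards) {
      if (localShards.contains(shard)) {
        result.add("local:" + shard);                 // no HTTP hop
      } else {
        result.add("http:" + remoteReplicaUrl.get(shard));
      }
    }
    return result;
  }

  public static void main(String[] args) {
    LocalShardRouter r = new LocalShardRouter(new HashSet<>(Arrays.asList("shard1")));
    System.out.println(r.plan(Arrays.asList("shard1", "shard2"),
        Collections.singletonMap("shard2", "host2:8983")));
  }
}
```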






'ant test' -- calculation for tests.jvms

2015-02-11 Thread Shawn Heisey
If the computer has four CPU cores, running tests via the build system
will set tests.jvms to 3, but if it has three CPU cores, it will set
tests.jvms to 1.

IMHO, this calculation should be adjusted so that a 3-core system gets a
value of 2.  I've been trying to find the code that calculates it, but
I've come up empty so far.
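The behaviour described above, and the suggested tweak, can be modeled like this (a sketch of the observations in this thread, not the actual junit4-ant code):

```java
class JvmsCalc {
  /** Matches the observations above: 4 cores -> 3 forked JVMs, 3 cores -> 1. */
  static int currentJvms(int cores) {
    return cores >= 4 ? cores - 1 : 1;
  }

  /** Suggested adjustment: a 3-core machine would also keep one core free. */
  static int proposedJvms(int cores) {
    return Math.max(1, cores - 1);
  }

  public static void main(String[] args) {
    for (int cores = 1; cores <= 4; cores++) {
      System.out.println(cores + " cores: current=" + currentJvms(cores)
          + " proposed=" + proposedJvms(cores));
    }
  }
}
```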

Does anyone like or hate this idea?

Thanks,
Shawn





[jira] [Updated] (LUCENE-6198) two phase intersection

2015-02-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6198:
-
Attachment: LUCENE-6198.patch

New patch that adds two-phase support to ConjunctionScorer. luceneutil seems 
happy with the patch too:

{noformat}
Task                 QPS baseline (StdDev)    QPS patch (StdDev)    Pct diff
HighPhrase             12.26 (11.3%)            11.89  (5.3%)     -3.0% ( -17% -   15%)
AndHighLow            894.95  (9.5%)           874.08  (2.9%)     -2.3% ( -13% -   11%)
LowPhrase              18.81  (9.2%)            18.51  (4.8%)     -1.6% ( -14% -   13%)
Fuzzy1                 72.76 (12.2%)            71.65  (9.6%)     -1.5% ( -20% -   23%)
MedPhrase              54.31 (11.0%)            53.81  (3.2%)     -0.9% ( -13% -   14%)
LowTerm               806.00 (11.9%)           808.20  (4.5%)      0.3% ( -14% -   18%)
Respell                55.89 (10.2%)            56.57  (4.2%)      1.2% ( -11% -   17%)
OrNotHighLow         1102.88 (11.4%)          1116.63  (4.3%)      1.2% ( -13% -   19%)
LowSpanNear             9.48  (9.5%)             9.61  (4.4%)      1.4% ( -11% -   16%)
LowSloppyPhrase        71.86  (8.8%)            72.89  (3.5%)      1.4% (  -9% -   15%)
MedSloppyPhrase        29.92 (10.3%)            30.35  (4.2%)      1.4% ( -11% -   17%)
MedSpanNear            79.24  (8.6%)            80.39  (3.2%)      1.5% (  -9% -   14%)
IntNRQ                 16.81  (9.4%)            17.06  (6.1%)      1.5% ( -12% -   18%)
HighSloppyPhrase       23.27 (11.6%)            23.64  (8.1%)      1.6% ( -16% -   24%)
OrHighHigh             16.79 (10.6%)            17.08  (7.7%)      1.7% ( -15% -   22%)
OrHighNotLow           84.84 (10.3%)            86.32  (3.2%)      1.7% ( -10% -   17%)
OrNotHighHigh          56.28  (9.4%)            57.30  (1.9%)      1.8% (  -8% -   14%)
HighTerm              123.91 (10.8%)           126.29  (2.8%)      1.9% ( -10% -   17%)
MedTerm               243.44 (11.1%)           248.40  (2.9%)      2.0% ( -10% -   18%)
Wildcard               74.84  (9.9%)            76.36  (3.1%)      2.0% (  -9% -   16%)
OrHighNotHigh          45.48  (9.9%)            46.47  (1.9%)      2.2% (  -8% -   15%)
OrHighLow              79.36 (11.3%)            81.10  (6.5%)      2.2% ( -14% -   22%)
Prefix3                74.29 (10.5%)            75.96  (4.9%)      2.2% ( -11% -   19%)
OrHighNotMed           53.37 (10.7%)            54.62  (2.5%)      2.3% (  -9% -   17%)
PKLookup              266.92 (10.4%)           273.30  (3.4%)      2.4% ( -10% -   18%)
HighSpanNear           19.64 (10.4%)            20.11  (3.0%)      2.4% (  -9% -   17%)
OrNotHighMed          167.57 (11.7%)           171.67  (2.4%)      2.4% ( -10% -   18%)
OrHighMed              72.90 (12.5%)            74.87  (6.6%)      2.7% ( -14% -   24%)
Fuzzy2                 50.70 (13.8%)            52.58  (8.4%)      3.7% ( -16% -   30%)
AndHighMed            160.13 (10.1%)           169.60  (3.4%)      5.9% (  -6% -   21%)
AndHighHigh            69.49  (8.8%)            74.19  (3.3%)      6.8% (  -4% -   20%)
{noformat}

 two phase intersection
 --

 Key: LUCENE-6198
 URL: https://issues.apache.org/jira/browse/LUCENE-6198
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6198.patch, LUCENE-6198.patch, LUCENE-6198.patch


 Currently some scorers have to do a lot of per-document work to determine if 
 a document is a match. The simplest example is a phrase scorer, but there are 
 others (spans, sloppy phrase, geospatial, etc).
 Imagine a conjunction with two MUST clauses, one that is a term that matches 
 all odd documents, another that is a phrase matching all even documents. 
 Today this conjunction will be very expensive, because the zig-zag 
 intersection is reading a ton of useless positions.
 The same problem happens with filteredQuery and anything else that acts like 
 a conjunction.
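The two-phase idea described above can be sketched in plain Java (illustrative only; Lucene's actual API differs): each clause exposes a cheap sorted candidate list (the approximation) plus an expensive confirmation check, and the conjunction zig-zags the cheap lists first so the expensive work runs only on surviving candidates.

```java
import java.util.*;
import java.util.function.IntPredicate;

class TwoPhaseDemo {
  /** One conjunction clause: a cheap sorted candidate list plus an expensive
   *  per-doc check (e.g. reading positions for a phrase). */
  static class Clause {
    final int[] candidates;
    final IntPredicate confirm;
    int confirmCalls;  // how often the expensive check actually ran

    Clause(int[] candidates, IntPredicate confirm) {
      this.candidates = candidates;
      this.confirm = confirm;
    }

    boolean matches(int doc) {
      confirmCalls++;
      return confirm.test(doc);
    }
  }

  /** Zig-zag intersect the cheap candidate lists first; pay for the
   *  expensive checks only on docs present in both approximations. */
  static List<Integer> intersect(Clause a, Clause b) {
    List<Integer> hits = new ArrayList<>();
    int i = 0, j = 0;
    while (i < a.candidates.length && j < b.candidates.length) {
      int da = a.candidates[i], db = b.candidates[j];
      if (da < db) i++;
      else if (da > db) j++;
      else {
        if (a.matches(da) && b.matches(da)) hits.add(da);
        i++; j++;
      }
    }
    return hits;
  }

  public static void main(String[] args) {
    Clause term = new Clause(new int[] {10, 20, 30}, doc -> true);
    Clause phrase = new Clause(new int[] {20, 30, 40}, doc -> doc == 20);
    System.out.println(intersect(term, phrase) + " confirmCalls=" + phrase.confirmCalls);
  }
}
```

In the example, the phrase's expensive check never runs on doc 40, because the term clause's candidate list already rules it out.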






Re: 'ant test' -- calculation for tests.jvms

2015-02-11 Thread Dawid Weiss
 IMHO, this calculation should be adjusted so that a 3-core system gets a 
 value of 2.

A 3-core system? What happened to one of its, ahem, gems? :)

 I've been trying to find the code that calculates it, but I've come up empty 
 so far.

The code to adjust it automatically is in the runner itself, here:

https://github.com/carrotsearch/randomizedtesting/blob/master/junit4-ant/src/main/java/com/carrotsearch/ant/tasks/junit4/JUnit4.java#L1288

Feel free to provide a patch, although I think a 3-core system is not
something many people have. The rationale for decreasing the number
of threads on 4 cores and up is to leave some slack for GC, Ant
itself, etc. Otherwise you can brick the machine.

Dawid




[jira] [Updated] (LUCENE-6226) Allow TermScorer to expose positions, offsets and payloads

2015-02-11 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6226:
--
Attachment: LUCENE-6226.patch

New patch.

Rather than getting positions directly from the Scorer, this goes back to 
Simon's original idea of having a separate per-scorer IntervalIterator.  We 
have an IntervalQuery that will match a document if its child scorers produce 
any matching intervals, and the notion of an IntervalFilter that lets you 
select which intervals match.

Query.createWeight() and IndexSearcher.createNormalizedWeight() take an enum 
based on Adrien's idea.  Scorers that don't support iterators (which at the 
moment is all of them except TermScorer) throw an IllegalArgumentException.  
TermWeight.scorer() will throw an IllegalStateException if the weight has been 
created with DOCS_AND_SCORES_AND_POSITIONS but no positions were indexed.
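One way to picture the IntervalFilter idea (a standalone sketch, not the patch's API; `within` is an invented example filter in the spirit of the range filtering mentioned here): given the intervals a scorer produced for a document, a filter keeps only the acceptable ones, and the document matches if any interval survives.

```java
import java.util.*;
import java.util.function.Predicate;

class IntervalFilterSketch {
  /** An interval of positions [start, end] within one document. */
  static final class Interval {
    final int start, end;
    Interval(int start, int end) { this.start = start; this.end = end; }
  }

  /** The document matches the filtered query if at least one of its
   *  intervals passes the filter. */
  static boolean matches(List<Interval> intervals, Predicate<Interval> filter) {
    for (Interval iv : intervals) {
      if (filter.test(iv)) return true;
    }
    return false;
  }

  /** Example filter: the interval must lie entirely within [min, max]. */
  static Predicate<Interval> within(int min, int max) {
    return iv -> iv.start >= min && iv.end <= max;
  }

  public static void main(String[] args) {
    List<Interval> ivs = Arrays.asList(new Interval(2, 4), new Interval(50, 53));
    System.out.println(matches(ivs, within(0, 10)) + " " + matches(ivs, within(40, 45)));
  }
}
```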

 Allow TermScorer to expose positions, offsets and payloads
 --

 Key: LUCENE-6226
 URL: https://issues.apache.org/jira/browse/LUCENE-6226
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch, 
 LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch









[jira] [Comment Edited] (LUCENE-6226) Allow TermScorer to expose positions, offsets and payloads

2015-02-11 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316615#comment-14316615
 ] 

Alan Woodward edited comment on LUCENE-6226 at 2/11/15 5:43 PM:


New patch.

Rather than getting positions directly from the Scorer, this goes back to 
Simon's original idea of having a separate per-scorer IntervalIterator.  We 
have an IntervalQuery that will match a document if its child scorers produce 
any matching intervals, and the notion of an IntervalFilter that lets you 
select which intervals match.

Query.createWeight() and IndexSearcher.createNormalizedWeight() take an enum 
based on Adrien's idea.  Scorers that don't support iterators (which at the 
moment is all of them except TermScorer) throw an IllegalArgumentException.  
TermWeight.scorer() will throw an IllegalStateException if the weight has been 
created with DOCS_AND_SCORES_AND_POSITIONS but no positions were indexed.

Edit: Meant to add, the patch also includes a RangeFilteredQuery that will only 
match documents that have intervals within a given range, and a couple of 
tests to show how the various bits work.


was (Author: romseygeek):
New patch.

Rather than getting positions directly from the Scorer, this goes back to 
Simon's original idea of having a separate per-scorer IntervalIterator.  We 
have an IntervalQuery that will match a document if its child scorers produce 
any matching intervals, and the notion of an IntervalFilter that lets you 
select which intervals match.

Query.createWeight() and IndexSearcher.createNormalizedWeight() take an enum 
based on Adrien's idea.  Scorers that don't support iterators (which at the 
moment is all of them except TermScorer) throw an IllegalArgumentException.  
TermWeight.scorer() will throw an IllegalStateException if the weight has been 
created with DOCS_AND_SCORES_AND_POSITIONS but no positions were indexed.

 Allow TermScorer to expose positions, offsets and payloads
 --

 Key: LUCENE-6226
 URL: https://issues.apache.org/jira/browse/LUCENE-6226
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch, 
 LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch









Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_31) - Build # 11780 - Failure!

2015-02-11 Thread david.w.smi...@gmail.com
It reproduces; I’m on it.

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Wed, Feb 11, 2015 at 12:30 PM, Policeman Jenkins Server 
jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11780/
 Java: 32bit/jdk1.8.0_31 -server -XX:+UseConcMarkSweepGC

 1 tests failed.
 FAILED:
 org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom {#3
 seed=[6B7EE18F8044BF08:1263454538DCD1B5]}

 Error Message:
 expected:1 but was:0

 Stack Trace:
 java.lang.AssertionError: expected:1 but was:0
 at
 __randomizedtesting.SeedInfo.seed([6B7EE18F8044BF08:1263454538DCD1B5]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:472)
 at org.junit.Assert.assertEquals(Assert.java:456)
 at
 org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:221)
 at
 org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:188)
 at
 org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:201)
 at
 org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
 at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)

[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316798#comment-14316798
 ] 

Uwe Schindler commented on LUCENE-6240:
---

+1

 ban @Seed in tests.
 ---

 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6240.patch


 If someone is debugging, they can easily commit the \@Seed annotation by 
 accident, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316727#comment-14316727
 ] 

Adrien Grand commented on LUCENE-6240:
--

+1

 ban @Seed in tests.
 ---

 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6240.patch


 If someone is debugging, they can easily commit the \@Seed annotation by 
 accident, hurting the quality of the test. We should detect this.






[jira] [Commented] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316823#comment-14316823
 ] 

ASF subversion and git services commented on LUCENE-6240:
-

Commit 1659044 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1659044 ]

LUCENE-6240: ban @Seed in tests

 ban @Seed in tests.
 ---

 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6240.patch


 If someone is debugging, they can easily commit the \@Seed annotation by 
 accident, hurting the quality of the test. We should detect this.






[jira] [Created] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6240:
---

 Summary: ban @Seed in tests.
 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


If someone is debugging, they can easily commit the \@Seed annotation by 
accident, hurting the quality of the test. We should detect this.
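A minimal version of such a check could look like this (a sketch only; the real fix is a source-linting step in the build, and `SeedChecker` is an invented name):

```java
import java.util.*;

class SeedChecker {
  /** Returns 1-based line numbers of any fixed @Seed annotation in a test
   *  source file, so the build can fail before the seed gets committed. */
  static List<Integer> findSeedAnnotations(List<String> sourceLines) {
    List<Integer> hits = new ArrayList<>();
    for (int i = 0; i < sourceLines.size(); i++) {
      if (sourceLines.get(i).trim().startsWith("@Seed")) {
        hits.add(i + 1);
      }
    }
    return hits;
  }

  public static void main(String[] args) {
    List<String> src = Arrays.asList(
        "@Seed(\"6B7EE18F8044BF08\")",
        "public class SomeTest {",
        "}");
    System.out.println(findSeedAnnotations(src));
  }
}
```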






[jira] [Commented] (LUCENE-6030) Add norms patched compression which uses table for most common values

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316676#comment-14316676
 ] 

ASF subversion and git services commented on LUCENE-6030:
-

Commit 1659022 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1659022 ]

LUCENE-6030: remove fixed @Seed

 Add norms patched compression which uses table for most common values
 -

 Key: LUCENE-6030
 URL: https://issues.apache.org/jira/browse/LUCENE-6030
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6030.patch


 We have added the PATCHED norms sub-format in Lucene 5.0, which uses a 
 bitset to mark documents that have the most common value (when 97% of the 
 documents have that value).  This works well for fields with one predominant 
 value length and a small number of docs with other random values.  But 
 another common case is having a handful of very common value lengths, as 
 with a title field.
 We can use a table (see TABLE_COMPRESSION) to store the most common values 
 and store an ordinal for the other cases, at which point we can look up the 
 value in a secondary patch table.
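The encoding idea can be sketched as a toy in-memory model (nothing like the codec's actual on-disk layout; all names here are illustrative): the few most common values get table ordinals, one extra ordinal marks an exception, and exceptional docs are resolved through a secondary patch table.

```java
import java.util.*;

class PatchedTableSketch {
  final long[] table;                    // most common values; index = ordinal
  final byte[] ords;                     // per-doc ordinal; table.length = "patched"
  final Map<Integer, Long> patches = new HashMap<>();  // doc -> exceptional value

  PatchedTableSketch(long[] values, int maxTableSize) {
    // Put the most frequent values into the table.
    Map<Long, Integer> freq = new HashMap<>();
    for (long v : values) freq.merge(v, 1, Integer::sum);
    List<Map.Entry<Long, Integer>> sorted = new ArrayList<>(freq.entrySet());
    sorted.sort((x, y) -> y.getValue() - x.getValue());
    int n = Math.min(maxTableSize, sorted.size());
    table = new long[n];
    Map<Long, Byte> ordOf = new HashMap<>();
    for (int i = 0; i < n; i++) {
      table[i] = sorted.get(i).getKey();
      ordOf.put(table[i], (byte) i);
    }
    // Common docs get a table ordinal; rare docs go to the patch table.
    ords = new byte[values.length];
    for (int doc = 0; doc < values.length; doc++) {
      Byte ord = ordOf.get(values[doc]);
      if (ord != null) {
        ords[doc] = ord;
      } else {
        ords[doc] = (byte) table.length;  // sentinel: consult the patch table
        patches.put(doc, values[doc]);
      }
    }
  }

  long get(int doc) {
    int ord = ords[doc];
    return ord < table.length ? table[ord] : patches.get(doc);
  }
}
```

Reads for the common case stay a single small-ordinal table lookup; only the rare exceptions pay for the secondary lookup.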






[jira] [Commented] (SOLR-6832) Queries be served locally rather than being forwarded to another replica

2015-02-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316697#comment-14316697
 ] 

Timothy Potter commented on SOLR-6832:
--

Also, I don't think we need to include this parameter in all of the configs, as 
we're trying to get away from bloated configs. So I changed the patch to just 
include in the sample techproducts configs. We'll also need to document this 
parameter in the Solr reference guide.

 Queries be served locally rather than being forwarded to another replica
 

 Key: SOLR-6832
 URL: https://issues.apache.org/jira/browse/SOLR-6832
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.2
Reporter: Sachin Goyal
Assignee: Timothy Potter
 Attachments: SOLR-6832.patch, SOLR-6832.patch, SOLR-6832.patch, 
 SOLR-6832.patch


 Currently, I see that code flow for a query in SolrCloud is as follows:
 For distributed query:
 SolrCore -> SearchHandler.handleRequestBody() -> HttpShardHandler.submit()
 For non-distributed query:
 SolrCore -> SearchHandler.handleRequestBody() -> QueryComponent.process()
 For a distributed query, the request is always sent to all the shards even if 
 the originating SolrCore (handling the original distributed query) is a 
 replica of one of the shards.
 If the original Solr-Core can check itself before sending http requests for 
 any shard, we can probably save some network hopping and gain some 
 performance.
 We can change SearchHandler.handleRequestBody() or HttpShardHandler.submit() 
 to fix this behavior (most likely the former and not the latter).
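The optimization being proposed could be sketched as follows. All names here are illustrative only; the real change would live in SearchHandler/HttpShardHandler, not in code like this.

```python
# Sketch: before fanning out a distributed query, check whether the
# originating node already hosts a replica of a given shard; if so, serve
# that shard's sub-request locally instead of making an HTTP hop.
def pick_replica(shard_replicas, local_node):
    for replica in shard_replicas:
        if replica["node"] == local_node:
            return replica, True     # serve locally, no network hop
    return shard_replicas[0], False  # fall back to a remote replica

shards = {
    "shard1": [{"node": "nodeA", "core": "collection1_shard1_replica1"},
               {"node": "nodeB", "core": "collection1_shard1_replica2"}],
    "shard2": [{"node": "nodeB", "core": "collection1_shard2_replica1"}],
}
plan = {name: pick_replica(replicas, "nodeA") for name, replicas in shards.items()}
assert plan["shard1"][1] is True    # shard1 has a local replica on nodeA
assert plan["shard2"][1] is False   # shard2 must still be queried remotely
```

Only the shards without a local replica would then incur the extra network round trip.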



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6240) ban @Seed in tests.

2015-02-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6240:

Attachment: LUCENE-6240.patch

Patch. You can still use this annotation when debugging, but just don't commit 
it.

precommit / jenkins will fail like this:
{noformat}
[forbidden-apis] Forbidden class/interface/annotation use: 
com.carrotsearch.randomizedtesting.annotations.Seed [Don't commit hardcoded 
seeds]
[forbidden-apis]   in org.apache.lucene.TestDemo (TestDemo.java, annotation on 
class declaration)
[forbidden-apis] Scanned 1118 (and 910 related) class file(s) for forbidden API 
invocations (in 0.42s), 1 error(s).

BUILD FAILED
{noformat}
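For reference, forbidden-apis expresses this kind of ban with a signatures-file entry. The following is a sketch of what such an entry could look like; the exact file name and message wording in the attached patch may differ:

```
@defaultMessage Don't commit hardcoded seeds
com.carrotsearch.randomizedtesting.annotations.Seed
```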

 ban @Seed in tests.
 ---

 Key: LUCENE-6240
 URL: https://issues.apache.org/jira/browse/LUCENE-6240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6240.patch


 If someone is debugging, they can easily accidentally commit the @Seed 
 annotation, hurting the quality of the test. We should detect this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316831#comment-14316831
 ] 

Noble Paul edited comment on SOLR-6736 at 2/11/15 7:31 PM:
---

[~varunrajput] The syntax followed by your patch is not as specified in the 
description. I see no reason to deviate from the plan. The syntax is as 
important as the functionality.  BlobHandler.java implements a similar API  


was (Author: noble.paul):
[~varunrajput] The syntax followed by your patch is not as specified in the 
description. The syntax is as important as the functionality.  BlobHandler.java 
implements a similar API  

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch, SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'. The file could be a 
 jar, zip or tar file which contains all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316831#comment-14316831
 ] 

Noble Paul edited comment on SOLR-6736 at 2/11/15 7:31 PM:
---

[~varunrajput] The syntax followed by your patch is not as specified in the 
description. The syntax is as important as the functionality.  BlobHandler.java 
implements a similar API  


was (Author: noble.paul):
[~varunrajput] The syntax followed by your patch is not as specified in the 
description. The syntax is as important as the functionality

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch, SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'. The file could be a 
 jar, zip or tar file which contains all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6030) Add norms patched compression which uses table for most common values

2015-02-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316702#comment-14316702
 ] 

ASF subversion and git services commented on LUCENE-6030:
-

Commit 1659025 from [~rcmuir] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1659025 ]

LUCENE-6030: remove fixed @Seed

 Add norms patched compression which uses table for most common values
 -

 Key: LUCENE-6030
 URL: https://issues.apache.org/jira/browse/LUCENE-6030
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst
Assignee: Ryan Ernst
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6030.patch


 We have added the PATCHED norms sub format in lucene 50, which uses a bitset 
 to mark documents that have the most common value (when 97% of the documents 
 have that value).  This works well for fields that have a predominant value 
 length, and then a small number of docs with some other random values.  But 
 another common case is having a handful of very common value lengths, like 
 with a title field.
 We can use a table (see TABLE_COMPRESSION) to store the most common values, 
 and save an ordinal for the other case, at which point we can look up in 
 the secondary patch table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-1518) Merge Query and Filter classes

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316785#comment-14316785
 ] 

Uwe Schindler commented on LUCENE-1518:
---

Long time ago :-)

So this looks fine, makes it easy to use Filters as real queries. There is only 
one thing: the score returned is now always 0. If you want to get the old 
behaviour where you get the boost as score, you just have to wrap the Filter 
with ConstantScoreQuery, like it was before?

There is a typo in the description of Filter: "Convenient base class for building 
queries that only perform matching, but no scoring. The scorer produced by such 
queries always returns 0." - I think it should be "returns 0 as score".

One other thing: QueryWrapperFilter is now obsolete, or not?

So looks really fine.

 Merge Query and Filter classes
 --

 Key: LUCENE-1518
 URL: https://issues.apache.org/jira/browse/LUCENE-1518
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 2.4
Reporter: Uwe Schindler
 Fix For: 4.9, Trunk

 Attachments: LUCENE-1518.patch, LUCENE-1518.patch


 This issue presents a patch, that merges Queries and Filters in a way, that 
 the new Filter class extends Query. This would make it possible, to use every 
 filter as a query.
 The new abstract filter class would contain all methods of 
 ConstantScoreQuery, deprecate ConstantScoreQuery. If somebody implements the 
 Filter's getDocIdSet()/bits() methods he has nothing more to do, he could 
 just use the filter as a normal query.
 I do not want to completely convert Filters to ConstantScoreQueries. The idea 
 is to combine Queries and Filters in such a way, that every Filter can 
 automatically be used at all places where a Query can be used (e.g. also 
 alone a search query without any other constraint). For that, the abstract 
 Query methods must be implemented and return a default weight for Filters 
 which is the current ConstantScore Logic. If the filter is used as a real 
 filter (where the API wants a Filter), the getDocIdSet part could be directly 
 used, the weight is useless (as it is currently, too). The constant score 
 default implementation is only used when the Filter is used as a Query (e.g. 
 as direct parameter to Searcher.search()). For the special case of 
 BooleanQueries combining Filters and Queries the idea is, to optimize the 
 BooleanQuery logic in such a way, that it detects if a BooleanClause is a 
 Filter (using instanceof) and then directly uses the Filter API and not take 
 the burden of the ConstantScoreQuery (see LUCENE-1345).
 Here some ideas how to implement Searcher.search() with Query and Filter:
 - User runs Searcher.search() using a Filter as the only parameter. As every 
 Filter is also a ConstantScoreQuery, the query can be executed and returns 
 score 1.0 for all matching documents.
 - User runs Searcher.search() using a Query as the only parameter: No change, 
 all is the same as before
 - User runs Searcher.search() using a BooleanQuery as parameter: If the 
 BooleanQuery does not contain a Query that is subclass of Filter (the new 
 Filter) everything as usual. If the BooleanQuery only contains exactly one 
 Filter and nothing else the Filter is used as a constant score query. If 
 BooleanQuery contains clauses with Queries and Filters the new algorithm 
 could be used: The queries are executed and the results filtered with the 
 filters.
 For the user this has the main advantage: That he can construct his query 
 using a simplified API without thinking about Filters or Queries, you can 
 just combine clauses together. The scorer/weight logic then identifies the 
 cases to use the filter or the query weight API. Just like the query 
 optimizer of a RDB.
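The "every Filter is also a constant-score Query" idea from the description can be sketched as a tiny class model. This is an illustrative pseudomodel only, not the actual Lucene class hierarchy or its Weight/Scorer machinery.

```python
# Sketch: a Filter that extends Query. Used where a Filter is wanted, it just
# yields matching doc ids; used as a standalone query, a default weight scores
# every matching doc with a constant (the boost), i.e. ConstantScoreQuery logic.
class Query:
    def score_docs(self, num_docs):
        raise NotImplementedError

class Filter(Query):
    def __init__(self, boost=1.0):
        self.boost = boost
    def matching_docs(self, num_docs):      # the "filter" API (getDocIdSet)
        raise NotImplementedError
    def score_docs(self, num_docs):         # default constant-score behaviour
        return {doc: self.boost for doc in self.matching_docs(num_docs)}

class EvenDocsFilter(Filter):
    def matching_docs(self, num_docs):
        return [d for d in range(num_docs) if d % 2 == 0]

f = EvenDocsFilter(boost=2.0)
assert f.matching_docs(5) == [0, 2, 4]              # used as a filter
assert f.score_docs(5) == {0: 2.0, 2: 2.0, 4: 2.0}  # used as a query
```

A BooleanQuery-style optimizer could then detect Filter clauses and call `matching_docs` directly, skipping the scoring path entirely.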



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316311#comment-14316311
 ] 

Dawid Weiss commented on LUCENE-6239:
-

Nah, sorry but the work-on-mobile argument is not really convincing. I've done 
a lot of work on constrained platforms and I really don't think anybody who 
embeds Lucene (for indexing or search) on such a platform is doing the right 
thing, unsafe has nothing to do with it -- they'll have more problems than that 
to deal with... 

As for VM profiles... I know what they are. Which motivating element do you think 
is specifically important, other than the one resulting from potentially smaller 
memory load/load time? Because I fail to see anything worth mentioning other 
than that. With jigsaw already shipping you get most of the benefits of 
profiles anyway (these stem from the fact that there's no monolithic rt.jar to 
parse/ map into memory).


 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316385#comment-14316385
 ] 

Robert Muir commented on LUCENE-6239:
-

Yes, as said before I am +1 to your proposal. It's cleaner.

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7084) FreeTextSuggester Nullpointer when building dictionary

2015-02-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-7084:
--
Fix Version/s: (was: 4.10.4)

 FreeTextSuggester Nullpointer when building dictionary
 --

 Key: SOLR-7084
 URL: https://issues.apache.org/jira/browse/SOLR-7084
 Project: Solr
  Issue Type: Bug
  Components: Suggester
Affects Versions: 4.10.2
Reporter: Jan Høydahl
Assignee: Jan Høydahl
 Fix For: Trunk, 5.1

 Attachments: SOLR-7084.patch


 Using {{FreeTextSuggester}}. When starting Solr or reloading core, all 
 suggest requests will fail due to a {{Nullpointer}}. There is a {{HTTP 500}} 
 response code with the following body. Note that the error returned does not 
 have a {{msg}} section but only a trace:
 {code}
 {
   "error":{
     "trace":"java.lang.NullPointerException\n\tat 
 org.apache.lucene.search.suggest.analyzing.FreeTextSuggester.lookup(FreeTextSuggester.java:542)\n\tat 
 org.apache.lucene.search.suggest.analyzing.FreeTextSuggester.lookup(FreeTextSuggester.java:440)\n\tat 
 org.apache.lucene.search.suggest.analyzing.FreeTextSuggester.lookup(FreeTextSuggester.java:429)\n\tat 
 org.apache.solr.spelling.suggest.SolrSuggester.getSuggestions(SolrSuggester.java:199)\n\tat 
 ...
 {code}
 Offending line:
 {code}
   BytesReader bytesReader = fst.getBytesReader();
 {code}
 The fst is null at this time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316257#comment-14316257
 ] 

Robert Muir commented on LUCENE-6239:
-

{quote}
A similar argument could be made about the absolute need to compile and 
restrict the core to compact1 profile... why would you need that in a search 
library if it adds like a few milliseconds time at startup (once)?
{quote}

The main issue there is, people have complained in the past that they can't use 
lucene on e.g. some mobile platform because it used XYZ api. My problem with 
supporting that in the past was that there was no way to test that we used only 
some restricted subset of the JDK apis. 

But now Java 8 has this feature, which allows you to specify the subset; they 
provide this information in the javadocs, the compiler will fail, and all 
the infrastructure is in place, so I think we should only use what we need?

I think claiming that this only saves a few milliseconds is incorrect, perhaps 
you should read the article on the motivation for these profiles:
http://www.oracle.com/technetwork/articles/java/architect-profiles-2227131.html

But these profiles are unrelated to this issue. In this issue I just want to 
remove unnecessary Unsafe calls. It's far more critical because Unsafe is, well, 
Unsafe.

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-6198) two phase intersection

2015-02-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand reopened LUCENE-6198:
--
Lucene Fields:   (was: New)

I'll try to summarize API challenges that have been mentioned or that I can 
think of:

 - should match confirmation be built into DocIdSetIterator (ie. adding a 
matches() method and requiring callers to always verify matches)? While it 
would work, one issue I have is that it would also make simple cases such 
as TermScorer more complicated. So I like having an optional method or marker 
interface better.

 - ideally this would not be intrusive and just an incremental improvement over 
what we currently have today

 - this thing cannot be a marker interface, otherwise wrappers like 
ConstantScoreQuery could not work properly

 - we need to somehow reuse the DocIdSetIterator abstraction for code reuse 
(approximations cannot be a totally different object)

 - one concern was that it should work well for queries and filters, but since 
we are slowly merging both, it would probably be ok to make it work for queries 
only (which potentially means that we could expose methods only on Scorer 
instead of DISI, at least as a start).

 - should we extend DocIdSetIterator and add a 'matches' method, or have 
another class that exposes a DocIdSetIterator 'approximation' and a 'matches' 
method. While the patch on LUCENE-6198 uses option 1, I like the fact that with 
option 2 we do not extend DocIdSetIterator and more clearly separate the 
approximation from the confirmation (like the API proposal on SOLR-7044)

 - in a conjunction disi, should there be a way to configure the order in which 
confirmations should be performed (kind-of similarly to the cost API, by trying 
to confirm the cheapest instances first)? I think so, but we can probably 
delay this problem until later?

Here is a new patch which is very similar to the current one, but with two main 
differences:
 - the approximation DISI has been replaced with a TwoPhaseDocIdSetIterator 
class which exposes an iterator called 'approximation' and a 'boolean 
matches()' method
 - approximation is only exposed on Scorer
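To make the two-phase idea concrete, here is a small sketch of an approximation iterator plus a matches() confirmation, in the spirit of (but deliberately much simpler than) the patch's TwoPhaseDocIdSetIterator:

```python
# Sketch of two-phase intersection: a cheap "approximation" iterator yields
# candidate doc ids, and an expensive matches() check confirms each candidate.
# For a phrase query, e.g., the approximation is the term conjunction and
# matches() reads positions only for docs surviving the zig-zag intersection.
class TwoPhaseIterator:
    def __init__(self, approximation, matches):
        self.approximation = approximation  # iterable of candidate doc ids
        self.matches = matches              # per-doc confirmation, called lazily

def intersect(a, b):
    # Conjunction: intersect the cheap approximations first, then run the
    # expensive confirmations only on the (hopefully small) shared candidates.
    shared = sorted(set(a.approximation) & set(b.approximation))
    return [doc for doc in shared if a.matches(doc) and b.matches(doc)]

odd_docs = TwoPhaseIterator(range(0, 100), lambda d: d % 2 == 1)
phrase = TwoPhaseIterator(range(0, 100, 3), lambda d: d % 6 == 3)
assert intersect(odd_docs, phrase) == list(range(3, 100, 6))
```

The point is that `matches()` never runs on documents the approximations already ruled out, which is exactly the per-document work the issue wants to avoid.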

 two phase intersection
 --

 Key: LUCENE-6198
 URL: https://issues.apache.org/jira/browse/LUCENE-6198
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6198.patch


 Currently some scorers have to do a lot of per-document work to determine if 
 a document is a match. The simplest example is a phrase scorer, but there are 
 others (spans, sloppy phrase, geospatial, etc).
 Imagine a conjunction with two MUST clauses, one that is a term that matches 
 all odd documents, another that is a phrase matching all even documents. 
 Today this conjunction will be very expensive, because the zig-zag 
 intersection is reading a ton of useless positions.
 The same problem happens with filteredQuery and anything else that acts like 
 a conjunction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 5.0.0 RC2

2015-02-11 Thread david.w.smi...@gmail.com
I found two problems, and I’m not sure what to make of them.

First, perhaps the simplest.  I ran it with Java 8 with this at the
command-line (copied from Uwe’s email, inserting my environment variable):

python3 -u dev-tools/scripts/smokeTestRelease.py --test-java8 $JAVA8_HOME
http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469

And I got this:

Java 1.8
JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home
NOTE: output encoding is UTF-8

Load release URL 
http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
...
  unshortened:
http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469/

Test Lucene...
  test basics...
  get KEYS
0.1 MB in 0.69 sec (0.2 MB/sec)
  check changes HTML...
  download lucene-5.0.0-src.tgz...
27.9 MB in 129.06 sec (0.2 MB/sec)
verify md5/sha1 digests
verify sig
verify trust
  GPG: gpg: WARNING: This key is not certified with a trusted signature!
  download lucene-5.0.0.tgz...
64.0 MB in 154.61 sec (0.4 MB/sec)
verify md5/sha1 digests
verify sig
verify trust
  GPG: gpg: WARNING: This key is not certified with a trusted signature!
  download lucene-5.0.0.zip...
73.5 MB in 223.35 sec (0.3 MB/sec)
verify md5/sha1 digests
verify sig
verify trust
  GPG: gpg: WARNING: This key is not certified with a trusted signature!
  unpack lucene-5.0.0.tgz...
verify JAR metadata/identity/no javax.* or java.* classes...
Traceback (most recent call last):
  File "dev-tools/scripts/smokeTestRelease.py", line 1486, in <module>
    main()
  File "dev-tools/scripts/smokeTestRelease.py", line 1431, in main
    smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed,
' '.join(c.test_args))
  File "dev-tools/scripts/smokeTestRelease.py", line 1468, in smokeTest
    unpackAndVerify(java, 'lucene', tmpDir, artifact, svnRevision, version,
testArgs, baseURL)
  File "dev-tools/scripts/smokeTestRelease.py", line 616, in unpackAndVerify
    verifyUnpacked(java, project, artifact, unpackPath, svnRevision,
version, testArgs, tmpDir, baseURL)
  File "dev-tools/scripts/smokeTestRelease.py", line 737, in verifyUnpacked
    checkAllJARs(os.getcwd(), project, svnRevision, version, tmpDir,
baseURL)
  File "dev-tools/scripts/smokeTestRelease.py", line 257, in checkAllJARs
    checkJARMetaData('JAR file %s' % fullPath, fullPath, svnRevision,
version)
  File "dev-tools/scripts/smokeTestRelease.py", line 185, in
checkJARMetaData
    (desc, verify))
RuntimeError: JAR file
/private/tmp/smoke_lucene_5.0.0_1658469_1/unpack/lucene-5.0.0/analysis/common/lucene-analyzers-common-5.0.0.jar
is missing X-Compile-Source-JDK: 1.8 inside its META-INF/MANIFEST.MF

When I executed the above command, my CWD was a trunk checkout. Should that
matter?  It seems unlikely; the specific error references the unpacked
location, not CWD.



I also executed with Java 7; I did this first, actually.  This time, my
JAVA_HOME is set to Java 7 and I ran this from my 5x checkout.  When the
Solr tests ran, I got a particular test failure.  It reproduces, but only
on the 5.0 checkout — not my 5x checkout:

ant test  -Dtestcase=SaslZkACLProviderTest
-Dtests.method=testSaslZkACLProvider -Dtests.seed=1E2F7F6DC94B2138
-Dtests.slow=true -Dtests.locale=hi_IN -Dtests.timezone=ACT
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

Does this trip for anyone else?  Again, use Java 7 and the release branch.

~ David


[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316251#comment-14316251
 ] 

Robert Muir commented on LUCENE-6239:
-

I would love it if we avoided unsafe usage and replaced it with something safer 
like that.

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6198) two phase intersection

2015-02-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6198:
-
Attachment: LUCENE-6198.patch

 two phase intersection
 --

 Key: LUCENE-6198
 URL: https://issues.apache.org/jira/browse/LUCENE-6198
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6198.patch, LUCENE-6198.patch


 Currently some scorers have to do a lot of per-document work to determine if 
 a document is a match. The simplest example is a phrase scorer, but there are 
 others (spans, sloppy phrase, geospatial, etc).
 Imagine a conjunction with two MUST clauses, one that is a term that matches 
 all odd documents, another that is a phrase matching all even documents. 
 Today this conjunction will be very expensive, because the zig-zag 
 intersection is reading a ton of useless positions.
 The same problem happens with filteredQuery and anything else that acts like 
 a conjunction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6311:


Assignee: Timothy Potter

 SearchHandler should use path when no qt or shard.qt parameter is specified
 ---

 Key: SOLR-6311
 URL: https://issues.apache.org/jira/browse/SOLR-6311
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Molloy
Assignee: Timothy Potter
 Attachments: SOLR-6311.patch


 When performing distributed searches, you have to specify shards.qt unless 
 you're on the default /select path for your handler. As this is configurable, 
 even the default search handler could be on another path. The shard requests 
 should thus default to the path if no shards.qt was specified.
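The proposed defaulting rule is simple enough to sketch in one line; the helper name below is hypothetical and the real change would be inside SearchHandler's shard-request construction:

```python
# Sketch: when building a shard sub-request, fall back to the original
# request's handler path if neither shards.qt nor qt was given.
def shard_handler(params, request_path):
    return params.get("shards.qt") or params.get("qt") or request_path

assert shard_handler({}, "/browse") == "/browse"                # default to path
assert shard_handler({"shards.qt": "/select"}, "/browse") == "/select"
```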



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] 5.0.0 RC2

2015-02-11 Thread Ryan Ernst

 And I got this:
 Java 1.8
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home


Did you change your JAVA_HOME to point to java 8 as well (that's what it
looks like since only jdk is listed in that output)? --test-java8 is meant
to take the java 8 home, but your regular JAVA_HOME should stay java 7.

On Wed, Feb 11, 2015 at 6:13 AM, david.w.smi...@gmail.com 
david.w.smi...@gmail.com wrote:

 I found two problems, and I’m not sure what to make of them.

 First, perhaps the simplest.  I ran it with Java 8 with this at the
 command-line (copied from Uwe’s email, inserting my environment variable):

 python3 -u dev-tools/scripts/smokeTestRelease.py --test-java8 $JAVA8_HOME
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469

 And I got this:

 Java 1.8
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home
 NOTE: output encoding is UTF-8

 Load release URL 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
 ...
   unshortened:
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469/

 Test Lucene...
   test basics...
   get KEYS
 0.1 MB in 0.69 sec (0.2 MB/sec)
   check changes HTML...
   download lucene-5.0.0-src.tgz...
 27.9 MB in 129.06 sec (0.2 MB/sec)
 verify md5/sha1 digests
 verify sig
 verify trust
   GPG: gpg: WARNING: This key is not certified with a trusted
 signature!
   download lucene-5.0.0.tgz...
 64.0 MB in 154.61 sec (0.4 MB/sec)
 verify md5/sha1 digests
 verify sig
 verify trust
   GPG: gpg: WARNING: This key is not certified with a trusted
 signature!
   download lucene-5.0.0.zip...
 73.5 MB in 223.35 sec (0.3 MB/sec)
 verify md5/sha1 digests
 verify sig
 verify trust
   GPG: gpg: WARNING: This key is not certified with a trusted
 signature!
   unpack lucene-5.0.0.tgz...
 verify JAR metadata/identity/no javax.* or java.* classes...
 Traceback (most recent call last):
   File dev-tools/scripts/smokeTestRelease.py, line 1486, in module
 main()
   File dev-tools/scripts/smokeTestRelease.py, line 1431, in main
 smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir,
 c.is_signed, ' '.join(c.test_args))
   File dev-tools/scripts/smokeTestRelease.py, line 1468, in smokeTest
 unpackAndVerify(java, 'lucene', tmpDir, artifact, svnRevision,
 version, testArgs, baseURL)
   File dev-tools/scripts/smokeTestRelease.py, line 616, in
 unpackAndVerify
 verifyUnpacked(java, project, artifact, unpackPath, svnRevision,
 version, testArgs, tmpDir, baseURL)
   File dev-tools/scripts/smokeTestRelease.py, line 737, in verifyUnpacked
 checkAllJARs(os.getcwd(), project, svnRevision, version, tmpDir,
 baseURL)
   File dev-tools/scripts/smokeTestRelease.py, line 257, in checkAllJARs
 checkJARMetaData('JAR file %s' % fullPath, fullPath, svnRevision,
 version)
   File dev-tools/scripts/smokeTestRelease.py, line 185, in
 checkJARMetaData
 (desc, verify))
 RuntimeError: JAR file
 /private/tmp/smoke_lucene_5.0.0_1658469_1/unpack/lucene-5.0.0/analysis/common/lucene-analyzers-common-5.0.0.jar
 is missing X-Compile-Source-JDK: 1.8 inside its META-INF/MANIFEST.MF

 When I executed the above command, my CWD was a trunk checkout. Should
 that matter?  It seems unlikely; the specific error references the unpacked
 location, not CWD.



 I also executed with Java 7; I did this first, actually.  This time, my
 JAVA_HOME is set to Java 7 and I ran this from my 5x checkout.  When the
 Solr tests ran, I got a particular test failure.  It reproduces, but only
 on the 5.0 checkout — not my 5x checkout:

 ant test  -Dtestcase=SaslZkACLProviderTest
 -Dtests.method=testSaslZkACLProvider -Dtests.seed=1E2F7F6DC94B2138
 -Dtests.slow=true -Dtests.locale=hi_IN -Dtests.timezone=ACT
 -Dtests.asserts=true -Dtests.file.encoding=UTF-8

 Does this trip for anyone else?  Again, use Java 7 and the release branch.

 ~ David



[jira] [Commented] (LUCENE-6069) compile with compact profiles

2015-02-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316349#comment-14316349
 ] 

Michael McCandless commented on LUCENE-6069:


+1 to compile with compact profiles.

I think in general Lucene should use the most minimal APIs truly needed to get 
indexing and searching done.

E.g., this same Occam's razor philosophy has served us well in pruning back 
the Directory API over time.

Also, this motivation is completely separate from claims that this change might 
help abusive use cases, like mobile.

 compile with compact profiles
 -

 Key: LUCENE-6069
 URL: https://issues.apache.org/jira/browse/LUCENE-6069
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Affects Versions: Trunk
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6069.patch, LUCENE-6069.patch, LUCENE-6069.patch


 If we clean up the 'alignment' calculator in RamUsageEstimator, we can 
 compile core with compact1, and the rest of lucene (except tests) with 
 compact2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316368#comment-14316368
 ] 

Robert Muir commented on LUCENE-6239:
-

{quote}
Which motivating element you think is specifically important other than the one 
resulting from potentially smaller memory load/ load time? Because I fail to 
see anything worthy mentioning other than that. With jigsaw already shipping 
you get most of the benefits of profiles anyway (these stem from the fact that 
there's no monolithic rt.jar to parse/ map into memory).
{quote}

I think Mike's response on that issue already explains my perspective on it. 
And to boot, by doing this I found a test bug as well (use of 
javax.management.Query when it should have been 
org.apache.lucene.search.Query). To me it's just proper Java 8 adoption, to 
define what portions of the JDK we are using. If there is really some feature 
we want in lucene-core that requires compact3 or whatever, that is like a 
warning sign: why do we need such an advanced API to implement search?

But please, this should really be discussed on LUCENE-6069.

And I think your reasoning is slightly biased; we all know the only thing 
standing in the way of this stuff is RamUsageEstimator.

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.






[jira] [Commented] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316242#comment-14316242
 ] 

David Smiley commented on SOLR-6311:


bq. i think we should just bite the bullet on making the switch...

+1 to that!  It should have been done this way to begin with. I consider it a 
bug that distributed requests were apparently hard-coded to use /select.

 SearchHandler should use path when no qt or shard.qt parameter is specified
 ---

 Key: SOLR-6311
 URL: https://issues.apache.org/jira/browse/SOLR-6311
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Molloy
 Attachments: SOLR-6311.patch


 When performing distributed searches, you have to specify shards.qt unless 
 you're on the default /select path for your handler. As this is configurable, 
 even the default search handler could be on another path. The shard requests 
 should thus default to the path if no shards.qt was specified.






[jira] [Created] (SOLR-7099) bin/solr -cloud mode should launch a local ZK in its own process using zkcli's runzk option (instead of embedded in the first Solr process)

2015-02-11 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-7099:


 Summary: bin/solr -cloud mode should launch a local ZK in its own 
process using zkcli's runzk option (instead of embedded in the first Solr 
process)
 Key: SOLR-7099
 URL: https://issues.apache.org/jira/browse/SOLR-7099
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Timothy Potter


Embedded ZK is great for unit testing and quick examples, but as soon as 
someone wants to restart their cluster, embedded mode causes a lot of issues, 
esp. if you restart the node that embeds ZK. Of course we don't want users to 
have to install ZooKeeper just to get started with Solr either. 

Thankfully, ZkCLI already includes a way to launch ZooKeeper in its own process 
but still within the Solr directory structure. We can hide the details and 
complexity of working with ZK in the bin/solr script. The solution to this 
should still make it very clear that this is for getting started / examples and 
not to be used in production.






RE: [VOTE] 5.0.0 RC2

2015-02-11 Thread Uwe Schindler
I think the problem is the inverse:

 

RuntimeError: JAR file 
/private/tmp/smoke_lucene_5.0.0_1658469_1/unpack/lucene-5.0.0/analysis/common/lucene-analyzers-common-5.0.0.jar
 is missing X-Compile-Source-JDK: 1.8 inside its META-INF/MANIFEST.MF

 

The problem: the smoke tester expects to find Java 1.8 in the JAR file’s 
metadata. Shalin said he runs trunk’s smoke tester on the 5.0 branch. This 
will break here, because trunk’s smoke tester expects Lucene compiled with 
Java 8.

 

Uwe

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

 http://www.thetaphi.de/ http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Ryan Ernst [mailto:r...@iernst.net] 
Sent: Wednesday, February 11, 2015 3:27 PM
To: dev@lucene.apache.org
Subject: Re: [VOTE] 5.0.0 RC2

 

And I got this:
Java 1.8 
JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home

 

Did you change your JAVA_HOME to point to java 8 as well (that's what it looks 
like since only jdk is listed in that output)? --test-java8 is meant to take 
the java 8 home, but your regular JAVA_HOME should stay java 7. 

 

On Wed, Feb 11, 2015 at 6:13 AM, david.w.smi...@gmail.com 
david.w.smi...@gmail.com wrote:

I found two problems, and I’m not sure what to make of them.

 

First, perhaps the simplest.  I ran it with Java 8 with this at the 
command-line (copied from Uwe’s email, inserting my environment variable):

 

python3 -u dev-tools/scripts/smokeTestRelease.py --test-java8 $JAVA8_HOME 
http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469

 

And I got this:

 

Java 1.8 
JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home

NOTE: output encoding is UTF-8

 

Load release URL 
http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469 ...

  unshortened: 
http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469/

 

Test Lucene...

  test basics...

  get KEYS

0.1 MB in 0.69 sec (0.2 MB/sec)

  check changes HTML...

  download lucene-5.0.0-src.tgz...

27.9 MB in 129.06 sec (0.2 MB/sec)

verify md5/sha1 digests

verify sig

verify trust

  GPG: gpg: WARNING: This key is not certified with a trusted signature!

  download lucene-5.0.0.tgz...

64.0 MB in 154.61 sec (0.4 MB/sec)

verify md5/sha1 digests

verify sig

verify trust

  GPG: gpg: WARNING: This key is not certified with a trusted signature!

  download lucene-5.0.0.zip...

73.5 MB in 223.35 sec (0.3 MB/sec)

verify md5/sha1 digests

verify sig

verify trust

  GPG: gpg: WARNING: This key is not certified with a trusted signature!

  unpack lucene-5.0.0.tgz...

verify JAR metadata/identity/no javax.* or java.* classes...

Traceback (most recent call last):

  File dev-tools/scripts/smokeTestRelease.py, line 1486, in module

main()

  File dev-tools/scripts/smokeTestRelease.py, line 1431, in main

smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' 
'.join(c.test_args))

  File dev-tools/scripts/smokeTestRelease.py, line 1468, in smokeTest

unpackAndVerify(java, 'lucene', tmpDir, artifact, svnRevision, version, 
testArgs, baseURL)

  File dev-tools/scripts/smokeTestRelease.py, line 616, in unpackAndVerify

verifyUnpacked(java, project, artifact, unpackPath, svnRevision, version, 
testArgs, tmpDir, baseURL)

  File dev-tools/scripts/smokeTestRelease.py, line 737, in verifyUnpacked

checkAllJARs(os.getcwd(), project, svnRevision, version, tmpDir, baseURL)

  File dev-tools/scripts/smokeTestRelease.py, line 257, in checkAllJARs

checkJARMetaData('JAR file %s' % fullPath, fullPath, svnRevision, version)

  File dev-tools/scripts/smokeTestRelease.py, line 185, in checkJARMetaData

(desc, verify))

RuntimeError: JAR file 
/private/tmp/smoke_lucene_5.0.0_1658469_1/unpack/lucene-5.0.0/analysis/common/lucene-analyzers-common-5.0.0.jar
 is missing X-Compile-Source-JDK: 1.8 inside its META-INF/MANIFEST.MF

 

When I executed the above command, my CWD was a trunk checkout. Should that 
matter?  It seems unlikely; the specific error references the unpacked 
location, not CWD.

 

 

 

I also executed with Java 7; I did this first, actually.  This time, my 
JAVA_HOME is set to Java 7 and I ran this from my 5x checkout.  When the Solr 
tests ran, I got a particular test failure.  It reproduces, but only on the 5.0 
checkout — not my 5x checkout:

 

ant test  -Dtestcase=SaslZkACLProviderTest -Dtests.method=testSaslZkACLProvider 
-Dtests.seed=1E2F7F6DC94B2138 -Dtests.slow=true -Dtests.locale=hi_IN 
-Dtests.timezone=ACT -Dtests.asserts=true -Dtests.file.encoding=UTF-8

 

Does this trip for anyone else?  Again, use Java 7 and the release branch.

 

~ David

 



[jira] [Updated] (LUCENE-6069) compile with compact profiles

2015-02-11 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6069:

Attachment: LUCENE-6069.patch

Updated patch: this one is a compromise; I use Uwe's changes for 
RamUsageEstimator plus my changes for build and tests.

Tests pass.

 compile with compact profiles
 -

 Key: LUCENE-6069
 URL: https://issues.apache.org/jira/browse/LUCENE-6069
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Affects Versions: Trunk
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6069.patch, LUCENE-6069.patch, LUCENE-6069.patch, 
 LUCENE-6069.patch


 If we clean up the 'alignment' calculator in RamUsageEstimator, we can 
 compile core with compact1, and the rest of lucene (except tests) with 
 compact2.






Re: [VOTE] 5.0.0 RC2

2015-02-11 Thread Shalin Shekhar Mangar
The test failure is recorded in
https://issues.apache.org/jira/browse/SOLR-6915. We can safely ignore it.

On Wed, Feb 11, 2015 at 7:43 PM, david.w.smi...@gmail.com 
david.w.smi...@gmail.com wrote:

 I found two problems, and I’m not sure what to make of them.

 First, perhaps the simplest.  I ran it with Java 8 with this at the
 command-line (copied from Uwe’s email, inserting my environment variable):

 python3 -u dev-tools/scripts/smokeTestRelease.py --test-java8 $JAVA8_HOME
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469

 And I got this:

 Java 1.8
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home
 NOTE: output encoding is UTF-8

 Load release URL 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
 ...
   unshortened:
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469/

 Test Lucene...
   test basics...
   get KEYS
 0.1 MB in 0.69 sec (0.2 MB/sec)
   check changes HTML...
   download lucene-5.0.0-src.tgz...
 27.9 MB in 129.06 sec (0.2 MB/sec)
 verify md5/sha1 digests
 verify sig
 verify trust
   GPG: gpg: WARNING: This key is not certified with a trusted
 signature!
   download lucene-5.0.0.tgz...
 64.0 MB in 154.61 sec (0.4 MB/sec)
 verify md5/sha1 digests
 verify sig
 verify trust
   GPG: gpg: WARNING: This key is not certified with a trusted
 signature!
   download lucene-5.0.0.zip...
 73.5 MB in 223.35 sec (0.3 MB/sec)
 verify md5/sha1 digests
 verify sig
 verify trust
   GPG: gpg: WARNING: This key is not certified with a trusted
 signature!
   unpack lucene-5.0.0.tgz...
 verify JAR metadata/identity/no javax.* or java.* classes...
 Traceback (most recent call last):
   File dev-tools/scripts/smokeTestRelease.py, line 1486, in module
 main()
   File dev-tools/scripts/smokeTestRelease.py, line 1431, in main
 smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir,
 c.is_signed, ' '.join(c.test_args))
   File dev-tools/scripts/smokeTestRelease.py, line 1468, in smokeTest
 unpackAndVerify(java, 'lucene', tmpDir, artifact, svnRevision,
 version, testArgs, baseURL)
   File dev-tools/scripts/smokeTestRelease.py, line 616, in
 unpackAndVerify
 verifyUnpacked(java, project, artifact, unpackPath, svnRevision,
 version, testArgs, tmpDir, baseURL)
   File dev-tools/scripts/smokeTestRelease.py, line 737, in verifyUnpacked
 checkAllJARs(os.getcwd(), project, svnRevision, version, tmpDir,
 baseURL)
   File dev-tools/scripts/smokeTestRelease.py, line 257, in checkAllJARs
 checkJARMetaData('JAR file %s' % fullPath, fullPath, svnRevision,
 version)
   File dev-tools/scripts/smokeTestRelease.py, line 185, in
 checkJARMetaData
 (desc, verify))
 RuntimeError: JAR file
 /private/tmp/smoke_lucene_5.0.0_1658469_1/unpack/lucene-5.0.0/analysis/common/lucene-analyzers-common-5.0.0.jar
 is missing X-Compile-Source-JDK: 1.8 inside its META-INF/MANIFEST.MF

 When I executed the above command, my CWD was a trunk checkout. Should
 that matter?  It seems unlikely; the specific error references the unpacked
 location, not CWD.



 I also executed with Java 7; I did this first, actually.  This time, my
 JAVA_HOME is set to Java 7 and I ran this from my 5x checkout.  When the
 Solr tests ran, I got a particular test failure.  It reproduces, but only
 on the 5.0 checkout — not my 5x checkout:

 ant test  -Dtestcase=SaslZkACLProviderTest
 -Dtests.method=testSaslZkACLProvider -Dtests.seed=1E2F7F6DC94B2138
 -Dtests.slow=true -Dtests.locale=hi_IN -Dtests.timezone=ACT
 -Dtests.asserts=true -Dtests.file.encoding=UTF-8

 Does this trip for anyone else?  Again, use Java 7 and the release branch.

 ~ David




-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Commented] (LUCENE-6069) compile with compact profiles

2015-02-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316387#comment-14316387
 ] 

Uwe Schindler commented on LUCENE-6069:
---

+1 To Robert's patch here, with the reflection I provided as separate patch!

The reflection will be needed for the nuke of Unsafe anyway, because we need 
HotspotMXBean to detect the reference size (which is still important; I just 
reviewed all the code that uses the constant from RAMUsageEstimator). 
If one runs a compact profile on 64 bit without the HotspotMXBean, it will just 
assume that a reference pointer is 8 bytes. If the JVM uses compressed Oops, we 
then calculate the size of arrays wrong (by a factor of 2).

In my opinion: we should have good constants for huge systems, because with 
heap sizes of several gigabytes, the memory reporting by 
Lucene/Solr/Elasticsearch should not be wrong by a factor of 2 for some 
structures. I don't care about static object headers or alignments; if they are 
wrong, it has less effect (because we generally have few huge objects in 
FST/Filter/DocValues).

If you use compact profile on some platform, the ram usage reporting is in most 
cases not so interesting, because you are already limited by the platform...
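A back-of-envelope sketch of the factor-of-2 effect described above (header size and array length are illustrative assumptions, not JVM-measured values):

```python
HEADER_BYTES = 16  # assumed array header size, for illustration only

def ref_array_bytes(length, ref_size, header=HEADER_BYTES):
    """Estimated shallow size of an Object[] given a per-reference size."""
    return header + length * ref_size

n = 1_000_000
assumed = ref_array_bytes(n, 8)  # fallback assumption: 8-byte references
actual = ref_array_bytes(n, 4)   # compressed oops: 4-byte references
print(assumed / actual)          # approaches 2.0 for large arrays
```

The header term becomes negligible for large arrays, so a wrong per-reference constant dominates the estimate.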

 compile with compact profiles
 -

 Key: LUCENE-6069
 URL: https://issues.apache.org/jira/browse/LUCENE-6069
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Affects Versions: Trunk
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: Trunk

 Attachments: LUCENE-6069.patch, LUCENE-6069.patch, LUCENE-6069.patch


 If we clean up the 'alignment' calculator in RamUsageEstimator, we can 
 compile core with compact1, and the rest of lucene (except tests) with 
 compact2.






[jira] [Commented] (SOLR-7099) bin/solr -cloud mode should launch a local ZK in its own process using zkcli's runzk option (instead of embedded in the first Solr process)

2015-02-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316895#comment-14316895
 ] 

Anshum Gupta commented on SOLR-7099:


About the bin/solr zk call, I think it might make sense to have a more generic 
name. That 1. hides the implementation detail of running zk for anyone who 
doesn't want/need to know. 2. Gives us the freedom to replace the configuration 
manager (zk) with something else, if it ever comes to that.

and yes, totally +1 for this change.

 bin/solr -cloud mode should launch a local ZK in its own process using 
 zkcli's runzk option (instead of embedded in the first Solr process)
 ---

 Key: SOLR-7099
 URL: https://issues.apache.org/jira/browse/SOLR-7099
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Timothy Potter

 Embedded ZK is great for unit testing and quick examples, but as soon as 
 someone wants to restart their cluster, embedded mode causes a lot of issues, 
 esp. if you restart the node that embeds ZK. Of course we don't want users to 
 have to install ZooKeeper just to get started with Solr either. 
 Thankfully, ZkCLI already includes a way to launch ZooKeeper in its own 
 process but still within the Solr directory structure. We can hide the 
 details and complexity of working with ZK in the bin/solr script. The 
 solution to this should still make it very clear that this is for getting 
 started / examples and not to be used in production.






[jira] [Updated] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6311:
-
Attachment: SOLR-6311.patch

Patch that implements the conditional logic based on the luceneMatchVersion. 
I'm intending this fix to be included in 5.1. The 
{{TermVectorComponentDistributedTest}} test now works without specifying the 
{{shards.qt}} query param. Feedback welcome!
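The version-gated fallback could look roughly like this sketch (hypothetical pseudocode of the rule, not the actual SearchHandler change; the function name and the 5.1 cutoff placement are assumptions):

```python
def shard_handler_path(params, handler_path, lucene_match_version,
                       legacy_default="/select"):
    """Pick the path used for shard sub-requests."""
    qt = params.get("shards.qt")
    if qt:
        return qt                # explicit shards.qt always wins
    if lucene_match_version < (5, 1):
        return legacy_default    # preserve the old hard-coded /select
    return handler_path          # new behavior: default to the handler's path

print(shard_handler_path({}, "/tvrh", (5, 1)))                        # /tvrh
print(shard_handler_path({}, "/tvrh", (4, 9)))                        # /select
print(shard_handler_path({"shards.qt": "/custom"}, "/tvrh", (5, 1)))  # /custom
```

Gating on luceneMatchVersion keeps existing 4.x configs behaving as before while new configs pick up the path-based default.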

 SearchHandler should use path when no qt or shard.qt parameter is specified
 ---

 Key: SOLR-6311
 URL: https://issues.apache.org/jira/browse/SOLR-6311
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Molloy
Assignee: Timothy Potter
 Attachments: SOLR-6311.patch, SOLR-6311.patch


 When performing distributed searches, you have to specify shards.qt unless 
 you're on the default /select path for your handler. As this is configurable, 
 even the default search handler could be on another path. The shard requests 
 should thus default to the path if no shards.qt was specified.






[jira] [Commented] (SOLR-7097) Update other Document in DocTransformer

2015-02-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316875#comment-14316875
 ] 

Noble Paul commented on SOLR-7097:
--

I could not really understand the use case. Can you give out a PoC patch?

 Update other Document in DocTransformer
 ---

 Key: SOLR-7097
 URL: https://issues.apache.org/jira/browse/SOLR-7097
 Project: Solr
  Issue Type: Improvement
Reporter: yuanyun.cn
Priority: Minor
  Labels: searcher, transformers

 Solr DocTransformer is good, but it only allows us to change the current 
 document: add, remove, or update fields.
 It would be great if we could update another document (especially a previous 
 one), or better, delete a doc (especially useful during tests) or add a doc in 
 DocTransformer.
 Use case:
 We can use flat group mode (group.main=true) to put parent and child close to 
 each other (parent first), then use a DocTransformer to update the parent 
 document when accessing its child document.
 Some thoughts about implementation:
 org.apache.solr.response.TextResponseWriter.writeDocuments(String, 
 ResultContext, ReturnFields)
 When cachMode=true, in the for loop, after transform, we can store the 
 SolrDocument in a list and write these docs at the end.
 cachMode = req.getParams().getBool("cachMode", false);
 SolrDocument[] cachedDocs = new SolrDocument[sz];
 for (int i = 0; i < sz; i++) {
  SolrDocument sdoc = toSolrDocument(doc);
  if (transformer != null) {
   transformer.transform(sdoc, id);
  }
  if (cachMode) {
   cachedDocs[i] = sdoc;
  } else {
   writeSolrDocument(null, sdoc, returnFields, i);
  }
 }
 if (transformer != null) {
  transformer.setContext(null);
 }
 if (cachMode) {
  for (int i = 0; i < sz; i++) {
   writeSolrDocument(null, cachedDocs[i], returnFields, i);
  }
 }
 writeEndDocumentList();






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316943#comment-14316943
 ] 

Mark Miller commented on SOLR-6736:
---

How does this address the security concerns raised in the issue 
[~erickerickson] was working on to allow uploading config from the UI?

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch, SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'. The file could be a 
 jar, zip, or tar file which contains all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[jira] [Commented] (SOLR-6971) TestRebalanceLeaders fails too often.

2015-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316944#comment-14316944
 ] 

Mark Miller commented on SOLR-6971:
---

Thanks Erick - I'll try to get to this soon.

 TestRebalanceLeaders fails too often.
 -

 Key: SOLR-6971
 URL: https://issues.apache.org/jira/browse/SOLR-6971
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Erick Erickson
Priority: Minor
 Attachments: SOLR-6971-dumper.patch


 I see this fail too much - I've seen 3 different fail types so far.






[jira] [Commented] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2015-02-11 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316967#comment-14316967
 ] 

Steve Molloy commented on SOLR-6311:


bq. Definitely not a bug. you have to remember the context of how distributed 
search was added 

Thanks for the history; it makes it clearer why it was needed.

bq. But now is not then

Indeed, now distributed/SolrCloud is pretty much the norm...

So anyhow, the patch with logic on the version makes sense to me, so +1. 

 SearchHandler should use path when no qt or shard.qt parameter is specified
 ---

 Key: SOLR-6311
 URL: https://issues.apache.org/jira/browse/SOLR-6311
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Molloy
Assignee: Timothy Potter
 Attachments: SOLR-6311.patch, SOLR-6311.patch


 When performing distributed searches, you have to specify shards.qt unless 
 you're on the default /select path for your handler. As this is configurable, 
 even the default search handler could be on another path. The shard requests 
 should thus default to the path if no shards.qt was specified.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2628 - Still Failing

2015-02-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2628/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51989/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51989/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([38537C0A8CA1F23:8BD1081A063672DB]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316958#comment-14316958
 ] 

Erick Erickson commented on SOLR-6736:
--

A bit of clarification: I'm not actively working on that, assuming it'll all 
be superseded by the managed stuff; it's assigned to me just to keep from 
losing track of it.

But this is a very interesting point. The objection is that being able to upload 
arbitrary XML from a client is a security vulnerability, per Uwe's comments 
here: https://issues.apache.org/jira/browse/SOLR-5287 (about halfway down, 
dated 30-Nov-2013). It's not clear to me whether this capability is similarly 
exposed, although I rather assume it is. Sorry for not bringing this up earlier.

We need to be sure of this before committing.


 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch, SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 # Use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a configset called 'mynewconf'. The file could be 
 a jar, zip, or tar file which contains all the files for this configset.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf
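For illustration, the two GET listings above might be exercised like this (a sketch only; the /admin/configs handler is the proposal in this issue, not an existing API, and the host and configset name are just the examples from the description, so this requires a server with the handler to actually run):

```shell
# List all available configsets (proposed endpoint)
curl http://localhost:8983/solr/admin/configs

# List the files inside the 'mynewconf' configset (proposed endpoint)
curl http://localhost:8983/solr/admin/configs/mynewconf
```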



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316969#comment-14316969
 ] 

Anshum Gupta commented on SOLR-6736:


Thanks for bringing it up, Mark and Erick. Here are a few things:
# This would not allow linking configs to collections, only 
uploading/replacing/(maybe) deleting configsets.
# Uploading a configset shouldn't be an issue unless the configset is actually 
used.
# The configs API allows, or at least is moving toward, updating the config 
via the API.
# This issue doesn't involve exposing anything via the Admin UI.

I may be missing something, but so far I think this is along similar lines 
as the config/blob storage API.

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch, SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 # Use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a configset called 'mynewconf'. The file could be 
 a jar, zip, or tar file which contains all the files for this configset.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7096) The Solr service script doesn't like SOLR_HOME pointing to a path containing a symlink

2015-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317069#comment-14317069
 ] 

Hoss Man commented on SOLR-7096:


if/when this behavior is changed, the mention of symbolic links on this ref 
guide page should be removed...
https://cwiki.apache.org/confluence/display/solr/Upgrading+a+Solr+4.x+Cluster+to+Solr+5.0

 The Solr service script doesn't like SOLR_HOME pointing to a path containing 
 a symlink
 --

 Key: SOLR-7096
 URL: https://issues.apache.org/jira/browse/SOLR-7096
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Timothy Potter
 Fix For: 5.1


 While documenting the process to upgrade a SolrCloud cluster from 4.x to 5.0, 
 I discovered that the init.d/solr script doesn't like the SOLR_HOME pointing 
 to a path that contains a symlink. Work-around is to use an absolute path



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7102) bin/solr should activate cloud mode if ZK_HOST is set

2015-02-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317072#comment-14317072
 ] 

Hoss Man commented on SOLR-7102:


if/when this behavior is changed, the Note box regarding SOLR_MODE on this 
page should be removed...
https://cwiki.apache.org/confluence/display/solr/Upgrading+a+Solr+4.x+Cluster+to+Solr+5.0

 bin/solr should activate cloud mode if ZK_HOST is set
 -

 Key: SOLR-7102
 URL: https://issues.apache.org/jira/browse/SOLR-7102
 Project: Solr
  Issue Type: Improvement
Reporter: Timothy Potter
Assignee: Timothy Potter

 You have to set SOLR_MODE=solrcloud in /var/solr/solr.in.sh to get the 
 init.d/solr script to start Solr in cloud mode (since it doesn't pass -c). 
 Instead, the bin/solr script should assume cloud mode whenever 
 ZK_HOST is set.
 This mainly affects the /etc/init.d/solr script because it doesn't pass the 
 -c | -cloud option. If working with bin/solr directly, you can just pass 
 -c explicitly to get cloud mode.
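The proposed behavior can be sketched as a tiny shell check (an assumption about how bin/solr might decide, not the actual script logic; the ZK_HOST value is just an illustration):

```shell
# Hypothetical sketch: infer cloud mode from ZK_HOST, as this issue proposes.
ZK_HOST="zk1:2181,zk2:2181/solr"   # would normally come from solr.in.sh
SOLR_MODE=""

if [ -n "$ZK_HOST" ]; then
  # ZK_HOST is set, so assume SolrCloud without requiring SOLR_MODE=solrcloud
  SOLR_MODE="solrcloud"
fi

echo "$SOLR_MODE"
```

With ZK_HOST set as above, this prints `solrcloud`; with ZK_HOST empty, it prints nothing and standalone mode would apply.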



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6239:
--
Attachment: LUCENE-6239.patch

New patch. I did additional comparisons with the Unsafe-detected constants. I 
tested various JVMs; all are consistent now.

I changed the code a little so 32-bit and 64-bit JVMs are handled 
separately. For 32-bit JVMs it does not even try to get the alignment size or 
the compressed-oops value. I also fixed the array header; on 32-bit it is not 
aligned.

I think it's ready, maybe [~dweiss] can have a look, too.

About backporting: we can do this, but reference-size detection would not work 
correctly with IBM J9, so it would not detect compressed references there and 
would always assume 64 bits. But J9 does not enable compressed refs by default...

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6239.patch, LUCENE-6239.patch


 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6239) Remove RAMUsageEstimator Unsafe calls

2015-02-11 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317101#comment-14317101
 ] 

Robert Muir commented on LUCENE-6239:
-

+1 to backport as well. If the reference size is wrong on IBM J9 it won't have a 
huge impact on the ramBytesUsed of Lucene's data structures, as we have all 
mentioned on this issue.

Furthermore, I don't know of a configuration of J9 that actually works right 
now; you will get false NPEs in the norms writer when indexing, etc.

 Remove RAMUsageEstimator Unsafe calls
 -

 Key: LUCENE-6239
 URL: https://issues.apache.org/jira/browse/LUCENE-6239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6239.patch, LUCENE-6239.patch


 This is unnecessary risk. We should remove this stuff, it is not needed here. 
 We are a search engine, not a ram calculator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7099) bin/solr -cloud mode should launch a local ZK in its own process using zkcli's runzk option (instead of embedded in the first Solr process)

2015-02-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317151#comment-14317151
 ] 

Anshum Gupta commented on SOLR-7099:


Sure, it was more of an idea than anything.

 bin/solr -cloud mode should launch a local ZK in its own process using 
 zkcli's runzk option (instead of embedded in the first Solr process)
 ---

 Key: SOLR-7099
 URL: https://issues.apache.org/jira/browse/SOLR-7099
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Timothy Potter

 Embedded ZK is great for unit testing and quick examples, but as soon as 
 someone wants to restart their cluster, embedded mode causes a lot of issues, 
 esp. if you restart the node that embeds ZK. Of course we don't want users to 
 have to install ZooKeeper just to get started with Solr either. 
 Thankfully, ZkCLI already includes a way to launch ZooKeeper in its own 
 process but still within the Solr directory structure. We can hide the 
 details and complexity of working with ZK in the bin/solr script. The 
 solution to this should still make it very clear that this is for getting 
 started / examples and not to be used in production.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7019) Can't change the field key for interval faceting

2015-02-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-7019.
-
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk
 Assignee: Tomás Fernández Löbbe

 Can't change the field key for interval faceting
 

 Key: SOLR-7019
 URL: https://issues.apache.org/jira/browse/SOLR-7019
 Project: Solr
  Issue Type: Bug
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
 Fix For: Trunk, 5.1

 Attachments: SOLR-7019.patch, SOLR-7019.patch


 Right now it is possible to set the key for each interval when using interval 
 faceting, but it's not possible to change the field key. For example:
 Supported: 
 {noformat}
 ...facet.interval=popularity
 facet.interval.set={!key=bad}[0,5]
 facet.interval.set={!key=good}[5,*]
 facet=true
 {noformat}
 Not Supported: 
 {noformat}
 ...facet.interval={!key=popularity}some_field
 facet.interval.set={!key=bad}[0,5]
 facet.interval.set={!key=good}[5,*]
 facet=true
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1949 - Failure!

2015-02-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1949/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup

Error Message:
no segments* file found in 
SimpleFSDirectory@/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandlerBackup
 
195973B0FE6A4B35-001/solr-instance-001/collection1/data/snapshot.20150212072808752
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@17544c97: files: 
[_0.fnm, _0.nvm]

Stack Trace:
org.apache.lucene.index.IndexNotFoundException: no segments* file found in 
SimpleFSDirectory@/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandlerBackup
 
195973B0FE6A4B35-001/solr-instance-001/collection1/data/snapshot.20150212072808752
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@17544c97: files: 
[_0.fnm, _0.nvm]
at 
__randomizedtesting.SeedInfo.seed([195973B0FE6A4B35:58D253D5D9D4B87A]:0)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:632)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:68)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
at 
org.apache.solr.handler.TestReplicationHandlerBackup.verify(TestReplicationHandlerBackup.java:139)
at 
org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup(TestReplicationHandlerBackup.java:205)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317242#comment-14317242
 ] 

Mark Miller commented on SOLR-6736:
---

bq. I think this is on similar lines as the config/blob storage API.

Maybe that's a security issue too :)

Certainly this issue appears to be.

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch, SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 # Use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a configset called 'mynewconf'. The file could be 
 a jar, zip, or tar file which contains all the files for this configset.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 'ant test' -- calculation for tests.jvms

2015-02-11 Thread Uwe Schindler
The easiest way to work around this is to put a lucene.build.properties file in 
your home directory and specify tests.jvms there.
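For example, such a file might look like this (the property names are the ones the Lucene build reads; the values are just an illustration, pick what suits your machine):

```properties
# ~/lucene.build.properties
tests.jvms=2
tests.slow=false
```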

I have this next to other settings like disabling slow tests. The Jenkins 
machines are set up the same way.

On 12 February 2015 at 00:05:25 CET, Shawn Heisey apa...@elyograg.org wrote:
On 2/11/2015 12:42 PM, Dawid Weiss wrote:
 IMHO, this calculation should be adjusted so that a 3-core system
gets a value of 2.
 A 3-core system? What happened to one of its, ahem, gems? :)

This is the processor I have:

http://www.newegg.com/Product/Product.aspx?Item=N82E16819103683

The X3 chip line consists of 4-core chips that have had one of the
cores
disabled.  Initially AMD did this because sometimes one of the cores
would be bad and fail tests, but later they also used it as a way to
sell perfectly good 4-core chips at a lower price point, by disabling
one of the cores.  There's no way to know (aside from testing) why any
specific chip is an X3 instead of an X4, but apparently most of the X3
chips on the market have 4 perfectly good cores.

The motherboard I'm using will enable the disabled core, but when I
enabled the relevant BIOS setting (which also overclocked the chip a
little bit), I had stability problems with the machine, so I disabled
it
and now I'm back down to three cores at the labelled speed.  Eventually
I will get around to figuring out whether the disabled core is bad or
the stability problems were due to overclocking.

Is this JVM calculation only done in the carrotsearch randomized
testing, or is it also found in JUnit itself?

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

--
Uwe Schindler
H.-H.-Meier-Allee 63, 28213 Bremen
http://www.thetaphi.de

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317246#comment-14317246
 ] 

Mark Miller commented on SOLR-6736:
---

Hey [~thetaphi] - could we get your expert advice on this patch?

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch, SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 # Use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a configset called 'mynewconf'. The file could be 
 a jar, zip, or tar file which contains all the files for this configset.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


