JCC for Java - C++ and initializeClass

2015-03-12 Thread William Schilp
I realize this question has been asked in the past (Sept 2013), but the
answer seems to be less than useful.

I'm using JCC via C++ and having Java crashing issues. The problem appears
to be with the use of initializeClass(bool) and storing instances of JCC
objects.

First off, what does initializeClass(bool) do, and what does the boolean
parameter mean/affect? I see in other code that initializeClass tends
to be called with false as the parameter, but with no explanation.

There is also no good explanation of when initializeClass(bool) needs to
be called and why. Do I have to call it before accessing any method on an
instance of the class? Do I only call it once when constructing the
instance of the class? What happens if I call it twice on a class? Will
this cause a crash? Can I just call initializeClass on all classes that I
expect to instantiate at the beginning of the executable? I'm using 64-bit
Java, so memory usage is not an issue.

Is there any documentation on using JCC with C++? Yes, I have been to the
PyLucene web page, but there are only references to using JCC with Python...

bill schilp


[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2050 - Still Failing!

2015-03-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2050/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'org.apache.solr.core.RuntimeLibReqHandler' for 
path 'overlay/requestHandler/\/runtime/class' full output: {   
responseHeader:{ status:0, QTime:0},   overlay:{ 
znodeVersion:1, runtimeLib:{colltest:{ name:colltest, 
version:1}}, requestHandler:{/test1:{ name:/test1,
 class:org.apache.solr.core.BlobStoreTestRequestHandler, 
runtimeLib:true

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'org.apache.solr.core.RuntimeLibReqHandler' for path 
'overlay/requestHandler/\/runtime/class' full output: {
  responseHeader:{
status:0,
QTime:0},
  overlay:{
znodeVersion:1,
runtimeLib:{colltest:{
name:colltest,
version:1}},
requestHandler:{/test1:{
name:/test1,
class:org.apache.solr.core.BlobStoreTestRequestHandler,
runtimeLib:true
at 
__randomizedtesting.SeedInfo.seed([D859B06A71A2CAA5:149D3D867F6F05]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:399)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (LUCENE-6336) AnalyzingInfixSuggester needs duplicate handling

2015-03-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356985#comment-14356985
 ] 

Michael McCandless commented on LUCENE-6336:


I think whether a given suggester dedups is really up to each impl.

But separately I think it makes sense to add a way to enable dedup for AIS somehow.

Or maybe we add a DedupDictionaryWrapper, which does an offline sort to remove 
dups?  This way we can dedup for any suggester that doesn't handle it itself... 
and we keep the responsibility simple for AIS.
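
A minimal in-memory sketch of the dedup idea (the class and method names are made
up for illustration; an actual DedupDictionaryWrapper would do an offline sort
rather than hold everything in a map):
{code}
import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

import org.apache.lucene.search.suggest.InputIterator;
import org.apache.lucene.util.BytesRef;

/** Illustrative only: keeps just the highest weight seen per suggestion text. */
final class DedupSketch {
  static Map<BytesRef, Long> highestWeightPerTerm(InputIterator it) throws IOException {
    Map<BytesRef, Long> best = new TreeMap<>();
    for (BytesRef term = it.next(); term != null; term = it.next()) {
      long weight = it.weight();
      Long seen = best.get(term);
      if (seen == null || weight > seen) {
        // Deep-copy: iterators are free to reuse the returned bytes.
        best.put(BytesRef.deepCopyOf(term), weight);
      }
    }
    return best;
  }
}
{code}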

 AnalyzingInfixSuggester needs duplicate handling
 

 Key: LUCENE-6336
 URL: https://issues.apache.org/jira/browse/LUCENE-6336
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.10.3, 5.0
Reporter: Jan Høydahl
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6336.patch


 Spinoff from LUCENE-5833 but otherwise unrelated.
 Using {{AnalyzingInfixSuggester}}, which is backed by a Lucene index and 
 stores payload and score together with the suggest text.
 I did some testing with Solr, producing the DocumentDictionary from an index 
 with multiple documents containing the same text, but with random weights 
 between 0 and 100. Then I got duplicate identical suggestions sorted by weight:
 {code}
 {
   "suggest":{"languages":{
     "engl":{
       "numFound":101,
       "suggestions":[{
           "term":"<b>Engl</b>ish",
           "weight":100,
           "payload":0},
         {
           "term":"<b>Engl</b>ish",
           "weight":99,
           "payload":0},
         {
           "term":"<b>Engl</b>ish",
           "weight":98,
           "payload":0},
 ---etc all the way down to 0---
 {code}
 I also reproduced the same behavior in AnalyzingInfixSuggester directly. So 
 there is a need for some duplicate removal here, either while building the 
 local suggest index or during lookup. Only the highest weight suggestion for 
 a given term should be returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2753 - Still Failing

2015-03-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2753/

All tests passed

Build Log:
[...truncated 9128 lines...]
[javac] Compiling 516 source files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/classes/test
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/handler/FieldAnalysisRequestHandlerTest.java:1:
 error: illegal character: \65279
[javac] ?/*
[javac] ^
[javac] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/handler/FieldAnalysisRequestHandlerTest.java:18:
 error: class, interface, or enum expected
[javac] package org.apache.solr.handler;
[javac] ^
[javac] 2 errors

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:529:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:477:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:61:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/extra-targets.xml:39:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build.xml:191:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:509:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:799:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:813:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1882:
 Compile failed; see the compiler error output for details.

Total time: 22 minutes 39 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #2431
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 14 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Comment Edited] (SOLR-7229) Allow DIH to handle attachments as separate documents

2015-03-12 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356812#comment-14356812
 ] 

Tim Allison edited comment on SOLR-7229 at 3/11/15 5:30 PM:


Y, that's what I was getting at, and that was the answer I was hoping for.  
Apologies, I'm still trying to learn the preferences for the boundary between 
custom hard coding and configuration over here.  I'll open another issue to add 
that (SOLR-7231)

And, on another note, I just noticed that the code that adds metadata is just 
pulling the first value; in short, if there is a multivalued Solr field, and 
there's more than one metadata value in the metadata object, the values after 
the first are being ignored.  Looks like another issue. :) (SOLR-7232)


was (Author: talli...@mitre.org):
Y, that's what I was getting at, and that was the answer I was hoping for.  
Apologies, I'm still trying to learn the preferences for the boundary between 
custom hard coding and configuration over here.  I'll open another issue to add 
that.  

And, on another note, I just noticed that the code that adds metadata is just 
pulling the first value; in short, if there is a multivalued Solr field, and 
there's more than one metadata value in the metadata object, the values after 
the first are being ignored.  Looks like another issue. :)

 Allow DIH to handle attachments as separate documents
 -

 Key: SOLR-7229
 URL: https://issues.apache.org/jira/browse/SOLR-7229
 Project: Solr
  Issue Type: Improvement
Reporter: Tim Allison
Priority: Minor

 With Tika 1.7's RecursiveParserWrapper, it is possible to maintain metadata 
 of individual attachments/embedded documents.  Tika's default handling was to 
 maintain the metadata of the container document and concatenate the contents 
 of all embedded files.  With SOLR-7189, we added the legacy behavior.
 It might be handy, for example, to be able to send an MSG file through DIH 
 and treat the container email as well each attachment as separate (child?) 
 documents, or send a zip of jpeg files and correctly index the geo locations 
 for each image file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7230) An API to plugin security into Solr

2015-03-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358367#comment-14358367
 ] 

Noble Paul commented on SOLR-7230:
--

bq.This issue lacks a lot of context and high level end user focus. 

Yes, you are right, and it is done on purpose. This issue is targeted at 
developers of actual security implementations.

bq.I think time may be ripe for adding security and user login to stock Solr

I'm not sure we should dilute our efforts by littering the Solr source code with 
users and credentials. It can be a heavy distraction. I believe that security 
can be orthogonal to what we otherwise do: trying to provide a fast, super 
scalable, reliable search system.


bq.What is your concrete use case that triggered this Jira, Noble?

We have customers asking to integrate with their preferred 
authentication/authorization mechanism. At the same time, I don't want it to 
preempt any alternate implementations which some other customers want. Believe 
me, the enterprise is a crazy heterogeneous place with numerous different 
security practices. We may not be able to satisfy everyone at the same time. 
Hence, there is a need for an implementation-agnostic approach.



 

 An API to plugin security into Solr
 ---

 Key: SOLR-7230
 URL: https://issues.apache.org/jira/browse/SOLR-7230
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 The objective is to define an API that a plugin can implement to protect 
 various operations performed on Solr. It may have various implementations, 
 some built in and some external.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7230) An API to plugin security into Solr

2015-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358348#comment-14358348
 ] 

Jan Høydahl commented on SOLR-7230:
---

This issue lacks a lot of context and high-level end-user focus. The area of 
security has been something we have avoided in our code base until now - for a 
reason - and this Jira seems to want to rush something home-cooked in without a 
bigger plan. Sort of starting at the wrong end.

I think the time may be ripe for adding security and user login to stock Solr. It 
will further solidify Solr as the choice for the enterprise.

The area of security is big, with many external integration needs. I think 
we should create an umbrella Jira outlining a master plan for what it takes to 
secure Solr and what our users' most urgent needs are. Then sub-tasks will 
follow.

What is your concrete use case that triggered this Jira, Noble?

 An API to plugin security into Solr
 ---

 Key: SOLR-7230
 URL: https://issues.apache.org/jira/browse/SOLR-7230
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 The objective is to define an API that a plugin can implement to protect 
 various operations performed on Solr. It may have various implementations, 
 some built in and some external.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6347) MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using regexpression syntax unwittingly)

2015-03-12 Thread Paul taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356998#comment-14356998
 ] 

Paul taylor commented on LUCENE-6347:
-

It only shows the NPE when you run the query; that was the initial problem (and 
I think it is covered by https://issues.apache.org/jira/browse/LUCENE-6345 ). But 
the issue raised here shows that it is failing to throw a ParseException for an 
invalid query in the first place when it should, and that is what this issue is 
about. To me they are two different (albeit connected) issues that can be fixed 
independently.

 MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
 regexpression syntax unwittingly)
 ---

 Key: LUCENE-6347
 URL: https://issues.apache.org/jira/browse/LUCENE-6347
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1
Reporter: Paul taylor

 MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
 regexpression syntax unwittingly)
 {code} 
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
 import org.apache.lucene.queryparser.classic.ParseException;
 import org.apache.lucene.queryparser.classic.QueryParser;
 import org.apache.lucene.util.Version;
 import org.junit.Test;
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 /**
  * Lucene tests
  */
 public class LuceneRegExParseTest
 {
     @Test
     public void testSearch411LuceneBugReport() throws Exception
     {
         Exception e = null;
         try
         {
             String[] fields = new String[2];
             fields[0] = "artist";
             fields[1] = "recording";
             QueryParser qp = new MultiFieldQueryParser(Version.LUCENE_41,
                     fields, new StandardAnalyzer(Version.LUCENE_41));
             qp.parse("artist:pandora /reyli  recording:yo/Alguien");
         }
         catch (Exception ex)
         {
             e = ex;
         }
         assertNotNull(e);
         assertTrue(e instanceof ParseException);
     }
 }
 {code}
 With assertions disabled this test fails as no exception is thrown.
 With assertions enabled we get
 {code}
 java.lang.AssertionError
   at 
 org.apache.lucene.search.MultiTermQuery.<init>(MultiTermQuery.java:252)
   at 
 org.apache.lucene.search.AutomatonQuery.<init>(AutomatonQuery.java:65)
   at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:90)
   at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:79)
   at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:69)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.newRegexpQuery(QueryParserBase.java:790)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.getRegexpQuery(QueryParserBase.java:1005)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.handleBareTokenQuery(QueryParserBase.java:1075)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.Term(QueryParser.java:359)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.Clause(QueryParser.java:258)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.Query(QueryParser.java:213)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.TopLevelQuery(QueryParser.java:171)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:120)
   at 
 org.musicbrainz.search.servlet.LuceneRegExParseTest.testSearch411LuceneBugReport(LuceneRegExParseTest.java:30)
 but this should throw an exception even without assertions enabled. Because no 
 exception is thrown, a search then fails with the following stack trace:
 java.lang.NullPointerException
 at java.util.TreeMap.getEntry(TreeMap.java:342)
 at java.util.TreeMap.get(TreeMap.java:273)
 at 
 org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:215)
 at 
 org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:58)
 at 
 org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
 at 
 org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
 at org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:286)
 at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:429)
 at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:616)
 at 
 org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:663)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
 {code}



--
This 

[jira] [Created] (SOLR-7236) Securing Solr (umbrella issue)

2015-03-12 Thread JIRA
Jan Høydahl created SOLR-7236:
-

 Summary: Securing Solr (umbrella issue)
 Key: SOLR-7236
 URL: https://issues.apache.org/jira/browse/SOLR-7236
 Project: Solr
  Issue Type: New Feature
Reporter: Jan Høydahl


This is an umbrella issue for adding security to Solr. The discussion here 
should discuss real user needs and high-level strategy, before deciding on 
implementation details. All work will be done in sub tasks and linked issues.

Solr has not traditionally concerned itself with security. And it has been a 
general view among the committers that it may be better to stay out of it to 
avoid blood on our hands in this minefield. Still, Solr has lately seen SSL 
support, securing of ZK, and signing of jars, and discussions have begun about 
securing operations in Solr.

Some of the topics to address are
* User management (flat file, AD/LDAP etc)
* Authentication (Admin UI, Admin and data/query operations. Tons of auth 
protocols: basic, digest, oauth, pki..)
* Authorization (who can do what with what API, collection, doc)
* Pluggability (no two users' needs are equal)
* And we could go on and on but this is what we've seen the most demand for




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2757 - Still Failing

2015-03-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2757/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:61014/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:61014/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([E858A509427E7361:600C9AD3EC821E99]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:625)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7230) An API to plugin security into Solr

2015-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358382#comment-14358382
 ] 

Jan Høydahl commented on SOLR-7230:
---

Created SOLR-7236 as an umbrella.

 An API to plugin security into Solr
 ---

 Key: SOLR-7230
 URL: https://issues.apache.org/jira/browse/SOLR-7230
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 The objective is to define an API that a plugin can implement to protect 
 various operations performed on Solr. It may have various implementations, 
 some built in and some external.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6332) join query scanning toField docValue

2015-03-12 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-6332:
-
Attachment: LUCENE-6332.patch

Attaching DocValuesScanQuery.java with a trivial test. It works for BinaryDV 
only (ord-coded Sorted(Set)DV can be implemented too, but avoiding term 
lookups is preferable, e.g. LUCENE-6352). 
One more idea is to check the number of collected terms on the from side and 
choose between TermsQuery (lookup in the term dict, the current JoinUtil) and 
this approach of scanning the whole DV column. WDYT? 

 join query scanning toField docValue   
 -

 Key: LUCENE-6332
 URL: https://issues.apache.org/jira/browse/LUCENE-6332
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/join
Affects Versions: 5.0
Reporter: Mikhail Khludnev
 Attachments: LUCENE-6332.patch


 I want to contribute the subject, which should do something like {{WHERE EXISTS 
 (SELECT 1 FROM fromSearcher.search(fromQuery) WHERE fromField=toField)}}. It 
 turns out that it can be returned by the current method 
 {{createJoinQuery(...ScoreMode.None)}}:
 * first, it should run {{fromQuery}} and collect {{fromField}} into a 
 {{BytesRefHash}} via {{TermsCollector}}, like it's done now
 * then it should return a query with a _TwoPhase_ Scorer
 * which obtains the {{toField}} docValue in {{matches()}} and checks the term 
 for existence in the {{BytesRefHash}}
 Do you think it's useful? If you do, I can bake a patch (a rough sketch of the 
 per-document check follows below). Anyway, suggest a better API, e.g. a 
 separate method or an enum, and an actual name!
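
 A minimal per-segment sketch of that {{matches()}} check, using the Lucene 5.x 
 docValues API; the class name and shape here are made up for illustration (the 
 actual implementation is in the attached DocValuesScanQuery.java):
 {code}
 import java.io.IOException;

 import org.apache.lucene.index.BinaryDocValues;
 import org.apache.lucene.index.DocValues;
 import org.apache.lucene.index.LeafReaderContext;
 import org.apache.lucene.util.BytesRef;
 import org.apache.lucene.util.BytesRefHash;

 /** Illustrative only: accepts a doc if its toField value was collected on the from side. */
 final class DocValuesJoinCheck {
   private final BinaryDocValues toFieldValues;
   private final BytesRefHash fromTerms;

   DocValuesJoinCheck(LeafReaderContext context, String toField, BytesRefHash fromTerms)
       throws IOException {
     // Returns an empty instance when the segment has no binary docValues for the field.
     this.toFieldValues = DocValues.getBinary(context.reader(), toField);
     this.fromTerms = fromTerms;
   }

   /** The cheap per-document test a two-phase scorer would run in matches(). */
   boolean matches(int doc) {
     BytesRef value = toFieldValues.get(doc);  // random-access BinaryDocValues API in 5.x
     // Treats an empty value as "no value" for brevity.
     return value.length > 0 && fromTerms.find(value) >= 0;
   }
 }
 {code}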
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7236) Securing Solr (umbrella issue)

2015-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358401#comment-14358401
 ] 

Jan Høydahl edited comment on SOLR-7236 at 3/12/15 9:58 AM:


Various enterprises have different security needs. The smallest just needs to 
enter a few users with permissions locally, others want to integrate with AD or 
LDAP instead of duplicating users. Some only need to secure the Admin UI and 
admin APIs, others need full update/query security on collection or even 
document level. As an enterprise product, Solr should not restrict this to only 
one or two implementations.

There are multiple existing frameworks to simplify the task of abstracting 
security implementations in Java apps, among them are 
[JAAS|https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service]
 , [Spring Security|http://projects.spring.io/spring-security/] and [Apache 
Shiro|http://shiro.apache.org/]. They are created to do the hard and scary 
stuff, provide simple APIs for developers and also provide out of the box 
integrations with all the various protocols. We really don't want to maintain 
support for Kerberos etc in Solr-code.

Although any of these could probably do the job, I'm pitching Apache Shiro as 
the main API for all security-related implementations in Solr. Without having 
used it, it seems to be built just for this purpose. Solr users with some crazy 
legacy in-house security system can write plugins for it against Shiro itself, 
instead of writing Solr code. http://shiro.apache.org/


was (Author: janhoy):
There are multiple existing frameworks to simplify the task of abstracting 
security implementations in Java apps, among them are 
[JAAS|https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service]
 , [Spring Security|http://projects.spring.io/spring-security/] and [Apache 
Shiro|http://shiro.apache.org/]. They are created to do the hard and scary 
stuff, provide simple APIs for developers and also provide out of the box 
integrations with all the various protocols. We really don't want to maintain 
support for Kerberos etc in Solr-code.

Although any of these could probably do the job, I'm pitching Apache Shiro as 
the main API for all security related implementations in Solr. Without having 
used it, seems to be built just for this purpose. Solr users with some crazy 
legacy security system inhouse can write plugins for that to Shiro itself, 
instead of writing Solr code. http://shiro.apache.org/

 Securing Solr (umbrella issue)
 --

 Key: SOLR-7236
 URL: https://issues.apache.org/jira/browse/SOLR-7236
 Project: Solr
  Issue Type: New Feature
Reporter: Jan Høydahl
  Labels: Security

 This is an umbrella issue for adding security to Solr. The discussion here 
 should discuss real user needs and high-level strategy, before deciding on 
 implementation details. All work will be done in sub tasks and linked issues.
 Solr has not traditionally concerned itself with security. And it has been a 
 general view among the committers that it may be better to stay out of it to 
 avoid blood on our hands in this minefield. Still, Solr has lately seen 
 SSL support, securing of ZK, and signing of jars, and discussions have begun 
 about securing operations in Solr.
 Some of the topics to address are
 * User management (flat file, AD/LDAP etc)
 * Authentication (Admin UI, Admin and data/query operations. Tons of auth 
 protocols: basic, digest, oauth, pki..)
 * Authorization (who can do what with what API, collection, doc)
 * Pluggability (no two users' needs are equal)
 * And we could go on and on but this is what we've seen the most demand for



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358486#comment-14358486
 ] 

Yonik Seeley edited comment on SOLR-7217 at 3/12/15 11:03 AM:
--

Right, so the logic is to autodetect when there is no content-type or if the 
client is curl with the default that curl adds.


was (Author: ysee...@gmail.com):
Right, so the logic is to autodetect if there is no content-type or if the 
client is curl.
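
A rough guess at the detection rule described in the edited comment above 
(autodetect when there is no Content-Type, or when the client is curl and the 
header is just the default curl adds); this is illustrative only, not the actual 
Heliosearch/Solr code:
{code}
/** Hypothetical sketch, not actual Solr/Heliosearch code. */
final class BodyContentTypeSniffer {
  /**
   * Sniff the body when no Content-Type was sent, or when the client is curl
   * and the Content-Type is just the implicit default that curl -d adds.
   */
  static boolean shouldAutodetect(String contentType, String userAgent) {
    if (contentType == null || contentType.isEmpty()) {
      return true;
    }
    boolean curlDefault = contentType.startsWith("application/x-www-form-urlencoded");
    return curlDefault && userAgent != null && userAgent.startsWith("curl/");
  }
}
{code}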

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   query:hero
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358486#comment-14358486
 ] 

Yonik Seeley commented on SOLR-7217:


Right, so the logic is to autodetect if there is no content-type or if the 
client is curl.

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   query:hero
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358527#comment-14358527
 ] 

Noble Paul commented on SOLR-7217:
--

I didn't get that. What does the server do if the content-type is 
{{application/x-www-form-urlencoded}}?

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   query:hero
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358547#comment-14358547
 ] 

Yonik Seeley commented on SOLR-7217:


If anyone wants to try it out in practice before it gets backported here, it's 
implemented in heliosearch.

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   query:hero
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2758 - Still Failing

2015-03-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2758/

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
some thread(s) failed

Stack Trace:
java.lang.RuntimeException: some thread(s) failed
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:531)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:929)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 2701 lines...]
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2 TEST FAIL: useCharFilter=false text='csuvfer 
F\u0556\u0002\u6ba0\uf09b\u05491 omfvho bodgfvxfpn apgrjffk gfezfdi qkf'
   [junit4]   2 ??? 12, 2015 7:04:37 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2 WARNING: Uncaught exception in thread: 
Thread[Thread-294,5,TGRP-TestRandomChains]
   [junit4]   2 java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4]   2at 
__randomizedtesting.SeedInfo.seed([F7D1F05B60E6A470]:0)
   [junit4]   2at 

[jira] [Updated] (SOLR-6892) Improve the way update processors are used and make it simpler

2015-03-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6892:
-
Attachment: SOLR-6892.patch

 Improve the way update processors are used and make it simpler
 --

 Key: SOLR-6892
 URL: https://issues.apache.org/jira/browse/SOLR-6892
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6892.patch


 The current update processor chain is rather cumbersome, and we should be able 
 to use the update processors without a chain.
 The scope of this ticket is:
 * A new tag {{updateProcessor}} becomes a top-level tag, equivalent to the 
 {{processor}} tag inside {{updateRequestProcessorChain}}. The only difference 
 is that it should require a {{name}} attribute. The 
 {{updateRequestProcessorChain}} tag will continue to exist, and it should be 
 possible to define {{processor}} inside it as well. It should also be possible 
 to reference a named URP in a chain.
 * Processors will be added in the request with their names. Example: 
 {{processor=a,b,c}}, {{pre-processor=p,q,r}} or {{post-processor=x,y,z}}. 
 This creates an implicit chain of the named URPs in the order they are specified.
 * There are multiple request parameters supported by an update request:
 ** pre-processor: this chain is executed at the node that receives the 
 request. Other nodes will not execute it.
 ** processor: this chain is executed at the leader right before 
 LogUpdateProcessorFactory + DistributedUpdateProcessorFactory. The replicas 
 will not execute it.
 ** post-processor: this chain is executed right before the 
 RunUpdateProcessor in all replicas, including the leader.
 * What happens to the update.chain parameter? {{update.chain}} will be 
 honored. The implicit chain is created by merging both the update.chain and 
 the request params. {{post-processor}} will be inserted right after the 
 DistributedUpdateProcessor in the chain, and {{processor}} will be inserted 
 right at the beginning of the update.chain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7218) constant score query syntax

2015-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358656#comment-14358656
 ] 

ASF subversion and git services commented on SOLR-7218:
---

Commit 1666183 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1666183 ]

SOLR-7218: Use ^= for constant score query

 constant score query syntax
 ---

 Key: SOLR-7218
 URL: https://issues.apache.org/jira/browse/SOLR-7218
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
 Attachments: SOLR-7218.patch


 A ConstantScoreQuery is like a boosted query, but it produces the same score 
 for every document that matches the query. The score produced is equal to the 
 query boost. The ^= operator is used to turn any query clause into a 
 ConstantScoreQuery.
 Constant Score Query Examples:
 {code}
 +color:blue^=1 text:shoes
 (inStock:true text:heliosearch)^=100 native code faceting
 {code}
 Syntax rationale: since boosting (multiplication) is term^value, the syntax 
 for having a constant score can be term^=value



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-03-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358673#comment-14358673
 ] 

Noble Paul commented on SOLR-6787:
--

I agree with you that a recursive screw-up is possible. Instead of removing that 
API itself, we should safeguard the caller and prevent recursive calls in that 
method itself; one possible guard is sketched below.

IMHO recursive loops are caught almost immediately, but resource leaks are not 
found until it's too late.
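
One hypothetical shape for such a guard, illustrative only and not actual Solr 
code; it only assumes the existing SolrRequestHandler.handleRequest entry point:
{code}
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrRequestHandler;
import org.apache.solr.response.SolrQueryResponse;

/** Hypothetical sketch: rejects re-entrant calls into the non-expert entry point. */
final class NonRecursiveHandlerWrapper {
  private static final ThreadLocal<Boolean> IN_CALL = new ThreadLocal<Boolean>();
  private final SolrRequestHandler delegate;

  NonRecursiveHandlerWrapper(SolrRequestHandler delegate) {
    this.delegate = delegate;
  }

  void handle(SolrQueryRequest req, SolrQueryResponse rsp) {
    if (Boolean.TRUE.equals(IN_CALL.get())) {
      throw new IllegalStateException("recursive handleRequest call detected");
    }
    IN_CALL.set(Boolean.TRUE);
    try {
      delegate.handleRequest(req, rsp);
    } finally {
      IN_CALL.remove();
    }
  }
}
{code}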

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors.
 APIs need to be created to manage the content of that collection:
 {code}
 # create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 # The config for this collection is automatically created. numShards for this 
 collection is hardcoded to 1
 # create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob
 # GET on the end point with jar name would give details of various versions 
 of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with jar name and version with wt=filestream to get 
 the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > 
 mycomponent.1.jar
 # GET on the end point with jar name and wt=filestream to get the latest 
 version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > 
 mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the 
 system every time a new jar is posted for the name. You must use the standard 
 delete commands to delete the old entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358674#comment-14358674
 ] 

Yonik Seeley commented on SOLR-7217:


I haven't created a patch yet... it was part of a larger commit in helio:  
ff43c0a 2014-12-03 | json requests \[yonik\]

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   query:hero
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [GitHub] lucene-solr pull request: Extending schema api

2015-03-12 Thread Noble Paul
The DELETE and PUT formats are going to be deprecated in favor of the bulk
APIs

Please take a look at the ref guide
https://cwiki.apache.org/confluence/display/solr/Schema+API

On Thu, Mar 12, 2015 at 9:32 AM, Dourm g...@git.apache.org wrote:

 Github user Dourm commented on the pull request:

 https://github.com/apache/lucene-solr/pull/119#issuecomment-78420357

 I need this function desperately !


 ---
 If your project is set up for it, you can reply to this email and have your
 reply appear on GitHub as well. If your project does not have this feature
 enabled and wishes so, or if the feature is enabled but not working, please
 contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
 with INFRA.
 ---

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
-
Noble Paul


[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2015-03-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358685#comment-14358685
 ] 

Shawn Heisey commented on SOLR-7191:


[~dk]:

The first thing I thought when I saw that you were trying 10K cores was that 
you would run out of threads unless you change the servlet container config.  
There is another limit looming after that ... the number of processes that you 
can create.  A Linux/Unix system uses a 16-bit identifier for process IDs, so 
the absolute upper limit of processes (including all OS-related processes) is 
65535.  On Linux (and likely other Unix/Unix-like systems), threads take up a 
PID, although they are not visible to programs like top or ps without 
specific options.  I have no idea what the situation is on Windows.

On your patch:

The first patch section removes a null check.  This is never a good idea, 
because the fact that a null check exists tends to mean that the object 
identifier has the potential to be null, and presumably the first result on the 
ternary operator will fail (NullPointerException) somehow if the checked object 
actually is null.

On the last patch section: Imposing a limit in the code without giving the user 
the option of configuring that limit will eventually cause problems for 
somebody.  Also, someone who is really familiar with how the ZkContainer code 
works will need to let us know if reducing the number of threads might have 
unintended consequences.

On LotsOfCores: SolrCloud brings a lot of complications to the situation, and 
when Erick did his work on that, he told all of us that trying to use transient 
cores in conjunction with SolrCloud would likely not work correctly.  I think 
that the goal is to eventually make the two features coexist, but a lot of 
thought and work needs to happen.

General observation:  A patch like this is not likely to be backported to the 
4.10 branch.  That branch is in maintenance mode, so only trivial fixes or 
patches for major bugs will be committed, and new releases from the maintenance 
mode branch are not common.


 Improve stability and startup performance of SolrCloud with thousands of 
 collections
 

 Key: SOLR-7191
 URL: https://issues.apache.org/jira/browse/SOLR-7191
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Shawn Heisey
  Labels: performance, scalability
 Attachments: SOLR-7191.patch, 
 lots-of-zkstatereader-updates-branch_5x.log


 A user on the mailing list with thousands of collections (5000 on 4.10.3, 
 4000 on 5.0) is having severe problems with getting Solr to restart.
 I tried as hard as I could to duplicate the user setup, but I ran into many 
 problems myself even before I was able to get 4000 collections created on a 
 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
 not very stable once it's up and running.
 This kind of setup is very much pushing the envelope on SolrCloud performance 
 and scalability.  It doesn't help that I'm running both Solr nodes on one 
 machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358638#comment-14358638
 ] 

Yonik Seeley commented on SOLR-6787:


bq. The problem I see with our internal APIs is that they are mostly expert 
only and easy to screw up. It is easy to forget to close the request here. That 
is why I created a non-expert method which anyone can use.

Right... but I'm not sure at all that this won't screw up in the general case, 
so I don't think it's ready for non-expert use.  I don't think handleRequest 
was really written to be recursive (i.e. be called from handleRequest itself).

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors
 APIs need to be created to manage the content of that collection
 {code}
 #create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 #The config for this collection is automatically created. numShards for this collection is hardcoded to 1
 #create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob
 # GET on the end point with jar name would give details of various versions of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with jar name and version with wt=filestream to get the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar
 # GET on the end point with jar name and wt=filestream to get the latest version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the 
 system every time a new jar is posted for the name. You must use the standard 
 delete commands to delete the old entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7237) Add boost to @Field annotation

2015-03-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358665#comment-14358665
 ] 

Noble Paul commented on SOLR-7237:
--

It is possible, but it is not enough, considering that the system can have a 
separate boost for a given field in each document. It is not good to fix the 
value of a runtime thing at compile time.

I would ideally wish to have another field for the boost

{noformat}
@Field(boostValField=myFieldBoost)
String myField

Float myFieldBoost;

{noformat}
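
For clarity, a rough sketch of how a bean might look under this idea; the 
boostValField attribute is only the proposal above, not something the current 
@Field annotation supports:

{code}
import org.apache.solr.client.solrj.beans.Field;

public class Product {
  @Field("name")        // existing annotation usage
  String name;

  // Proposed (hypothetical): @Field(value = "name", boostValField = "nameBoost")
  // would pick up the per-document boost from this bean field at runtime.
  Float nameBoost = 2.0f;
}
{code}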

 Add boost to @Field annotation
 --

 Key: SOLR-7237
 URL: https://issues.apache.org/jira/browse/SOLR-7237
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 5.0
Reporter: Karl Kildén

 DocumentObjectBinder is great but it hard-codes the boost like this:
 doc.setField(field.name, field.get(obj), 1.0f);
 Why not offer boost on the @Field annotation when you construct the bean?
 @Field(name="MY_FIELD", boost=2.0f)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358634#comment-14358634
 ] 

Noble Paul commented on SOLR-7217:
--

Good, I overlooked it. I was trying to clean up our examples and the user-agent 
thing didn't strike me.
++1

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   query:hero
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere
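
One plausible sniffing approach (sketched here only as an assumption, not 
necessarily what the referenced change does) is to peek at the first 
non-whitespace character of the body:

{code}
public class ContentTypeSniff {
  static String guessContentType(String body) {
    for (int i = 0; i < body.length(); i++) {
      char c = body.charAt(i);
      if (Character.isWhitespace(c)) continue;
      if (c == '{' || c == '[') return "application/json";
      if (c == '<') return "text/xml";
      break;
    }
    return "application/x-www-form-urlencoded";  // e.g. q=*:*&wt=json
  }
}
{code}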



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Functionality of legacyCloud=false

2015-03-12 Thread Noble Paul
how is copying a core dir from one node to another a normal use case ?
On Mar 12, 2015 7:22 PM, Varun Thacker varunthacker1...@gmail.com wrote:

 Hi Noble,

 Well I was just playing around to see if there were scenarios where
 different coreNodeNames could register themselves even if they weren't
 creating using the Collections API.

 So I was doing it intentionally here to see what happens. But I can
 totally imagine users running into the second scenario where an old node
 comes back up and ends up messing up that replica in the collection
 accidentally.

 On Thu, Mar 12, 2015 at 7:01 PM, Noble Paul noble.p...@gmail.com wrote:

 It is totally possible.
 The point is , it was not a security feature and it is extremely easy to
 spoof it.
 The question is , was it a normal scenario or was it an effort to prove
 that the system was not foolproof

 --Noble

 On Thu, Mar 12, 2015 at 6:23 PM, Varun Thacker 
 varunthacker1...@gmail.com wrote:

 Two scenarios I observed where we can bring up a replica even when I
 think it shouldn't. legacyCloud is set to false.

- I have two nodes A and B.
- CREATE collection 'test' with 1 shard, 1 replica. It gets created
on node A.
- manually copy test_shard1_replica1 folder to node B's solr home.
- Bring down node A.
- Restart node B. The shard comes up registering itself on node B
and becomes 'active'


- I have two nodes A and B ( this is down currently ).
- CREATE collection 'test' with 1 shard, 1 replica. It gets created
on node A.
- manually copy test_shard1_replica1 folder to node B's solr home.
- Start node B. The shard comes up registering itself on node B and
stays 'down'. The reason being the leader is still node A but 
 clusterstate
has base_url of Node B. This is the error in the logs - Error getting
leader from zk for shard shard1

 In legacyCloud=false you get a 'no_such_replica in clusterstate' error
 if the 'coreNodeName' is not present in clusterstate.

 But in my two observations the 'coreNodeName' were the same, hence I ran
 into that scenario.

 Should we make the check more stringent to not allow this to happen?
 Check against base_url also?

 Also should we be making legacyCloud=false as default in 5.x?
 --


 Regards,
 Varun Thacker
 http://www.vthacker.in/




 --
 -
 Noble Paul




 --


 Regards,
 Varun Thacker
 http://www.vthacker.in/



[jira] [Commented] (SOLR-7218) constant score query syntax

2015-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358659#comment-14358659
 ] 

ASF subversion and git services commented on SOLR-7218:
---

Commit 1666186 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1666186 ]

SOLR-7218: Use ^= for constant score query

 constant score query syntax
 ---

 Key: SOLR-7218
 URL: https://issues.apache.org/jira/browse/SOLR-7218
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
 Attachments: SOLR-7218.patch


 A ConstantScoreQuery is like a boosted query, but it produces the same score 
 for every document that matches the query. The score produced is equal to the 
 query boost. The ^= operator is used to turn any query clause into a 
 ConstantScoreQuery.
 Constant Score Query Examples:
 {code}
 +color:blue^=1 text:shoes
 (inStock:true text:heliosearch)^=100 native code faceting
 {code}
 Syntax rationale: since boosting (multiplication) is term^value, the syntax 
 for having a constant score can be term^=value
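
For reference, a rough Lucene-level sketch of what a ^= clause boils down to 
(field and term values here are purely illustrative):

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class ConstantScoreSketch {
  // roughly: +color:blue^=1 text:shoes
  static Query build() {
    BooleanQuery bq = new BooleanQuery();
    Query colorClause = new ConstantScoreQuery(new TermQuery(new Term("color", "blue")));
    colorClause.setBoost(1.0f);  // the value after ^= becomes the constant score
    bq.add(colorClause, Occur.MUST);
    bq.add(new TermQuery(new Term("text", "shoes")), Occur.SHOULD);
    return bq;
  }
}
{code}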



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_31) - Build # 4543 - Still Failing!

2015-03-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4543/
Java: 32bit/jdk1.8.0_31 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Error from server at http://127.0.0.1:56034//collection1: 
java.lang.NullPointerException  at 
org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:102)
  at 
org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:738)
  at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:721)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:359)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:142)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)  at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:808) 
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:435)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:218)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:103)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)  
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)  
at org.eclipse.jetty.server.Server.handle(Server.java:497)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)  at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
 at java.lang.Thread.run(Thread.java:745) 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:56034//collection1: 
java.lang.NullPointerException
at 
org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:102)
at 
org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:738)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:721)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:359)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:142)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:808)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:435)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:218)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:103)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 

Re: Functionality of legacyCloud=false

2015-03-12 Thread Noble Paul
It is totally possible.
The point is , it was not a security feature and it is extremely easy to
spoof it.
The question is , was it a normal scenario or was it an effort to prove
that the system was not foolproof

--Noble

On Thu, Mar 12, 2015 at 6:23 PM, Varun Thacker varunthacker1...@gmail.com
wrote:

 Two scenarios I observed where we can bring up a replica even when I think
 it shouldn't. legacyCloud is set to false.

- I have two nodes A and B.
- CREATE collection 'test' with 1 shard, 1 replica. It gets created on
node A.
- manually copy test_shard1_replica1 folder to node B's solr home.
- Bring down node A.
- Restart node B. The shard comes up registering itself on node B and
becomes 'active'


- I have two nodes A and B ( this is down currently ).
- CREATE collection 'test' with 1 shard, 1 replica. It gets created on
node A.
- manually copy test_shard1_replica1 folder to node B's solr home.
- Start node B. The shard comes up registering itself on node B and
stays 'down'. The reason being the leader is still node A but clusterstate
has base_url of Node B. This is the error in the logs - Error getting
leader from zk for shard shard1

 In legacyCloud=false you get a 'no_such_replica in clusterstate' error if
 the 'coreNodeName' is not present in clusterstate.

 But in my two observations the 'coreNodeName' were the same, hence I ran
 into that scenario.

 Should we make the check more stringent to not allow this to happen? Check
 against base_url also?

 Also should we be making legacyCloud=false as default in 5.x?
 --


 Regards,
 Varun Thacker
 http://www.vthacker.in/




-- 
-
Noble Paul


[jira] [Resolved] (SOLR-7218) constant score query syntax

2015-03-12 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-7218.

   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

 constant score query syntax
 ---

 Key: SOLR-7218
 URL: https://issues.apache.org/jira/browse/SOLR-7218
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
 Fix For: Trunk, 5.1

 Attachments: SOLR-7218.patch


 A ConstantScoreQuery is like a boosted query, but it produces the same score 
 for every document that matches the query. The score produced is equal to the 
 query boost. The ^= operator is used to turn any query clause into a 
 ConstantScoreQuery.
 Constant Score Query Examples:
 {code}
 +color:blue^=1 text:shoes
 (inStock:true text:heliosearch)^=100 native code faceting
 {code}
 Syntax rationale: since boosting (multiplication) is term^value, the syntax 
 for having a constant score can be term^=value



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-03-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358720#comment-14358720
 ] 

Noble Paul commented on SOLR-6787:
--

bq. don't use a response object after the request has been closed (it may 
contain state tied to the request object).

if the caller wants to see the output of another handler, what is the 
solution? serialize the response and deserialize it?



 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors
 APIs need to be created to manage the content of that collection
 {code}
 #create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 #The config for this collection is automatically created. numShards for this collection is hardcoded to 1
 #create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob
 # GET on the end point with jar name would give details of various versions of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with jar name and version with wt=filestream to get the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar
 # GET on the end point with jar name and wt=filestream to get the latest version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the 
 system every time a new jar is posted for the name. You must use the standard 
 delete commands to delete the old entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6347) MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using regexpression syntax unwittingly)

2015-03-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358669#comment-14358669
 ] 

Michael McCandless commented on LUCENE-6347:


Thanks Paul, indeed, now I can see the assert trip too!  Phew.  Sorry for the 
confusion... I'll dig.

 MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
 regexpression syntax unwittingly)
 ---

 Key: LUCENE-6347
 URL: https://issues.apache.org/jira/browse/LUCENE-6347
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1
Reporter: Paul taylor

 MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
 regexpression syntax unwittingly)
 {code} 
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
 import org.apache.lucene.queryparser.classic.ParseException;
 import org.apache.lucene.queryparser.classic.QueryParser;
 import org.apache.lucene.util.Version;
 import org.junit.Test;
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 /**
  * Lucene tests
  */
 public class LuceneRegExParseTest
 {
 @Test
 public void testSearch411LuceneBugReport() throws Exception
 {
 Exception e = null;
 try
 {
 String[] fields = new String[2];
 fields[0] = "artist";
 fields[1] = "recording";
 QueryParser qp = new MultiFieldQueryParser(Version.LUCENE_41, 
 fields, new StandardAnalyzer(Version.LUCENE_41));
 qp.parse("artist:pandora /reyli  recording:yo/Alguien");
 }
 catch(Exception ex)
 {
 e=ex;
 }
 assertNotNull(e);
 assertTrue(e instanceof ParseException );
 }
 }
 {code}
 With assertions disabled this test fails as no exception is thrown.
 With assertions enabled we get
 {code}
 java.lang.AssertionError
   at 
 org.apache.lucene.search.MultiTermQuery.<init>(MultiTermQuery.java:252)
   at 
 org.apache.lucene.search.AutomatonQuery.<init>(AutomatonQuery.java:65)
   at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:90)
   at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:79)
   at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:69)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.newRegexpQuery(QueryParserBase.java:790)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.getRegexpQuery(QueryParserBase.java:1005)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.handleBareTokenQuery(QueryParserBase.java:1075)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.Term(QueryParser.java:359)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.Clause(QueryParser.java:258)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.Query(QueryParser.java:213)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.TopLevelQuery(QueryParser.java:171)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:120)
   at 
 org.musicbrainz.search.servlet.LuceneRegExParseTest.testSearch411LuceneBugReport(LuceneRegExParseTest.java:30)
 but this should throw an exception without assertions enabled. Because no 
 exception is thrown, a search then fails with the following stack trace
 java.lang.NullPointerException
 at java.util.TreeMap.getEntry(TreeMap.java:342)
 at java.util.TreeMap.get(TreeMap.java:273)
 at 
 org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:215)
 at 
 org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:58)
 at 
 org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
 at 
 org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
 at org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:286)
 at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:429)
 at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:616)
 at 
 org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:663)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6347) MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using regexpression syntax unwittingly)

2015-03-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6347.

Resolution: Duplicate

I dug into this, saw it wouldn't repro on 4.10.x but would on 4.1.x, and then 
hit http://jirasearch.mikemccandless.com and found the duplicate issue 
LUCENE-4878.  Thanks Paul!

 MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
 regexpression syntax unwittingly)
 ---

 Key: LUCENE-6347
 URL: https://issues.apache.org/jira/browse/LUCENE-6347
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1
Reporter: Paul taylor

 MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
 regexpression syntax unwittingly)
 {code} 
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
 import org.apache.lucene.queryparser.classic.ParseException;
 import org.apache.lucene.queryparser.classic.QueryParser;
 import org.apache.lucene.util.Version;
 import org.junit.Test;
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 /**
  * Lucene tests
  */
 public class LuceneRegExParseTest
 {
 @Test
 public void testSearch411LuceneBugReport() throws Exception
 {
 Exception e = null;
 try
 {
 String[] fields = new String[2];
 fields[0] = "artist";
 fields[1] = "recording";
 QueryParser qp = new MultiFieldQueryParser(Version.LUCENE_41, 
 fields, new StandardAnalyzer(Version.LUCENE_41));
 qp.parse("artist:pandora /reyli  recording:yo/Alguien");
 }
 catch(Exception ex)
 {
 e=ex;
 }
 assertNotNull(e);
 assertTrue(e instanceof ParseException );
 }
 }
 {code}
 With assertions disabled this test fails as no exception is thrown.
 With assertions enabled we get
 {code}
 java.lang.AssertionError
   at 
 org.apache.lucene.search.MultiTermQuery.<init>(MultiTermQuery.java:252)
   at 
 org.apache.lucene.search.AutomatonQuery.<init>(AutomatonQuery.java:65)
   at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:90)
   at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:79)
   at org.apache.lucene.search.RegexpQuery.<init>(RegexpQuery.java:69)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.newRegexpQuery(QueryParserBase.java:790)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.getRegexpQuery(QueryParserBase.java:1005)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.handleBareTokenQuery(QueryParserBase.java:1075)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.Term(QueryParser.java:359)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.Clause(QueryParser.java:258)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.Query(QueryParser.java:213)
   at 
 org.apache.lucene.queryparser.classic.QueryParser.TopLevelQuery(QueryParser.java:171)
   at 
 org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:120)
   at 
 org.musicbrainz.search.servlet.LuceneRegExParseTest.testSearch411LuceneBugReport(LuceneRegExParseTest.java:30)
 but this should throw an exception without assertions enabled. Because no 
 exception is thrown, a search then fails with the following stack trace
 java.lang.NullPointerException
 at java.util.TreeMap.getEntry(TreeMap.java:342)
 at java.util.TreeMap.get(TreeMap.java:273)
 at 
 org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:215)
 at 
 org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:58)
 at 
 org.apache.lucene.search.ConstantScoreAutoRewrite.rewrite(ConstantScoreAutoRewrite.java:95)
 at 
 org.apache.lucene.search.MultiTermQuery$ConstantScoreAutoRewrite.rewrite(MultiTermQuery.java:220)
 at org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:286)
 at org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:429)
 at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:616)
 at 
 org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:663)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358650#comment-14358650
 ] 

Uwe Schindler commented on SOLR-7217:
-

Hi, where is a patch or commit link in Heliosearch?

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   query:hero
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Functionality of legacyCloud=false

2015-03-12 Thread Varun Thacker
Hi Noble,

Well I was just playing around to see if there were scenarios where
different coreNodeNames could register themselves even if they weren't
creating using the Collections API.

So I was doing it intentionally here to see what happens. But I can totally
imagine users running into the second scenario where an old node comes back
up and ends up messing up that replica in the collection accidentally.

On Thu, Mar 12, 2015 at 7:01 PM, Noble Paul noble.p...@gmail.com wrote:

 It is totally possible.
 The point is , it was not a security feature and it is extremely easy to
 spoof it.
 The question is , was it a normal scenario or was it an effort to prove
 that the system was not foolproof

 --Noble

 On Thu, Mar 12, 2015 at 6:23 PM, Varun Thacker varunthacker1...@gmail.com
  wrote:

 Two scenarios I observed where we can bring up a replica even when I
 think it shouldn't. legacyCloud is set to false.

- I have two nodes A and B.
- CREATE collection 'test' with 1 shard, 1 replica. It gets created
on node A.
- manually copy test_shard1_replica1 folder to node B's solr home.
- Bring down node A.
- Restart node B. The shard comes up registering itself on node B and
becomes 'active'


- I have two nodes A and B ( this is down currently ).
- CREATE collection 'test' with 1 shard, 1 replica. It gets created
on node A.
- manually copy test_shard1_replica1 folder to node B's solr home.
- Start node B. The shard comes up registering itself on node B and
stays 'down'. The reason being the leader is still node A but clusterstate
has base_url of Node B. This is the error in the logs - Error getting
leader from zk for shard shard1

 In legacyCloud=false you get a 'no_such_replica in clusterstate' error if
 the 'coreNodeName' is not present in clusterstate.

 But in my two observations the 'coreNodeName' were the same, hence I ran
 into that scenario.

 Should we make the check more stringent to not allow this to happen?
 Check against base_url also?

 Also should we be making legacyCloud=false as default in 5.x?
 --


 Regards,
 Varun Thacker
 http://www.vthacker.in/




 --
 -
 Noble Paul




-- 


Regards,
Varun Thacker
http://www.vthacker.in/


[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358693#comment-14358693
 ] 

Yonik Seeley commented on SOLR-6787:


Even with a different handler though... forwarding to another request handler 
was never a first class supported operation before.  The whole thing feels a 
bit squirrelly.  For example: the new request object that is being created... 
it's closed automatically when the method returns, *but* the response object is 
still there and will presumably either be used/looked at by the caller, or used 
to ultimately write the response.  That breaks a previously held invariant - 
don't use a response object after the request has been closed (it may contain 
state tied to the request object).

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors
 APIs need to be created to manage the content of that collection
 {code}
 #create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 #The config for this collection is automatically created. numShards for this collection is hardcoded to 1
 #create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob
 # GET on the end point with jar name would give details of various versions of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with jar name and version with wt=filestream to get the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar
 # GET on the end point with jar name and wt=filestream to get the latest version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the 
 system every time a new jar is posted for the name. You must use the standard 
 delete commands to delete the old entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6357) FSIndexOutput.toString is unhelpful

2015-03-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6357.

Resolution: Duplicate

OK this is a dup of LUCENE-6084, where we added a required String 
resourceDescription arg to IndexOutput ctor.

 FSIndexOutput.toString is unhelpful
 ---

 Key: LUCENE-6357
 URL: https://issues.apache.org/jira/browse/LUCENE-6357
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1


 It should include the path name and file name...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Newbie question

2015-03-12 Thread Alexandre Rafalovitch
On 12 March 2015 at 21:43, Kitty kittyontra...@hotmail.com wrote:
 I did some further digging in the long test output log. There are no error
 messages, actually it says:

 The following error occurred while executing this line:

 And then nothing following that. Empty line.

Don't know about the actual suites that are expected to pass or fail,
but the strange/missing error messages might be clarified by checking
what command/instruction actually ran.

Try -verbose or -debug flag:
https://ant.apache.org/problems.html

Regards,
   Alex.

Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7193) Concatenate words from token stream

2015-03-12 Thread abhishek bafna (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359939#comment-14359939
 ] 

abhishek bafna commented on SOLR-7193:
--

[~jmtd890917] Did you get the point I tried to convey? Can you please provide 
your further comments on the patch?

 Concatenate words from token stream
 ---

 Key: SOLR-7193
 URL: https://issues.apache.org/jira/browse/SOLR-7193
 Project: Solr
  Issue Type: New Feature
  Components: Schema and Analysis
Reporter: abhishek bafna
 Attachments: concatenate_words.patch


 User-entered data often lacks proper spacing between words, and word spelling 
 and format also vary across data such as business names, addresses, etc. 
 After tokenizing the data, we might perform pattern replacement, stop word 
 filtering, etc. Later we want to concatenate all the tokens and generate 
 n-gram tokens for indexing business names and performing fuzzy matching.
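
 A minimal sketch of the kind of filter this describes (an illustration only, 
 not the attached concatenate_words.patch): it drains the upstream token stream 
 and emits a single concatenated token, which a following n-gram filter could 
 then consume.
 {code}
 import java.io.IOException;
 import org.apache.lucene.analysis.TokenFilter;
 import org.apache.lucene.analysis.TokenStream;
 import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

 public final class ConcatenateAllFilter extends TokenFilter {
   private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
   private boolean done = false;

   public ConcatenateAllFilter(TokenStream input) {
     super(input);
   }

   @Override
   public boolean incrementToken() throws IOException {
     if (done) return false;
     StringBuilder sb = new StringBuilder();
     while (input.incrementToken()) {          // drain all upstream tokens
       sb.append(termAtt.buffer(), 0, termAtt.length());
     }
     done = true;
     termAtt.setEmpty().append(sb);            // emit one concatenated token
     return sb.length() > 0;
   }

   @Override
   public void reset() throws IOException {
     super.reset();
     done = false;
   }
 }
 {code}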



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2366) Facet Range Gaps

2015-03-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359890#comment-14359890
 ] 

Erick Erickson commented on SOLR-2366:
--

I don't think this ever got committed, but the ref guide and Wiki page 
document this feature! We need to either commit this or change the docs. Or 
I'm missing something.

[~tomasflobbe] do you have any idea what the status is here?

 Facet Range Gaps
 

 Key: SOLR-2366
 URL: https://issues.apache.org/jira/browse/SOLR-2366
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.9, Trunk

 Attachments: SOLR-2366.patch, SOLR-2366.patch, SOLR-2366.patch


 There really is no reason why the range gap for date and numeric faceting 
 needs to be evenly spaced.  For instance, if and when SOLR-1581 is completed 
 and one were doing spatial distance calculations, one could facet by function 
 into 3 different sized buckets: walking distance (0-5KM), driving distance 
 (5KM-150KM) and everything else (150KM+), for instance.  We should be able to 
 quantize the results into arbitrarily sized buckets.
 (Original syntax proposal removed, see discussion for concrete syntax)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1666186 - in /lucene/dev/branches/branch_5x: ./ solr/ solr/core/ solr/core/src/java/org/apache/solr/parser/ solr/core/src/test/org/apache/solr/search/

2015-03-12 Thread Ramkumar R. Aiyengar
That explains a lot. Thanks Mike!
On 13 Mar 2015 00:46, Michael McCandless luc...@mikemccandless.com
wrote:

 On Thu, Mar 12, 2015 at 5:38 PM, Yonik Seeley ysee...@gmail.com wrote:
  On Thu, Mar 12, 2015 at 8:04 PM, Ramkumar R. Aiyengar
  andyetitmo...@gmail.com wrote:
  This actually brings me to a question I have had for a while. Why do we
  check in auto generated code? Shouldn't the build system run javacc as a
  prereq to compiling instead?
 
  Historically, the compilation wasn't automated (you had to find +
  install JavaCC yourself, run it yourself, etc).
  I don't know the current reasons however.

 Some discussion about this here:
 https://issues.apache.org/jira/browse/LUCENE-4335

 Mike McCandless

 http://blog.mikemccandless.com

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_76) - Build # 4435 - Still Failing!

2015-03-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4435/
Java: 32bit/jdk1.7.0_76 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=11219, name=collection1, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=11219, name=collection1, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:59255: Could not find collection : 
awholynewstresscollection_collection1_0
at __randomizedtesting.SeedInfo.seed([ED1CECE7364DBE4]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:584)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:236)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:228)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:370)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1067)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:892)




Build Log:
[...truncated 10065 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2 Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.CollectionsAPIDistributedZkTest
 ED1CECE7364DBE4-001\init-core-data-001
   [junit4]   2 3260180 T10841 oas.SolrTestCaseJ4.buildSSLConfig Randomized 
ssl (false) and clientAuth (false)
   [junit4]   2 3260181 T10841 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2 3260189 T10841 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1 client port:0.0.0.0/0.0.0.0:0
   [junit4]   2 3260191 T10842 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 3260290 T10841 oasc.ZkTestServer.run start zk server on 
port:59242
   [junit4]   2 3260290 T10841 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 3260293 T10841 oascc.ConnectionManager.waitForConnected 
Waiting for client to connect to ZooKeeper
   [junit4]   2 3260335 T10849 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@12b9e84 name:ZooKeeperConnection 
Watcher:127.0.0.1:59242 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2 3260335 T10841 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 3260337 T10841 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 3260337 T10841 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 3260342 T10841 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 3260345 T10841 oascc.ConnectionManager.waitForConnected 
Waiting for client to connect to ZooKeeper
   [junit4]   2 3260347 T10852 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@1a7a689 name:ZooKeeperConnection 
Watcher:127.0.0.1:59242/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2 3260348 T10841 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 3260348 T10841 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 3260348 T10841 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2 3260352 T10841 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2 3260357 T10841 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2 3260360 T10841 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2 3260368 T10841 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 3260371 T10841 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2 3260377 T10841 oasc.AbstractZkTestCase.putConfig 

Re: svn commit: r1666186 - in /lucene/dev/branches/branch_5x: ./ solr/ solr/core/ solr/core/src/java/org/apache/solr/parser/ solr/core/src/test/org/apache/solr/search/

2015-03-12 Thread Alan Woodward
Hey Yonik,

I think you've inadvertently added a couple of deprecated methods back in here?

 
 Modified: 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/CharStream.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/CharStream.java?rev=1666186r1=1666185r2=1666186view=diff
 ==
 --- 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/CharStream.java
  (original)
 +++ 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/CharStream.java
  Thu Mar 12 13:33:32 2015
 @@ -27,6 +27,22 @@ interface CharStream {
*/
   char readChar() throws java.io.IOException;
 
 +  @Deprecated
 +  /**
 +   * Returns the column position of the character last read.
 +   * @deprecated
 +   * @see #getEndColumn
 +   */
 +  int getColumn();
 +
 +  @Deprecated
 +  /**
 +   * Returns the line number of the character last read.
 +   * @deprecated
 +   * @see #getEndLine
 +   */
 +  int getLine();
 +
   /**
* Returns the column number of the last character for current token (being
* matched after the last call to BeginTOken).
 @@ -96,4 +112,4 @@ interface CharStream {
   void Done();
 
 }
 -/* JavaCC - OriginalChecksum=a81c9280a3ec4578458c607a9d95acb4 (do not edit 
 this line) */
 +/* JavaCC - OriginalChecksum=48b70e7c01825c8f301c7362bf1028d8 (do not edit 
 this line) */
 
 Modified: 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/FastCharStream.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/FastCharStream.java?rev=1666186r1=1666185r2=1666186view=diff
 ==
 --- 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/FastCharStream.java
  (original)
 +++ 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/FastCharStream.java
  Thu Mar 12 13:33:32 2015
 @@ -108,6 +108,15 @@ public final class FastCharStream implem
 }
   }
 
 +  @Override
 +  public final int getColumn() {
 +return bufferStart + bufferPosition;
 +  }
 +  @Override
 +  public final int getLine() {
 +return 1;
 +  }
 +  @Override
   public final int getEndColumn() {
 return bufferStart + bufferPosition;
   }
 
 Modified: 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/ParseException.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/ParseException.java?rev=1666186r1=1666185r2=1666186view=diff
 ==
 --- 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/ParseException.java
  (original)
 +++ 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/ParseException.java
  Thu Mar 12 13:33:32 2015
 @@ -184,4 +184,4 @@ public class ParseException extends Exce
}
 
 }
 -/* JavaCC - OriginalChecksum=d7aa203ee92ebbb23011a23311e60537 (do not edit 
 this line) */
 +/* JavaCC - OriginalChecksum=25e1ae9ad9614c4ce31c4b83f8a7397b (do not edit 
 this line) */
 
 Modified: 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/QueryParser.java
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/QueryParser.java?rev=1666186r1=1666185r2=1666186view=diff
 ==
 --- 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/QueryParser.java
  (original)
 +++ 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/QueryParser.java
  Thu Mar 12 13:33:32 2015
 @@ -100,7 +100,7 @@ public class QueryParser extends SolrQue
   }
 
   final public Query Query(String field) throws ParseException, SyntaxError {
  -  List<BooleanClause> clauses = new ArrayList();
  +  List<BooleanClause> clauses = new ArrayList<BooleanClause>();
   Query q, firstQuery=null;
   int conj, mods;
 mods = Modifiers();
 @@ -581,7 +581,7 @@ public class QueryParser extends SolrQue
   return (jj_ntk = jj_nt.kind);
   }
 
  -  private java.util.List<int[]> jj_expentries = new java.util.ArrayList();
  +  private java.util.List<int[]> jj_expentries = new 
  java.util.ArrayList<int[]>();
   private int[] jj_expentry;
   private int jj_kind = -1;
   private int[] jj_lasttokens = new int[100];
 
 Modified: 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/QueryParser.jj
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/QueryParser.jj?rev=1666186r1=1666185r2=1666186view=diff
 ==
 --- 
 lucene/dev/branches/branch_5x/solr/core/src/java/org/apache/solr/parser/QueryParser.jj
  

Re: Functionality of legacyCloud=false

2015-03-12 Thread Noble Paul
bq. Or they're testing out restoring backups

This is in the context of the ZK-as-truth functionality. I guess, in that case,
you expect those nodes to work exactly like the other replicas.

On Thu, Mar 12, 2015 at 8:36 PM, Erick Erickson erickerick...@gmail.com
wrote:

 bq: how is copying a core dir from one node to another a normal use case ?

  A user is trying to move a replica from one place to another. While I
  agree they should use ADDREPLICA for the new one then DELETEREPLICA on
  the old replica..

 Or they're testing out restoring backups.

 I've had clients do both of these things.

 On Thu, Mar 12, 2015 at 7:00 AM, Noble Paul noble.p...@gmail.com wrote:
  how is copying a core dir from one node to another a normal use case ?
 
  On Mar 12, 2015 7:22 PM, Varun Thacker varunthacker1...@gmail.com
 wrote:
 
  Hi Noble,
 
  Well I was just playing around to see if there were scenarios where
  different coreNodeNames could register themselves even if they weren't
  creating using the Collections API.
 
  So I was doing it intentionally here to see what happens. But I can
  totally imagine users running into the second scenario where an old node
  comes back up and ends up messing up that replica in the collection
  accidentally.
 
  On Thu, Mar 12, 2015 at 7:01 PM, Noble Paul noble.p...@gmail.com
 wrote:
 
  It is totally possible.
  The point is , it was not a security feature and it is extremely easy
 to
  spoof it.
  The question is , was it a normal scenario or was it an effort to prove
  that the system was not foolproof
 
  --Noble
 
  On Thu, Mar 12, 2015 at 6:23 PM, Varun Thacker
  varunthacker1...@gmail.com wrote:
 
  Two scenarios I observed where we can bring up a replica even when I
  think it shouldn't. legacyCloud is set to false.
 
  I have two nodes A and B.
  CREATE collection 'test' with 1 shard, 1 replica. It gets created on
  node A.
  manually copy test_shard1_replica1 folder to node B's solr home.
  Bring down node A.
  Restart node B. The shard comes up registering itself on node B and
  becomes 'active'
 
  I have two nodes A and B ( this is down currently ).
  CREATE collection 'test' with 1 shard, 1 replica. It gets created on
  node A.
  manually copy test_shard1_replica1 folder to node B's solr home.
  Start node B. The shard comes up registering itself on node B and
 stays
  'down'. The reason being the leader is still node A but clusterstate
 has
  base_url of Node B. This is the error in the logs - Error getting
 leader
  from zk for shard shard1
 
  In legacyCloud=false you get a 'no_such_replica in clusterstate' error
  if the 'coreNodeName' is not present in clusterstate.
 
  But in my two observations the 'coreNodeName' were the same, hence I
 ran
  into that scenario.
 
  Should we make the check more stringent to not allow this to happen?
  Check against base_url also?
 
  Also should we be making legacyCloud=false as default in 5.x?
  --
 
 
  Regards,
  Varun Thacker
  http://www.vthacker.in/
 
 
 
 
  --
  -
  Noble Paul
 
 
 
 
  --
 
 
  Regards,
  Varun Thacker
  http://www.vthacker.in/

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




-- 
-
Noble Paul


Re: svn commit: r1666186 - in /lucene/dev/branches/branch_5x: ./ solr/ solr/core/ solr/core/src/java/org/apache/solr/parser/ solr/core/src/test/org/apache/solr/search/

2015-03-12 Thread Yonik Seeley
Looks like the original removal of these deprecated methods happened
in SOLR-6976, but it probably should not have, given that this is a
generated file?
-Yonik


On Thu, Mar 12, 2015 at 11:19 AM, Yonik Seeley ysee...@gmail.com wrote:
 On Thu, Mar 12, 2015 at 11:08 AM, Alan Woodward a...@flax.co.uk wrote:
 Hey Yonik,

 I think you've inadvertently added a couple of deprecated methods back in 
 here?

 Hmmm, but CharStream.java is generated by JavaCC...
 When I got a compile error in FastCharStream.java, I simply copied the
 lucene version.

 I built it using the following method:
 $ cd solr/core
 $ ant javacc

 -Yonik

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Functionality of legacyCloud=false

2015-03-12 Thread Erick Erickson
bq: how is copying a core dir from one node to another a normal use case ?

A user is trying to move a replica from one place to another. While I
agree they should use ADDREPLICA for the new one then DELETEREPLICA on
the old replica..

Or they're testing out restoring backups.

I've had clients do both of these things.

On Thu, Mar 12, 2015 at 7:00 AM, Noble Paul noble.p...@gmail.com wrote:
 how is copying a core dir from one node to another a normal use case ?

 On Mar 12, 2015 7:22 PM, Varun Thacker varunthacker1...@gmail.com wrote:

 Hi Noble,

 Well I was just playing around to see if there were scenarios where
 different coreNodeNames could register themselves even if they weren't
 creating using the Collections API.

 So I was doing it intentionally here to see what happens. But I can
 totally imagine users running into the second scenario where an old node
 comes back up and ends up messing up that replica in the collection
 accidentally.

 On Thu, Mar 12, 2015 at 7:01 PM, Noble Paul noble.p...@gmail.com wrote:

 It is totally possible.
 The point is , it was not a security feature and it is extremely easy to
 spoof it.
 The question is , was it a normal scenario or was it an effort to prove
 that the system was not foolproof

 --Noble

 On Thu, Mar 12, 2015 at 6:23 PM, Varun Thacker
 varunthacker1...@gmail.com wrote:

 Two scenarios I observed where we can bring up a replica even when I
 think it shouldn't. legacyCloud is set to false.

 I have two nodes A and B.
 CREATE collection 'test' with 1 shard, 1 replica. It gets created on
 node A.
 manually copy test_shard1_replica1 folder to node B's solr home.
 Bring down node A.
 Restart node B. The shard comes up registering itself on node B and
 becomes 'active'

 I have two nodes A and B ( this is down currently ).
 CREATE collection 'test' with 1 shard, 1 replica. It gets created on
 node A.
 manually copy test_shard1_replica1 folder to node B's solr home.
 Start node B. The shard comes up registering itself on node B and stays
 'down'. The reason being the leader is still node A but clusterstate has
 base_url of Node B. This is the error in the logs - Error getting leader
 from zk for shard shard1

 In legacyCloud=false you get a 'no_such_replica in clusterstate' error
 if the 'coreNodeName' is not present in clusterstate.

 But in my two observations the 'coreNodeName' were the same, hence I ran
 into that scenario.

 Should we make the check more stringent to not allow this to happen?
 Check against base_url also?

 Also should we be making legacyCloud=false as default in 5.x?
 --


 Regards,
 Varun Thacker
 http://www.vthacker.in/




 --
 -
 Noble Paul




 --


 Regards,
 Varun Thacker
 http://www.vthacker.in/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_76) - Build # 4434 - Still Failing!

2015-03-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4434/
Java: 32bit/jdk1.7.0_76 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:56537/repfacttest_c8n_1x3_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:56537/repfacttest_c8n_1x3_shard1_replica1
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:625)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:283)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: svn commit: r1666186 - in /lucene/dev/branches/branch_5x: ./ solr/ solr/core/ solr/core/src/java/org/apache/solr/parser/ solr/core/src/test/org/apache/solr/search/

2015-03-12 Thread Yonik Seeley
On Thu, Mar 12, 2015 at 11:08 AM, Alan Woodward a...@flax.co.uk wrote:
 Hey Yonik,

 I think you've inadvertently added a couple of deprecated methods back in 
 here?

Hmmm, but CharStream.java is generated by JavaCC...
When I got a compile error in FastCharStream.java, I simply copied the
Lucene version.

I built it using the following method:
$ cd solr/core
$ ant javacc

-Yonik

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 784 - Still Failing

2015-03-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/784/

8 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
shard3 is not consistent.  Got 110 from 
http://127.0.0.1:59031/collection1lastClient and got 47 from 
http://127.0.0.1:59057/collection1

Stack Trace:
java.lang.AssertionError: shard3 is not consistent.  Got 110 from 
http://127.0.0.1:59031/collection1lastClient and got 47 from 
http://127.0.0.1:59057/collection1
at 
__randomizedtesting.SeedInfo.seed([6E6801B0270E8FC3:E63C3E6A89F2E23B]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1286)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1265)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-6070) Cannot use multiple highlighting components in a single solrconfig

2015-03-12 Thread Luc Vanlerberghe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358957#comment-14358957
 ] 

Luc Vanlerberghe commented on SOLR-6070:


I also tried using different highlighters for two different requestHandlers.

It turns out that as soon as a HighlightComponent is defined in 
solrconfig.xml, it is automatically also used as the default HighlightComponent,
i.e. if you define a highlight component as a searchComponent with the name 
'customHighlighter', it will automatically be used for the name 'highlight' as 
well.
A workaround is to define the default highlighter explicitly *after* your 
definition, using the default 'highlight' name, like this:
{code}
  <searchComponent class="solr.HighlightComponent" name="highlight"/>
{code}

The culprit is indeed the loadSearchComponents method in SolrCore.java, which 
special-cases components that are instanceof HighlightComponent to 
automatically register them under the 'highlight' name as well.

{code}
for (String name : searchComponents.keySet()) {
  if (searchComponents.isLoaded(name) && searchComponents.get(name) 
instanceof HighlightComponent) {
if (!HighlightComponent.COMPONENT_NAME.equals(name)) {
  searchComponents.put(HighlightComponent.COMPONENT_NAME, 
searchComponents.getRegistry().get(name));
}
break;
  }
}
{code}

This code was introduced as part of SOLR-1696, probably to maintain backwards 
compatibility, and still persists today (see the commits for SOLR-7073, where it 
was updated but not removed).

I would be strongly in favor of removing this special case for 
HighlightComponent (perhaps depending on the luceneMatchVersion of the 
solrconfig.xml file).
At a minimum, it should be mentioned in the docs for solrconfig.xml somewhere.
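
For what it's worth, here is a minimal, hypothetical sketch of one softer
variant of that change: only reuse a custom component for the default
'highlight' name when nothing is explicitly configured under that name. This is
not the actual SolrCore code, and a plain Map stands in for the component
registry.
{code}
import java.util.Map;

final class HighlightAutoRegistration {
  static final String DEFAULT_NAME = "highlight"; // HighlightComponent.COMPONENT_NAME

  // Register the component under its own name; fall back to serving the default
  // "highlight" name only when no component was explicitly configured there.
  static void register(Map<String, Object> components, String name, Object component) {
    components.put(name, component);
    if (!DEFAULT_NAME.equals(name) && !components.containsKey(DEFAULT_NAME)) {
      components.put(DEFAULT_NAME, component);
    }
  }
}
{code}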


 Cannot use multiple highlighting components in a single solrconfig
 --

 Key: SOLR-6070
 URL: https://issues.apache.org/jira/browse/SOLR-6070
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.7.2, 4.8
Reporter: Elaine Cario
  Labels: highlighting

 I'm trying to use both the PostingsHighlighter and the FastVectorHighlighter 
 in the same solrconfig (selection driven by different request handlers), but 
 once I define 2 search components in the config, it always picks the Postings 
 Highlighter (even if I never reference it in any request handler).
 I think the culprit is some specific code in SolrCore.loadSearchComponents(), 
 which overwrites the highlighting component with the contents of the 
 postingshighlight component - so the components map has 2 entries, but they 
 both point to the same highlighting class (the PostingsHighlighter).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Functionality of legacyCloud=false

2015-03-12 Thread Varun Thacker
bq. how is copying a core dir from one node to another a normal use case ?

That was just for testing what happens.

Okay, here is a real-world scenario:

   - I create a collection.
   - The collection fails to create since it had a bad config. The empty
   folders for the replicas get left behind.
   - Now I fix the config and issue a create again. The replicas get
   created, but on different nodes of my cluster.
   - In the future, if I bounce the nodes which had the leftover folders,
   they end up interfering with the healthy replicas for that collection.

So apart from checking coreNodeName, we should also check against base_url
and make sure they are the same when legacyCloud=false (rough sketch below). I
will create a JIRA for it.
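
For illustration only, here is a rough, hypothetical sketch of the stricter
check being discussed. It is written against a plain map of replica properties
rather than the real ZkController/clusterstate APIs, and the names
(ReplicaRegistrationCheck, mayRegister) are made up:

import java.util.Map;

final class ReplicaRegistrationCheck {
  // With legacyCloud=false, only let a core register itself if its coreNodeName
  // exists in clusterstate AND the registering node also owns the replica's base_url.
  static boolean mayRegister(Map<String, Map<String, String>> replicasInClusterState,
                             String coreNodeName, String baseUrl) {
    Map<String, String> props = replicasInClusterState.get(coreNodeName);
    if (props == null) {
      return false; // today's "no_such_replica in clusterstate" case
    }
    return baseUrl.equals(props.get("base_url")); // the proposed additional check
  }
}

The first branch corresponds to today's behaviour; the last line is the extra
base_url comparison that would reject the stray copied core dirs above.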

On Thu, Mar 12, 2015 at 9:52 PM, Noble Paul noble.p...@gmail.com wrote:

 bq.Or they're testing out restoring backups

 This is in the context of ZK as truth functionality. I guess , in that
 case you expect those nodes to work exactly as the other replica

 On Thu, Mar 12, 2015 at 8:36 PM, Erick Erickson erickerick...@gmail.com
 wrote:

 bq: how is copying a core dir from one node to another a normal use case ?

 A user is trying to move a replica from one place to another. While I
 agree they should use ADDREPLICA for the new one then DELTERPLICA on
 the old replica..

 Or they're testing out restoring backups.

 I've had clients do both of these things.

 On Thu, Mar 12, 2015 at 7:00 AM, Noble Paul noble.p...@gmail.com wrote:
  how is copying a core dir from one node to another a normal use case ?
 
  On Mar 12, 2015 7:22 PM, Varun Thacker varunthacker1...@gmail.com
 wrote:
 
  Hi Noble,
 
  Well I was just playing around to see if there were scenarios where
  different coreNodeNames could register themselves even if they weren't
  creating using the Collections API.
 
  So I was doing it intentionally here to see what happens. But I can
  totally imagine users running into the second scenario where an old
 node
  comes back up and ends up messing up that replica in the collection
  accidentally.
 
  On Thu, Mar 12, 2015 at 7:01 PM, Noble Paul noble.p...@gmail.com
 wrote:
 
  It is totally possible.
  The point is , it was not a security feature and it is extremely easy
 to
  spoof it.
  The question is , was it a normal scenario or was it an effort to
 prove
  that the system was not foolproof
 
  --Noble
 
  On Thu, Mar 12, 2015 at 6:23 PM, Varun Thacker
  varunthacker1...@gmail.com wrote:
 
  Two scenarios I observed where we can bring up a replica even when I
  think it shouldn't. legacyCloud is set to false.
 
  I have two nodes A and B.
  CREATE collection 'test' with 1 shard, 1 replica. It gets created on
  node A.
  manually copy test_shard1_replica1 folder to node B's solr home.
  Bring down node A.
  Restart node B. The shard comes up registering itself on node B and
  becomes 'active'
 
  I have two nodes A and B ( this is down currently ).
  CREATE collection 'test' with 1 shard, 1 replica. It gets created on
  node A.
  manually copy test_shard1_replica1 folder to node B's solr home.
  Start node B. The shard comes up registering itself on node B and
 stays
  'down'. The reason being the leader is still node A but clusterstate
 has
  base_url of Node B. This is the error in the logs - Error getting
 leader
  from zk for shard shard1
 
  In legacyCloud=false you get a 'no_such_replica in clusterstate'
 error
  if the 'coreNodeName' is not present in clusterstate.
 
  But in my two observations the 'coreNodeName' were the same, hence I
 ran
  into that scenario.
 
  Should we make the check more stringent to not allow this to happen?
  Check against base_url also?
 
  Also should we be making legacyCloud=false as default in 5.x?
  --
 
 
  Regards,
  Varun Thacker
  http://www.vthacker.in/
 
 
 
 
  --
  -
  Noble Paul
 
 
 
 
  --
 
 
  Regards,
  Varun Thacker
  http://www.vthacker.in/

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




 --
 -
 Noble Paul




-- 


Regards,
Varun Thacker
http://www.vthacker.in/


[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359029#comment-14359029
 ] 

Yonik Seeley commented on SOLR-6787:


bq. what is the solution? 

I'm pointing out the possible issues and why my gut feeling was to not encourage 
the use of this API.  The answer is to develop a solid API if we want this 
feature.

Another big issue off the top of my head (that would be much harder to catch 
via testing):
A different searcher may be used by the sub-request than is used by the parent 
request.  That's going to cause all sorts of problems.
Also, less likely, the schema can change and be different from parent to 
sub-request.
There's also the question of lost context, and anything that may use that (I 
see you use that to only do useParams once, for example. Is that OK to do 
again?)

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors
 APIs need to be created to manage the content of that collection
 {code}
 #create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 #The config for this collection is automatically created. numShards for this 
 collection is hardcoded to 1
 #create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 #  GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob 
 # GET on the end point with jar name would give  details of various versions 
 of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with jar name and version with a wt=filestream to get 
 the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar
 # GET on the end point with jar name and wt=filestream to get the latest 
 version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the 
 system every time a new jar is posted for the name. You must use the standard 
 delete commands to delete the old entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2759 - Still Failing

2015-03-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2759/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:56926/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:56926/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([309317A7F7419469:B8C7287D59BDF991]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:625)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7173) Fix ReplicationFactorTest on Windows

2015-03-12 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359084#comment-14359084
 ] 

Timothy Potter commented on SOLR-7173:
--

thanks [~ichattopadhyaya]!

 Fix ReplicationFactorTest on Windows
 

 Key: SOLR-7173
 URL: https://issues.apache.org/jira/browse/SOLR-7173
 Project: Solr
  Issue Type: Bug
Reporter: Ishan Chattopadhyaya
Assignee: Timothy Potter
 Fix For: 5.1

 Attachments: SOLR-7173.patch, SOLR-7173.patch, SOLR-7173.patch


 The ReplicationFactorTest fails on the Windows build with 
 NoHttpResponseException, as seen here: 
 http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4502/testReport/junit/org.apache.solr.cloud/ReplicationFactorTest/test/
 Adding a retry logic similar to HttpPartitionTest's doSend() method makes the 
 test pass on Windows.
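
For readers following along, a rough, hypothetical sketch of the kind of retry
loop the description refers to (this is not the actual doSend()/ReplicationFactorTest code):
{code}
import java.util.concurrent.Callable;

final class RetryingSender {
  // Re-send a request a few times when the server drops the connection
  // (e.g. NoHttpResponseException on Windows), backing off between attempts.
  static <T> T sendWithRetries(Callable<T> send, int maxAttempts) throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return send.call();
      } catch (Exception e) {
        last = e;
        if (attempt < maxAttempts) {
          Thread.sleep(2000L * attempt); // simple linear back-off before retrying
        }
      }
    }
    throw last;
  }
}
{code}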



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7173) Fix ReplicationFactorTest on Windows

2015-03-12 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-7173:


Assignee: Timothy Potter

 Fix ReplicationFactorTest on Windows
 

 Key: SOLR-7173
 URL: https://issues.apache.org/jira/browse/SOLR-7173
 Project: Solr
  Issue Type: Bug
Reporter: Ishan Chattopadhyaya
Assignee: Timothy Potter
 Fix For: 5.1

 Attachments: SOLR-7173.patch, SOLR-7173.patch, SOLR-7173.patch


 The ReplicationFactorTest fails on the Windows build with 
 NoHttpResponseException, as seen here: 
 http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4502/testReport/junit/org.apache.solr.cloud/ReplicationFactorTest/test/
 Adding a retry logic similar to HttpPartitionTest's doSend() method makes the 
 test pass on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7234) Error adding fields : error message This IndexSchema is not mutable with a classicSchemaIndexFactory

2015-03-12 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey closed SOLR-7234.
--
Assignee: (was: Steve Rowe)

I believe that the problem here is that the user changed the schemaFactory from 
the managed version to the classic version, but did not remove the 
AddSchemaFieldsUpdateProcessorFactory section from the 
updateRequestProcessorChain config.

 Error adding fields : error message This IndexSchema is not mutable with a 
 classicSchemaIndexFactory
 

 Key: SOLR-7234
 URL: https://issues.apache.org/jira/browse/SOLR-7234
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 5.0
Reporter: mastermind1981
 Attachments: schema.xml, solrconfig.xml


 Hi,
 I have installed Solr 5 and created a core. By default it was created with a 
 managed schema; I modified solrconfig.xml to use my own schema.xml,
 but when I try to index new docs via the post tool or the UI I get an error 
 telling me that my index schema is not mutable.
 ErrorMessage: This IndexSchema is not mutable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7239) StatsComponent perf improvement for min, max, and situations where all stats disabled

2015-03-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-7239.

   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

 StatsComponent perf improvement for min, max, and situations where all stats 
 disabled
 -

 Key: SOLR-7239
 URL: https://issues.apache.org/jira/browse/SOLR-7239
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: Trunk, 5.1

 Attachments: SOLR-7324.patch


 as mentioned in SOLR-6349, when I started doing perf testing of requesting 
 individual stats, I noticed that min (and it turns out max) were slower to 
 compute than more complex stats like sum & mean.
 While investigating, I realized that we can also optimize the case where a 
 stats.field param is specified but no stats are computed, for example: 
 stats.field={!min=$doMin}fieldname&doMin=false



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7239) StatsComponent perf improvement for min, max, and situations where all stats disabled

2015-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359353#comment-14359353
 ] 

ASF subversion and git services commented on SOLR-7239:
---

Commit 1666294 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1666294 ]

SOLR-7239: improved performance of min & max in StatsComponent, as well as 
situations where local params disable all stats

 StatsComponent perf improvement for min, max, and situations where all stats 
 disabled
 -

 Key: SOLR-7239
 URL: https://issues.apache.org/jira/browse/SOLR-7239
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-7324.patch


 as mentioned in SOLR-6349, when I started doing perf testing of requesting 
 individual stats, I noticed that min (and it turns out max) were slower to 
 compute than more complex stats like sum & mean.
 While investigating, I realized that we can also optimize the case where a 
 stats.field param is specified but no stats are computed, for example: 
 stats.field={!min=$doMin}fieldname&doMin=false



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7173) Fix ReplicationFactorTest on Windows

2015-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359331#comment-14359331
 ] 

ASF subversion and git services commented on SOLR-7173:
---

Commit 1666289 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1666289 ]

SOLR-7173: Fix ReplicationFactorTest on Windows

 Fix ReplicationFactorTest on Windows
 

 Key: SOLR-7173
 URL: https://issues.apache.org/jira/browse/SOLR-7173
 Project: Solr
  Issue Type: Bug
Reporter: Ishan Chattopadhyaya
Assignee: Timothy Potter
 Fix For: 5.1

 Attachments: SOLR-7173.patch, SOLR-7173.patch, SOLR-7173.patch


 The ReplicationFactorTest fails on the Windows build with 
 NoHttpResponseException, as seen here: 
 http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4502/testReport/junit/org.apache.solr.cloud/ReplicationFactorTest/test/
 Adding a retry logic similar to HttpPartitionTest's doSend() method makes the 
 test pass on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7173) Fix ReplicationFactorTest on Windows

2015-03-12 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-7173.
--
Resolution: Fixed

 Fix ReplicationFactorTest on Windows
 

 Key: SOLR-7173
 URL: https://issues.apache.org/jira/browse/SOLR-7173
 Project: Solr
  Issue Type: Bug
Reporter: Ishan Chattopadhyaya
Assignee: Timothy Potter
 Fix For: 5.1

 Attachments: SOLR-7173.patch, SOLR-7173.patch, SOLR-7173.patch


 The ReplicationFactorTest fails on the Windows build with 
 NoHttpResponseException, as seen here: 
 http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4502/testReport/junit/org.apache.solr.cloud/ReplicationFactorTest/test/
 Adding a retry logic similar to HttpPartitionTest's doSend() method makes the 
 test pass on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2760 - Still Failing

2015-03-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2760/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([7E44D0BBA53E8E60:F610EF610BC2E398]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (SOLR-7239) StatsComponent perf improvement for min, max, and situations where all stats disabled

2015-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359436#comment-14359436
 ] 

ASF subversion and git services commented on SOLR-7239:
---

Commit 1666310 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1666310 ]

SOLR-7239: improved performance of min & max in StatsComponent, as well as 
situations where local params disable all stats (merge r1666294)

 StatsComponent perf improvement for min, max, and situations where all stats 
 disabled
 -

 Key: SOLR-7239
 URL: https://issues.apache.org/jira/browse/SOLR-7239
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-7324.patch


 as mentioned in SOLR-6349, when I started doing perf testing of requesting 
 individual stats, I noticed that min (and it turns out max) were slower to 
 compute than more complex stats like sum & mean.
 While investigating, I realized that we can also optimize the case where a 
 stats.field param is specified but no stats are computed, for example: 
 stats.field={!min=$doMin}fieldname&doMin=false



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7215) non reproducible Suite failures due to excessive sysout due to HDFS lease renewal WARN logs due to connection refused -- even if test doesn't use HDFS (ie: threads leaki

2015-03-12 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359227#comment-14359227
 ] 

Dawid Weiss commented on SOLR-7215:
---

Uncomment the ThreadLeakFilters, Hoss. Nothing should get through. 
SolrIgnoredThreadsFilter has way too many exclusions -- these have to be shut 
down and cleaned properly, not ignored (leading to errors like this one):
{code}
/*
 * IMPORTANT! IMPORTANT!
 * 
 * Any threads added here should have ABSOLUTELY NO SIDE EFFECTS
 * (should be stateless). This includes no references to cores or other
 * test-dependent information.
 */

String threadName = t.getName();
if (threadName.equals(TimerThread.THREAD_NAME)) {
  return true;
}

if (threadName.startsWith("facetExecutor-") || 
threadName.startsWith("cmdDistribExecutor-") ||
threadName.startsWith("httpShardExecutor-")) {
  return true;
}

// This is a bug in ZooKeeper where they call System.exit(11) when
// this thread receives an interrupt signal.
if (threadName.startsWith("SyncThread")) {
  return true;
}

// THESE ARE LIKELY BUGS - these threads should be closed!
if (threadName.startsWith("Overseer-") ||
threadName.startsWith("aliveCheckExecutor-") ||
threadName.startsWith("concurrentUpdateScheduler-")) {
  return true;
}

return false;
{code}

 non reproducible Suite failures due to excessive sysout due to HDFS lease 
 renewal WARN logs due to connection refused -- even if test doesn't use HDFS 
 (ie: threads leaking between tests)
 --

 Key: SOLR-7215
 URL: https://issues.apache.org/jira/browse/SOLR-7215
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: tests-report.txt_suite-failure-due-to-sysout.txt.zip


 On my local machine, I've noticed lately a lot of sporadic, non-reproducible 
 failures like these...
 {noformat}
   2 NOTE: reproduce with: ant test  -Dtestcase=ScriptEngineTest 
 -Dtests.seed=E254A7E69EC7212A -Dtests.slow=true -Dtests.locale=sv 
 -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true -Dtests.file.encoding=UTF-8
 [14:34:23.749] ERROR   0.00s J1 | ScriptEngineTest (suite) 
 Throwable #1: java.lang.AssertionError: The test or suite printed 10984 
 bytes to stdout and stderr, even though the limit was set to 8192 bytes. 
 Increase the limit with @Limit, ignore it completely with 
 @SuppressSysoutChecks or run with -Dtests.verbose=true
  at __randomizedtesting.SeedInfo.seed([E254A7E69EC7212A]:0)
  at 
 org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:212)
 {noformat}
 Invariably, looking at the logs of tests that fail for this reason, I see 
 multiple instances of these WARN msgs...
 {noformat}
   2 601361 T3064 oahh.LeaseRenewer.run WARN Failed to renew lease for 
 [DFSClient_NONMAPREDUCE_-253604438_2947] for 92 seconds.  Will retry shortly 
 ... java.net.ConnectException: Call From frisbee/127.0.1.1 to localhost:40618 
 failed on connection exception: java.net.ConnectException: Connection 
 refused; For more details see:  
 http://wiki.apache.org/hadoop/ConnectionRefused
   2  at sun.reflect.GeneratedConstructorAccessor268.newInstance(Unknown 
 Source)
   2  at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  ...
 {noformat}
 ...the full stack traces of these exceptions typically being 36 lines long 
 (not counting the suppressed ... 17 more at the end)
 doing some basic crunching of the tests-report.txt file from a recent run 
 of all solr-core tests (that caused the above failure) leads to some pretty 
 damn disconcerting numbers...
 {noformat}
 hossman@frisbee:~/tmp$ wc -l tests-report.txt_suite-failure-due-to-sysout.txt
 1049177 tests-report.txt_suite-failure-due-to-sysout.txt
 hossman@frisbee:~/tmp$ grep "Suite: org.apache.solr" 
 tests-report.txt_suite-failure-due-to-sysout.txt | wc -l
 465
 hossman@frisbee:~/tmp$ grep "LeaseRenewer.run WARN Failed to renew lease" 
 tests-report.txt_suite-failure-due-to-sysout.txt | grep 
 http://wiki.apache.org/hadoop/ConnectionRefused | wc -l
 1988
 hossman@frisbee:~/tmp$ calc
 1988 * 36
 71568
 {noformat}
 So running 465 Solr test suites, we got ~2 thousand of these "Failed to renew 
 lease" WARNings.  Of the ~1 million total lines of log messages from all 
 tests, ~70 thousand (~7%) are coming from these WARNing messages -- which can 
 evidently be safely ignored?
 Something seems broken here.
 Someone who understands this area of the code should either:
 * investigate & fix the code/test not to have these lease renewal problems
 * tweak our test logging configs to suppress 

[jira] [Comment Edited] (SOLR-7231) Allow DIH to create single geo-field from lat/lon metadata extracted via Tika

2015-03-12 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359214#comment-14359214
 ] 

Tim Allison edited comment on SOLR-7231 at 3/12/15 7:37 PM:


Patch attached.  Borrowed heavily from SpatialFilterTest.

This patch sets the {code}spatialMetadataField{code} in the firstInit(), on the 
assumption that users will always want to index the geo point in the same 
field.  Is this reasonable or should we move that choice to {code}nextRow{code} 
so that users can specify a different field for each doc?
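
For context, the merge step itself is tiny. A hypothetical sketch (not the
patch code) of turning Tika's two metadata values into the single lat,lon
string that a LatLonType or RPT field can index:
{code}
final class LatLonMerge {
  // Combine separately extracted latitude/longitude metadata into the
  // "lat,lon" form accepted by LatLonType (and parseable by RPT fields).
  static String toLatLon(String latitude, String longitude) {
    if (latitude == null || longitude == null) {
      return null; // nothing usable to index for this document
    }
    return latitude.trim() + "," + longitude.trim();
  }
}
{code}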


was (Author: talli...@mitre.org):
Patch attached.  Borrowed heavily from SpatialFilterTest.

 Allow DIH to create single geo-field from lat/lon metadata extracted via Tika
 -

 Key: SOLR-7231
 URL: https://issues.apache.org/jira/browse/SOLR-7231
 Project: Solr
  Issue Type: Improvement
Reporter: Tim Allison
Priority: Trivial
 Attachments: SOLR-7231.patch, test_jpeg.jpg


 Tika can extract latitude and longitude data from image (and other) files.  
 It would be handy to allow the user to choose to have DIH populate a single 
 geofield (LatLonType or RPT) from the two metadata values extracted by Tika.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7238) SolrQueryRequest.forward is buggy

2015-03-12 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-7238:
--

 Summary: SolrQueryRequest.forward is buggy
 Key: SOLR-7238
 URL: https://issues.apache.org/jira/browse/SOLR-7238
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Yonik Seeley


The current API/implementation has a number of potential issues, including 
encouraging the use of the response object after the locally created request 
object has been closed, and the fact that the child request has no actual 
relationship with the parent request, meaning that either the searcher or the 
schema objects could change.  A searcher changing would most commonly manifest 
as incorrect documents being returned or other random exceptions during the 
writing of the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7239) StatsComponent perf improvement for min, max, and situations where all stats disabled

2015-03-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7239:
---
Attachment: SOLR-7324.patch

This patch optimizes away the no-stats case, and also caches the min/max 
numeric values in a double primitive (the Double object is still used for the 
null check in the event that no values exist at all).
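
In other words, a minimal hypothetical sketch of the accumulation pattern (not
the actual StatsValuesFactory code):
{code}
final class MinMaxAccumulator {
  private double min = Double.POSITIVE_INFINITY;
  private double max = Double.NEGATIVE_INFINITY;
  private boolean seenValue = false; // stands in for the old Double null checks

  void accumulate(double v) { // the hot loop works purely on primitives
    seenValue = true;
    if (v < min) min = v;
    if (v > max) max = v;
  }

  Double getMin() { return seenValue ? Double.valueOf(min) : null; } // box only on demand
  Double getMax() { return seenValue ? Double.valueOf(max) : null; }
}
{code}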

 StatsComponent perf improvement for min, max, and situations where all stats 
 disabled
 -

 Key: SOLR-7239
 URL: https://issues.apache.org/jira/browse/SOLR-7239
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: SOLR-7324.patch


 as mentioned in SOLR-6349, when I started doing perf testing of requesting 
 individual stats, I noticed that min (and it turns out max) were slower to 
 compute than more complex stats like sum & mean.
 While investigating, I realized that we can also optimize the case where a 
 stats.field param is specified but no stats are computed, for example: 
 stats.field={!min=$doMin}fieldname&doMin=false



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7231) Allow DIH to create single geo-field from lat/lon metadata extracted via Tika

2015-03-12 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated SOLR-7231:
--
Attachment: test_jpeg.jpg
SOLR-7231.patch

Patch attached.  Borrowed heavily from SpatialFilterTest.

 Allow DIH to create single geo-field from lat/lon metadata extracted via Tika
 -

 Key: SOLR-7231
 URL: https://issues.apache.org/jira/browse/SOLR-7231
 Project: Solr
  Issue Type: Improvement
Reporter: Tim Allison
Priority: Trivial
 Attachments: SOLR-7231.patch, test_jpeg.jpg


 Tika can extract latitude and longitude data from image (and other) files.  
 It would be handy to allow the user to choose to have DIH populate a single 
 geofield (LatLonType or RPT) from the two metadata values extracted by Tika.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1666186 - in /lucene/dev/branches/branch_5x: ./ solr/ solr/core/ solr/core/src/java/org/apache/solr/parser/ solr/core/src/test/org/apache/solr/search/

2015-03-12 Thread Alan Woodward
Ah, OK.  Sorry for the noise!

On 12 Mar 2015, at 15:19, Yonik Seeley wrote:

 On Thu, Mar 12, 2015 at 11:08 AM, Alan Woodward a...@flax.co.uk wrote:
 Hey Yonik,
 
 I think you've inadvertently added a couple of deprecated methods back in 
 here?
 
 Hmmm, but CharStream.java is generated by JavaCC...
 When I got a compile error in FastCharStream.java, I simply copied the
 lucene version.
 
 I built it using the following method:
 $ cd solr/core
 $ ant javacc
 
 -Yonik
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7215) non reproducible Suite failures due to excessive sysout due to HDFS lease renewal WARN logs due to connection refused

2015-03-12 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359123#comment-14359123
 ] 

Hoss Man commented on SOLR-7215:


This is more jacked up than I thought -- I just got one of the Suite failures 
from TestDocSet, which directly extends LuceneTestCase and doesn't do _ANY_ 
initialization of any Solr-specific functionality (no CoreContainers, no 
SolrCores, no ZooKeeper).

Which means not only are these HDFS client ConnectExceptions causing test 
failures due to too much logging -- these threads appear to be leaking from the 
test suites and affecting other tests run in the same JVM *EVEN WHEN WHATEVER 
TEST CREATED THESE THREADS PASSES* ... The _only_ failure I got was from 
TestDocSet, and yet it failed because of excessive logging from a thread 
created by some other test that had already passed.

{noformat}
hossman@frisbee:~/lucene/dev/solr$ ant test
...
   [junit4] Suite: org.apache.solr.search.TestDocSet
   [junit4]   2 1460665 T5379 oahh.LeaseRenewer.run WARN Failed to renew lease 
for [DFSClient_NONMAPREDUCE_1277984620_5262] for 402 seconds.  Will retry 
shortly ... java.net.ConnectException: Call From frisbee/127.0.1.1 to 
localhost:47570 failed on connection exception: java.net.ConnectException: 
Connection refused; For more details see:  
http://wiki.apache.org/hadoop/ConnectionRefused
   [junit4]   2at 
sun.reflect.GeneratedConstructorAccessor303.newInstance(Unknown Source)
   [junit4]   2at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   [junit4]   2at 
java.lang.reflect.Constructor.newInstance(Constructor.java:408)
   [junit4]   2at 
org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
   [junit4]   2at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
   [junit4]   2at org.apache.hadoop.ipc.Client.call(Client.java:1410)
   [junit4]   2at org.apache.hadoop.ipc.Client.call(Client.java:1359)
   [junit4]   2at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
   [junit4]   2at com.sun.proxy.$Proxy43.renewLease(Unknown Source)
   [junit4]   2at sun.reflect.GeneratedMethodAccessor60.invoke(Unknown 
Source)
   [junit4]   2at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2at java.lang.reflect.Method.invoke(Method.java:483)
   [junit4]   2at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   [junit4]   2at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
   [junit4]   2at com.sun.proxy.$Proxy43.renewLease(Unknown Source)
   [junit4]   2at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:519)
   [junit4]   2at 
org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:773)
   [junit4]   2at 
org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
   [junit4]   2at 
org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
   [junit4]   2at 
org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
   [junit4]   2at 
org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
   [junit4]   2at java.lang.Thread.run(Thread.java:745)
   [junit4]   2 Caused by: java.net.ConnectException: Connection refused
   [junit4]   2at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
Method)
   [junit4]   2at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
   [junit4]   2at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
   [junit4]   2at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
   [junit4]   2at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
   [junit4]   2at 
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:601)
   [junit4]   2at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:696)
   [junit4]   2at 
org.apache.hadoop.ipc.Client$Connection.access$2700(Client.java:367)
   [junit4]   2at 
org.apache.hadoop.ipc.Client.getConnection(Client.java:1458)
   [junit4]   2at org.apache.hadoop.ipc.Client.call(Client.java:1377)
   [junit4]   2... 16 more
   [junit4]   2 
   [junit4]   2 1460924 T8206 oahh.LeaseRenewer.run WARN Failed to renew lease 
for [DFSClient_NONMAPREDUCE_602751345_8088] for 91 seconds.  Will retry shortly 
... java.net.ConnectException: Call From frisbee/127.0.1.1 to localhost:47687 
failed on connection exception: java.net.ConnectException: Connection refused; 
For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
   [junit4]  

[jira] [Created] (SOLR-7239) StatsComponent perf improvement for min, max, and situations where all stats disabled

2015-03-12 Thread Hoss Man (JIRA)
Hoss Man created SOLR-7239:
--

 Summary: StatsComponent perf improvement for min, max, and 
situations where all stats disabled
 Key: SOLR-7239
 URL: https://issues.apache.org/jira/browse/SOLR-7239
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man


As mentioned in SOLR-6349, when I started doing perf testing of requesting 
individual stats, I noticed that min (and it turns out max) were slower to 
compute than more complex stats like sum & mean.

While investigating, I realized that we can also optimize the case where a 
stats.field param is specified but no stats are computed, for example: 
stats.field={!min=$doMin}fieldname&doMin=false



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7173) Fix ReplicationFactorTest on Windows

2015-03-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359081#comment-14359081
 ] 

ASF subversion and git services commented on SOLR-7173:
---

Commit 1666266 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1666266 ]

SOLR-7173: Fix ReplicationFactorTest on Windows

 Fix ReplicationFactorTest on Windows
 

 Key: SOLR-7173
 URL: https://issues.apache.org/jira/browse/SOLR-7173
 Project: Solr
  Issue Type: Bug
Reporter: Ishan Chattopadhyaya
 Fix For: 5.1

 Attachments: SOLR-7173.patch, SOLR-7173.patch, SOLR-7173.patch


 The ReplicationFactorTest fails on the Windows build with 
 NoHttpResponseException, as seen here: 
 http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4502/testReport/junit/org.apache.solr.cloud/ReplicationFactorTest/test/
 Adding retry logic similar to HttpPartitionTest's doSend() method makes the 
 test pass on Windows.
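 Roughly, the shape of that retry wrapper is sketched below -- this is only an 
 illustration of the idea, not the actual doSend() code or the committed patch; 
 the helper name and retry count are made up:
 {code}
 import org.apache.http.NoHttpResponseException;
 import org.apache.solr.client.solrj.SolrClient;
 import org.apache.solr.client.solrj.SolrRequest;
 import org.apache.solr.client.solrj.SolrServerException;
 import org.apache.solr.common.util.NamedList;

 public class RetrySupport {
   // Retry a request a few times when the server drops the connection mid-request,
   // which is what NoHttpResponseException signals in these Windows failures.
   static NamedList<Object> sendWithRetry(SolrClient client, SolrRequest req, int maxAttempts)
       throws Exception {
     for (int attempt = 1; ; attempt++) {
       try {
         return client.request(req);
       } catch (SolrServerException e) {
         boolean retriable = e.getRootCause() instanceof NoHttpResponseException;
         if (!retriable || attempt >= maxAttempts) {
           throw e;
         }
         Thread.sleep(2000); // brief back-off before the next attempt
       }
     }
   }
 }
 {code}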



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2051 - Still Failing!

2015-03-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2051/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Didn't see replicas [core_node2, core_node3] come up within 9 ms! 
ClusterState: DocCollection(c8n_1x3_lf)={   replicationFactor:3,   
shards:{shard1:{   range:8000-7fff,   state:active, 
  replicas:{ core_node1:{   
core:c8n_1x3_lf_shard1_replica2,   
base_url:http://127.0.0.1:50907/_yrq/b;,   
node_name:127.0.0.1:50907__yrq%2Fb,   state:down}, 
core_node2:{   core:c8n_1x3_lf_shard1_replica1,   
base_url:http://127.0.0.1:50902/_yrq/b;,   
node_name:127.0.0.1:50902__yrq%2Fb,   state:recovering},
 core_node3:{   core:c8n_1x3_lf_shard1_replica3,   
base_url:http://127.0.0.1:50911/_yrq/b;,   
node_name:127.0.0.1:50911__yrq%2Fb,   state:active,   
leader:true,   router:{name:compositeId},   
maxShardsPerNode:1,   autoAddReplicas:false}

Stack Trace:
java.lang.AssertionError: Didn't see replicas [core_node2, core_node3] come up 
within 9 ms! ClusterState: DocCollection(c8n_1x3_lf)={
  replicationFactor:3,
  shards:{shard1:{
  range:8000-7fff,
  state:active,
  replicas:{
core_node1:{
  core:c8n_1x3_lf_shard1_replica2,
  base_url:http://127.0.0.1:50907/_yrq/b;,
  node_name:127.0.0.1:50907__yrq%2Fb,
  state:down},
core_node2:{
  core:c8n_1x3_lf_shard1_replica1,
  base_url:http://127.0.0.1:50902/_yrq/b;,
  node_name:127.0.0.1:50902__yrq%2Fb,
  state:recovering},
core_node3:{
  core:c8n_1x3_lf_shard1_replica3,
  base_url:http://127.0.0.1:50911/_yrq/b;,
  node_name:127.0.0.1:50911__yrq%2Fb,
  state:active,
  leader:true,
  router:{name:compositeId},
  maxShardsPerNode:1,
  autoAddReplicas:false}
at 
__randomizedtesting.SeedInfo.seed([8A9D0DE972E4F27F:2C93233DC189F87]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.HttpPartitionTest.waitToSeeReplicasActive(HttpPartitionTest.java:572)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:178)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 

[jira] [Commented] (SOLR-7238) SolrQueryRequest.forward is buggy

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359161#comment-14359161
 ] 

Yonik Seeley commented on SOLR-7238:


The current Solr uses are all in BlobHandler.
The one using /get is probably OK (just based on the current implementation 
of that handler), but the other two, which use the query component, can 
return the wrong document due to internal ids shifting across searcher versions.
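For context, the safe usage pattern being implied looks roughly like the sketch 
below. It is built only on core request/response APIs and is not the actual 
forward() implementation or a proposed fix; the point is just that the 
sub-response must be consumed inside the lifetime of the locally created 
sub-request:
{code}
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.LocalSolrQueryRequest;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;

public class ForwardSketch {
  static void forwardLocally(SolrQueryRequest outer, String handlerPath, SolrParams params) {
    // create a sub-request against the same core as the outer request
    LocalSolrQueryRequest subReq = new LocalSolrQueryRequest(outer.getCore(), params);
    try {
      SolrQueryResponse subRsp = new SolrQueryResponse();
      outer.getCore().getRequestHandler(handlerPath).handleRequest(subReq, subRsp);
      // consume subRsp completely here, while subReq (and the searcher it was
      // answered against) is still open
    } finally {
      subReq.close();
    }
  }
}
{code}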

 SolrQueryRequest.forward is buggy
 -

 Key: SOLR-7238
 URL: https://issues.apache.org/jira/browse/SOLR-7238
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Yonik Seeley

 The current API/implementation has a number of potential issues, including 
 encouraging the use of the response object after the locally created request 
 object has been closed, and the fact that the child request has no actual 
 relationship with the parent request, meaning that either the searcher or 
 the schema objects could change.  A searcher changing would most commonly 
 manifest as incorrect documents being returned or other random exceptions 
 during the writing of the response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7215) non reproducible Suite failures due to excessive sysout due to HDFS lease renewal WARN logs due to connection refused -- even if test doesn't use HDFS (ie: threads leaking

2015-03-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7215:
---
Summary: non reproducible Suite failures due to excessive sysout due to 
HDFS lease renewal WARN logs due to connection refused -- even if test doesn't 
use HDFS (ie: threads leaking between tests)  (was: non reproducible Suite 
failures due to excessive sysout due to HDFS lease renewal WARN logs due to 
connection refused)

 non reproducible Suite failures due to excessive sysout due to HDFS lease 
 renewal WARN logs due to connection refused -- even if test doesn't use HDFS 
 (ie: threads leaking between tests)
 --

 Key: SOLR-7215
 URL: https://issues.apache.org/jira/browse/SOLR-7215
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: tests-report.txt_suite-failure-due-to-sysout.txt.zip


 On my local machine, i've noticed lately a lot of sporadic, non reproducible, 
 failures like these...
 {noformat}
   2 NOTE: reproduce with: ant test  -Dtestcase=ScriptEngineTest 
 -Dtests.seed=E254A7E69EC7212A -Dtests.slow=true -Dtests.locale=sv 
 -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true -Dtests.file.encoding=UTF-8
 [14:34:23.749] ERROR   0.00s J1 | ScriptEngineTest (suite) 
 Throwable #1: java.lang.AssertionError: The test or suite printed 10984 
 bytes to stdout and stderr, even though the limit was set to 8192 bytes. 
 Increase the limit with @Limit, ignore it completely with 
 @SuppressSysoutChecks or run with -Dtests.verbose=true
  at __randomizedtesting.SeedInfo.seed([E254A7E69EC7212A]:0)
  at 
 org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:212)
 {noformat}
 Invariably, looking at the logs of tests that fail for this reason, i see 
 multiple instances of these WARN msgs...
 {noformat}
   2 601361 T3064 oahh.LeaseRenewer.run WARN Failed to renew lease for 
 [DFSClient_NONMAPREDUCE_-253604438_2947] for 92 seconds.  Will retry shortly 
 ... java.net.ConnectException: Call From frisbee/127.0.1.1 to localhost:40618 
 failed on connection exception: java.net.ConnectException: Connection 
 refused; For more details see:  
 http://wiki.apache.org/hadoop/ConnectionRefused
   2  at sun.reflect.GeneratedConstructorAccessor268.newInstance(Unknown 
 Source)
   2  at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  ...
 {noformat}
 ...the full stack traces of these exceptions typically being 36 lines long 
 (not counting the suppressed "... 17 more" at the end)
 doing some basic crunching of the tests-report.txt file from a recent run 
 of all solr-core tests (that caused the above failure) leads to some pretty 
 damn disconcerting numbers...
 {noformat}
 hossman@frisbee:~/tmp$ wc -l tests-report.txt_suite-failure-due-to-sysout.txt
 1049177 tests-report.txt_suite-failure-due-to-sysout.txt
 hossman@frisbee:~/tmp$ grep "Suite: org.apache.solr" 
 tests-report.txt_suite-failure-due-to-sysout.txt | wc -l
 465
 hossman@frisbee:~/tmp$ grep "LeaseRenewer.run WARN Failed to renew lease" 
 tests-report.txt_suite-failure-due-to-sysout.txt | grep 
 "http://wiki.apache.org/hadoop/ConnectionRefused" | wc -l
 1988
 hossman@frisbee:~/tmp$ calc
 1988 * 36
 71568
 {noformat}
 So running 465 Solr test suites, we got ~2 thousand of these "Failed to renew 
 lease" WARNings.  Of the ~1 million total lines of log messages from all 
 tests, ~70 thousand (~7%) are coming from these WARNing messages -- which can 
 evidently be safely ignored?
 Something seems broken here.
 Someone who understands this area of the code should either:
 * investigate & fix the code/test not to have these lease renewal problems
 * tweak our test logging configs to suppress these WARN messages



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Functionality of legacyCloud=false

2015-03-12 Thread Varun Thacker
Two scenarios I observed where we can bring up a replica even when I think
it shouldn't be possible. legacyCloud is set to false.

   - I have two nodes A and B.
   - CREATE collection 'test' with 1 shard, 1 replica. It gets created on
   node A.
   - manually copy test_shard1_replica1 folder to node B's solr home.
   - Bring down node A.
   - Restart node B. The shard comes up registering itself on node B and
   becomes 'active'


   - I have two nodes A and B (B is currently down).
   - CREATE collection 'test' with 1 shard, 1 replica. It gets created on
   node A.
   - manually copy test_shard1_replica1 folder to node B's solr home.
   - Start node B. The shard comes up registering itself on node B and
   stays 'down'. The reason is that the leader is still node A but the clusterstate
   has the base_url of node B. This is the error in the logs - "Error getting
   leader from zk for shard shard1"

In legacyCloud=false you get a 'no_such_replica in clusterstate' error if
the 'coreNodeName' is not present in clusterstate.

But in my two observations the 'coreNodeName' was the same, hence I ran
into these scenarios.

Should we make the check more stringent to not allow this to happen? Check
against base_url also? A rough sketch of what I mean is below.
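Something like this, purely as pseudologic for that guard (not actual Solr
code; the class and variable names are illustrative):

public class ReplicaRegistrationGuard {
    // Only allow the core to register if BOTH the coreNodeName AND the base_url
    // match what clusterstate already records for that replica.
    static boolean mayRegisterReplica(String coreNodeName, String baseUrl,
                                      String stateCoreNodeName, String stateBaseUrl) {
        return coreNodeName.equals(stateCoreNodeName)
            && baseUrl.equals(stateBaseUrl);
    }
}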

Also, should we make legacyCloud=false the default in 5.x?
--


Regards,
Varun Thacker
http://www.vthacker.in/


[jira] [Created] (SOLR-7237) Add boost to @Field annotation

2015-03-12 Thread JIRA
Karl Kildén created SOLR-7237:
-

 Summary: Add boost to @Field annotation
 Key: SOLR-7237
 URL: https://issues.apache.org/jira/browse/SOLR-7237
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 5.0
Reporter: Karl Kildén


DocumentObjectBinder is great but it hard-codes the boost like this:

doc.setField(field.name, field.get(obj), 1.0f);


Why not offer boost on the @Field annotation when you construct the bean?

@Field(name="MY_FIELD", boost=2.0f)
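To make the ask concrete, here's a sketch of how a bean might look if the 
attribute existed -- today's annotation only has a value(), so the boost line 
is shown as the hypothetical part:
{code}
import org.apache.solr.client.solrj.beans.DocumentObjectBinder;
import org.apache.solr.client.solrj.beans.Field;
import org.apache.solr.common.SolrInputDocument;

public class Product {
  @Field("id")
  String id;

  // Proposed (does not exist yet): declare the index-time boost on the bean field
  // @Field(name = "title_t", boost = 2.0f)
  @Field("title_t")
  String title;

  public static void main(String[] args) {
    Product p = new Product();
    p.id = "1";
    p.title = "example";
    SolrInputDocument doc = new DocumentObjectBinder().toSolrInputDocument(p);
    System.out.println(doc); // today every field ends up with boost 1.0f
  }
}
{code}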



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1666186 - in /lucene/dev/branches/branch_5x: ./ solr/ solr/core/ solr/core/src/java/org/apache/solr/parser/ solr/core/src/test/org/apache/solr/search/

2015-03-12 Thread Ramkumar R. Aiyengar
This actually brings me to a question I have had for a while. Why do we
check in auto generated code? Shouldn't the build system run javacc as a
prereq to compiling instead?
On 12 Mar 2015 18:08, Alan Woodward a...@flax.co.uk wrote:

 Ah, OK.  Sorry for the noise!

 On 12 Mar 2015, at 15:19, Yonik Seeley wrote:

  On Thu, Mar 12, 2015 at 11:08 AM, Alan Woodward a...@flax.co.uk wrote:
  Hey Yonik,
 
  I think you've inadvertently added a couple of deprecated methods back
 in here?
 
  Hmmm, but CharStream.java is generated by JavaCC...
  When I got a compile error in FastCharStream.java, I simply copied the
  lucene version.
 
  I built it using the following method:
  $ cd solr/core
  $ ant javacc
 
  -Yonik
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  For additional commands, e-mail: dev-h...@lucene.apache.org
 


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-5994) Add Jetty configuration to serve JavaDocs

2015-03-12 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359682#comment-14359682
 ] 

Alexandre Rafalovitch commented on SOLR-5994:
-

I think this issue is dead with Jetty becoming an implementation detail, SOLR-7240, 
and the thinking behind LUCENE-6257.

So, we can close it as Won't Fix.

 Add Jetty configuration to serve JavaDocs 
 --

 Key: SOLR-5994
 URL: https://issues.apache.org/jira/browse/SOLR-5994
 Project: Solr
  Issue Type: Improvement
  Components: documentation, web gui
Affects Versions: 4.7
Reporter: Alexandre Rafalovitch
Priority: Minor
  Labels: javadoc
 Fix For: Trunk

 Attachments: SOLR-5994.patch, javadoc-jetty-context.xml


 It's possible to add another context file for Jetty that will serve Javadocs 
 from the server.
 This avoids some Javascript issues, makes the documentation more visible, and 
 opens the door for better integration in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-5994) Add Jetty configuration to serve JavaDocs

2015-03-12 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-5994.
---
   Resolution: Won't Fix
Fix Version/s: (was: Trunk)

No longer relevant.

 Add Jetty configuration to serve JavaDocs 
 --

 Key: SOLR-5994
 URL: https://issues.apache.org/jira/browse/SOLR-5994
 Project: Solr
  Issue Type: Improvement
  Components: documentation, web gui
Affects Versions: 4.7
Reporter: Alexandre Rafalovitch
Priority: Minor
  Labels: javadoc
 Attachments: SOLR-5994.patch, javadoc-jetty-context.xml


 It's possible to add another context file for Jetty that will serve Javadocs 
 from the server.
 This avoids some Javascript issues, makes the documentation more visible, and 
 opens the door for better integration in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7240) redirect / to /solr

2015-03-12 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359691#comment-14359691
 ] 

Ramkumar Aiyengar commented on SOLR-7240:
-

Re: your comment on having /solr so that we could have an incompatible /v2 in 
the future: wouldn't the same concern apply to the redirect as well? I.e. what 
API is the root url going to service during such an API shift? It leaves the 
root URL on the oldest version if it stays on v1, but if we always redirect 
it to v2, we are waiving all backward compatibility requirements for the 
root url alone -- how do we communicate this discrepancy in compatibility 
guarantees between the two URLs? And how useful is it going to be if we say 
that the meaning of the redirect is going to change under your feet without 
notice?

 redirect / to /solr 
 

 Key: SOLR-7240
 URL: https://issues.apache.org/jira/browse/SOLR-7240
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
 Attachments: SOLR-7240.patch


 Prior to Solr 5, we avoided doing anything fancy with our jetty configs 
 because we didn't want to overly customize the example beyond things that 
 involved loading the solr.war.
 That's no longer an issue, so we might as well plop in some jetty config 
 features to redirect / to /solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1666186 - in /lucene/dev/branches/branch_5x: ./ solr/ solr/core/ solr/core/src/java/org/apache/solr/parser/ solr/core/src/test/org/apache/solr/search/

2015-03-12 Thread Yonik Seeley
On Thu, Mar 12, 2015 at 8:04 PM, Ramkumar R. Aiyengar
andyetitmo...@gmail.com wrote:
 This actually brings me to a question I have had for a while. Why do we
 check in auto generated code? Shouldn't the build system run javacc as a
 prereq to compiling instead?

Historically, the compilation wasn't automated (you had to find +
install JavaCC yourself, run it yourself, etc).
I don't know the current reasons however.

-Yonik

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1666186 - in /lucene/dev/branches/branch_5x: ./ solr/ solr/core/ solr/core/src/java/org/apache/solr/parser/ solr/core/src/test/org/apache/solr/search/

2015-03-12 Thread Michael McCandless
On Thu, Mar 12, 2015 at 5:38 PM, Yonik Seeley ysee...@gmail.com wrote:
 On Thu, Mar 12, 2015 at 8:04 PM, Ramkumar R. Aiyengar
 andyetitmo...@gmail.com wrote:
 This actually brings me to a question I have had for a while. Why do we
 check in auto generated code? Shouldn't the build system run javacc as a
 prereq to compiling instead?

 Historically, the compilation wasn't automated (you had to find +
 install JavaCC yourself, run it yourself, etc).
 I don't know the current reasons however.

Some discussion about this here:
https://issues.apache.org/jira/browse/LUCENE-4335

Mike McCandless

http://blog.mikemccandless.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_31) - Build # 11967 - Failure!

2015-03-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11967/
Java: 64bit/jdk1.8.0_31 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can 
happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([FCB38AC1AA089717:74E7B51B04F4FAEF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:222)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-5743) Faceting with BlockJoin support

2015-03-12 Thread Jacob Carter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359562#comment-14359562
 ] 

Jacob Carter commented on SOLR-5743:


I've applied this patch to Solr 5.0.0, and with an index containing around 
400k parent documents and 1.5 million child documents it's taking over a minute 
to return the values of a child facet and their counts.  Is this performance to 
be expected at the present time, or have I potentially misconfigured my instance?

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
 SOLR-5743.patch, SOLR-5743.patch


 For a sample inventory (note - nested documents) like this -
 <doc>
   <field name="id">10</field>
   <field name="type_s">parent</field>
   <field name="BRAND_s">Nike</field>
   <doc>
     <field name="id">11</field>
     <field name="COLOR_s">Red</field>
     <field name="SIZE_s">XL</field>
   </doc>
   <doc>
     <field name="id">12</field>
     <field name="COLOR_s">Blue</field>
     <field name="SIZE_s">XL</field>
   </doc>
 </doc>
 Faceting results must contain - 
 Red(1)
 XL(1) 
 Blue(1) 
 for a q=* query. 
 PS : The inventory example has been taken from this blog - 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html
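 For anyone reproducing this, a minimal SolrJ sketch of the kind of block-join 
 request involved, using only the stock {!parent} parser and the sample fields 
 above -- the core name is a placeholder and the child-facet parameters added 
 by the patch are deliberately omitted:
 {code}
 import org.apache.solr.client.solrj.SolrQuery;
 import org.apache.solr.client.solrj.impl.HttpSolrClient;
 import org.apache.solr.client.solrj.response.QueryResponse;

 public class BlockJoinProbe {
   public static void main(String[] args) throws Exception {
     try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/inventory")) {
       // Match parents having at least one child with a COLOR_s value.
       SolrQuery q = new SolrQuery("{!parent which='type_s:parent'}COLOR_s:[* TO *]");
       QueryResponse rsp = client.query(q);
       System.out.println("parents matched: " + rsp.getResults().getNumFound());
     }
   }
 }
 {code}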



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7236) Securing Solr (umbrella issue)

2015-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358401#comment-14358401
 ] 

Jan Høydahl commented on SOLR-7236:
---

There are multiple existing frameworks to simplify the task of abstracting 
security implementations in Java apps, among them are 
[JAAS|https://en.wikipedia.org/wiki/Java_Authentication_and_Authorization_Service]
 , [Spring Security|http://projects.spring.io/spring-security/] and [Apache 
Shiro|http://shiro.apache.org/]. They are created to do the hard and scary 
stuff, provide simple APIs for developers and also provide out of the box 
integrations with all the various protocols. We really don't want to maintain 
support for Kerberos etc in Solr-code.

Although any of these could probably do the job, I'm pitching Apache Shiro as 
the main API for all security-related implementations in Solr. Without having 
used it, it seems to be built just for this purpose. Solr users with some crazy 
legacy in-house security system can write plugins for it against Shiro itself, 
instead of writing Solr code. http://shiro.apache.org/
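As a flavour of how thin the integration point could be -- a sketch only, the 
permission string scheme here is invented and nothing below is an actual proposal:
{code}
import org.apache.shiro.subject.Subject;

public class SolrShiroSketch {
  // Shiro hands us an authenticated Subject; Solr would only need to ask
  // permission questions in its own vocabulary and let realms/plugins decide.
  static boolean mayUpdate(Subject user, String collection) {
    return user.isAuthenticated()
        && user.isPermitted("collection:" + collection + ":update");
  }
}
{code}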

 Securing Solr (umbrella issue)
 --

 Key: SOLR-7236
 URL: https://issues.apache.org/jira/browse/SOLR-7236
 Project: Solr
  Issue Type: New Feature
Reporter: Jan Høydahl
  Labels: Security

 This is an umbrella issue for adding security to Solr. The discussion here 
 should discuss real user needs and high-level strategy, before deciding on 
 implementation details. All work will be done in sub tasks and linked issues.
 Solr has not traditionally concerned itself with security. And it has been a 
 general view among the committers that it may be better to stay out of it to 
 avoid blood on our hands in this minefield. Still, Solr has lately seen 
 SSL support, securing of ZK, and signing of jars, and discussions have begun 
 about securing operations in Solr.
 Some of the topics to address are
 * User management (flat file, AD/LDAP etc)
 * Authentication (Admin UI, Admin and data/query operations. Tons of auth 
 protocols: basic, digest, oauth, pki..)
 * Authorization (who can do what with what API, collection, doc)
 * Pluggability (no user's needs are equal)
 * And we could go on and on but this is what we've seen the most demand for



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6347) MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using regexpression syntax unwittingly)

2015-03-12 Thread Paul taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358440#comment-14358440
 ] 

Paul taylor commented on LUCENE-6347:
-

Hm, I've just retested it, and with assertions enabled it does give me the 
following assertion stack trace:

java.lang.AssertionError
at 
org.apache.lucene.search.MultiTermQuery.init(MultiTermQuery.java:252)
at 
org.apache.lucene.search.AutomatonQuery.init(AutomatonQuery.java:65)
at org.apache.lucene.search.RegexpQuery.init(RegexpQuery.java:90)
at org.apache.lucene.search.RegexpQuery.init(RegexpQuery.java:79)
at org.apache.lucene.search.RegexpQuery.init(RegexpQuery.java:69)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.newRegexpQuery(QueryParserBase.java:790)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.getRegexpQuery(QueryParserBase.java:1005)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.handleBareTokenQuery(QueryParserBase.java:1075)
at 
org.apache.lucene.queryparser.classic.QueryParser.Term(QueryParser.java:359)
at 
org.apache.lucene.queryparser.classic.QueryParser.Clause(QueryParser.java:258)
at 
org.apache.lucene.queryparser.classic.QueryParser.Query(QueryParser.java:213)
at 
org.apache.lucene.queryparser.classic.QueryParser.TopLevelQuery(QueryParser.java:171)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:120)
at 
org.musicbrainz.search.servlet.LuceneRegExParseTest.testSearch411LuceneBugReport(LuceneRegExParseTest.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runner.JUnitCore.run(JUnitCore.java:157)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:211)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)


 MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
 regexpression syntax unwittingly)
 ---

 Key: LUCENE-6347
 URL: https://issues.apache.org/jira/browse/LUCENE-6347
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.1
Reporter: Paul taylor

 MultiFieldQueryParser doesnt catch invalid syntax properly (due to user using 
 regexpression syntax unwittingly)
 {code} 
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
 import org.apache.lucene.queryparser.classic.ParseException;
 import org.apache.lucene.queryparser.classic.QueryParser;
 import org.apache.lucene.util.Version;
 import org.junit.Test;
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 /**
  * Lucene tests
  */
 public class LuceneRegExParseTest
 {
 @Test
 public void testSearch411LuceneBugReport() throws Exception
 {
 Exception e = null;
 try
 {
 String[] fields = new String[2];
 fields[0] = "artist";
 fields[1] = "recording";
 

[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358587#comment-14358587
 ] 

Yonik Seeley commented on SOLR-7217:


bq. But how do we know that client is curl? does it send an extra header?

Yes.

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   "query":"hero"
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358581#comment-14358581
 ] 

Noble Paul commented on SOLR-7217:
--

But how do we know that client is curl? does it send an extra header?

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   "query":"hero"
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7217) Auto-detect HTTP body content-type

2015-03-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358546#comment-14358546
 ] 

Yonik Seeley commented on SOLR-7217:


If the client is curl AND the content-type is curl's default (i.e. 
application/x-www-form-urlencoded) then we auto-detect instead of just 
trusting curl.
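In other words, something along these lines -- an illustration of the idea only, 
not Solr's actual code, and the sniffing rules are made up:
{code}
import javax.servlet.http.HttpServletRequest;

public class ContentTypeGuess {
  static String effectiveContentType(HttpServletRequest req, byte[] body) {
    String declared = req.getContentType();
    String agent = req.getHeader("User-Agent");
    boolean looksLikeCurl = agent != null && agent.startsWith("curl/");
    boolean curlDefault = declared == null
        || declared.startsWith("application/x-www-form-urlencoded");
    if (looksLikeCurl && curlDefault && body.length > 0) {
      char first = (char) body[0];  // peek at the payload instead of trusting the header
      if (first == '{' || first == '[') return "application/json";
      if (first == '<') return "text/xml";
    }
    return declared;
  }
}
{code}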

 Auto-detect HTTP body content-type
 --

 Key: SOLR-7217
 URL: https://issues.apache.org/jira/browse/SOLR-7217
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley

 It's nice to be able to leave off the specification of content type when hand 
 crafting a request (i.e. from the command line) and for our documentation 
 examples.
 For example:
 {code}
 curl http://localhost:8983/solr/query -d '
 {
   "query":"hero"
 }'
 {code}
 Note the missing 
 {code}
 -H 'Content-type:application/json'
 {code}
 that would otherwise be needed everywhere



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1666073 - in /lucene/board-reports/2015: ./ board-report-march.txt

2015-03-12 Thread Steve Rowe
Hi Mark,

On Wednesday, March 11, 2015, markrmil...@apache.org wrote:

 +## Releases:
 +
 + - 5.0 was released on Fri Feb 20 2015
 + - 4.10.3 was released on Mon Dec 29 2014


4.10.4 is missing - maybe its metadata is missing from a doap file or
something?


 + PyLucene
 +

[...]

 +
 +No releases where made last quarter.


s/where/were

Steve


[jira] [Created] (SOLR-7240) redirect / to /solr

2015-03-12 Thread Hoss Man (JIRA)
Hoss Man created SOLR-7240:
--

 Summary: redirect / to /solr 
 Key: SOLR-7240
 URL: https://issues.apache.org/jira/browse/SOLR-7240
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man


Prior to Solr 5, we avoided doing anything fancy with our jetty configs because 
we didn't want to overly customize the example beyond things that involved 
loading the solr.war.

That's no longer an issue, so we might as well plop in some jetty config 
features to redirect / to /solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7240) redirect / to /solr

2015-03-12 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359635#comment-14359635
 ] 

Hoss Man commented on SOLR-7240:


In response to the inevitable question "why not just move all /solr/* URLs to 
/", i re-iterate my comment on this topic on the mailing list last month...

{quote}
bq. PS: Same goes for the default URL. We could move to toplevel now 
http://localhost:8983/

-0 ... i don't see any downside to leaving /solr/ in the URL, and if/when we 
rip out the jetty stack completely and stop being beholden to the servlet APIs 
internally, it gives us flexibility if we want to start deprecating/retiring 
things, to be able to say "All of the legacy, pre-Solr X.0, APIs use a base path 
of '/solr/' and all the new hotness APIs use a base path of '/v2/'" ... or 
something like that.
{quote}



 redirect / to /solr 
 

 Key: SOLR-7240
 URL: https://issues.apache.org/jira/browse/SOLR-7240
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man

 Prior to Solr 5, we avoided doing anything fancy with our jetty configs 
 because we didn't want to overly customize the example beyond things that 
 involved loading the solr.war.
 That's no longer an issue, so we might as well plop in some jetty config 
 features to redirect / to /solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7240) redirect / to /solr

2015-03-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7240:
---
Attachment: SOLR-7240.patch

patch leveraging a bit of jetty magic to do this.

 redirect / to /solr 
 

 Key: SOLR-7240
 URL: https://issues.apache.org/jira/browse/SOLR-7240
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
 Attachments: SOLR-7240.patch


 Prior to Solr 5, we avoided doing anything fancy with our jetty configs 
 because we didn't want to overly customize the example beyond things that 
 involved loading the solr.war.
 That's no longer an issue, so we might as well plop in some jetty config 
 features to redirect / to /solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7240) redirect / to /solr

2015-03-12 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359648#comment-14359648
 ] 

Hoss Man commented on SOLR-7240:


Hmm, not sure i'm a fan of this solution actually...

it doesn't only redirect /, it also redirects /anything_other_then_solr...

http://localhost:8983/garbage -> http://localhost:8983/solr/

...this seems like a bad idea.  My goal was simply to make 
http://localhost:8983/ send you someplace useful, but if people are making up 
gibberish URLs -- or have typos in client connection urls (eg: 
http://localhost:8983/Solr/MyCollection/select) those should really just 
return 404 rather than silently rewriting to .../solr/

 redirect / to /solr 
 

 Key: SOLR-7240
 URL: https://issues.apache.org/jira/browse/SOLR-7240
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
 Attachments: SOLR-7240.patch


 Prior to Solr 5, we avoided doing anything fancy with our jetty configs 
 because we didn't want to overly customize the example beyond things that 
 involved loading the solr.war.
 That's no longer an issue, so we might as well plop in some jetty config 
 features to redirect / to /solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2761 - Still Failing

2015-03-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2761/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:64839/c8n_1x3_commits_shard1_replica3

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:64839/c8n_1x3_commits_shard1_replica3
at 
__randomizedtesting.SeedInfo.seed([75A2757272791838:FDF64AA8DC8575C0]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:598)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:236)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:228)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:933)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6892) Improve the way update processors are used and make it simpler

2015-03-12 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359751#comment-14359751
 ] 

Alexandre Rafalovitch commented on SOLR-6892:
-

We'd better have debug statements explaining exactly what the final chain 
is. The mailing list is already starting to see people getting confused by 
chains defined as "default" being ignored because there is another declaration 
somewhere in the initParams section. It's hell to troubleshoot.

So, please make sure that there is a debug-level log statement that at least 
names the classes in the sequence created.
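For example, something as simple as this at chain-resolution time would go a 
long way -- just a sketch, the class and method names are not real Solr code:
{code}
import java.util.List;
import java.util.stream.Collectors;

import org.apache.solr.update.processor.UpdateRequestProcessorFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ChainDebug {
  private static final Logger log = LoggerFactory.getLogger(ChainDebug.class);

  static void logResolvedChain(String chainName, List<UpdateRequestProcessorFactory> factories) {
    if (log.isDebugEnabled()) {
      String classes = factories.stream()
          .map(f -> f.getClass().getSimpleName())
          .collect(Collectors.joining(" -> "));
      log.debug("Resolved update processor chain '{}': {}", chainName, classes);
    }
  }
}
{code}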

 Improve the way update processors are used and make it simpler
 --

 Key: SOLR-6892
 URL: https://issues.apache.org/jira/browse/SOLR-6892
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6892.patch


 The current update processor chain is rather cumbersome and we should be able 
 to use the updateprocessors without a chain.
 The scope of this ticket is 
 * A new tag {{updateProcessor}}  becomes a toplevel tag and it will be 
 equivalent to the {{processor}} tag inside 
 {{updateRequestProcessorChain}} . The only difference is that it should 
 require a {{name}} attribute. The {{updateProcessorChain}} tag will 
 continue to exist and it should be possible to define {{processor}} inside 
 as well . It should also be possible to reference a named URP in a chain.
 * processors will be added in the request with their names . Example 
 {{processor=a,b,c}} , {{pre-processor=p,q,r}} or {{post-processor=x,y,z}} . 
 This creates an implicit chain of the named URPs the order they are specified
 * There are multiple request parameters supported by update request 
 ** pre-processor : This chain is executed at the node that receives the 
 request. Other nodes will not execute this
  ** processor : This chain is executed at the leader right before the 
 LogUpdateProcessorFactory + DistributedUpdateProcessorFactory . The replicas 
 will not execute this. 
 ** post-processor : This chain is executed right before the 
 RunUpdateProcessor in all replicas , including the leader
 * What happens to the update.chain parameter ? {{update.chain}} will be 
 honored . The implicit chain is created by merging both the update.chain and 
 the request params. {{post-processor}} will be inserted right after the 
 DistributedUpdateProcessor in the chain.   and {{processor}} will be inserted 
 right in the beginning of the update.chain



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Newbie question

2015-03-12 Thread Kitty
Hi all,
I'm not entirely sure this is not a user question, but it is a newbie question 
from a dev perspective still, so not sure if it belongs here on the dev list or 
not... I am open to coaching in that case on where to go with this question.

I am very new to the experience of contributing to open source projects and 
especially Solr. I am however very excited to give it a try and hope to learn a 
lot while doing so!

This is my question: I just installed my Ubuntu 14.04 machine and set up my dev 
environment, after which I checked out trunk/solr (using SVN) to my machine. 
Before doing any local changes at all, I decided to run 'ant clean test', to 
make sure everything works before I start...

The run failed. Is that normal for trunk or is that the known current state of 
the tests or should I suspect there is something in my (all newly installed) 
environment that is incorrectly set up? Just need a clue of where to start 
fixing...

Thanks a bunch! And apologies if this should have really been in the user 
mailing list.
  

Re: Newbie question

2015-03-12 Thread Alexandre Rafalovitch
Welcome.

Right list, strange problem, what's the actual error? Could be related
to missing ivy, but the specific message will help.

Regards,
Alex.

Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/


On 12 March 2015 at 21:30, Kitty kittyontra...@hotmail.com wrote:
 Hi all,
 I'm not entirely sure this is not a user question, but it is a newbie
 question from a dev perspective still, so not sure if it belongs here on the
 dev list or not... I am open to coaching in that case on where to go with
 this question.

 I am very new to the experience of contributing to open source projects and
 especially Solr. I am however very excited to give it a try and hope to
 learn a lot while doing so!

 This is my question: I just installed my Ubuntu 14.04 machine and set up my
 dev environment, after which I checked out trunk/solr (using SVN) to my
 machine. Before doing any local changes at all, I decided to run 'ant clean
 test', to make sure everything works before I start...

 The run failed. Is that normal for trunk or is that the known current state
 of the tests or should I suspect there is something in my (all newly
 installed) environment that is incorrectly set up? Just need a clue of where
 to start fixing...

 Thanks a bunch! And apologies if this should have really been in the user
 mailing list.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


