[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-fcs-b132) - Build # 9727 - Still Failing!

2014-03-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9727/
Java: 64bit/jdk1.8.0-fcs-b132 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:53000 within 45000 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:53000 within 45000 ms
    at __randomizedtesting.SeedInfo.seed([65B69655C5237DCB:E450184DB27C1DF7]:0)
    at org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:150)
    at org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:101)
    at org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:91)
    at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
    at org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
    at org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:200)
    at org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)

[jira] [Created] (SOLR-5842) facet.pivot need provide the more information and additional function

2014-03-09 Thread Raintung Li (JIRA)
Raintung Li created SOLR-5842:
-

 Summary: facet.pivot need provide the more information and 
additional function
 Key: SOLR-5842
 URL: https://issues.apache.org/jira/browse/SOLR-5842
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.6
Reporter: Raintung Li


Because facet.limit and facet.offset can be set, we can't get the size of the 
next level's array for facet.pivot. If you want to get the next pivot size, you 
have to set facet.limit to Integer.MAX_VALUE and then count the array size. That 
way you get back a huge number of terms for the pivot field, which impacts the 
network and the client.




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2014-03-09 Thread Elran Dvir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925156#comment-13925156
 ] 

Elran Dvir commented on SOLR-2894:
--

I think I solved the toObject problem with datetime fields.
Please see the patch attached.
All tests pass now.
Let me know what you think.
Thanks.

 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.7

 Attachments: SOLR-2894-reworked.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 dateToObject.patch


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.






[jira] [Updated] (SOLR-2894) Implement distributed pivot faceting

2014-03-09 Thread Elran Dvir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elran Dvir updated SOLR-2894:
-

Attachment: dateToObject.patch







[jira] [Commented] (LUCENE-5476) Facet sampling

2014-03-09 Thread Gilad Barkai (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925158#comment-13925158
 ] 

Gilad Barkai commented on LUCENE-5476:
--

{quote}
The limit should also take under account the total number of hits for the 
query, otherwise the estimate and the multiplication with the sampling factor 
may yield a larger number than the actual results.
{quote}

I understand this statement is confusing; I'll try to elaborate.
If the sample were *exactly* at the sampling ratio, this would not be a problem,
but since the sample - being random as it is - may be a bit larger, adjusting
according to the original sampling ratio (rather than the actual one) may yield
larger counts than the actual results.
This could be solved either by limiting to the number of results, or by adjusting
the {{samplingRate}} to be the exact, post-sampling ratio.
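The arithmetic behind this comment, sketched in plain Java (illustrative only, not Lucene's sampling code — the method names are made up): extrapolating with the nominal sampling rate can exceed the hit count when the random sample came out larger than requested, while the actual post-sampling rate plus a cap cannot.

```java
// Illustration of the comment above, not Lucene's SamplingFacetsCollector:
// extrapolating a sampled facet count with the *nominal* sampling rate can
// overshoot the true hit count when the random sample was larger than
// requested; using the *actual* post-sampling rate (and capping) avoids that.
public class SampleCorrection {
    // Naive extrapolation with the nominal (requested) sampling rate.
    static long naive(int sampledCount, double nominalRate) {
        return Math.round(sampledCount / nominalRate);
    }

    // Corrected: use the actual post-sampling ratio, capped at the hit count.
    static long corrected(int sampledCount, int sampleSize, int totalHits) {
        double actualRate = (double) sampleSize / totalHits;
        return Math.min(Math.round(sampledCount / actualRate), totalHits);
    }

    public static void main(String[] args) {
        // 1000 hits, nominal 10% sample; the random sample actually held 130
        // docs, 120 of them in the category: the naive estimate overshoots.
        System.out.println(naive(120, 0.10));          // 1200 -- more than the 1000 hits!
        System.out.println(corrected(120, 130, 1000)); // 923
    }
}
```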

 Facet sampling
 --

 Key: LUCENE-5476
 URL: https://issues.apache.org/jira/browse/LUCENE-5476
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Rob Audenaerde
 Attachments: LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, LUCENE-5476.patch, 
 SamplingComparison_SamplingFacetsCollector.java, SamplingFacetsCollector.java


 With LUCENE-5339 facet sampling disappeared. 
 When trying to display facet counts on large datasets (10M documents) 
 counting facets is rather expensive, as all the hits are collected and 
 processed. 
 Sampling greatly reduced this and thus provided a nice speedup. Could it be 
 brought back?






[jira] [Commented] (LUCENE-5476) Facet sampling

2014-03-09 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925160#comment-13925160
 ] 

Shai Erera commented on LUCENE-5476:


bq. Asserting seems redundant, but is that not the point in unit-tests?

The problem is when those false alarms cause noise. The previous sampling tests 
had a mechanism to reduce the noise as much as possible, but they didn't 
eliminate it. For example, the test was run a few times, each time with an 
increasing sample, until it gave up and failed. At that point someone had to 
inspect the log and determine that it was a false positive. Since you expect at 
most 5 categories, and any number between 1-5 is fair game, I prefer not to 
assert on #categories at all. If you really want to assert something, then make 
sure {{0 < #categories <= 5}}?







Re: [GSoC] I'm interested in LUCENE-3333

2014-03-09 Thread Michael McCandless
Hi, Da,

On Sun, Mar 9, 2014 at 1:30 AM, Da Huang dhuang...@gmail.com wrote:

 I have spent some time considering your suggestions in the last mail. I find
 that I'm interested in the suggestion that Filter and Query should be more
 'combined'.

OK, cool, and ambitious; it might be safer to choose a less
ambitious/controversial change for a GSoC project.

Maybe, have a look at LUCENE-1518?  There was lots of discussion there.

 In my opinion, to implement this suggestion, a new class FilterQuery,
 which is a subclass of Query,  should be created. If FilterQuery is
 implemented, then it can be the query element of BooleanClause, and the
 BooleanQuery can naturally add a Filter as a BooleanClause. I think
 one of the most important things is to deal with the scores, as Filter does
 not contribute anything to score.

I feel like it should be the opposite?  Like, a Filter has less
functionality than a Query, because it does only matching?  So I would
think a Query would subclass Filter and then add scoring onto it?  But
there was lots of discussion on the above issue that I don't
remember...

Mike McCandless

http://blog.mikemccandless.com




[jira] [Commented] (SOLR-5653) Create a RESTManager to provide REST API endpoints for reconfigurable plugins

2014-03-09 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925184#comment-13925184
 ] 

Alexandre Rafalovitch commented on SOLR-5653:
-

In terms of documenting the REST API, would something like Swagger be useful: 
https://helloreverb.com/developers/swagger ?

 Create a RESTManager to provide REST API endpoints for reconfigurable plugins
 -

 Key: SOLR-5653
 URL: https://issues.apache.org/jira/browse/SOLR-5653
 Project: Solr
  Issue Type: Sub-task
Reporter: Steve Rowe
 Attachments: SOLR-5653.patch, SOLR-5653.patch, SOLR-5653.patch


 It should be possible to reconfigure Solr plugins' resources and init params 
 without directly editing the serialized schema or {{solrconfig.xml}} (see 
 Hoss's arguments about this in the context of the schema, which also apply to 
 {{solrconfig.xml}}, in the description of SOLR-4658).
 The RESTManager should allow plugins declared in either the schema or in 
 {{solrconfig.xml}} to register one or more REST endpoints, one endpoint per 
 reconfigurable resource, including init params.  To allow for multiple plugin 
 instances, registering plugins will need to provide a handle of some form to 
 distinguish the instances.
 This RESTManager should also be able to create new instances of plugins that 
 it has been configured to allow.  The RESTManager will need its own 
 serialized configuration to remember these plugin declarations.
 Example endpoints:
 * SynonymFilterFactory
 ** init params: {{/solr/collection1/config/syns/myinstance/options}}
 ** synonyms resource: 
 {{/solr/collection1/config/syns/myinstance/synonyms-list}}
 * /select request handler
 ** init params: {{/solr/collection1/config/requestHandlers/select/options}}
 We should aim for full CRUD over init params and structured resources.  The 
 plugins will bear responsibility for handling resource modification requests, 
 though we should provide utility methods to make this easy.
 However, since we won't be directly modifying the serialized schema and 
 {{solrconfig.xml}}, anything configured in those two places can't be 
 invalidated by configuration serialized elsewhere.  As a result, it won't be 
 possible to remove plugins declared in the serialized schema or 
 {{solrconfig.xml}}.  Similarly, any init params declared in either place 
 won't be modifiable.  Instead, there should be some form of init param that 
 declares that the plugin is reconfigurable, maybe using something like 
 "managed" - note that request handlers already provide a handle - the 
 request handler name - and so don't need that to be separately specified:
 {code:xml}
 <requestHandler name="/select" class="solr.SearchHandler">
   <managed/>
 </requestHandler>
 {code}
 and in the serialized schema - a handle needs to be specified here:
 {code:xml}
 <fieldType name="text_general" class="solr.TextField"
            positionIncrementGap="100">
   ...
   <analyzer type="query">
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.SynonymFilterFactory" managed="english-synonyms"/>
   ...
 {code}
 All of the above examples use the existing plugin factory class names, but 
 we'll have to create new RESTManager-aware classes to handle registration 
 with RESTManager.
 Core/collection reloading should not be performed automatically when a REST 
 API call is made to one of these RESTManager-mediated REST endpoints, since 
 for batched config modifications, that could take way too long.  But maybe 
 reloading could be a query parameter to these REST API calls. 






Re: Release Lucene 5?

2014-03-09 Thread Erick Erickson
So what are we thinking of here for bringing the code up to java 7?
One approach would be this massive effort to use, say, diamonds.
You know, a checkin of probably all the files in Solr and Lucene. What the
heck, let's re-format it all at the same time. And while we're at it

OK, That's Not Going To Happen. Is the sense here that moving
forward with Java 7 idioms will be on an as-needed basis? One
refactoring mantra is you should only change stuff you're working
on, not unrelated bits of code.



On Sat, Mar 8, 2014 at 11:08 AM, Uwe Schindler u...@thetaphi.de wrote:
 From: Robert Muir [mailto:rcm...@gmail.com]
 On Sat, Mar 8, 2014 at 8:47 AM, Uwe Schindler u...@thetaphi.de wrote:
  And let's also move trunk to Java 8 then.
 
  My suggestion is either:
  - Backport all Java 8 changes from trunk to 4.x (and reassign fix
  versions to 4.8)

 Assuming you mean java7, +1 :)

 Thanks, sorry for the typo.

  - Or re-branch trunk to branch_4x, incorporating *all* changes from
  trunk (so svn rm branch_4x; svn cp trunk branch_4x)
 

 I don't want this: there are some api problems to be resolved in trunk. I am
 unhappy about StoredDocument/IndexableDocument, which is intended to
 remove the confusion around not getting your whole document back
 when things aren't stored: because docvalues fields appear in the stored
 document. So this really needs to be sorted out.

 I agree. At the time when we split the APIs, DocValues was not yet mature 
 in 4.x and trunk. I would love to have the StoredIndexableDocument stuff in 
 Lucene; it has now been pending in trunk for 1.5 years (since my GSoC student did 
 it). I agree, we need to improve the API! But this would not have prevented 
 me from releasing it. It is not worse than the duplicate Sorter API, you 
 resolved last week :-) So feel free to improve the API, too!

 Once we agree here (I will add a separate vote now to move to Java 7 in 
 branch 4.x), I will open a back port issue and try to back port as much stuff 
 as possible. The first thing would be stuff like reverting commits to work 
 around missing Long.compare/Integer.compare in Java 6.

 Uwe
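For readers unfamiliar with the backports being discussed, a minimal sketch of the two Java 7 idioms mentioned (Long.compare and the diamond operator) next to the Java 6 workarounds they replace — illustrative only, not code from the Lucene tree.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the Java 7 idioms being backported, next to the Java 6
// workarounds they replace (illustrative; not code from the Lucene tree).
public class Java7Idioms {
    public static void main(String[] args) {
        long value1 = 7L, value2 = 42L;

        // Java 6 workaround (what the reverted commits had to do by hand):
        int cmpJava6 = value1 < value2 ? -1 : (value1 == value2 ? 0 : 1);
        // Java 7 replacement:
        int cmpJava7 = Long.compare(value1, value2);
        System.out.println(cmpJava6 == cmpJava7); // true

        // Pre-Java 7: new HashMap<String, List<Integer>>()
        // The Java 7 diamond operator infers the type arguments:
        Map<String, List<Integer>> byKey = new HashMap<>();
        byKey.put("cmp", new ArrayList<Integer>());
        byKey.get("cmp").add(cmpJava7);
        System.out.println(byKey); // {cmp=[-1]}
    }
}
```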





Re: Release Lucene 5?

2014-03-09 Thread Robert Muir
On Sun, Mar 9, 2014 at 8:45 AM, Erick Erickson erickerick...@gmail.com wrote:

 OK, That's Not Going To Happen. Is the sense here that moving
 forward with Java 7 idioms will be on an as-needed basis? One
 refactoring mantra is you should only change stuff you're working
 on, not unrelated bits of code.


It may happen. I'd be willing to help with such a thing for lucene, I
think its worth it. Uwe did much of the work for a thing like this in
java 5.

 as far as not refactoring stuff until you change it, how well is that
working out for solr? :)




Re: [VOTE] Move to Java 7 in Lucene/Solr 4.8, use Java 8 in trunk (once officially released)

2014-03-09 Thread Tommaso Teofili
2014-03-08 17:17 GMT+01:00 Uwe Schindler u...@thetaphi.de:

 Hi all,

 Java 8 will get released (hopefully, but I trust the release plan!) on
 March 18, 2014. Because of this, lots of developers will move to Java 8,
 too. This makes maintaining 3 versions for developing Lucene 4.x not easy
 anymore (unless you have cool JAVA_HOME cmd launcher scripts using
 StExBar available for your Windows Explorer - or similar stuff in
 Linux/Mac).

 We already discussed in another thread about moving to release trunk as
 5.0, but people disagreed and preferred to release 4.8 with a minimum of
 Java 7. This is perfectly fine, as nobody should run Lucene or Solr on an
 unsupported platform anymore. If they upgrade to 4.8, they should also
 upgrade their infrastructure - this is a no-brainer. In Lucene trunk we
 switch to Java 8 as soon as it is released (in 10 days).

 Now the good things: We don't need to support JRockit anymore, no need to
 support IBM J9 in trunk (unless they release a new version based on Java 8).

 So the vote here is about:

 [.] Move Lucene/Solr 4.8 (means branch_4x) to Java 7 and backport all Java
 7-related issues (FileChannel improvements, diamond operator,...).


+1


 [.] Move Lucene/Solr trunk to Java 8 and allow closures in source code.
 This would make some APIs much nicer. Our infrastructure mostly supports
 this, only ECJ Javadoc linting is not yet possible, but forbidden-apis
 supports Java 8 with all its crazy new stuff.


-1. I think a move to Java 8 is worthwhile only if and when Java 8 has proven
to be stable. Also, I don't think (that's another thread though) we're (and
should be) moving fast towards release 5, so there's likely plenty of time
for having Java 8 out for some time before we have 5.0 out.

Tommaso



 You can vote separately for both items!

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de








Re: Release Lucene 5?

2014-03-09 Thread Robert Muir
On Sat, Mar 8, 2014 at 11:08 AM, Uwe Schindler u...@thetaphi.de wrote:
]

 I don't want this: there are some api problems to be resolved in trunk. I am
 unhappy about StoredDocument/IndexableDocument, which is intended to
 remove the confusion around not getting your whole document back
 when things aren't stored: because docvalues fields appear in the stored
 document. So this really needs to be sorted out.

 I agree. At the time when we split the APIs, DocValues was not yet mature 
 in 4.x and trunk. I would love to have the StoredIndexableDocument stuff in 
 Lucene; it has now been pending in trunk for 1.5 years (since my GSoC student did 
 it). I agree, we need to improve the API! But this would not have prevented 
 me from releasing it. It is not worse than the duplicate Sorter API, you 
 resolved last week :-) So feel free to improve the API, too!


It is worse because every single lucene user must use the
o.a.l.document API. So this is radically different from the index
sorting api.

As long as docvalues come back in a StoredDocument, then the
stored/indexable change doesn't make sense at all, and we shouldn't
release it that way.




Re: Suggestions about writing / extending QueryParsers

2014-03-09 Thread Tommaso Teofili
Hi Tim,

2014-03-07 15:20 GMT+01:00 Allison, Timothy B. talli...@mitre.org:

  Tommaso,

   Ah, now I see.  If you want to add new operators, you'll have to modify
 the javacc files.  For the SpanQueryParser, I added a handful of new
 operators and chose to go with regexes instead of javacc...not sure that was
 the right decision, but given my lack of knowledge of javacc, it was
 expedient.  If you have time or already know javacc, it shouldn't be
 difficult.


thanks, I've used javacc in the past, but I'm definitely not experienced
with it, I'll see what fits best.


As for nobrainer on the Solr side, y, it shouldn't be a problem.
 However, as of now the basic queryparser is a copy and paste job between
 Lucene and Solr, so you'll just have to redo your code in Solr unless you
 do something smarter.


uh ok, that seems to be something to fix though, don't know if there're
specific reasons for copy pasting instead of reusing...


If you'd be willing to wait for LUCENE-5205 to be brought into Lucene,
 I'd consider adding this functionality into the SpanQueryParser as a later
 step.


cool, thanks Tim, that'd be really nice.
Thanks,
Tommaso




   Cheers,



  Tim



 *From:* Tommaso Teofili [mailto:tommaso.teof...@gmail.com]
 *Sent:* Friday, March 07, 2014 3:17 AM
 *To:* dev@lucene.apache.org
 *Subject:* Re: Suggestions about writing / extending QueryParsers



 Thanks Tim and Upayavira for your replies.



 I still need to decide what the final syntax could be, however generally
 speaking the ideal would be that I am able to extend the current Lucene
 syntax with a new expression which will trigger the creation of a more like
 this query, with something like +title:foo +"text for similar docs"%2, where
 the phrase between quotes will generate a MoreLikeThisQuery on that text if
 it's followed by the % character (and the number 2 may control the MLT
 configuration, e.g. min document freq == min term freq == 2), similarly to
 what is done for proximity search (not sure about using %, it's just a
 syntax example).
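A hypothetical sketch (made-up names, not an actual Lucene/Solr parser) of the regex route Tim suggested earlier in the thread: recognize a quoted phrase followed by %N and pull out the phrase text and the tuning number.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the proposed syntax (names are made up, not a real
// Lucene/Solr parser extension): recognize "some text"%N and extract the
// phrase and the MLT tuning number, the way a regex-based parser might.
public class MltSyntax {
    private static final Pattern MLT = Pattern.compile("\"([^\"]+)\"%(\\d+)");

    // Returns {phrase, parameter} for the first MLT expression, or null.
    static String[] parse(String query) {
        Matcher m = MLT.matcher(query);
        if (!m.find()) return null;
        return new String[] { m.group(1), m.group(2) };
    }

    public static void main(String[] args) {
        String[] r = parse("+title:foo +\"text for similar docs\"%2");
        System.out.println(r[0] + " | param=" + r[1]);
        // prints: text for similar docs | param=2
    }
}
```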

 I guess then I'd need to extend the classic query parser, as per Tim's
 suggestions and I'd assume that if this goes into the classic qp it should
 be a no brainer on the Solr side.

 Does it sound correct / feasible?



 Regards,

 Tommaso

 2014-03-06 15:08 GMT+01:00 Upayavira u...@odoko.co.uk:

 Tommaso,



 Do say more about what you're thinking of. I'm currently getting my dev
 environment up to look into enhancing the MoreLikeThisHandler to be able
 handle function query boosts. This should be eminently possible from my
 initial research. However, if you're thinking of something more powerful,
 perhaps we can work together.



 Upayavira





 On Thu, Mar 6, 2014, at 11:23 AM, Tommaso Teofili wrote:

  Hi all,



 I'm thinking about writing/extending a QueryParser for MLT queries; I've
 never really looked into that code too much, while I'm doing that now, I'm
 wondering if anyone has suggestions on how to start with such a topic.

 Should I write a new grammar for that ? Or can I just extend an existing
 grammar / class?



 Thanks in advance,

 Tommaso





[jira] [Created] (LUCENE-5510) Docvalues need to be indexablefield (not storable)

2014-03-09 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5510:
---

 Summary: Docvalues need to be indexablefield (not storable)
 Key: LUCENE-5510
 URL: https://issues.apache.org/jira/browse/LUCENE-5510
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Robert Muir
Assignee: Robert Muir
Priority: Blocker
 Fix For: 5.0









Re: Suggestions about writing / extending QueryParsers

2014-03-09 Thread Tommaso Teofili
thanks Jack for the reference, I didn't know it.
Regards,
Tommaso


2014-03-08 1:25 GMT+01:00 Jack Krupansky j...@basetechnology.com:

   For reference, the LucidWorks Search query parser has two MLT features:

 1. Like terms - does MLT on a list of terms.

 For example:

 like:("Four score and seven years ago our fathers brought forth")

 See:
 http://docs.lucidworks.com/display/lweug/Like+Term+Keyword+Option

 This is effectively an OR operator on the terms.

 2. Like document - does MLT on a Solr document, given its id:

 For example:

 Washington like:"http://cnn.com" -"New York"

 See:
 http://docs.lucidworks.com/display/lweug/Like+Document+Term+Keyword+Option

 -- Jack Krupansky

  *From:* Tommaso Teofili tommaso.teof...@gmail.com
 *Sent:* Thursday, March 6, 2014 6:23 AM
 *To:* dev@lucene.apache.org
 *Subject:* Suggestions about writing / extending QueryParsers

  Hi all,

 I'm thinking about writing/extending a QueryParser for MLT queries; I've
 never really looked into that code too much, while I'm doing that now, I'm
 wondering if anyone has suggestions on how to start with such a topic.
 Should I write a new grammar for that ? Or can I just extend an existing
 grammar / class?

 Thanks in advance,
 Tommaso



Re: [GSoC] I'm interested in LUCENE-3333

2014-03-09 Thread Da Huang
Hi, Mike.

You're right. After having a look at the comments on LUCENE-1518, I find
that my idea about that has many bugs. Sorry for that.

Thus, I have checked some other suggestions you gave me to see whether
relevant comments can be found in jira.

I think I have some idea on LUCENE-4396: "BooleanScorer should sometimes be
used for MUST clauses".
Can we adjust the query to make the problem easier? For the query "+a b c
+e +f" as an example, maybe we can
turn it into "(+a +e +f) b c", which has only one MUST clause. Then, it
would be easier to judge which scorer to use?

Besides, it seems that the suggestion "we should pass a needsScorers boolean
up-front to Weight.scorer"
is not on jira. But it sounds like it can be done by adjusting some class
methods' arguments and return values
to pass the needsScorers? not sure.

At last, I recently found something strange in the code about heaps. I find
the heap has been implemented multiple times
in the trunk, and a PriorityQueue is also implemented in the
package org.apache.lucene.util.
I remember Java has already implemented a PriorityQueue. Why not use that?
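One likely reason, sketched here as an educated guess using java.util.PriorityQueue rather than Lucene's class: Lucene's o.a.l.util.PriorityQueue is a *bounded* top-K heap with an insertWithOverflow operation that ejects the smallest element instead of growing, which java.util.PriorityQueue (an unbounded min-heap that allocates per add) doesn't offer directly.

```java
import java.util.PriorityQueue;

// Emulating the insertWithOverflow behavior of Lucene's fixed-size
// o.a.l.util.PriorityQueue on top of java.util.PriorityQueue: keep at most
// k elements, and report which value fell out of the top-k on each insert.
public class TopK {
    private final int k;
    private final PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap

    TopK(int k) { this.k = k; }

    // Mimics insertWithOverflow: returns the element pushed out of the
    // top-k (the new value itself if it was too small), or null if there
    // was still room.
    Integer insertWithOverflow(int value) {
        if (heap.size() < k) { heap.add(value); return null; }
        if (value <= heap.peek()) return value;   // too small, rejected
        Integer evicted = heap.poll();
        heap.add(value);
        return evicted;
    }

    public static void main(String[] args) {
        TopK top = new TopK(3);
        for (int v : new int[] {5, 1, 9, 3, 7}) top.insertWithOverflow(v);
        System.out.println(top.heap); // holds the three largest: 5, 7, 9
    }
}
```

Lucene's version also lets callers reuse the overflowed object, avoiding per-insert allocation and autoboxing in hot collector loops — something the JDK class can't provide.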


Thanks,
Da Huang


-- 
黄达(Da Huang)
Team of Search Engine & Web Mining
School of Electronic Engineering & Computer Science
Peking University, Beijing, 100871, P.R.China


Re: [VOTE] Move to Java 7 in Lucene/Solr 4.8, use Java 8 in trunk (once officially released)

2014-03-09 Thread Erick Erickson
Solr/Lucene 4.8 - Java 7
+1

Solr/Lucene 5.0 - Java8

-1 for now; +1 as we get closer to releasing 5.0. There's still plenty
of cruft in trunk that's there only because of needing to support
Java 6 in the 4.x code line. I think having a period when we can freely
clean up some of the Java 6 leftovers in trunk and 4.8+ without having
to _additionally_ deal with Java 8 changes that only apply to trunk
would be useful. Wouldn't it be nice to have just a few months where
one didn't have to even think about it ;)

As far as 5.0 is concerned... The point that organizations move much
more slowly than we do in terms of adopting new Java releases is well
taken. I suspect that, no matter what, if we move 5.0 to Java 8, we'll
have quite a long period (3 years as a wild guess) where some people
will be unable to use 5.x because of organizational (not technical)
issues.

IMO, it's perfectly legitimate to say that Solr development shouldn't
be held up because organization X is unwilling to use Java8 thus I
think we should go forward with 5x and Java8, just not quite yet.

Just don't be surprised by people saying that they can't use Java8 in
2016 and would someone backport fix/improvements X, Y, and Z :)...



On Sun, Mar 9, 2014 at 9:31 AM, Tommaso Teofili
tommaso.teof...@gmail.com wrote:



 2014-03-08 17:17 GMT+01:00 Uwe Schindler u...@thetaphi.de:

 Hi all,

 Java 8 will get released (hopefully, but I trust the release plan!) on
 March 18, 2014. Because of this, lots of developers will move to Java 8,
 too. This makes maintaining 3 versions for developing Lucene 4.x not easy
 anymore (unless you have cool JAVA_HOME cmd launcher scripts using StExBar
 available for your Windows Explorer - or similar stuff in Linux/Mac).

 We already discussed in another thread about moving to release trunk as
 5.0, but people disagreed and preferred to release 4.8 with a minimum of
 Java 7. This is perfectly fine, as nobody should run Lucene or Solr on an
 unsupported platform anymore. If they upgrade to 4.8, they should also
 upgrade their infrastructure - this is a no-brainer. In Lucene trunk we
 switch to Java 8 as soon as it is released (in 10 days).

 Now the good things: We don't need to support JRockit anymore, no need to
 support IBM J9 in trunk (unless they release a new version based on Java 8).

 So the vote here is about:

 [.] Move Lucene/Solr 4.8 (means branch_4x) to Java 7 and backport all Java
 7-related issues (FileChannel improvements, diamond operator,...).


 +1


 [.] Move Lucene/Solr trunk to Java 8 and allow closures in source code.
 This would make some APIs much nicer. Our infrastructure mostly supports
 this, only ECJ Javadoc linting is not yet possible, but forbidden-apis
 supports Java 8 with all its crazy new stuff.


 -1. I think a move to Java 8 is worthwhile only if and when Java 8 has proven
 to be stable. Also (that's another thread, though), I don't think we are, or
 should be, moving fast towards release 5, so there's likely plenty of time for
 Java 8 to have been out for a while before we have 5.0 out.

 Tommaso



 You can vote separately for both items!

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de




 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release Lucene 5?

2014-03-09 Thread Erick Erickson
bq:  as far as not refactoring stuff until you change it, how well is that
working out for solr? :)

Well, it's the theory I aim for, at least; I don't control others.

On Sun, Mar 9, 2014 at 8:56 AM, Robert Muir rcm...@gmail.com wrote:
 On Sun, Mar 9, 2014 at 8:45 AM, Erick Erickson erickerick...@gmail.com 
 wrote:

 OK, That's Not Going To Happen. Is the sense here that moving
 forward with Java 7 idioms will be on an as-needed basis? One
 refactoring mantra is you should only change stuff you're working
 on, not unrelated bits of code.


 It may happen. I'd be willing to help with such a thing for Lucene; I
 think it's worth it. Uwe did much of the work for a thing like this in
 Java 5.

  as far as not refactoring stuff until you change it, how well is that
 working out for solr? :)

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [GSoC] I'm interested in LUCENE-3333

2014-03-09 Thread Michael McCandless
On Sun, Mar 9, 2014 at 9:55 AM, Da Huang dhuang...@gmail.com wrote:
 Hi, Mike.

 You're right. After taking a look at the comments on LUCENE-1518, I see
 that my idea has many problems. Sorry about that.

It's fine, it's a VERY hard fix :)  This is why it hasn't been done yet!

 Thus, I have checked some other suggestions you gave me to see whether
 relevant comments can be found in jira.

 I think I have some idea on LUCENE-4396: BooleanScorer should sometimes be
 used for MUST clauses.
 Can we rewrite the query to make the problem easier? Taking the query
 +a b c +e +f as an example, maybe we can turn it into (+a +e +f) b c,
 which has only one MUST clause. Then it would be easier to decide which
 scorer to use?

You mean create nesting when there wasn't before, by grouping all MUST
clauses together?  We could explore that ...

Or we could pass all the clauses (still flat) to BooleanScorer.  I
think this would only be faster when the MUST clauses are high cost
relative to all other clauses.  E.g. a super-rare MUST'd clause would
probably be faster with BooleanScorer2.
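For illustration, here is a minimal, hypothetical sketch of the rewrite being discussed, in plain Java rather than Lucene's actual BooleanQuery API: collapse all MUST clauses of a flat query such as +a b c +e +f into a single nested conjunction, leaving at most one MUST clause at the top level. The Clause type and groupMust method are illustrative names, not Lucene classes.

```java
import java.util.ArrayList;
import java.util.List;

public class GroupMust {
    // Illustrative stand-in for a boolean clause; not a Lucene class.
    static final class Clause {
        final String term;
        final boolean must;
        Clause(String term, boolean must) { this.term = term; this.must = must; }
        @Override public String toString() { return (must ? "+" : "") + term; }
    }

    /** Collapse all MUST clauses into one nested conjunction clause. */
    static List<Clause> groupMust(List<Clause> flat) {
        List<String> musts = new ArrayList<>();
        List<Clause> out = new ArrayList<>();
        for (Clause c : flat) {
            if (c.must) musts.add(c.term); else out.add(c);
        }
        if (musts.size() > 1) {
            // e.g. +a, +e, +f  ->  one nested clause "+(+a +e +f)"
            out.add(0, new Clause("(+" + String.join(" +", musts) + ")", true));
        } else if (musts.size() == 1) {
            out.add(0, new Clause(musts.get(0), true));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Clause> q = List.of(
            new Clause("a", true), new Clause("b", false), new Clause("c", false),
            new Clause("e", true), new Clause("f", true));
        System.out.println(groupMust(q)); // prints "[+(+a +e +f), b, c]"
    }
}
```

The rewritten clause list then has a single MUST clause, so a downstream scorer choice only has to distinguish "zero or one MUST clause" cases.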

I think this could make a good GSoC project.

 Besides, it seems that the suggestion that we should pass a needsScores
 boolean up-front to Weight.scorer is not on Jira. But it sounds like it
 could be done by adjusting some class methods' arguments and return
 values to pass needsScores? Not sure.

I think it's this Jira: https://issues.apache.org/jira/browse/LUCENE-3331

(I just searched for needs scores on
http://jirasearch.mikemccandless.com and it was one of the
suggestions).

All that should be needed here is to add a boolean needsScores (or
something) to the Weight.scorer method, and fix the numerous places
where this method is invoked to pass the right value.  E.g.
ConstantScoreQuery would pass false, and this would mean e.g. if it
wraps a TermQuery, we could avoid decoding freq blocks from the
postings.
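To make that concrete, here is a tiny hypothetical sketch (illustrative names, not the real Lucene Weight/Scorer API) of threading a needsScores flag through scorer creation, so a constant-score wrapper can tell its inner weight to skip scoring work:

```java
/** Illustrative interfaces only; the real Lucene API differs. */
interface SimpleScorer { float score(); }

interface SimpleWeight { SimpleScorer scorer(boolean needsScores); }

class TermWeight implements SimpleWeight {
    @Override public SimpleScorer scorer(boolean needsScores) {
        // With needsScores == false, a real implementation could skip
        // decoding freq blocks from the postings entirely.
        final boolean decodeFreqs = needsScores;
        return () -> decodeFreqs ? 2.0f : 1.0f; // stand-in for a tf-based score
    }
}

class ConstantScoreWeight implements SimpleWeight {
    private final SimpleWeight inner;
    ConstantScoreWeight(SimpleWeight inner) { this.inner = inner; }

    @Override public SimpleScorer scorer(boolean needsScores) {
        // The wrapped weight's scores are never used, so regardless of
        // what the caller asked for, always pass false down.
        final SimpleScorer in = inner.scorer(false);
        return () -> 1.0f; // constant score; inner freqs never decoded
    }
}

public class NeedsScoresDemo {
    public static void main(String[] args) {
        SimpleWeight w = new ConstantScoreWeight(new TermWeight());
        System.out.println(w.scorer(true).score()); // prints 1.0
    }
}
```

The fix then reduces to changing every call site of scorer(...) to pass the right flag, exactly as described above.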

 Lastly, I recently noticed something odd in the heap-related code. The
 heap has been implemented many times over in trunk, and a PriorityQueue
 is also implemented in the org.apache.lucene.util package. I recall that
 Java already provides a PriorityQueue. Why not use that?

Good question!  There is a fair amount of duplicated code, and we
should fix that over time.  Lucene has had its own PQ class forever,
and we do strange things like pre-filling the queue with a sentinel
value to avoid if (queueIsNotFullYet) checks in collect(int doc),
and we can replace the top value and re-heap ... but maybe these do
not in fact matter in practice and if so we should stop duplicating
code :)
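As a toy illustration of the sentinel trick described above (a hand-rolled fixed-size heap, not Lucene's actual org.apache.lucene.util.PriorityQueue): pre-filling every slot with -Infinity means the hot collect() path is a single compare-and-replace-top with no "is the queue full yet?" branch, something java.util.PriorityQueue's offer()/poll() API doesn't give you directly.

```java
import java.util.Arrays;

/** Fixed-size min-heap pre-filled with a sentinel; keeps the top-k largest. */
public final class SentinelTopK {
    private final float[] heap; // 1-based binary min-heap over k slots
    private final int size;

    public SentinelTopK(int k) {
        size = k;
        heap = new float[k + 1];
        // Sentinel pre-fill: every slot already holds -Infinity, so
        // collect() never needs a "queue not full yet" branch.
        Arrays.fill(heap, Float.NEGATIVE_INFINITY);
    }

    /** Hot loop: compare against top; replace and re-heapify if larger. */
    public void collect(float score) {
        if (score > heap[1]) {
            heap[1] = score;
            downHeap();
        }
    }

    private void downHeap() {
        int i = 1;
        float node = heap[1];
        while (true) {
            int child = 2 * i;
            if (child > size) break;
            if (child + 1 <= size && heap[child + 1] < heap[child]) child++;
            if (heap[child] >= node) break;
            heap[i] = heap[child];
            i = child;
        }
        heap[i] = node;
    }

    /** Kept values in ascending order; sentinels remain if fewer than k seen. */
    public float[] sortedTopK() {
        float[] out = Arrays.copyOfRange(heap, 1, size + 1);
        Arrays.sort(out);
        return out;
    }

    public static void main(String[] args) {
        SentinelTopK pq = new SentinelTopK(3);
        for (float s : new float[] {0.1f, 2.0f, 1.5f, 0.7f, 3.2f}) {
            pq.collect(s);
        }
        System.out.println(Arrays.toString(pq.sortedTopK())); // prints "[1.5, 2.0, 3.2]"
    }
}
```

Whether this branch-free pattern beats a plain java.util.PriorityQueue in practice is exactly the open question in the message above.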

Mike McCandless

http://blog.mikemccandless.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [GSoC] I'm interested in LUCENE-3333

2014-03-09 Thread Da Huang
Thanks a lot. That's very helpful.

I think you get exactly what I mean about LUCENE-4396.
By grouping the MUST clauses, the conjunctive part can be handled
separately in a straightforward way, and the original query would then
have no more than one MUST clause. In that situation, I think it's much
easier to decide whether to use BooleanScorer or BooleanScorer2. :)


Thanks,
Da Huang


-- 
黄达(Da Huang)
Team of Search Engine & Web Mining
School of Electronic Engineering & Computer Science
Peking University, Beijing, 100871, P.R.China


Re: [VOTE] Move to Java 7 in Lucene/Solr 4.8, use Java 8 in trunk (once officially released)

2014-03-09 Thread Furkan KAMACI
Hi All;

I am not a committer yet, but I want to share my thoughts as a contributor and
a Solr user, to give an example from real life. I have used SolrCloud for one
year (our product is at the pre-prod step), and I have hundreds of servers at
my company; nearly half of them run SolrCloud. We also have a Hadoop cluster
which runs Map/Reduce jobs, we use and test Ambari and Hortonworks, and we use
Giraph on our Hadoop side. Just a few weeks ago we faced an issue where some of
the projects we use were not compatible with Java 7 (yes, it's weird). So we
had to change the Java version of our servers until we could fix it (I
downgraded a project from Java 7 to Java 6 just because of that.)

I've separated the SolrCloud nodes from the other parts of our ecosystem as
usual, and I've never had a problem like that with my management. I use Java
1.7u25 because it is recommended and stable. However, I see that even when it
is clearly sensible to upgrade to a new version of Java (if it is stable),
sometimes there are limitations for companies. Even if I could generate Solr
indexes via Map/Reduce in my current architecture, I would not be able to use
that because of the Java 6 problem.

I'm currently reviewing Manning's Java 8 book, and I know that Java 8 has
great features. However, my thought is this: Java 6 might be considered
outdated, and it is reasonable not to support it. But I think Java 7 will be
used within companies for a long time, and trunk should support Java 7 too, at
least until Java 8 is in common use or until the end of Java 7 support (it
seems it will be supported at least until March 2015).

Thanks;
Furkan KAMACI


2014-03-09 16:07 GMT+02:00 Erick Erickson erickerick...@gmail.com:

 Solr/Lucene 4.8 - Java 7
 +1

 Solr/Lucene 5.0 - Java8

 -1 for now; +1 as we get closer to releasing 5.0. There's still plenty
 of cruft in trunk that's there only because of needing to support
 Java 6 in the 4.x code line. I think having a period when we can freely
 clean up some of the Java 6 leftovers in trunk and 4.8+ without having
 to _additionally_ deal with Java 8 changes that only apply to trunk
 would be useful. Wouldn't it be nice to have just a few months where
 one didn't have to even think about it? ;)

 As far as 5.0 is concerned... The point that organizations move much
 more slowly than we do in terms of adopting new Java releases is well
 taken. I suspect that, no matter what, if we move 5.0 to Java 8, we'll
 have quite a long period (3 years as a wild guess) where some people
 will be unable to use 5.x because of organizational (not technical)
 issues.

 IMO, it's perfectly legitimate to say that Solr development shouldn't
 be held up because organization X is unwilling to use Java 8; thus I
 think we should go forward with 5.x and Java 8, just not quite yet.

 Just don't be surprised by people saying in 2016 that they can't use
 Java 8 and asking someone to backport fixes/improvements X, Y, and Z :)...



 On Sun, Mar 9, 2014 at 9:31 AM, Tommaso Teofili
 tommaso.teof...@gmail.com wrote:
 
 
 
  2014-03-08 17:17 GMT+01:00 Uwe Schindler u...@thetaphi.de:
 
  Hi all,
 
  Java 8 will get released (hopefully, but I trust the release plan!) on
  March 18, 2014. Because of this, lots of developers will move to Java 8,
  too. This makes maintaining 3 versions for developing Lucene 4.x not
 easy
  anymore (unless you have cool JAVA_HOME cmd launcher scripts using
 StExBar
  available for your Windows Explorer - or similar stuff in Linux/Mac).
 
  We already discussed in another thread about moving to release trunk as
  5.0, but people disagreed and preferred to release 4.8 with a minimum of
  Java 7. This is perfectly fine, as nobody should run Lucene or Solr on
 an
  unsupported platform anymore. If they upgrade to 4.8, they should also
  upgrade their infrastructure - this is a no-brainer. In Lucene trunk we
  switch to Java 8 as soon as it is released (in 10 days).
 
  Now the good things: We don't need to support JRockit anymore, no need
 to
  support IBM J9 in trunk (unless they release a new version based on
 Java 8).
 
  So the vote here is about:
 
  [.] Move Lucene/Solr 4.8 (means branch_4x) to Java 7 and backport all
 Java
  7-related issues (FileChannel improvements, diamond operator,...).
 
 
  +1
 
 
  [.] Move Lucene/Solr trunk to Java 8 and allow closures in source code.
  This would make some APIs much nicer. Our infrastructure mostly supports
  this, only ECJ Javadoc linting is not yet possible, but forbidden-apis
  supports Java 8 with all its crazy new stuff.
 
 
  -1. I think a move to Java 8 is worthwhile only if and when Java 8 has
  proven to be stable. Also (that's another thread, though), I don't think
  we are, or should be, moving fast towards release 5, so there's likely
  plenty of time for Java 8 to have been out for a while before we have
  5.0 out.
 
  Tommaso
 
 
 
  You can vote separately for both items!
 
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 

[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2014-03-09 Thread Elran Dvir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925211#comment-13925211
 ] 

Elran Dvir commented on SOLR-2894:
--

I have checked the latest patch.
Problem 3 (field with negative limit threw exception) is now solved. Thanks!
But I still see problem 1 (f.field.facet.limit=-1 is not being respected).

Thank you very much.  

 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.7

 Attachments: SOLR-2894-reworked.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 dateToObject.patch


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5511) Upgrade to SvnKit 1.8.4 for checks

2014-03-09 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-5511:
-

 Summary: Upgrade to SvnKit 1.8.4 for checks
 Key: LUCENE-5511
 URL: https://issues.apache.org/jira/browse/LUCENE-5511
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Affects Versions: 4.7
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0


We have had a hack in our build since LUCENE-5385, because svnkit 1.8.0 - 1.8.3 
were not able to read svn 1.7 checkouts. Because of this, the user had to choose 
the right svnkit version when executing {{ant svn-check-working-copy}}.

Since svnkit 1.8.4, we can read all svn working copy formats again, so our 
checks will work on any checkout without having to choose the svnkit version 
explicitly.

This patch removes the extra warnings and error messages and updates to 1.8.4.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5511) Upgrade to SvnKit 1.8.4 for checks

2014-03-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925220#comment-13925220
 ] 

Robert Muir commented on LUCENE-5511:
-

+1

tested with 1.8 (I commented out svnkit.version=1.8.2 in build.properties)
{noformat}
  [svn] Initializing working copy...
  [svn] Getting all versioned and unversioned files...
  [svn] Filtering files with existing svn:eol-style...
  [svn] Filtering files with binary svn:mime-type...

BUILD SUCCESSFUL
Total time: 21 seconds
{noformat}

 Upgrade to SvnKit 1.8.4 for checks
 --

 Key: LUCENE-5511
 URL: https://issues.apache.org/jira/browse/LUCENE-5511
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Affects Versions: 4.7
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5511.patch


 We had a hack since LUCENE-5385 in our build, because svnkit 1.8.0 - 1.8.3 
 were not able to read svn 1.7 checkouts. Because of this the user had to 
 choose the right svnkit version when executing {{ant svn-check-working-copy}}.
 Since svnkit 1.8.4 we can read all svn working copy formats again, so our 
 checks will work on any checkout without forcefully choosing svnkit version.
 This patch removes the extra warnings and error messages and updates to 1.8.4.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5511) Upgrade to SvnKit 1.8.4 for checks

2014-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925222#comment-13925222
 ] 

ASF subversion and git services commented on LUCENE-5511:
-

Commit 1575714 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1575714 ]

LUCENE-5511: ant precommit / ant check-svn-working-copy now work again with 
any working copy format (thanks to svnkit 1.8.4).

 Upgrade to SvnKit 1.8.4 for checks
 --

 Key: LUCENE-5511
 URL: https://issues.apache.org/jira/browse/LUCENE-5511
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Affects Versions: 4.7
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5511.patch


 We had a hack since LUCENE-5385 in our build, because svnkit 1.8.0 - 1.8.3 
 were not able to read svn 1.7 checkouts. Because of this the user had to 
 choose the right svnkit version when executing {{ant svn-check-working-copy}}.
 Since svnkit 1.8.4 we can read all svn working copy formats again, so our 
 checks will work on any checkout without forcefully choosing svnkit version.
 This patch removes the extra warnings and error messages and updates to 1.8.4.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5511) Upgrade to SvnKit 1.8.4 for checks

2014-03-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5511.
---

Resolution: Fixed

 Upgrade to SvnKit 1.8.4 for checks
 --

 Key: LUCENE-5511
 URL: https://issues.apache.org/jira/browse/LUCENE-5511
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Affects Versions: 4.7
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5511.patch


 We had a hack since LUCENE-5385 in our build, because svnkit 1.8.0 - 1.8.3 
 were not able to read svn 1.7 checkouts. Because of this the user had to 
 choose the right svnkit version when executing {{ant svn-check-working-copy}}.
 Since svnkit 1.8.4 we can read all svn working copy formats again, so our 
 checks will work on any checkout without forcefully choosing svnkit version.
 This patch removes the extra warnings and error messages and updates to 1.8.4.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5511) Upgrade to SvnKit 1.8.4 for checks

2014-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925224#comment-13925224
 ] 

ASF subversion and git services commented on LUCENE-5511:
-

Commit 1575715 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1575715 ]

Merged revision(s) 1575714 from lucene/dev/trunk:
LUCENE-5511: ant precommit / ant check-svn-working-copy now work again with 
any working copy format (thanks to svnkit 1.8.4).

 Upgrade to SvnKit 1.8.4 for checks
 --

 Key: LUCENE-5511
 URL: https://issues.apache.org/jira/browse/LUCENE-5511
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Affects Versions: 4.7
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5511.patch


 We had a hack since LUCENE-5385 in our build, because svnkit 1.8.0 - 1.8.3 
 were not able to read svn 1.7 checkouts. Because of this the user had to 
 choose the right svnkit version when executing {{ant svn-check-working-copy}}.
 Since svnkit 1.8.4 we can read all svn working copy formats again, so our 
 checks will work on any checkout without forcefully choosing svnkit version.
 This patch removes the extra warnings and error messages and updates to 1.8.4.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5511) Upgrade to SvnKit 1.8.4 for checks

2014-03-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5511:
--

Description: 
We have had a hack in our build since LUCENE-5385, because svnkit 1.8.0 - 1.8.3 
were not able to read svn 1.7 checkouts. Because of this, the user had to choose 
the right svnkit version when executing {{ant check-svn-working-copy}}.

Since svnkit 1.8.4, we can read all svn working copy formats again, so our 
checks will work on any checkout without having to choose the svnkit version 
explicitly.

This patch removes the extra warnings and error messages and updates to 1.8.4.

  was:
We have had a hack in our build since LUCENE-5385, because svnkit 1.8.0 - 1.8.3 
were not able to read svn 1.7 checkouts. Because of this, the user had to choose 
the right svnkit version when executing {{ant svn-check-working-copy}}.

Since svnkit 1.8.4, we can read all svn working copy formats again, so our 
checks will work on any checkout without having to choose the svnkit version 
explicitly.

This patch removes the extra warnings and error messages and updates to 1.8.4.


 Upgrade to SvnKit 1.8.4 for checks
 --

 Key: LUCENE-5511
 URL: https://issues.apache.org/jira/browse/LUCENE-5511
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Affects Versions: 4.7
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5511.patch


 We had a hack since LUCENE-5385 in our build, because svnkit 1.8.0 - 1.8.3 
 were not able to read svn 1.7 checkouts. Because of this the user had to 
 choose the right svnkit version when executing {{ant check-svn-working-copy}}.
 Since svnkit 1.8.4 we can read all svn working copy formats again, so our 
 checks will work on any checkout without forcefully choosing svnkit version.
 This patch removes the extra warnings and error messages and updates to 1.8.4.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1123: POMs out of sync

2014-03-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1123/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.OverseerTest.testOverseerFailure

Error Message:
Could not register as the leader because creating the ephemeral registration 
node in ZooKeeper failed

Stack Trace:
org.apache.solr.common.SolrException: Could not register as the leader because 
creating the ephemeral registration node in ZooKeeper failed
at org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at 
org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:431)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:428)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:385)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:372)
at 
org.apache.solr.cloud.ShardLeaderElectionContextBase$1.execute(ElectionContext.java:127)
at 
org.apache.solr.common.util.RetryUtil.retryOnThrowable(RetryUtil.java:31)
at 
org.apache.solr.cloud.ShardLeaderElectionContextBase.runLeaderProcess(ElectionContext.java:122)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:164)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:108)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:156)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:156)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:289)
at 
org.apache.solr.cloud.OverseerTest$MockZKController.publishState(OverseerTest.java:155)
at 
org.apache.solr.cloud.OverseerTest.testOverseerFailure(OverseerTest.java:666)




Build Log:
[...truncated 53213 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:488: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:176: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/extra-targets.xml:77:
 Java returned: 1

Total time: 144 minutes 8 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5831) Scale score PostFilter

2014-03-09 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925227#comment-13925227
 ] 

Joel Bernstein commented on SOLR-5831:
--

Peter,

I was able to do a first review of the code before heading out on vacation.

Very cool piece of code. How is this performing compared to using the scale() 
function?

The following issues were present in early versions of the 
CollapsingQParserPlugin, so you can look at its most recent version to see how 
they were resolved:

1) The ScoreScaleFilter class needs to have only the instance variables that 
are needed for the hashCode() and equals() methods, otherwise there will be all 
kinds of bugs with the Solr caches. So any work you're doing in the constructor 
of this class, and any state you're hanging onto, needs to be moved to the 
getFilterCollector() method.

2) The DummyScore class also needs to implement the docID() method. This is 
pretty simple to do; check the latest CollapsingQParserPlugin to see how it is 
handled.

3) I think getting this working with the QueryResultCache will be important. 
Early versions of the CollapsingQParserPlugin didn't do this, but standard 
grouping didn't either, so it wasn't a downgrade in functionality for 
FieldCollapsing. But people who use this feature will be surprised if the 
QueryResultCache stops working. So hashCode() and equals() will need to be 
implemented.

4) The value source needs a proper context (rcontext in the code). Latest 
version of the CollapsingQParserPlugin demonstrates this as well.
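As a rough, hypothetical illustration of points 1 and 3 above (the class and method names are made up; see the actual CollapsingQParserPlugin for the real pattern): cache identity comes only from the filter's parameters, while heavy per-request state is created lazily rather than held on the instance.

```java
import java.util.Objects;

public final class ScaleFilterKey {
    // Only the parameters that define cache identity are instance fields.
    private final double lower, upper;
    private final int maxScaleHits;

    public ScaleFilterKey(double lower, double upper, int maxScaleHits) {
        this.lower = lower;
        this.upper = upper;
        this.maxScaleHits = maxScaleHits;
    }

    // equals()/hashCode() over exactly those fields, so the
    // QueryResultCache can recognize equivalent filters.
    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ScaleFilterKey)) return false;
        ScaleFilterKey k = (ScaleFilterKey) o;
        return lower == k.lower && upper == k.upper
                && maxScaleHits == k.maxScaleHits;
    }

    @Override public int hashCode() {
        return Objects.hash(lower, upper, maxScaleHits);
    }

    /** Heavy per-request state is built here, not in the constructor,
     *  so cached instances stay cheap and comparable. */
    public float[] getFilterCollectorState() {
        return new float[Math.max(maxScaleHits, 0)];
    }
}
```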

Also, having good tests will be important, and they will probably be somewhat 
tricky to write. Using some form of randomized testing would be good, to ensure 
that random scores get normalized properly.

I'll check in on this when I get back from vacation.

Joel

  




 Scale score PostFilter
 --

 Key: SOLR-5831
 URL: https://issues.apache.org/jira/browse/SOLR-5831
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.7
Reporter: Peter Keegan
Priority: Minor
 Attachments: SOLR-5831.patch


 The ScaleScoreQParserPlugin is a PostFilter that performs score scaling.
 This is an alternative to using a function query wrapping a scale() wrapping 
 a query(). For example:
 select?qq={!edismax v='news' qf='title^2 
 body'}scaledQ=scale(product(query($qq),1),0,1)q={!func}sum(product(0.75,$scaledQ),product(0.25,field(myfield)))fq={!query
  v=$qq}
 The problem with this query is that it has to scale every hit. Usually, only 
 the returned hits need to be scaled, but there may be use cases where the 
 number of hits to be scaled is greater than the returned hit count, yet less 
 than or equal to the total hit count.
 Sample syntax:
 fq={!scalescore+l=0.0 u=1.0 maxscalehits=1 
 func=sum(product(sscore(),0.75),product(field(myfield),0.25))}
 l=0.0 u=1.0   //Scale scores to values between 0-1, inclusive 
 maxscalehits=1//The maximum number of result scores to scale (-1 = 
 all hits, 0 = results 'page' size)
 func=...  //Apply the composite function to each hit. The 
 scaled score value is accessed by the 'score()' value source
 All parameters are optional. The defaults are:
 l=0.0 u=1.0
 maxscalehits=0 (result window size)
 func=(null)
  
 Note: this patch is not complete, as it contains no test cases and may not 
 conform 
 to all the guidelines in http://wiki.apache.org/solr/HowToContribute. 
  
 I would appreciate any feedback on the usability and implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5512:
---

 Summary: Remove redundant typing (diamond operator) in trunk
 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir






--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5825) Separate http request creation and execution in SolrJ

2014-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13925232#comment-13925232
 ] 

ASF subversion and git services commented on SOLR-5825:
---

Commit 1575722 from [~erickoerickson] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1575722 ]

SOLR-5825, Separate http request creation and execution in SolrJ. Thanks Steve.

 Separate http request creation and execution in SolrJ
 -

 Key: SOLR-5825
 URL: https://issues.apache.org/jira/browse/SOLR-5825
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Steven Bower
Assignee: Erick Erickson
 Attachments: SOLR-5825.patch, SOLR-5825.patch


 In order to implement some custom behaviors, I split the request() method in 
 HttpSolrServer into two distinct methods, createMethod() and executeMethod(). 
 This allows for customization of either or both of these phases, versus having 
 everything in a single function.
 In my use case I extended HttpSolrServer to support client-side timeouts 
 (so_timeout, connectTimeout, and a request timeout), which I couldn't 
 accomplish without duplicating the code in request().
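The split described here can be sketched, hypothetically, with the JDK's own HTTP client (illustrative class names, not the actual HttpSolrServer code): request() stays a thin composition of two overridable phases, so a subclass can add a timeout without copying the method body.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

class SplitClient {
    protected final HttpClient http = HttpClient.newHttpClient();

    /** Phase 1: build the request; subclasses may customize. */
    protected HttpRequest createMethod(String url) {
        return HttpRequest.newBuilder(URI.create(url)).GET().build();
    }

    /** Phase 2: execute the request; subclasses may customize. */
    protected HttpResponse<String> executeMethod(HttpRequest req) throws Exception {
        return http.send(req, HttpResponse.BodyHandlers.ofString());
    }

    /** Thin composition; never needs to be duplicated by subclasses. */
    public String request(String url) throws Exception {
        return executeMethod(createMethod(url)).body();
    }
}

/** Adds a client-side request timeout by overriding only phase 1. */
class TimeoutClient extends SplitClient {
    @Override protected HttpRequest createMethod(String url) {
        return HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(5))
                .GET().build();
    }
}
```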



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5825) Separate http request creation and execution in SolrJ

2014-03-09 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-5825.
--

Resolution: Fixed

Thanks Steve!

 Separate http request creation and execution in SolrJ
 -

 Key: SOLR-5825
 URL: https://issues.apache.org/jira/browse/SOLR-5825
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Steven Bower
Assignee: Erick Erickson
 Attachments: SOLR-5825.patch, SOLR-5825.patch


 In order to implement some custom behaviors, I split the request() method in 
 HttpSolrServer into two distinct methods, createMethod() and executeMethod(). 
 This allows for customization of either or both of these phases, versus having 
 everything in a single function.
 In my use case I extended HttpSolrServer to support client-side timeouts 
 (so_timeout, connectTimeout, and a request timeout), which I couldn't 
 accomplish without duplicating the code in request().



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 9625 - Still Failing!

2014-03-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9625/
Java: 32bit/jdk1.6.0_45 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 618 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/temp/junit4-J1-20140309_162051_736.sysout
   [junit4]  JVM J1: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0xf8b3, pid=12029, tid=4085967680
   [junit4] #
   [junit4] # JRE version: 6.0_45-b06
   [junit4] # Java VM: Java HotSpot(TM) Client VM (20.45-b01 mixed mode, 
sharing linux-x86 )
   [junit4] # Problematic frame:
   [junit4] # J  
org.apache.lucene.store.TestDirectory.testDirectInstantiation()V
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/J1/hs_err_pid12029.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://java.sun.com/webapps/bugreport/crash.jsp
   [junit4] #
   [junit4]  JVM J1: EOF 

[...truncated 779 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/var/lib/jenkins/tools/java/32bit/jdk1.6.0_45/jre/bin/java -client 
-XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps 
-Dtests.prefix=tests -Dtests.seed=FAE04F5E0D90F1BC -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.8 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/temp
 
-Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.8-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.disableHdfs=true -Dfile.encoding=ISO-8859-1 -classpath 

[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925242#comment-13925242
 ] 

Furkan KAMACI commented on LUCENE-5512:
---

Currently I've found 1542 usages of it in trunk. I can work on this issue.
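For context, the cleanup in question replaces repeated generic type arguments with the Java 7 diamond operator:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Main {
    static String demo() {
        // Before Java 7, the generic arguments had to be repeated on the right:
        Map<String, List<Integer>> before = new HashMap<String, List<Integer>>();
        // With the diamond operator the compiler infers them:
        Map<String, List<Integer>> after = new HashMap<>();
        after.put("counts", new ArrayList<>());
        after.get("counts").add(42);
        return after.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());  // {counts=[42]}
    }
}
```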

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir








[jira] [Updated] (LUCENE-5502) equals method of TermsFilter might equate two different filters

2014-03-09 Thread Igor Motov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Motov updated LUCENE-5502:
---

Attachment: LUCENE-5502.patch

Thanks Adrien. You are right, I missed the offsets. Here is an updated version. 
I cannot use Arrays.equals for termsBytes and offsets because we compare only 
parts of the arrays, but I can switch to ArrayUtil.equals if you think it would 
make more sense.
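The constraint mentioned above (only part of each array is meaningful) can be illustrated in isolation; `rangeEquals` below is an illustrative stand-in, not the actual ArrayUtil method:

```java
import java.util.Arrays;

public class Main {
    // When only the first `len` bytes of each buffer are valid -- as with a
    // pooled buffer like TermsFilter's termsBytes -- whole-array comparison
    // is wrong, and the equality check must walk the sub-range explicitly.
    static boolean rangeEquals(byte[] a, byte[] b, int len) {
        if (a.length < len || b.length < len) {
            return false;
        }
        for (int i = 0; i < len; i++) {
            if (a[i] != b[i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] x = {1, 2, 3, 0, 0};  // only the first 3 bytes are in use
        byte[] y = {1, 2, 3, 9, 9};
        System.out.println(Arrays.equals(x, y));   // false: trailing bytes differ
        System.out.println(rangeEquals(x, y, 3));  // true: valid prefixes match
    }
}
```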

 equals method of TermsFilter might equate two different filters
 ---

 Key: LUCENE-5502
 URL: https://issues.apache.org/jira/browse/LUCENE-5502
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/query/scoring
Affects Versions: 4.7
Reporter: Igor Motov
 Attachments: LUCENE-5502.patch, LUCENE-5502.patch


 If two terms filters 1) have the same number of terms, 2) use the same field 
 in all of these terms, and 3) have term values that happen to have the same 
 hash codes, the two filters are considered equal as long as the first term 
 is the same in both filters.






[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925246#comment-13925246
 ] 

Robert Muir commented on LUCENE-5512:
-

There are way more than that. I don't recommend the use of automated tools (it 
sounds easy, but they don't take care of style, generated code, etc.).


 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir








[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925247#comment-13925247
 ] 

Furkan KAMACI commented on LUCENE-5512:
---

I won't use an automated tool, because this is important and we should be 
careful.

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir








[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925249#comment-13925249
 ] 

Erick Erickson commented on LUCENE-5512:


Sure hope the eventual (massive) check-in/merge works. Is there any merit in 
doing this in chunks that are more bite-sized? Perhaps making this an umbrella 
JIRA?

I just worry that this is going to touch lots and lots and lots of files in the 
code base, inconveniencing people who are in the middle of some work.

And I have to ask, what _good_ is this doing us? Does it make any functional 
difference or is this simply esthetic? If the latter, then I suspect that doing 
this is going to cause some disruption to no good purpose. Reconciling any 
update issues for people who have significant outstanding chunks of code with 
changes may be interesting. 

Or I may be imagining problems that don't actually exist. I guess under any 
circumstances since I'm not doing the work I don't really have much say...

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir








[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925250#comment-13925250
 ] 

Robert Muir commented on LUCENE-5512:
-

Furkan: I'll give you my patch if you want to take over.

The safest approach: make it a compile error in Eclipse.

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir








[jira] [Updated] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5512:


Attachment: LUCENE-5512.patch

Furkan: attached is my patch; I did some parts of the codebase. Happy to have 
you take over here!

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch









[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925258#comment-13925258
 ] 

Uwe Schindler commented on LUCENE-5512:
---

I think that before backporting this to 4.x, I would merge the previous 
patches. Once the vote is over, I will start backporting as many of the 
previous Java 7 commits as possible. This includes reverting the quick-fix 
commits that were made to prevent compile issues in 4.x.

My personal opinion about the diamond operator is mixed: I don't see it as 
important. Much more important is migrating the code over to try-with-resources 
and only using IOUtils in places where the open/close is not in the same code 
block. But this needs more careful review!
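The Java 7 pattern Uwe refers to, sketched for the common case where open and close sit in the same block (the temp file here is just for the demo):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class Main {
    // try-with-resources closes the stream automatically, even when the body
    // throws -- no finally block, and no IOUtils helper is needed as long as
    // open and close live in the same scope.
    static String roundTrip(String text) throws IOException {
        File f = File.createTempFile("demo", ".txt");
        try (Writer w = new OutputStreamWriter(new FileOutputStream(f), StandardCharsets.UTF_8)) {
            w.write(text);
        }  // w.close() happens here, exception or not
        String line;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(new FileInputStream(f), StandardCharsets.UTF_8))) {
            line = r.readLine();
        }
        f.delete();
        return line;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("hello"));  // hello
    }
}
```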

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch









[jira] [Commented] (LUCENE-5507) fix hunspell affix file loading

2014-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925257#comment-13925257
 ] 

ASF subversion and git services commented on LUCENE-5507:
-

Commit 1575729 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1575729 ]

LUCENE-5507: fix hunspell affix loading for certain dictionaries

 fix hunspell affix file loading
 ---

 Key: LUCENE-5507
 URL: https://issues.apache.org/jira/browse/LUCENE-5507
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Robert Muir
 Attachments: LUCENE-5507.patch


 Some newer dictionaries can't be loaded (Arabic, Hungarian, Turkmen) just 
 because we do a hackish mark/reset thing to go find the SET directive, learn 
 the encoding, and then revisit.
 The problem is that we would need a 2MB buffer for some of these newer ones, 
 which is a little extreme. So instead we just copy to a temp file and do two 
 passes.
 This also fixes a bug where an alias that maps to no flags would cause an 
 exception (this is OK).
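The temp-file-and-two-passes idea can be sketched as follows (the SET parsing and fallback encoding below are illustrative assumptions, not the patch's actual code):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class Main {
    // Instead of mark()/reset() with a buffer big enough for the whole
    // dictionary, spool the stream to a temp file once and read it twice.
    static String twoPass(byte[] dictionary) throws IOException {
        Path tmp = Files.createTempFile("affix", ".tmp");
        Files.copy(new ByteArrayInputStream(dictionary), tmp,
                   StandardCopyOption.REPLACE_EXISTING);
        // Pass 1: scan (in a safe 8-bit charset) for the SET directive.
        String encoding = "ISO-8859-1";  // fallback chosen for this demo
        try (BufferedReader r = Files.newBufferedReader(tmp, StandardCharsets.ISO_8859_1)) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("SET ")) {
                    encoding = line.substring(4).trim();
                    break;
                }
            }
        }
        // Pass 2: re-read the whole file with the discovered encoding.
        int lines = 0;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(Files.newInputStream(tmp), encoding))) {
            while (r.readLine() != null) {
                lines++;
            }
        }
        Files.delete(tmp);
        return encoding + ", " + lines + " lines";
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "SET UTF-8\nSFX A Y 1\n".getBytes(StandardCharsets.UTF_8);
        System.out.println(twoPass(data));  // UTF-8, 2 lines
    }
}
```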






[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925260#comment-13925260
 ] 

Robert Muir commented on LUCENE-5512:
-

But we don't need to wait on anything to clean up trunk. It's been on Java 7 
for a long time.

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch









[jira] [Commented] (LUCENE-5507) fix hunspell affix file loading

2014-03-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925261#comment-13925261
 ] 

ASF subversion and git services commented on LUCENE-5507:
-

Commit 1575730 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1575730 ]

LUCENE-5507: fix hunspell affix loading for certain dictionaries

 fix hunspell affix file loading
 ---

 Key: LUCENE-5507
 URL: https://issues.apache.org/jira/browse/LUCENE-5507
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Robert Muir
 Attachments: LUCENE-5507.patch


 Some newer dictionaries can't be loaded (Arabic, Hungarian, Turkmen) just 
 because we do a hackish mark/reset thing to go find the SET directive, learn 
 the encoding, and then revisit.
 The problem is that we would need a 2MB buffer for some of these newer ones, 
 which is a little extreme. So instead we just copy to a temp file and do two 
 passes.
 This also fixes a bug where an alias that maps to no flags would cause an 
 exception (this is OK).






[jira] [Resolved] (LUCENE-5507) fix hunspell affix file loading

2014-03-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5507.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.8

 fix hunspell affix file loading
 ---

 Key: LUCENE-5507
 URL: https://issues.apache.org/jira/browse/LUCENE-5507
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5507.patch


 Some newer dictionaries can't be loaded (Arabic, Hungarian, Turkmen) just 
 because we do a hackish mark/reset thing to go find the SET directive, learn 
 the encoding, and then revisit.
 The problem is that we would need a 2MB buffer for some of these newer ones, 
 which is a little extreme. So instead we just copy to a temp file and do two 
 passes.
 This also fixes a bug where an alias that maps to no flags would cause an 
 exception (this is OK).






[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925292#comment-13925292
 ] 

Uwe Schindler commented on LUCENE-5512:
---

I was just referring to the backport of this. We should do it once I have 
backported the earlier stuff. I am already working on that (backporting the 
smoketester, build files, initial FileChannel changes in NIO/MMapDir, ...). I 
will open an issue once the vote succeeds, post patches, and manage the 
backports.

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch









[jira] [Updated] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-03-09 Thread Gregg Donovan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregg Donovan updated SOLR-5768:


Attachment: SOLR-5768.diff

Is this all that's needed for a distrib.singlePass parameter? It seems like 
SOLR-1880 may have done most of the work.

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}






[jira] [Created] (LUCENE-5513) Binary DocValues Updates

2014-03-09 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created LUCENE-5513:


 Summary: Binary DocValues Updates
 Key: LUCENE-5513
 URL: https://issues.apache.org/jira/browse/LUCENE-5513
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Mikhail Khludnev
Priority: Minor









[jira] [Updated] (LUCENE-5513) Binary DocValues Updates

2014-03-09 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-5513:
-

Component/s: core/index
Description: LUCENE-5189 was a great move in this direction, and I wish to 
continue it. The reason for having this feature is a join index: writing 
child docnums into the parent's binary DocValues field. I can try to proceed 
with the implementation, but I'm not very experienced in such deep Lucene 
internals. [~shaie], any hint on where to begin is much appreciated. 

 Binary DocValues Updates
 

 Key: LUCENE-5513
 URL: https://issues.apache.org/jira/browse/LUCENE-5513
 Project: Lucene - Core
  Issue Type: Wish
  Components: core/index
Reporter: Mikhail Khludnev
Priority: Minor

 LUCENE-5189 was a great move in this direction, and I wish to continue it. 
 The reason for having this feature is a join index: writing child docnums 
 into the parent's binary DocValues field. I can try to proceed with the 
 implementation, but I'm not very experienced in such deep Lucene internals. 
 [~shaie], any hint on where to begin is much appreciated. 






[jira] [Created] (SOLR-5843) No way to clear error state of a core that doesn't even exist any more

2014-03-09 Thread Nathan Neulinger (JIRA)
Nathan Neulinger created SOLR-5843:
--

 Summary: No way to clear error state of a core that doesn't even 
exist any more
 Key: SOLR-5843
 URL: https://issues.apache.org/jira/browse/SOLR-5843
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Nathan Neulinger


Created collections with missing configs - this is known to create a problem 
state. Those collections have all since been deleted -- but one of my nodes 
still insists that there are initialization errors.

There are no references to those 'failed' cores in any of the cloud tabs, or in 
ZK, or in the directories on the server itself. 

There should be some easy way to refresh this state or to clear them out 
without having to restart the instance. 






[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.7.0_51) - Build # 9627 - Failure!

2014-03-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9627/
Java: 64bit/jdk1.7.0_51 -XX:-UseCompressedOops -XX:+UseG1GC -XX:-UseSuperWord

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:59116 within 45000 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:59116 within 45000 ms
at 
__randomizedtesting.SeedInfo.seed([134AB052D99CD15C:92AC3E4AAEC3B160]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:150)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:101)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:91)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:200)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)

[jira] [Commented] (SOLR-5843) No way to clear error state of a core that doesn't even exist any more

2014-03-09 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925316#comment-13925316
 ] 

Furkan KAMACI commented on SOLR-5843:
-

Could you write down a scenario so that I can reproduce it? I can work on this issue.

 No way to clear error state of a core that doesn't even exist any more
 --

 Key: SOLR-5843
 URL: https://issues.apache.org/jira/browse/SOLR-5843
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Nathan Neulinger
  Labels: cloud, failure, initialization

 Created collections with missing configs - this is known to create a problem 
 state. Those collections have all since been deleted -- but one of my nodes 
 still insists that there are initialization errors.
 There are no references to those 'failed' cores in any of the cloud tabs, or 
 in ZK, or in the directories on the server itself. 
 There should be some easy way to refresh this state or to clear them out 
 without having to restart the instance. 






[jira] [Commented] (SOLR-5843) No way to clear error state of a core that doesn't even exist any more

2014-03-09 Thread Nathan Neulinger (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925317#comment-13925317
 ] 

Nathan Neulinger commented on SOLR-5843:


Two node SolrCloud 4.6.1 deployment.

Do a collection create with a config name that isn't mapped in ZK. You'll get 
the initialization failures.

Now - this part is a bit vague - I don't remember the exact cleanup operations 
I did - but I think if you go and delete the invalid collection it may or may 
not make the errors go away. I thought that in previous cases when I issued the 
calls to delete the improperly created collections that it cleaned the errors 
up, but it doesn't appear to have in this case. It's possible that one of the 
nodes was in a weird state at the time, not sure. 

My current situation is that I have two nodes, and ONE of them still has the 
bogus errors on it, even though all the tabs (and the zk tree view) show no 
references to the invalid cores.


beta1_urlDebug_x_v16_shard1_replica2: 
org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
 Could not find configName for collection 
beta1_urlDebug_x_v16_shard1_replica2 found:[c-v17, default]

It's almost like it lost track of the fact that the collection was deleted for 
the purpose of the error reporting.

I also can't find ANY reference to that error in the logs currently on the box, 
so it appears to be in-memory only. 

 No way to clear error state of a core that doesn't even exist any more
 --

 Key: SOLR-5843
 URL: https://issues.apache.org/jira/browse/SOLR-5843
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Nathan Neulinger
  Labels: cloud, failure, initialization

 Created collections with missing configs - this is known to create a problem 
 state. Those collections have all since been deleted -- but one of my nodes 
 still insists that there are initialization errors.
 There are no references to those 'failed' cores in any of the cloud tabs, or 
 in ZK, or in the directories on the server itself. 
 There should be some easy way to refresh this state or to clear them out 
 without having to restart the instance. 






[jira] [Commented] (SOLR-5843) No way to clear error state of a core that doesn't even exist any more

2014-03-09 Thread Nathan Neulinger (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925319#comment-13925319
 ] 

Nathan Neulinger commented on SOLR-5843:


External 3 node zk ensemble being used.

 No way to clear error state of a core that doesn't even exist any more
 --

 Key: SOLR-5843
 URL: https://issues.apache.org/jira/browse/SOLR-5843
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Nathan Neulinger
  Labels: cloud, failure, initialization

 Created collections with missing configs - this is known to create a problem 
 state. Those collections have all since been deleted -- but one of my nodes 
 still insists that there are initialization errors.
 There are no references to those 'failed' cores in any of the cloud tabs, or 
 in ZK, or in the directories on the server itself. 
 There should be some easy way to refresh this state or to clear them out 
 without having to restart the instance. 






[jira] [Commented] (LUCENE-4922) A SpatialPrefixTree based on the Hilbert Curve and variable grid sizes

2014-03-09 Thread Varun V Shenoy (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925320#comment-13925320
 ] 

Varun V Shenoy commented on LUCENE-4922:
-

Thanks Hatim, I have been going through your proposal, and it helped me a lot. 
I am getting really excited.

 A SpatialPrefixTree based on the Hilbert Curve and variable grid sizes
 --

 Key: LUCENE-4922
 URL: https://issues.apache.org/jira/browse/LUCENE-4922
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
  Labels: gsoc2014
 Attachments: HilbertConverter.zip


 My wish-list for an ideal SpatialPrefixTree has these properties:
 * Hilbert Curve ordering
 * Variable grid size per level (ex: 256 at the top, 64 at the bottom, 16 for 
 all in-between)
 * Compact binary encoding (so-called Morton number)
 * Works for geodetic (i.e. lat & lon) and non-geodetic
 Some bonus wishes for use in geospatial:
 * Use an equal-area projection such that each cell has an equal area to all 
 others at the same level.
 * When advancing a grid level, if a cell's width is less than half its 
 height, then divide it into 4 vertically stacked cells instead of 2 by 2. 
 The point is to avoid super-skinny cells, which occur towards the poles and 
 degrade performance.
 All of this requires some basic performance benchmarks to measure the effects 
 of these characteristics.
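To make the first wish concrete, here is the classic iterative index-to-coordinate conversion for a Hilbert curve (a sketch for intuition only, not Lucene code; the `HilbertSketch`/`d2xy` names are invented, and the grid side `n` is assumed to be a power of two):

```java
public class HilbertSketch {

  /** Map a distance d along the Hilbert curve to (x, y) on an n-by-n grid. */
  static int[] d2xy(int n, int d) {
    int x = 0, y = 0, t = d;
    for (int s = 1; s < n; s *= 2) {
      int rx = 1 & (t / 2);
      int ry = 1 & (t ^ rx);
      if (ry == 0) {          // rotate the quadrant so sub-curves join up
        if (rx == 1) {
          x = s - 1 - x;
          y = s - 1 - y;
        }
        int tmp = x; x = y; y = tmp;
      }
      x += s * rx;
      y += s * ry;
      t /= 4;
    }
    return new int[] {x, y};
  }

  public static void main(String[] args) {
    // Consecutive curve indices always land on grid-adjacent cells -- the
    // locality property that makes this ordering attractive for a prefix tree.
    for (int d = 0; d < 16; d++) {
      int[] xy = d2xy(4, d);
      System.out.println(d + " -> (" + xy[0] + ", " + xy[1] + ")");
    }
  }
}
```

Cells that are close along the curve are close in space, so range scans over the encoded terms touch spatially coherent regions.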






[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925325#comment-13925325
 ] 

Furkan KAMACI commented on LUCENE-5512:
---

When I finish it I will attach the patch file.
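For context, the redundancy being removed looks like this (an illustrative before/after under Java 7, not lines from the actual patch):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DiamondSketch {
  public static void main(String[] args) {
    // Before (Java 6): the type arguments are repeated on the right-hand side.
    Map<String, List<Integer>> before = new HashMap<String, List<Integer>>();

    // After (Java 7 diamond operator): the compiler infers the arguments.
    Map<String, List<Integer>> after = new HashMap<>();

    before.put("a", new ArrayList<Integer>());
    after.put("a", new ArrayList<>());
    System.out.println(before.equals(after)); // true
  }
}
```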

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch









[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_51) - Build # 9734 - Failure!

2014-03-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9734/
Java: 64bit/jdk1.7.0_51 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-UseSuperWord

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:53236 within 45000 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:53236 within 45000 ms
at 
__randomizedtesting.SeedInfo.seed([CA88004149DD4CC1:4B6E8E593E822CFD]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:150)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:101)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:91)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:200)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Created] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-09 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-5514:
-

 Summary: Backport Java 7 changes from trunk to Lucene 4.8
 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8


This issue tracks the backporting of various issues that are related to Java 7 
to 4.8.

It will also revert build fixes that worked around compile failures 
(especially stuff like {{Long/Integer.compare()}}).

I will attach a patch soon (for review).
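As an illustration of the kind of workaround being reverted (a hedged sketch, not a diff from the patch): Java 6 comparator bodies had to be written by hand, while Java 7's Long.compare/Integer.compare are equivalent, overflow-safe, and JIT intrinsics.

```java
public class CompareSketch {
  // Java 6 style: a hand-written comparator body; the tempting
  // shortcut (int) (a - b) silently overflows for large magnitudes.
  static int compareJava6(long a, long b) {
    return a < b ? -1 : (a > b ? 1 : 0);
  }

  // Java 7 style: Long.compare is equivalent, shorter, and a JIT intrinsic.
  static int compareJava7(long a, long b) {
    return Long.compare(a, b);
  }

  public static void main(String[] args) {
    long a = 0L, b = Long.MIN_VALUE;
    System.out.println((int) (a - b));      // 0 -- overflow makes them look equal
    System.out.println(compareJava6(a, b)); // 1
    System.out.println(compareJava7(a, b)); // 1
  }
}
```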






[jira] [Updated] (SOLR-1632) Distributed IDF

2014-03-09 Thread Vitaliy Zhovtyuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Zhovtyuk updated SOLR-1632:
---

Attachment: SOLR-1632.patch

Updated to latest trunk.
Cleaned code duplicates. Fixed org.apache.solr.search.stats.TestLRUStatsCache, 
added test for org.apache.solr.search.stats.ExactSharedStatsCache.
Fixed javadocs.

 Distributed IDF
 ---

 Key: SOLR-1632
 URL: https://issues.apache.org/jira/browse/SOLR-1632
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.5
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 4.7, 5.0

 Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, 
 SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, distrib-2.patch, 
 distrib.patch


 Distributed IDF is a valuable enhancement for distributed search across 
 non-uniform shards. This issue tracks the proposed implementation of an API 
 to support this functionality in Solr.






[jira] [Updated] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5514:
--

Attachment: LUCENE-5514.patch

Here is the backport / revert of the following commits:

{noformat}
Merged revision(s) 1575519, 1575446 from 
lucene/dev/branches/branch_4x/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java:
SOLR-5818: Prevent overflow in Hoss' fix for Java 1.6 source compatibility

SOLR-5818: Fix java1.6 source compatibility

Merged revision(s) 1531667 from lucene/dev/branches/branch_4x:
LUCENE-5260: don't use java7-only API


Merged revision(s) 1450248 from lucene/dev/branches/branch_4x:
LUCENE-4798: use java6 compatible method


Merged revision(s) 1509017 from lucene/dev/branches/branch_4x:
SOLR-4221: Fix compile error on Java6 due to use of diamond operator


Merged revision(s) 1520642 from lucene/dev/branches/branch_4x:
java6


Merged revision(s) 1544816 from lucene/dev/branches/branch_4x:
SOLR-5378: Fix compile issues on Java6


Merged revision(s) 1571338 from lucene/dev/branches/branch_4x:
fixing  build failure. remove use of java 7 API


Merged revision(s) 1571338 from lucene/dev/branches/branch_4x:
fixing  build failure. remove use of java 7 API


Merged revision(s) 1575318 from lucene/dev/branches/branch_4x:
avoid Integer.compare


Merged revision(s) 1466733 from lucene/dev/branches/branch_4x:
AssertionError(String,Throwable) doesn't exist in Java 6.



Merged revision(s) 1538895 from 
lucene/dev/branches/branch_4x/lucene/misc/src/test/org/apache/lucene/index/sorter/TestBlockJoinSorter.java:
Fix test: Java 6 doesn't have Long.compare.



Merged revision(s) 1538895 from 
lucene/dev/branches/branch_4x/lucene/misc/src/test/org/apache/lucene/index/sorter/TestBlockJoinSorter.java:
Fix test: Java 6 doesn't have Long.compare.



Merged revision(s) 1457751 from lucene/dev/trunk:
LUCENE-4747: Remove reflection from IOUtils for supressing caughth Exceptions


Merged revision(s) 1459437, 1499935 from lucene/dev/trunk:
LUCENE-4848: Use Java 7 NIO2-FileChannel instead of RandomAccessFile for 
NIOFSDirectory and MMapDirectory

LUCENE-5086: RamUsageEstimator now uses official Java 7 API or a proprietary 
Oracle Java 6 API to get Hotspot MX bean, preventing AWT classes to be loaded 
on MacOSX

{noformat}

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8

 Attachments: LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}).
 I will attach a patch soon (for review).






[jira] [Updated] (LUCENE-5488) FilteredQuery.explain does not honor FilterStrategy

2014-03-09 Thread John Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Wang updated LUCENE-5488:
--

Attachment: LUCENE-5488.patch

 FilteredQuery.explain does not honor FilterStrategy
 ---

 Key: LUCENE-5488
 URL: https://issues.apache.org/jira/browse/LUCENE-5488
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.6.1
Reporter: John Wang
Assignee: Michael Busch
 Attachments: LUCENE-5488.patch, LUCENE-5488.patch, LUCENE-5488.patch


 Some Filter implementations produce DocIdSets without the iterator() 
 implementation, such as o.a.l.facet.range.Range.getFilter(). It is done with 
 the intention to be used in conjunction with FilteredQuery with 
 FilterStrategy set to be QUERY_FIRST_FILTER_STRATEGY for performance reasons.
 However, this behavior is not honored by FilteredQuery.explain where 
 docidset.iterator is called regardless and causing such valid usages of above 
 filter types to fail.
 The fix is to check bits() first and fall back to iterator() if bits() is 
 null; in that case, the input Filter is indeed bad.
 See attached unit test, which fails without this patch.
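The bits()-first pattern described above can be sketched against simplified stand-in interfaces (the `DocIdSetLike`/`BitsLike` types are invented for illustration; they only mimic the shape of Lucene's DocIdSet and Bits):

```java
/** Simplified stand-ins for Lucene's DocIdSet/Bits (assumptions for illustration). */
interface BitsLike { boolean get(int doc); }
interface DocIdSetLike {
  BitsLike bits();     // may be null
  int[] iterator();    // may be null for bits-only sets
}

public class ExplainSketch {
  /** Bits-first check, mirroring the proposed explain() fix:
   *  consult random-access bits() before falling back to the iterator. */
  static boolean matches(DocIdSetLike set, int doc) {
    BitsLike bits = set.bits();
    if (bits != null) {
      return bits.get(doc);  // random-access path: works for bits-only filters
    }
    int[] it = set.iterator();
    if (it == null) {
      throw new IllegalStateException("Filter produced neither bits nor iterator");
    }
    for (int d : it) if (d == doc) return true;
    return false;
  }

  public static void main(String[] args) {
    // A bits-only set (iterator() == null), the case that used to fail.
    DocIdSetLike bitsOnly = new DocIdSetLike() {
      public BitsLike bits() { return doc -> doc % 2 == 0; }
      public int[] iterator() { return null; }
    };
    System.out.println(matches(bitsOnly, 4)); // true
  }
}
```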






[jira] [Commented] (LUCENE-5488) FilteredQuery.explain does not honor FilterStrategy

2014-03-09 Thread John Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925353#comment-13925353
 ] 

John Wang commented on LUCENE-5488:
---

Thanks Lei for pointing this out.
New patch attached.

 FilteredQuery.explain does not honor FilterStrategy
 ---

 Key: LUCENE-5488
 URL: https://issues.apache.org/jira/browse/LUCENE-5488
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.6.1
Reporter: John Wang
Assignee: Michael Busch
 Attachments: LUCENE-5488.patch, LUCENE-5488.patch, LUCENE-5488.patch


 Some Filter implementations produce DocIdSets without the iterator() 
 implementation, such as o.a.l.facet.range.Range.getFilter(). It is done with 
 the intention to be used in conjunction with FilteredQuery with 
 FilterStrategy set to be QUERY_FIRST_FILTER_STRATEGY for performance reasons.
 However, this behavior is not honored by FilteredQuery.explain where 
 docidset.iterator is called regardless and causing such valid usages of above 
 filter types to fail.
 The fix is to check bits() first and fall back to iterator() if bits() is 
 null; in that case, the input Filter is indeed bad.
 See attached unit test, which fails without this patch.






[jira] [Commented] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925354#comment-13925354
 ] 

Uwe Schindler commented on LUCENE-5514:
---

I think this is fine as a first step. It does not contain all the changes 
needed to have identical Java 7 code in trunk and 4.x. The problem is merged 
code that was made Java 6 compliant while merging; this is hard to detect. 
Some of those spots were already fixed (like compare methods implemented with 
Long.signum), but others, like multi-catch, were not. I don't think we should 
put too much effort into fixing this. What matters is that the optimized Java 
7 methods (intrinsics) are used in the comparators and that the new 
FileChannel APIs are used.

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8

 Attachments: LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}).
 I will attach a patch soon (for review).






[jira] [Commented] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925355#comment-13925355
 ] 

Uwe Schindler commented on LUCENE-5514:
---

I will first commit this stuff and later clean up the documentation / 
changes.txt. This needs to be done in trunk first and backported separately.

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8

 Attachments: LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}).
 I will attach a patch soon (for review).






[jira] [Updated] (LUCENE-5514) Backport Java 7 changes from trunk to Lucene 4.8

2014-03-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5514:
--

Attachment: LUCENE-5514.patch

The last patch missed changes in {{RAMUsageEstimator}}. We no longer need 
reflection to get the management bean without a proxy (the proxy loads the 
crazy AWT classes on OSX). This is the code from trunk.

 Backport Java 7 changes from trunk to Lucene 4.8
 

 Key: LUCENE-5514
 URL: https://issues.apache.org/jira/browse/LUCENE-5514
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8

 Attachments: LUCENE-5514.patch, LUCENE-5514.patch


 This issue tracks the backporting of various issues that are related to Java 
 7 to 4.8.
 It will also revert build fixes that worked around compile failures 
 (especially stuff like {{Long/Integer.compare()}}).
 I will attach a patch soon (for review).






[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Furkan KAMACI (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925373#comment-13925373
 ] 

Furkan KAMACI commented on LUCENE-5512:
---

I've finished it. Compilation and tests passed without any errors. I will 
check it one more time and attach the patch. I will apply the changes to the 
Lucene module; will anybody open a Jira issue for the Solr module, or can I 
apply the same changes there too?

[~erickerickson] you are right. I've touched many files in the code base. 
However, I don't think it will cause any conflict (at least any real 
conflict) for anybody working on an issue, and I think the Lucene source code 
became cleaner.

[~rcmuir] if you want, I can do the same thing for try-with-resources in 
another Jira issue.

 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch









[jira] [Commented] (LUCENE-5512) Remove redundant typing (diamond operator) in trunk

2014-03-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925378#comment-13925378
 ] 

Erick Erickson commented on LUCENE-5512:


Fire away. Personally, the only thing of mine that might require some work is 
the whole Analytics patch I've had hanging while waiting for the test 
failures to stop. But that's almost entirely new code, so I really don't 
anticipate much to do.

And don't get me wrong, I think moving to Java 7 is a fine thing. I was somehow 
thinking that it would be inappropriate to do that before 5.0, but clearly I 
was wrong. As evidence I offer the enthusiasm with which moving to Java 7 for 
Solr/Lucene 4.8 has been received.

I guess what I envision at this point is that the things that have been 
bugging people will get attention now that the Java 6 compatibility 
constraint is being removed. And the whole try-with-resources thing is 
significant IMO; I've been tripped up by this before, and Uwe rescued me.

Thanks for putting the effort in here!
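The try-with-resources feature mentioned above, in a minimal sketch (illustrative only; the point is that close() runs automatically even when the body throws, which is exactly the cleanup that is easy to get wrong by hand):

```java
public class TryWithSketch {
  static class Resource implements AutoCloseable {
    static int closed = 0;
    @Override public void close() { closed++; }  // narrowed: no checked exception
  }

  // Java 7 try-with-resources: close() runs automatically, even if the body
  // throws, and an exception thrown by close() is attached as "suppressed"
  // instead of silently replacing the body's exception.
  static void useResource() {
    try (Resource r = new Resource()) {
      // work with r
    }
  }

  public static void main(String[] args) {
    useResource();
    System.out.println(Resource.closed); // 1
  }
}
```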


 Remove redundant typing (diamond operator) in trunk
 ---

 Key: LUCENE-5512
 URL: https://issues.apache.org/jira/browse/LUCENE-5512
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5512.patch









[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-fcs-b132) - Build # 9736 - Failure!

2014-03-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9736/
Java: 32bit/jdk1.8.0-fcs-b132 -server -XX:+UseG1GC

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:51843 within 45000 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:51843 within 45000 ms
at 
__randomizedtesting.SeedInfo.seed([74AE33267E9D4C2D:F548BD3E09C22C11]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:150)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:101)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:91)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:200)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b07) - Build # 9630 - Failure!

2014-03-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9630/
Java: 32bit/jdk1.7.0_60-ea-b07 -server -XX:+UseParallelGC

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:60072 within 45000 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:60072 within 45000 ms
at 
__randomizedtesting.SeedInfo.seed([29D3843EB8EA4BF5:A8350A26CFB52BC9]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:150)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:101)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:91)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:200)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (SOLR-5843) No way to clear error state of a core that doesn't even exist any more

2014-03-09 Thread Nathan Neulinger (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925416#comment-13925416
 ] 

Nathan Neulinger commented on SOLR-5843:


In case it helps any - interesting result - I restarted the node with the 
error this evening, and that bogus/broken collection spontaneously tried to 
recreate itself on that node. No error msg in the log though - it just shows 
all replicas as being down.

I issued a delete against it and got this error:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">423</int></lst>
<lst name="failure"><str name="10.220.16.191:8983_solr">org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Server Error

request: http://10.220.16.191:8983/solr/admin/cores</str></lst>
</response>


but it did clear it out... Not sure what state it was in. If y'all think this 
is a goofy edge case that isn't likely to recur, go ahead and close this. 
Either way, though, I do think there should be a way to tell Solr "clear your 
errors and retry/refresh". 

 No way to clear error state of a core that doesn't even exist any more
 --

 Key: SOLR-5843
 URL: https://issues.apache.org/jira/browse/SOLR-5843
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Nathan Neulinger
  Labels: cloud, failure, initialization

 Created collections with missing configs - this is known to create a problem 
 state. Those collections have all since been deleted -- but one of my nodes 
 still insists that there are initialization errors.
 There are no references to those 'failed' cores in any of the cloud tabs, or 
 in ZK, or in the directories on the server itself. 
 There should be some easy way to refresh this state or to clear them out 
 without having to restart the instance. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5842) facet.pivot need provide the more information and additional function

2014-03-09 Thread Raintung Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raintung Li updated SOLR-5842:
--

Description: 
Because faceting applies facet.limit and facet.offset, we can't get the size 
of the next-level array for facet.pivot. To get the next pivot's size today, 
you have to set facet.limit to the maximum integer and count the array 
yourself, which returns a huge number of pivot-field terms and impacts both 
the network and the client. 
Proposed additions to the API, for example:
facet=true&facet.pivot=test,testb,id
facet.pivot.min.field=id    -- Get the min id value
facet.pivot.max.field=id    -- Get the max id value
facet.pivot.sum.field=id    -- Sum the id values
facet.pivot.count=true      -- Enable the array-size function
facet.pivot.count.field=id  -- Get the id array size
facet.pivot.count.next=true -- Get the next pivot field's array size

Response:
<lst name="facet_pivot">
<long name="idSUM">572</long>
<long name="idMAX">333</long>
<long name="idMIN">1</long>
<long name="idArrCount">12</long>
<arr name="test,testb,id">
<lst>
<str name="field">test</str>
<str name="value">change.me</str>
<int name="count">5</int>
<long name="idSUM">91</long>
<long name="idMAX">33</long>
<long name="idMIN">1</long>
<long name="idArrCount">5</long>
<long name="testbArrCount">2</long>
<arr name="pivot">
<lst>
<str name="field">testb</str>
<str name="value">test</str>
<int name="count">1</int>
<long name="idSUM">3</long>
<long name="idMAX">3</long>
<long name="idMIN">3</long>
<long name="idArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">id</str>
<int name="value">3</int>
<int name="count">1</int>
</lst>
</arr>
</lst>
<lst>
<str name="field">testb</str>
<null name="value"/>
<int name="count">4</int>
<long name="idSUM">88</long>
<long name="idMAX">33</long>
<long name="idMIN">1</long>
<long name="idArrCount">4</long>
<arr name="pivot">
<lst>
<str name="field">id</str>
<int name="value">1</int>
<int name="count">1</int>
</lst>
<lst>
<str name="field">id</str>
<int name="value">22</int>
<int name="count">1</int>
</lst>
<lst>
<str name="field">id</str>
<int name="value">32</int>
<int name="count">1</int>
</lst>
<lst>
<str name="field">id</str>
<int name="value">33</int>
<int name="count">1</int>
</lst>
</arr>
</lst>
</arr>
</lst>
<lst>
<str name="field">test</str>
<str name="value">100</str>
<int name="count">1</int>
<long name="idSUM">66</long>
<long name="idMAX">66</long>
<long name="idMIN">66</long>
<long name="idArrCount">1</long>
<long name="testbArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">testb</str>
<null name="value"/>
<int name="count">1</int>
<long name="idSUM">66</long>
<long name="idMAX">66</long>
<long name="idMIN">66</long>
<long name="idArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">id</str>
<int name="value">66</int>
<int name="count">1</int>
</lst>
</arr>
</lst>
</arr>
</lst>
<lst>
<str name="field">test</str>
<str name="value">200</str>
<int name="count">1</int>
<long name="idSUM">34</long>
<long name="idMAX">34</long>
<long name="idMIN">34</long>
<long name="idArrCount">1</long>
<long name="testbArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">testb</str>
<null name="value"/>
<int name="count">1</int>
<long name="idSUM">34</long>
<long name="idMAX">34</long>
<long name="idMIN">34</long>
<long name="idArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">id</str>
<int name="value">34</int>
<int name="count">1</int>
</lst>
</arr>
</lst>
</arr>
</lst>
<lst>
<str name="field">test</str>
<str name="value">500</str>
<int name="count">1</int>
<long name="idSUM">23</long>
<long name="idMAX">23</long>
<long name="idMIN">23</long>
<long name="idArrCount">1</long>
<long name="testbArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">testb</str>
<null name="value"/>
<int name="count">1</int>
<long name="idSUM">23</long>
<long name="idMAX">23</long>
<long name="idMIN">23</long>
<long name="idArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">id</str>
<int name="value">23</int>
<int name="count">1</int>
</lst>
</arr>
</lst>
</arr>
</lst>
<lst>
<str name="field">test</str>
<str name="value">change.me1</str>
<int name="count">1</int>
<long name="idSUM">4</long>
<long name="idMAX">4</long>
<long name="idMIN">4</long>
<long name="idArrCount">1</long>
<long name="testbArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">testb</str>
<str name="value">test1</str>
<int name="count">1</int>
<long name="idSUM">4</long>
<long name="idMAX">4</long>
<long name="idMIN">4</long>
<long name="idArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">id</str>
<int name="value">4</int>
<int name="count">1</int>
</lst>
</arr>
</lst>
</arr>
</lst>
<lst>
<str name="field">test</str>
<str name="value">me</str>
<int name="count">1</int>
<long name="idSUM">11</long>
<long name="idMAX">11</long>
<long name="idMIN">11</long>
<long name="idArrCount">1</long>
<long name="testbArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">testb</str>
<str name="value">change.me</str>
<int name="count">1</int>
<long name="idSUM">11</long>
<long name="idMAX">11</long>
<long name="idMIN">11</long>
<long name="idArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">id</str>
<int name="value">11</int>
<int name="count">1</int>
</lst>
</arr>
</lst>
</arr>
</lst>
<lst>
<str name="field">test</str>
<str name="value">ok</str>
<int name="count">1</int>
<long name="idSUM">333</long>
<long name="idMAX">333</long>
<long name="idMIN">333</long>
<long name="idArrCount">1</long>
<long name="testbArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">testb</str>
<str name="value">ok</str>
<int name="count">1</int>
<long name="idSUM">333</long>
<long name="idMAX">333</long>
<long name="idMIN">333</long>
<long name="idArrCount">1</long>
<arr name="pivot">
<lst>
<str name="field">id</str>
<int name="value">333</int>
<int name="count">1</int>
</lst>
</arr>
</lst>
</arr>
</lst>
<lst>
<str name="field">test</str>
<null name="value"/>
<int 
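The per-bucket statistics proposed above (idSUM, idMAX, idMIN, idArrCount) can be sketched client-side; this is a minimal illustration, not the patch's actual implementation, and the dict shape of a pivot bucket is an assumption modeled on the XML response above:

```python
def pivot_stats(buckets, field="id"):
    """Compute the SUM/MAX/MIN/ArrCount statistics proposed in SOLR-5842
    for one level of facet.pivot buckets, on the client side.

    `buckets` is a list of dicts shaped like the pivot entries in the
    response above, e.g. {"field": "id", "value": 3, "count": 1} -- a
    sketch, not the real SolrJ response API."""
    values = [b["value"] for b in buckets
              if b["field"] == field and b["value"] is not None]
    return {
        field + "SUM": sum(values),
        field + "MAX": max(values),
        field + "MIN": min(values),
        field + "ArrCount": len(values),
    }

# Example mirroring the testb=null bucket under test=change.me above:
stats = pivot_stats([
    {"field": "id", "value": 1, "count": 1},
    {"field": "id", "value": 22, "count": 1},
    {"field": "id", "value": 32, "count": 1},
    {"field": "id", "value": 33, "count": 1},
])
# stats == {"idSUM": 88, "idMAX": 33, "idMIN": 1, "idArrCount": 4}
```

This matches the idSUM=88, idMAX=33, idMIN=1, idArrCount=4 values shown for that bucket in the response.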

[jira] [Updated] (SOLR-5842) facet.pivot need provide the more information and additional function

2014-03-09 Thread Raintung Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raintung Li updated SOLR-5842:
--

Attachment: patch-5842.txt

update code for functions

 facet.pivot need provide the more information and additional function
 -

 Key: SOLR-5842
 URL: https://issues.apache.org/jira/browse/SOLR-5842
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.6
Reporter: Raintung Li
 Attachments: patch-5842.txt



[jira] [Commented] (SOLR-5768) Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all fields and skip GET_FIELDS

2014-03-09 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925441#comment-13925441
 ] 

Shalin Shekhar Mangar commented on SOLR-5768:
-

Thanks Gregg.

# We need tests for this.
# Fields other than id and score are not being passed to the shard queries, 
so this patch is incomplete.

 Add a distrib.singlePass parameter to make EXECUTE_QUERY phase fetch all 
 fields and skip GET_FIELDS
 ---

 Key: SOLR-5768
 URL: https://issues.apache.org/jira/browse/SOLR-5768
 Project: Solr
  Issue Type: Improvement
Reporter: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5768.diff


 Suggested by Yonik on solr-user:
 http://www.mail-archive.com/solr-user@lucene.apache.org/msg95045.html
 {quote}
 Although it seems like it should be relatively simple to make it work
 with other fields as well, by passing down the complete fl requested
 if some optional parameter is set (distrib.singlePass?)
 {quote}
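The idea quoted above, skipping the GET_FIELDS round trip by passing the complete fl down in the EXECUTE_QUERY phase, can be illustrated with a toy model. The Shard class and search function here are hypothetical stand-ins, not Solr's actual distributed-search classes:

```python
class Shard:
    """Hypothetical stand-in for a Solr shard; counts requests served."""
    def __init__(self, docs):
        self.docs = docs       # {doc_id: {field: value}}
        self.requests = 0

    def query(self, fl):
        self.requests += 1
        return [dict(id=i, **{f: d.get(f) for f in fl})
                for i, d in self.docs.items()]

def search(shards, fl, single_pass=False):
    """Toy model of the two strategies: two-pass fetches only scores in
    phase 1 and the stored fields in a second GET_FIELDS round trip;
    single-pass requests the full fl up front and skips GET_FIELDS."""
    phase1_fl = fl if single_pass else ["score"]   # EXECUTE_QUERY
    merged = [d for s in shards for d in s.query(phase1_fl)]
    if not single_pass:
        merged = [d for s in shards for d in s.query(fl)]  # GET_FIELDS
    return merged

shards = [Shard({1: {"title": "a", "score": 1.0}}),
          Shard({2: {"title": "b", "score": 0.5}})]
search(shards, ["title", "score"], single_pass=False)
two_pass = sum(s.requests for s in shards)   # 4: two round trips per shard
for s in shards:
    s.requests = 0
search(shards, ["title", "score"], single_pass=True)
one_pass = sum(s.requests for s in shards)   # 2: GET_FIELDS skipped
```

The model shows the motivation: with distrib.singlePass set, each shard serves one request instead of two, at the cost of shipping stored fields for documents that may not make the merged top-N.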






[jira] [Commented] (LUCENE-5495) Boolean Filter does not handle FilterClauses with only bits() implemented

2014-03-09 Thread John Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925466#comment-13925466
 ] 

John Wang commented on LUCENE-5495:
---

Thanks Lei! Comments addressed, see new patch.

 Boolean Filter does not handle FilterClauses with only bits() implemented
 -

 Key: LUCENE-5495
 URL: https://issues.apache.org/jira/browse/LUCENE-5495
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.6.1
Reporter: John Wang
 Attachments: LUCENE-5495.patch, LUCENE-5495.patch


 Some Filter implementations produce DocIdSets without an iterator() 
 implementation, such as o.a.l.facet.range.Range.getFilter().
 Currently, such filters cannot be added to a BooleanFilter, because 
 BooleanFilter expects every FilterClause's Filter to have iterator() 
 implemented.
 This patch improves the behavior by detecting Filters that implement 
 bits() and treating them separately.
 This is also faster for Filters backed by a forward index, since there is 
 no need to scan the index to build an iterator.
 See attached unit test, which fails without this patch.
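The fallback the patch describes, preferring iterator() but accepting DocIdSets that implement only bits(), can be modeled in miniature. These classes are language-agnostic sketches, not Lucene's actual DocIdSet API:

```python
class BitsOnlyDocIdSet:
    """Models a DocIdSet (e.g. from a forward index) that implements
    bits() but not iterator() -- iterator() returns None."""
    def __init__(self, matching, max_doc):
        self._matching = set(matching)
        self.max_doc = max_doc

    def iterator(self):
        return None                      # no iterator implementation

    def bits(self):
        return lambda doc: doc in self._matching

def doc_ids(doc_id_set):
    """Enumerate matching docs, falling back to a random-access bits()
    check when the set provides no iterator -- the case the patch makes
    BooleanFilter handle instead of failing."""
    it = doc_id_set.iterator()
    if it is not None:
        return list(it)
    bits = doc_id_set.bits()
    return [d for d in range(doc_id_set.max_doc) if bits(d)]

docs = doc_ids(BitsOnlyDocIdSet({1, 4, 7}, max_doc=10))
# docs == [1, 4, 7]
```

For a bits()-backed set the enumeration costs one random-access check per candidate document, which is exactly why treating such clauses separately (rather than forcing an iterator to be built) can be the faster path.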






[jira] [Updated] (LUCENE-5495) Boolean Filter does not handle FilterClauses with only bits() implemented

2014-03-09 Thread John Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Wang updated LUCENE-5495:
--

Attachment: LUCENE-5495.patch

 Boolean Filter does not handle FilterClauses with only bits() implemented
 -

 Key: LUCENE-5495
 URL: https://issues.apache.org/jira/browse/LUCENE-5495
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.6.1
Reporter: John Wang
 Attachments: LUCENE-5495.patch, LUCENE-5495.patch




