[jira] [Commented] (LUCENE-5166) PostingsHighlighter fails with IndexOutOfBoundsException

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997305#comment-13997305
 ] 

ASF subversion and git services commented on LUCENE-5166:
-

Commit 1594464 from [~rcmuir] in branch 'dev/branches/lucene5666'
[ https://svn.apache.org/r1594464 ]

LUCENE-5166: clear most nocommits, move ord/rord to solr (and speed them up), 
nuke old purging stuff

 PostingsHighlighter fails with IndexOutOfBoundsException
 

 Key: LUCENE-5166
 URL: https://issues.apache.org/jira/browse/LUCENE-5166
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Affects Versions: 4.4
Reporter: Manuel Amoabeng
 Fix For: 4.5, 5.0

 Attachments: LUCENE-5166-2.patch, LUCENE-5166-revisited.patch, 
 LUCENE-5166.patch, LUCENE-5166.patch, LUCENE-5166.patch, LUCENE-5166.patch, 
 LUCENE-5166.patch, LUCENE-5166.patch


 Given a document with a match at a startIndex < PostingsHighlighter.maxLength 
 and an endIndex > PostingsHighlighter.maxLength, DefaultPassageFormatter will 
 throw an IndexOutOfBoundsException when DefaultPassageFormatter.append() is 
 invoked. 
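 For illustration, the failure mode can be avoided by clamping offsets before 
 appending -- a minimal sketch, assuming a helper shaped roughly like 
 DefaultPassageFormatter.append() (this is illustrative, not the committed fix):
 {code:java}
 // Hypothetical guard: clamp match offsets to the truncated stored content,
 // so a match that straddles maxLength cannot index past the end of the text.
 void appendSafely(StringBuilder dest, String content, int start, int end) {
   int safeStart = Math.min(start, content.length());
   int safeEnd = Math.min(end, content.length());
   if (safeStart < safeEnd) {
     dest.append(content, safeStart, safeEnd);
   }
 }
 {code}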



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6069) The 'clusterstatus' API should return 'roles' information

2014-05-14 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6069:
---

 Summary: The 'clusterstatus' API should return 'roles' information
 Key: SOLR-6069
 URL: https://issues.apache.org/jira/browse/SOLR-6069
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: 4.9, 5.0


We have 'addrole' and 'removerole' APIs but no way to return the roles. I think 
we should add this information to the 'clusterstatus' API.
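For example (a sketch of what the augmented output could look like; the exact 
shape is open for discussion):

{noformat}
$ curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json'
{
  "cluster": {
    "collections": { ... },
    "live_nodes": ["127.0.0.1:8983_solr"],
    "roles": { "overseer": ["127.0.0.1:8983_solr"] }
  }
}
{noformat}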



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5648) Index/search multi-valued time durations

2014-05-14 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-5648:
-

Attachment: LUCENE-5648.patch

Here's an update to the patch. A few bugs are fixed, and the decode of bytes 
to cell numbers is now done lazily, when needed, if at all. I can now run the 
test with a thousand iterations of each predicate and no failures.

One notable change is that it will now optimize/normalize the precision of the 
range query, which is not only more efficient but it helped make some tests 
pass.  For example: April 1st - April 30th is the same thing as the month of 
April.  And likewise April 1st - May 10th is the same thing as April - May 
10th.  Note that trying to say April - April 10th is an error.
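The normalization rule itself is simple; a rough illustration (my sketch, not 
code from the patch):

{code:java}
// Illustrative only: [2014-04-01 TO 2014-04-30] collapses to the single
// coarser cell "2014-04", since the endpoints exactly cover the month.
boolean coversWholeMonth(Calendar start, Calendar end) {
  return start.get(Calendar.YEAR) == end.get(Calendar.YEAR)
      && start.get(Calendar.MONTH) == end.get(Calendar.MONTH)
      && start.get(Calendar.DAY_OF_MONTH) == 1
      && end.get(Calendar.DAY_OF_MONTH) == end.getActualMaximum(Calendar.DAY_OF_MONTH);
}
{code}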

It's probably ready to commit to trunk but I'll wait for any feedback.

 Index/search multi-valued time durations
 

 Key: LUCENE-5648
 URL: https://issues.apache.org/jira/browse/LUCENE-5648
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Attachments: LUCENE-5648.patch, LUCENE-5648.patch


 If you need to index a date/time duration, then the way to do that is to have 
 a pair of date fields; one for the start and one for the end -- pretty 
 straightforward. But if you need to index a variable number of durations per 
 document, then the options aren't pretty, ranging from denormalization, to 
 joins, to using Lucene spatial with 2D as described 
 [here|http://wiki.apache.org/solr/SpatialForTimeDurations].  Ideally it would 
 be easier to index durations, and work in a more optimal way.
 This issue implements the aforementioned feature using Lucene-spatial with a 
 new single-dimensional SpatialPrefixTree implementation. Unlike the other two 
 SPT implementations, it's not based on floating point numbers. It will have a 
 Date based customization that indexes levels at meaningful quantities like 
 seconds, minutes, hours, etc.  The point of that alignment is to make it 
 faster to query across meaningful ranges (i.e. [2000 TO 2014]) and to enable 
 a follow-on issue to facet on the data in a really fast way.
 I'll expect to have a working patch up this week.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5619) TestBackwardsCompatibility needs updatable docvalues

2014-05-14 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13993571#comment-13993571
 ] 

Shai Erera commented on LUCENE-5619:


I am looking into this since it's important to have it in place before the work 
on LUCENE-5618 (and LUCENE-5636).

bq. I am not sure about the rules here: is it ok to apply updates to e.g. a 3.x 
or 4.0 index?

No, updating those indexes is not supported (we even suppress those codecs in 
tests), since those codecs did not take the segmentSuffix into account. We've 
decided that in order to use updatable DocValues, you need to index with 4.6+, 
or re-create the index if it was created with earlier versions. More so, old 
formats' consumers aren't shipped w/ Lucene anyway.

I'll look into adding those indexes to TestBackCompat -- recreate them w/ a few 
numeric and binary doc-values and then try to update them with newer code.
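The recreation step could look roughly like this (a sketch against the public 
IndexWriter API; the directory and field names are made up):

{code:java}
// Sketch: write a 4.x index with numeric + binary doc-values, then apply
// in-place updates to it with the newer code.
Directory dir = FSDirectory.open(new File("bc-index"));
IndexWriter writer = new IndexWriter(dir,
    new IndexWriterConfig(Version.LUCENE_48, new StandardAnalyzer(Version.LUCENE_48)));
Document doc = new Document();
doc.add(new StringField("id", "doc-0", Field.Store.NO));
doc.add(new NumericDocValuesField("ndv", 5L));
doc.add(new BinaryDocValuesField("bdv", new BytesRef("orig")));
writer.addDocument(doc);
writer.commit();
// later, with the current code:
writer.updateNumericDocValue(new Term("id", "doc-0"), "ndv", 17L);
writer.updateBinaryDocValue(new Term("id", "doc-0"), "bdv", new BytesRef("updated"));
writer.close();
{code}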

 TestBackwardsCompatibility needs updatable docvalues
 

 Key: LUCENE-5619
 URL: https://issues.apache.org/jira/browse/LUCENE-5619
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 We don't test this at all in TestBackCompat. This is scary!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5664) New meaning of equal sign in StandardQueryParser

2014-05-14 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997085#comment-13997085
 ] 

Ahmet Arslan commented on LUCENE-5664:
--

bq. query the exact value of a field 
Can you give an example? Do you mean values containing special characters, like 
! && || ? * ~ etc.?

 New meaning of equal sign in StandardQueryParser
 

 Key: LUCENE-5664
 URL: https://issues.apache.org/jira/browse/LUCENE-5664
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/queryparser
Affects Versions: 4.5, 4.8
Reporter: Martin Blom
 Attachments: LUCENE-5664.patch


 The StandardSyntaxParser.jj has (undocumented?) support for the <, <=, > and 
 >= operators that generate a TermRangeQueryNode. The equal operator, however, 
 behaves just like the colon and produces a regular Term node instead of a 
 TermRangeQueryNode.
 I've been using the attached patch in a project where we had to be able to 
 query the exact value of a field and I'm hoping there is interest to apply it 
 upstream.
 (Note that the colon operator works just as before, producing TermQuery or 
 PhraseQuery nodes.)
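 With the patch, usage would look something like this (my sketch of the intended 
 semantics, presumably the degenerate inclusive range [value TO value]):
 {code:java}
 // Illustrative: "price = 42" would produce a range-style exact match on
 // the field, unlike "price:42", which goes through the normal term path.
 StandardQueryParser parser = new StandardQueryParser(new KeywordAnalyzer());
 Query exact = parser.parse("price = 42", "defaultField");
 {code}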



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5619) TestBackwardsCompatibility needs updatable docvalues

2014-05-14 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997358#comment-13997358
 ] 

Shai Erera commented on LUCENE-5619:


If there are no objections, I will commit it later today...

 TestBackwardsCompatibility needs updatable docvalues
 

 Key: LUCENE-5619
 URL: https://issues.apache.org/jira/browse/LUCENE-5619
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5619.patch, dvupdates.48.zip


 We don't test this at all in TestBackCompat. This is scary!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 20965 - Still Failing!

2014-05-14 Thread builder
Build: builds.flonkings.com/job/Lucene-4x-Linux-Java7-64-test-only/20965/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.lucene3x.TestTermInfosReaderIndex

Error Message:
_1.fnm in dir=RAMDirectory@1e7dfad7 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@60df48e4

Stack Trace:
java.nio.file.NoSuchFileException: _1.fnm in dir=RAMDirectory@1e7dfad7 
lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@60df48e4
at __randomizedtesting.SeedInfo.seed([9FF46013EE8C78FC]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:579)
at 
org.apache.lucene.codecs.lucene3x.PreFlexRWFieldInfosReader.read(PreFlexRWFieldInfosReader.java:45)
at 
org.apache.lucene.codecs.lucene3x.TestTermInfosReaderIndex.beforeClass(TestTermInfosReaderIndex.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:767)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at java.lang.Thread.run(Thread.java:724)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.lucene3x.TestTermInfosReaderIndex

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([9FF46013EE8C78FC]:0)
at 
org.apache.lucene.codecs.lucene3x.TestTermInfosReaderIndex.afterClass(TestTermInfosReaderIndex.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-5666) Add UninvertingReader

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997359#comment-13997359
 ] 

ASF subversion and git services commented on LUCENE-5666:
-

Commit 1594505 from [~rcmuir] in branch 'dev/branches/lucene5666'
[ https://svn.apache.org/r1594505 ]

LUCENE-5666: clear nocommits and fix precommit

 Add UninvertingReader
 -

 Key: LUCENE-5666
 URL: https://issues.apache.org/jira/browse/LUCENE-5666
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0


 Currently the fieldcache is not pluggable at all. It would be better if 
 everything used the docvalues apis.
 This would allow people to customize the implementation, extend the classes 
 with custom subclasses with additional stuff, etc etc.
 FieldCache can be accessed via the docvalues apis, using the FilterReader api.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5666) Add UninvertingReader

2014-05-14 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5666:


Attachment: LUCENE-5666.patch

Patch (from diff-sources.py) showing the differences between trunk and branch.

Unfortunately I could not remove all fieldcache insanity in Solr (I really 
tried), so there are two narrow cases where it's explicitly enabled:
* ord/rord on single-valued numeric fields
* grouping with faceting (group.facet) on single-valued numeric fields.

Otherwise no more insanity and things are a lot more flexible.
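For reference, usage on the branch looks roughly like this (a sketch; the field 
name and mapping are illustrative):

{code:java}
// Wrap a reader so "price" is exposed through the NumericDocValues API,
// uninverting from the inverted index on the fly when no DV exist.
Map<String,UninvertingReader.Type> mapping = new HashMap<String,UninvertingReader.Type>();
mapping.put("price", UninvertingReader.Type.INTEGER);
DirectoryReader wrapped = UninvertingReader.wrap(DirectoryReader.open(dir), mapping);
{code}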

 Add UninvertingReader
 -

 Key: LUCENE-5666
 URL: https://issues.apache.org/jira/browse/LUCENE-5666
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0

 Attachments: LUCENE-5666.patch


 Currently the fieldcache is not pluggable at all. It would be better if 
 everything used the docvalues apis.
 This would allow people to customize the implementation, extend the classes 
 with custom subclasses with additional stuff, etc etc.
 FieldCache can be accessed via the docvalues apis, using the FilterReader api.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5666) Add UninvertingReader

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13996742#comment-13996742
 ] 

ASF subversion and git services commented on LUCENE-5666:
-

Commit 1594316 from [~rcmuir] in branch 'dev/branches/lucene5666'
[ https://svn.apache.org/r1594316 ]

LUCENE-5666: fix bug (null is no longer allowed)

 Add UninvertingReader
 -

 Key: LUCENE-5666
 URL: https://issues.apache.org/jira/browse/LUCENE-5666
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0


 Currently the fieldcache is not pluggable at all. It would be better if 
 everything used the docvalues apis.
 This would allow people to customize the implementation, extend the classes 
 with custom subclasses with additional stuff, etc etc.
 FieldCache can be accessed via the docvalues apis, using the FilterReader api.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5285) Solr response format should support child Docs

2014-05-14 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-5285:


Attachment: SOLR-5285.patch

1. Added JavaDocs to ChildDocTransformerFactory
2. Created a new binary file for backward and forward compatibility.

bq. Why is the tag name in the JSON format childDocs but in the XML format 
it's childDoc (no plural) ? ... seems like those should be consistent.

I guess because in JSON the input is a JSON array, hence childDocs, while in 
XML we use multiple childDoc tags to represent nested documents.

bq. the 10 hardcoded in the getDocList call is guaranteed to burn someone ... 
it can definitely default to 10, but we need to have a local param for it in 
the transformer

Added a non-mandatory parameter called numChildDocs which makes it 
configurable, although I'm not sure the name is right.

bq. Well, looking at your test, a more specific way to put it is that the new 
child transformer actually returns all descendents of the return documents in 
a flat list. Which is fine if we document it that way – but it has me thinking: 
we should really add a childFilter option to the transformer to constrain the 
results. This would not only help with the grand child situation, but would 
also make it easy for people to constrain the types of children they want to 
get back. (and getDocList can already take in a Query filter)

Added a non-mandatory parameter called childFilter which can be used to 
control which child documents get nested in the returned parent documents.

TODO - I will work on adding randomized testing
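For illustration, a request with this patch might look like the following 
(assuming the transformer is registered as [child] and using this patch's 
tentative parameter names, which may still change):

{noformat}
http://localhost:8983/solr/select?q=text:solr&fl=id,[child childFilter=type_s:chapter numChildDocs=5]
{noformat}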

 Solr response format should support child Docs
 --

 Key: SOLR-5285
 URL: https://issues.apache.org/jira/browse/SOLR-5285
 Project: Solr
  Issue Type: New Feature
Reporter: Varun Thacker
 Fix For: 4.9, 5.0

 Attachments: SOLR-5285.patch, SOLR-5285.patch, SOLR-5285.patch, 
 SOLR-5285.patch, SOLR-5285.patch, SOLR-5285.patch, SOLR-5285.patch


 Solr has added support for taking childDocs as input (only XML till now). 
 It's currently used for BlockJoinQuery. 
 I feel that if a user indexes a document with child docs, even if he isn't 
 using the BJQ features and is just searching which results in a hit on the 
 parentDoc, its childDocs should be returned in the response format.
 [~hossman_luc...@fucit.org] on IRC suggested that the DocTransformers would 
 be the place to add childDocs to the response.
 Now given a docId one needs to find out all the childDoc id's. A couple of 
 approaches which I could think of are 
 1. Maintain the relation between a parentDoc and its childDocs during 
 indexing time in maybe a separate index?
 2. Somehow emulate what happens in ToParentBlockJoinQuery.nextDoc() - Given a 
 parentDoc it finds out all the childDocs but this requires a childScorer.
 Am I missing something obvious on how to find the relation between a 
 parentDoc and its childDocs, because none of the above solutions for this 
 look right.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5608) SpatialPrefixTree API refactor

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13996915#comment-13996915
 ] 

ASF subversion and git services commented on LUCENE-5608:
-

Commit 1594394 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1594394 ]

LUCENE-5608 better/more comments

 SpatialPrefixTree API refactor
 --

 Key: LUCENE-5608
 URL: https://issues.apache.org/jira/browse/LUCENE-5608
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0

 Attachments: LUCENE-5608__SpatialPrefixTree_API_refactor.patch


 This is a refactor of the SpatialPrefixTree spatial API, in preparation for 
 more SPT implementations on the near horizon.  These are fairly internal 
 APIs; SpatialExample.java didn't have to change, nor the Solr adapters, and I 
 doubt ES would have to either.
 API changes:
 * SpatialPrefixTree & Cell had a fairly significant make-over. The existing 
 implementations for Geohash & Quad have been made to subclass 
 LegacyPrefixTree & LegacyCell shims, and otherwise had very few changes 
 (performance _should_ be the same).  Cell is now an interface.
 * New CellIterator which is an Iterator<Cell>. Includes 3 implementations.
 * PrefixTreeStrategy.simplifyIndexedCells was renamed to pruneLeafyBranches 
 and moved to RPT and made toggle'able with a setter. It's going to be removed 
 in the future but for the time being it remains a useful optimization.
 * RPT's pointsOnly & multiOverlappingIndexedShapes options now have setters.
 Future:
 * The AbstractVisitingPrefixTreeFilter (used by RPT's Intersects, Within, 
 Disjoint) really should be refactored to use the new CellIterator API as it 
 will reduce the amount of code and should make the code easier to follow 
 since it would be based on a well-known design pattern (an iterator).
 I wish I had done this as a series of commits on a GitHub branch; ah well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5668) Off-by-1 error in TieredMergePolicy

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13995501#comment-13995501
 ] 

ASF subversion and git services commented on LUCENE-5668:
-

Commit 1594062 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1594062 ]

LUCENE-5668: fix ob1 in TieredMergePolicy

 Off-by-1 error in TieredMergePolicy
 ---

 Key: LUCENE-5668
 URL: https://issues.apache.org/jira/browse/LUCENE-5668
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.8.1, 4.9, 5.0


 When I was comparing performance of different UUIDs, I noticed that TMP was 
 merging too soon and picking non-ideal merges as a result.  The fix is silly:
 Index: lucene/core/src/java/org/apache/lucene/index/TieredMergePolicy.java
 ===
 --- lucene/core/src/java/org/apache/lucene/index/TieredMergePolicy.java   
 (revision 1593975)
 +++ lucene/core/src/java/org/apache/lucene/index/TieredMergePolicy.java   
 (working copy)
 @@ -361,7 +361,7 @@
  return spec;
}
  
 -  if (eligible.size() >= allowedSegCountInt) {
 +  if (eligible.size() > allowedSegCountInt) {
  
  // OK we are over budget -- find best merge!
  MergeScore bestScore = null;



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6071) Make solr install like other databases

2014-05-14 Thread Mike (JIRA)
Mike created SOLR-6071:
--

 Summary: Make solr install like other databases
 Key: SOLR-6071
 URL: https://issues.apache.org/jira/browse/SOLR-6071
 Project: Solr
  Issue Type: New Feature
  Components: scripts and tools
Affects Versions: 5.0
 Environment: Ubuntu
Reporter: Mike


It's long past time that Solr should have proper startup and log scripts. 

There are a number of reasons and much evidence why we should start including 
them:

1. In SOLR-4792 we removed the war file from the distribution, making it easier 
than ever before to set Solr up with these scripts.
2. The StackOverflow question on this topic has been viewed more than 34k times 
and has several differing answers.
3. For non-Java developers, figuring out the right way to start and daemonize 
Solr isn't obvious. Right now, my installation has a number of Java flags that 
I've accumulated over the years (-jar means what? -server is only needed on 32-bit 
machines? -Xmx huh?) This leads to varied deployments and inconsistencies 
that common scripts could help alleviate.
4. Anecdotally I've heard endless bashing of Solr because it's such a pain to 
get set up. 
5. Solr is unlike any other database I know in the grittiness of starting it up.
6. Not having these scripts makes Solr look less polished than it would 
otherwise.

We discussed this on IRC a bit yesterday and there didn't seem to be any 
opposition to doing this. Consensus seemed to be simply that it hadn't been 
done...yet.

I am not an expert on these things, but I think we should get something put 
together for Solr 5, if there's time. Hopefully this thread can get the ball 
rolling -- I didn't see any previous discussion anywhere. Apologies if I missed 
it. 

This would be a great improvement to Solr.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5681) Make the OverseerCollectionProcessor multi-threaded

2014-05-14 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997335#comment-13997335
 ] 

Anshum Gupta commented on SOLR-5681:


Thanks Noble.

It would be good to get more eyes on this. Considering it touches important 
parts of the mechanics of Collection API processing, the more the better -- and 
the sooner the better, too. I'd want to close this soon, else it might turn 
into something that's very tough to maintain. 

 Make the OverseerCollectionProcessor multi-threaded
 ---

 Key: SOLR-5681
 URL: https://issues.apache.org/jira/browse/SOLR-5681
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-5681-2.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681_OCPTEST.patch


 Right now, the OverseerCollectionProcessor is single-threaded, i.e. submitting 
 anything long-running would have it block processing of other mutually 
 exclusive tasks.
 When OCP tasks become optionally async (SOLR-5477), it'd be good to have 
 truly non-blocking behavior by multi-threading the OCP itself.
 For example, a ShardSplit call on Collection1 would block the thread and 
 thereby not process a create-collection task (which would stay queued in 
 ZK) even though the two tasks are mutually exclusive.
 Here are a few of the challenges:
 * Mutual exclusivity: Only let mutually exclusive tasks run in parallel. An 
 easy way to handle that is to only let 1 task per collection run at a time.
 * ZK Distributed Queue to feed tasks: The OCP consumes tasks from a queue. 
 The task from the workQueue is only removed on completion so that in case of 
 a failure, the new Overseer can re-consume the same task and retry. A queue 
 is not the right data structure in the first place to look ahead i.e. get the 
 2nd task from the queue when the 1st one is in process. Also, deleting tasks 
 which are not at the head of a queue is not really an 'intuitive' thing.
 Proposed solutions for task management:
 * Task funnel and peekAfter(): The parent thread is responsible for getting 
 and passing the request to a new thread (or one from the pool). The parent 
 method uses a peekAfter(last element) instead of a peek(). The peekAfter 
 returns the task after the 'last element'. Maintain this request information 
 and use it for deleting/cleaning up the workQueue.
 * Another (almost duplicate) queue: While offering tasks to workQueue, also 
 offer them to a new queue (call it volatileWorkQueue?). The difference is, as 
 soon as a task from this is picked up for processing by the thread, it's 
 removed from the queue. At the end, the cleanup is done from the workQueue.
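 A tiny sketch of the per-collection exclusivity rule described above (names 
 are illustrative, not from any patch):
 {code:java}
 // One task per collection at a time: a task may start only if its
 // collection isn't already being processed by another worker.
 private final Set<String> busyCollections =
     Collections.newSetFromMap(new ConcurrentHashMap<String,Boolean>());

 boolean tryAcquire(String collection) {
   return busyCollections.add(collection); // false: task already in flight
 }

 void release(String collection) {
   busyCollections.remove(collection);
 }
 {code}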



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-2894) Implement distributed pivot faceting

2014-05-14 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-2894:
---

Attachment: SOLR-2894_cloud_test.patch

bq. I'm not able to reproduce this. Could you tell me a little more about your 
setup?

trunk, with patch applied, build the example and then run the [Simple Two-Shard 
Cluster|https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud#GettingStartedwithSolrCloud-SimpleTwo-ShardClusterontheSameMachine]
 ...

{noformat}
hossman@frisbee:~/lucene/dev/solr$ cp -r example node1
hossman@frisbee:~/lucene/dev/solr$ cp -r example node2

# in term1...
hossman@frisbee:~/lucene/dev/solr/node1$ java -DzkRun -DnumShards=2 
-Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=myconf -jar 
start.jar

# wait for node1 startup, then in term2...
hossman@frisbee:~/lucene/dev/solr/node2$ java -Djetty.port=7574 
-DzkHost=localhost:9983 -jar start.jar

# wait for node2 startup, then in term3...
hossman@frisbee:~/lucene/dev/solr/example/exampledocs$ java -jar post.jar *.xml
SimplePostTool version 1.5
Posting files to base url http://localhost:8983/solr/update using content-type 
application/xml..
...
14 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/update..
Time spent: 0:00:01.763
hossman@frisbee:~/lucene/dev/solr/example/exampledocs$ curl 
'http://localhost:8983/solr/select?q=*:*&sort=id+desc&rows=2&facet=true&facet.pivot=cat,manu_+id_s,inStock&facet.limit=3' 
  > /dev/null

# watch the logs in term1 and term2 go spinning like mad
{noformat}



bq. While the size of the shard parameters may not strictly be as efficient as 
possible, is it such that we can run with that for now and circle back to this 
at a later point, or are you uncomfortable with including the parameters as is 
in the initial commit?

Hmm... not sure how I feel about it w/o more testing - from what I was seeing, 
with non-trivial field names, term values, and facet.limit the refinement 
requests were getting *HUGE*, so I suspect it's something we're going to want to 
tackle before releasing -- but refactoring it to be smaller definitely seems 
like something that should be a lower priority than some of the correctness-
related issues we're finding, and adding more tests (so we can be confident the 
refactoring is correct)



I'm attaching a SOLR-2894_cloud_test.patch that contains a new cloud-based 
randomized test I've been working at off and on over the last few days (I 
created it as a standalone patch because I didn't want to conflict with 
anything Brett might be in the middle of, and it was easy to do - kept me 
focused on the test and not dabbling with the internals).  

The test builds up a bunch of random docs, then does a handful of random pivot 
facet queries.  For each pivot query, it recursively walks the pivot response 
executing verification queries using fq params it builds up from the pivot 
constraints -- so if facet.pivot=a,b,c says that a has a term x with 4 
matching docs, it adds an fq=a:x to the original query and checks the count; 
then it looks at the pivot terms for field b under a:x and also executes a 
query for each of them with another fq added, etc...
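In rough SolrJ terms the walk looks like this (a sketch of the approach, not 
the test's actual code; query(...) is a stand-in helper and term escaping is 
elided):

{code:java}
// Recursively verify each pivot constraint with a filtered count query.
void verifyPivots(SolrParams base, List<PivotField> pivots) throws Exception {
  for (PivotField p : pivots) {
    ModifiableSolrParams check = new ModifiableSolrParams(base);
    check.add("fq", p.getField() + ":" + p.getValue()); // NOTE: no escaping
    assertEquals(p.getCount(), query(check).getResults().getNumFound());
    if (p.getPivot() != null) {
      verifyPivots(check, p.getPivot()); // drill into the next pivot field
    }
  }
}
{code}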

As is, the patch currently passes, but that's only because of a few nocommits...

* randomization of mincount is disabled due to the refinement bug I mentioned 
before
* it's currently only doing pivots on 2 string fields (one multivalued and one 
single valued) ... any attempt at pivot faceting the numeric/date/boolean 
fields (already included in the docs) causes an NPE in the SolrJ QueryResponse 
class (I haven't investigated why yet)



 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.9, 5.0

 Attachments: SOLR-2894-reworked.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894_cloud_test.patch, dateToObject.patch, pivot_mincount_problem.sh


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respect fq (filter query)

2014-05-14 Thread Herb Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Herb Jiang updated SOLR-6066:
-

Description: 
QueryElevationComponent respects the fq parameter. But when using 
CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
effect.

I use the following test case to show this issue. (It will fail.)


{code:java}
String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
"category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
assertU(adoc(doc));
assertU(commit());
String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
"category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
assertU(adoc(doc1));



String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", "test_tl", 
"100", "test_tf", "200"};
assertU(adoc(doc2));
assertU(commit());
String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
"1000", "test_tf", "2000"};
assertU(adoc(doc3));


String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
"category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
assertU(adoc(doc4));
assertU(commit());
String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
"category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
assertU(adoc(doc5));
assertU(commit());

//Test additional filter query when using collapse
params = new ModifiableSolrParams();
params.add("q", "");
params.add("fq", "{!collapse field=group_s}");
params.add("fq", "category_s:cat1");
params.add("defType", "edismax");
params.add("bf", "field(test_ti)");
params.add("qf", "term_s");
params.add("qt", "/elevate");
params.add("elevateIds", "2");
assertQ(req(params), "*[count(//doc)=1]",
"//result/doc[1]/float[@name='id'][.='6.0']");
{code}

  was:
QueryElevationComponent respects the fq parameter. But when using 
CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
effect.

I use the following test case to show this issue. (It will fail.)


{code:java}
String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
"category_s", "cat1", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
assertU(adoc(doc));
assertU(commit());
String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
"category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
assertU(adoc(doc1));



String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", "test_tl", 
"100", "test_tf", "200"};
assertU(adoc(doc2));
assertU(commit());
String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
"1000", "test_tf", "2000"};
assertU(adoc(doc3));


String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
"category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
assertU(adoc(doc4));
assertU(commit());
String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
"category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
assertU(adoc(doc5));
assertU(commit());

//Test additional filter query when using collapse
params = new ModifiableSolrParams();
params.add("q", "");
params.add("fq", "{!collapse field=group_s}");
params.add("fq", "category_s:cat1");
params.add("defType", "edismax");
params.add("bf", "field(test_ti)");
params.add("qf", "term_s");
params.add("qt", "/elevate");
params.add("elevateIds", "2");
assertQ(req(params), "*[count(//doc)=1]",
"//result/doc[1]/float[@name='id'][.='6.0']");
{code}


 CollapsingQParserPlugin + Elevation does not respect fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang

 QueryElevationComponent respects the fq parameter. But when using 
 CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
 effect.
 I use the following test case to show this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = 

[jira] [Commented] (SOLR-4037) Continuous Ping query caused exception: java.util.concurrent.RejectedExecutionException

2014-05-14 Thread Shirish (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997560#comment-13997560
 ] 

Shirish commented on SOLR-4037:
---

I am using 4.7.2 with Java 1.8.

I resolved this issue by refreshing my connection to Solr.

 Continuous Ping query caused exception: 
 java.util.concurrent.RejectedExecutionException
 ---

 Key: SOLR-4037
 URL: https://issues.apache.org/jira/browse/SOLR-4037
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: 5.0-SNAPSHOT 1366361:1404534M - markus - 2012-11-01 
 12:37:38
 Debian Squeeze, Tomcat 6, Sun Java 6, 10 nodes, 10 shards, rep. factor 2.
Reporter: Markus Jelsma
 Fix For: 4.9, 5.0


 See: 
 http://lucene.472066.n3.nabble.com/Continuous-Ping-query-caused-exception-java-util-concurrent-RejectedExecutionException-td4017470.html
 Using this week's trunk we sometime see nodes entering a some funky state 
 where it continuously reports exceptions. Replication and query handling is 
 still possible but there is an increase in CPU time:
 {code}
 2012-11-01 09:24:28,337 INFO [solr.core.SolrCore] - [http-8080-exec-4] - : 
 [openindex_f] webapp=/solr path=/admin/ping params={} status=500 QTime=21
 2012-11-01 09:24:28,337 ERROR [solr.core.SolrCore] - [http-8080-exec-4] - : 
 org.apache.solr.common.SolrException: Ping query caused exception: 
 java.util.concurrent.RejectedExecutionException
 at 
 org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:259)
 at 
 org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:207)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1830)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:476)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
 at 
 org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
 at 
 org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
 at 
 org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.solr.common.SolrException: 
 java.util.concurrent.RejectedExecutionException
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
 at 
 org.apache.solr.request.SolrQueryRequestBase.getSearcher(SolrQueryRequestBase.java:88)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:214)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1830)
 at 
 org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:250)
 ... 19 more
 Caused by: java.util.concurrent.RejectedExecutionException
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
 at 
 java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
 at 
 java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
 at 
 

[jira] [Commented] (SOLR-6073) CollectionAdminRequest has createCollection methods with hard-coded router=implicit

2014-05-14 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997534#comment-13997534
 ] 

Noble Paul commented on SOLR-6073:
--

There are 8 overloaded methods for createCollection. We should just get rid of 
all of them and make users pass the Create object to a single createCollection 
method, as sketched below.
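Something like this, with no router hard-coding (a sketch assuming the existing 
Create request object; "server" is an existing SolrServer):

{code:java}
// One entry point: all options live on the Create object itself.
CollectionAdminRequest.Create create = new CollectionAdminRequest.Create();
create.setCollectionName("mycollection");
create.setNumShards(2);
create.setConfigName("myconf");
// routerName left at its default instead of a hard-coded "implicit"
server.request(create);
{code}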

 CollectionAdminRequest has createCollection methods with hard-coded 
 router=implicit
 -

 Key: SOLR-6073
 URL: https://issues.apache.org/jira/browse/SOLR-6073
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.8
Reporter: Shalin Shekhar Mangar
 Fix For: 4.9, 5.0


 The CollectionAdminRequest has a createCollection() method which has the 
 following hard-coded:
 {code}
 req.setRouterName("implicit");
 {code}
 This is a bug and we should remove it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-5681) Make the OverseerCollectionProcessor multi-threaded

2014-05-14 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5681:
-

Comment: was deleted

(was: tests passing)

 Make the OverseerCollectionProcessor multi-threaded
 ---

 Key: SOLR-5681
 URL: https://issues.apache.org/jira/browse/SOLR-5681
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-5681-2.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch


 Right now, the OverseerCollectionProcessor is single-threaded, i.e. submitting 
 anything long-running would have it block processing of other mutually 
 exclusive tasks.
 When OCP tasks become optionally async (SOLR-5477), it'd be good to have 
 truly non-blocking behavior by multi-threading the OCP itself.
 For example, a ShardSplit call on Collection1 would block the thread and 
 thereby not process a create-collection task (which would stay queued in 
 ZK) even though the two tasks are mutually exclusive.
 Here are a few of the challenges:
 * Mutual exclusivity: Only let mutually exclusive tasks run in parallel. An 
 easy way to handle that is to only let 1 task per collection run at a time.
 * ZK Distributed Queue to feed tasks: The OCP consumes tasks from a queue. 
 The task from the workQueue is only removed on completion so that in case of 
 a failure, the new Overseer can re-consume the same task and retry. A queue 
 is not the right data structure in the first place to look ahead i.e. get the 
 2nd task from the queue when the 1st one is in process. Also, deleting tasks 
 which are not at the head of a queue is not really an 'intuitive' thing.
 Proposed solutions for task management:
 * Task funnel and peekAfter(): The parent thread is responsible for getting 
 and passing the request to a new thread (or one from the pool). The parent 
 method uses a peekAfter(last element) instead of a peek(). The peekAfter 
 returns the task after the 'last element'. Maintain this request information 
 and use it for deleting/cleaning up the workQueue.
 * Another (almost duplicate) queue: While offering tasks to workQueue, also 
 offer them to a new queue (call it volatileWorkQueue?). The difference is, as 
 soon as a task from this is picked up for processing by the thread, it's 
 removed from the queue. At the end, the cleanup is done from the workQueue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5618) DocValues updates send wrong fieldinfos to codec producers

2014-05-14 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5618:
---

Attachment: LUCENE-5618.patch

After committing LUCENE-5619, TestBackwardsCompatibility failed with the 
previous patch since it didn't handle pre-4.9 indexes well -- it didn't handle 
the case where one generation references multiple fields. To resolve that, this 
patch adds:

* SegmentReader acts accordingly only for pre-4.9 indexes: beyond sending all the 
FieldInfos to a certain DocValuesProducer's gen, it ensures each such DVP is 
initialized once per generation.

* Lucene45DocValuesProducer does a lenient fields check if the segment's 
version is pre-4.9.

Note that I didn't add this leniency to Lucene42DocValuesProducer since that 
one doesn't support DocValues updates anyway, and so doesn't experience this 
issue at all.

 DocValues updates send wrong fieldinfos to codec producers
 --

 Key: LUCENE-5618
 URL: https://issues.apache.org/jira/browse/LUCENE-5618
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
Assignee: Shai Erera
Priority: Blocker
 Fix For: 4.9

 Attachments: LUCENE-5618.patch, LUCENE-5618.patch


 Spinoff from LUCENE-5616.
 See the example there, docvalues readers get a fieldinfos, but it doesn't 
 contain the correct ones, so they have invalid field numbers at read time.
 This should really be fixed. Maybe a simple solution is to not write 
 batches of fields in updates but just have only one field per gen? 
 This removes many-to-many relationships and would make things easier to understand.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6073) CollectionAdminRequest has createCollection methods with hard-coded router=implicit

2014-05-14 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6073:
---

 Summary: CollectionAdminRequest has createCollection methods with 
hard-coded router=implicit
 Key: SOLR-6073
 URL: https://issues.apache.org/jira/browse/SOLR-6073
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.8
Reporter: Shalin Shekhar Mangar
 Fix For: 4.9, 5.0


The CollectionAdminRequest has a createCollection() method which has the 
following hard-coded:
{code}
req.setRouterName("implicit");
{code}

This is a bug and we should remove it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4037) Continuous Ping query caused exception: java.util.concurrent.RejectedExecutionException

2014-05-14 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997319#comment-13997319
 ] 

Anshum Gupta commented on SOLR-4037:


Can you confirm that you're seeing the same issue? Also share the environment, Solr 
version, mode, and other details.
If you're on 4.8, you may need to test with Java 7.

 Continuous Ping query caused exception: 
 java.util.concurrent.RejectedExecutionException
 ---

 Key: SOLR-4037
 URL: https://issues.apache.org/jira/browse/SOLR-4037
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: 5.0-SNAPSHOT 1366361:1404534M - markus - 2012-11-01 
 12:37:38
 Debian Squeeze, Tomcat 6, Sun Java 6, 10 nodes, 10 shards, rep. factor 2.
Reporter: Markus Jelsma
 Fix For: 4.9, 5.0


 See: 
 http://lucene.472066.n3.nabble.com/Continuous-Ping-query-caused-exception-java-util-concurrent-RejectedExecutionException-td4017470.html
 Using this week's trunk we sometime see nodes entering a some funky state 
 where it continuously reports exceptions. Replication and query handling is 
 still possible but there is an increase in CPU time:
 {code}
 2012-11-01 09:24:28,337 INFO [solr.core.SolrCore] - [http-8080-exec-4] - : 
 [openindex_f] webapp=/solr path=/admin/ping params={} status=500 QTime=21
 2012-11-01 09:24:28,337 ERROR [solr.core.SolrCore] - [http-8080-exec-4] - : 
 org.apache.solr.common.SolrException: Ping query caused exception: 
 java.util.concurrent.RejectedExecutionException
 at 
 org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:259)
 at 
 org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:207)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1830)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:476)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
 at 
 org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
 at 
 org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
 at 
 org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.solr.common.SolrException: 
 java.util.concurrent.RejectedExecutionException
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
 at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
 at 
 org.apache.solr.request.SolrQueryRequestBase.getSearcher(SolrQueryRequestBase.java:88)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:214)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1830)
 at 
 org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:250)
 ... 19 more
 Caused by: java.util.concurrent.RejectedExecutionException
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
 at 
 java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
 at 
 

[jira] [Updated] (SOLR-5973) Pluggable Ranking Collectors

2014-05-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-5973:
-

Attachment: SOLR-5973.patch

 Pluggable Ranking Collectors
 

 Key: SOLR-5973
 URL: https://issues.apache.org/jira/browse/SOLR-5973
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.9

 Attachments: SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, 
 SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, 
 SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, 
 SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch


 This ticket introduces a new RankQuery and MergeStrategy to Solr. By 
 extending the RankQuery class, and implementing its interface, you can 
 specify a custom ranking collector (TopDocsCollector) and distributed merge 
 strategy for a Solr query. 
 A new "rq" http parameter was added to support specifying a rank query using 
 a custom QParserPlugin.
 Sample syntax:
 {code}
 q=*:*&wt=json&indent=true&rq={!myranker}
 {code}
 In the sample above the param: {code}rq={!myranker}{code} points to a 
 QParserPlugin that returns a Query that extends RankQuery. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-5670) org.apache.lucene.util.fst.FST should skip over outputs it is not interested in

2014-05-14 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-5670:
--

Assignee: Michael McCandless

 org.apache.lucene.util.fst.FST should skip over outputs it is not interested 
 in
 ---

 Key: LUCENE-5670
 URL: https://issues.apache.org/jira/browse/LUCENE-5670
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.7
Reporter: Christian Ziech
Assignee: Michael McCandless
Priority: Minor
 Attachments: LUCENE-5670.patch


 Currently the FST uses the read(DataInput) method from the Outputs class to 
 skip over outputs it actually is not interested in. For most use cases this 
 just creates some additional objects that are immediately destroyed again.
 When traversing an FST with non-trivial data, however, this can easily add up 
 to several excess objects that nobody ever actually reads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5666) Add UninvertingReader

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997663#comment-13997663
 ] 

ASF subversion and git services commented on LUCENE-5666:
-

Commit 1594615 from [~mikemccand] in branch 'dev/branches/lucene5666'
[ https://svn.apache.org/r1594615 ]

LUCENE-5666: fix javadocs

 Add UninvertingReader
 -

 Key: LUCENE-5666
 URL: https://issues.apache.org/jira/browse/LUCENE-5666
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0

 Attachments: LUCENE-5666.patch


 Currently the fieldcache is not pluggable at all. It would be better if 
 everything used the docvalues apis.
 This would allow people to customize the implementation, extend the classes 
 with custom subclasses with additional stuff, etc etc.
 FieldCache can be accessed via the docvalues apis, using the FilterReader api.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5670) org.apache.lucene.util.fst.FST should skip over outputs it is not interested in

2014-05-14 Thread Christian Ziech (JIRA)
Christian Ziech created LUCENE-5670:
---

 Summary: org.apache.lucene.util.fst.FST should skip over outputs 
it is not interested in
 Key: LUCENE-5670
 URL: https://issues.apache.org/jira/browse/LUCENE-5670
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.7
Reporter: Christian Ziech
Priority: Minor


Currently the FST uses the read(DataInput) method from the Outputs class to 
skip over outputs it actually is not interested in. For most use cases this 
just creates some additional objects that are immediately destroyed again.

When traversing an FST with non-trivial data, however, this can easily add up to 
several excess objects that nobody ever actually reads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5681) Make the OverseerCollectionProcessor multi-threaded

2014-05-14 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5681:
-

Attachment: SOLR-5681.patch

tests passing

 Make the OverseerCollectionProcessor multi-threaded
 ---

 Key: SOLR-5681
 URL: https://issues.apache.org/jira/browse/SOLR-5681
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-5681-2.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch


 Right now, the OverseerCollectionProcessor is single-threaded, i.e. submitting 
 anything long-running would have it block processing of other mutually 
 exclusive tasks.
 When OCP tasks become optionally async (SOLR-5477), it'd be good to have 
 truly non-blocking behavior by multi-threading the OCP itself.
 For example, a ShardSplit call on Collection1 would block the thread and 
 thereby not process a create-collection task (which would stay queued in 
 zk) though both tasks are mutually exclusive.
 Here are a few of the challenges:
 * Mutual exclusivity: Only let mutually exclusive tasks run in parallel. An 
 easy way to handle that is to only let 1 task per collection run at a time.
 * ZK Distributed Queue to feed tasks: The OCP consumes tasks from a queue. 
 The task from the workQueue is only removed on completion so that in case of 
 a failure, the new Overseer can re-consume the same task and retry. A queue 
 is not the right data structure in the first place to look ahead i.e. get the 
 2nd task from the queue when the 1st one is in process. Also, deleting tasks 
 which are not at the head of a queue is not really an 'intuitive' thing.
 Proposed solutions for task management:
 * Task funnel and peekAfter(): The parent thread is responsible for getting 
 and passing the request to a new thread (or one from the pool). The parent 
 method uses a peekAfter(last element) instead of a peek(). The peekAfter 
 returns the task after the 'last element'. Maintain this request information 
 and use it for deleting/cleaning up the workQueue.
 * Another (almost duplicate) queue: While offering tasks to workQueue, also 
 offer them to a new queue (call it volatileWorkQueue?). The difference is, as 
 soon as a task from this is picked up for processing by the thread, it's 
 removed from the queue. At the end, the cleanup is done from the workQueue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5973) Pluggable Ranking Collectors

2014-05-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-5973:
-

Description: 
This ticket introduces a new RankQuery and MergeStrategy to Solr. By extending 
the RankQuery class, and implementing its interface, you can specify a custom 
ranking collector (TopDocsCollector) and distributed merge strategy for a Solr 
query. 



Sample syntax:

{code}
q={!customRank subquery=*:* param1=a param2=b}&wt=json&indent=true
{code}
In the sample above the param: {code}q={!customRank subquery=*:* param1=a 
param2=b}{code} points to a QParserPlugin that returns a Query that extends 
RankQuery.  The RankQuery defines the custom ranking and merge strategy for 
its subquery.

The RankQuery impl will have to do several things:

1) Implement the RankQuery interface.
2) Wrap the subquery and proxy all calls to the Query interface to the 
subquery. Using local params syntax the subquery can be any valid Solr query. 
The custom QParserPlugin is responsible for parsing the subquery.
3) Implement hashCode() and equals() so the queryResultCache works properly 
with the subquery and custom ranking algorithm (a rough sketch follows below).
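
A rough sketch of such an impl (a sketch only: class names like 
MyCollector/MyMergeStrategy are hypothetical placeholders, and the exact 
RankQuery signatures depend on the attached patch):

{code}
// Hypothetical imports; actual packages depend on the patch.
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocsCollector;

public class MyRankQuery extends RankQuery {
  private Query subQuery; // any valid Solr query, parsed via local params

  // Proxy target: all Query calls get forwarded to this wrapped subquery.
  public RankQuery wrap(Query q) { this.subQuery = q; return this; }

  // Custom ranking collector used in place of the default TopDocsCollector.
  public TopDocsCollector getTopDocsCollector(int len, QueryCommand cmd,
                                              IndexSearcher searcher) {
    return new MyCollector(len); // hypothetical collector
  }

  // Custom distributed merge of the per-shard results.
  public MergeStrategy getMergeStrategy() {
    return new MyMergeStrategy(); // hypothetical merge strategy
  }

  // Must incorporate the subquery so queryResultCache keys are correct.
  public int hashCode() { return 31 * getClass().hashCode() + subQuery.hashCode(); }

  public boolean equals(Object o) {
    return o instanceof MyRankQuery && ((MyRankQuery) o).subQuery.equals(subQuery);
  }
}
{code}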




  was:
This ticket introduces a new RankQuery and MergeStrategy to Solr. By extending 
the RankQuery class, and implementing its interface, you can specify a custom 
ranking collector (TopDocsCollector) and distributed merge strategy for a Solr 
query. 

A new "rq" http parameter was added to support specifying a rank query using a 
custom QParserPlugin.

Sample syntax:

{code}
q=*:*&wt=json&indent=true&rq={!myranker}
{code}
In the sample above the param: {code}rq={!myranker}{code} points to a 
QParserPlugin that returns a Query that extends RankQuery. 





 Pluggable Ranking Collectors
 

 Key: SOLR-5973
 URL: https://issues.apache.org/jira/browse/SOLR-5973
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.9

 Attachments: SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, 
 SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, 
 SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, 
 SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch


 This ticket introduces a new RankQuery and MergeStrategy to Solr. By 
 extending the RankQuery class, and implementing its interface, you can 
 specify a custom ranking collector (TopDocsCollector) and distributed merge 
 strategy for a Solr query. 
 Sample syntax:
 {code}
 q={!customRank subquery=*:* param1=a param2=b}&wt=json&indent=true
 {code}
 In the sample above the param: {code}q={!customRank subquery=*:* param1=a 
 param2=b}{code} points to a QParserPlugin that returns a Query that extends 
 RankQuery.  The RankQuery defines the custom ranking and merge strategy for 
 its subquery.
 The RankQuery impl will have to do several things:
 1) Implement the RankQuery interface.
 2) Wrap the subquery and proxy all calls to the Query interface to the 
 subquery. Using local params syntax the subquery can be any valid Solr query. 
 The custom QParserPlugin is responsible for parsing the subquery.
 3) Implement hashCode() and equals() so the queryResultCache works properly 
 with the subquery and custom ranking algorithm. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-05-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-6066:


Assignee: Joel Bernstein

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Attachments: TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when using 
 CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
 effect.
 I use the following test case to show this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 // Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 21061 - Failure!

2014-05-14 Thread builder
Build: builds.flonkings.com/job/Lucene-4x-Linux-Java7-64-test-only/21061/

All tests passed

Build Log:
[...truncated 282 lines...]
   [junit4] JVM J3: stdout was not empty, see: 
/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-only/checkout/lucene/build/core/test/temp/junit4-J3-20140510_002925_461.sysout
   [junit4]  JVM J3: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7ffbe12e60d8, pid=19221, 
tid=140719261460224
   [junit4] #
   [junit4] # JRE version: 7.0_25-b15
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed mode 
linux-amd64 compressed oops)
   [junit4] # Problematic frame:
   [junit4] # J  
org.apache.lucene.util.RamUsageEstimator.measureObjectSize(Ljava/lang/Object;)J
   [junit4] #
   [junit4] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-only/checkout/lucene/build/core/test/J3/hs_err_pid19221.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.sun.com/bugreport/crash.jsp
   [junit4] #
   [junit4]  JVM J3: EOF 

[...truncated 1183 lines...]
   [junit4] ERROR: JVM J3 ended with an exception, command line: 
/var/lib/jenkins/tools/hudson.model.JDK/Java_7_64bit_u25/jre/bin/java 
-Dtests.prefix=tests -Dtests.seed=4124B110787FC396 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.9 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-only/checkout/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 
-DtempDir=. -Djava.io.tmpdir=. 
-Djunit4.tempDir=/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-only/checkout/lucene/build/core/test/temp
 
-Dclover.db.dir=/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-only/checkout/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-only/checkout/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.9-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.leaveTemporary=false -Dtests.filterstacks=true -classpath 

[jira] [Commented] (LUCENE-5670) org.apache.lucene.util.fst.FST should skip over outputs it is not interested in

2014-05-14 Thread Christian Ziech (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997744#comment-13997744
 ] 

Christian Ziech commented on LUCENE-5670:
-

Oh right! I only checked the 4.7 branch, and there DataInput didn't have the 
skipBytes() method yet. But now I see that both trunk and the 4.8 branch 
already have skipBytes(long). So yes, of course, in that case we can drop it 
from the patch. If we can get consensus that the rest of the patch is worth 
doing, I could implement it against 4.8 and attach it here.
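
For illustration, the direction under discussion could look roughly like this 
(a sketch only; skipOutput is a hypothetical method name, not part of the 
attached patch):

{code}
import java.io.IOException;
import org.apache.lucene.store.DataInput;

public abstract class Outputs<T> {
  public abstract T read(DataInput in) throws IOException;

  // Default: materialize and discard, which is effectively what the FST
  // does today when it is not interested in an output.
  public void skipOutput(DataInput in) throws IOException {
    read(in);
  }
}

// An Outputs impl with a length-prefixed encoding could then override it
// to avoid any allocation, e.g. (assuming a vInt length prefix):
//   public void skipOutput(DataInput in) throws IOException {
//     in.skipBytes(in.readVInt());
//   }
{code}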

 org.apache.lucene.util.fst.FST should skip over outputs it is not interested 
 in
 ---

 Key: LUCENE-5670
 URL: https://issues.apache.org/jira/browse/LUCENE-5670
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.7
Reporter: Christian Ziech
Assignee: Michael McCandless
Priority: Minor
 Attachments: LUCENE-5670.patch


 Currently the FST uses the read(DataInput) method from the Outputs class to 
 skip over outputs it actually is not interested in. For most use cases this 
 just creates some additional objects that are immediately destroyed again.
 When traversing an FST with non-trivial data however this can easily add up 
 to several excess objects that nobody actually ever read.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5666) Add UninvertingReader

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997182#comment-13997182
 ] 

ASF subversion and git services commented on LUCENE-5666:
-

Commit 1594445 from [~rcmuir] in branch 'dev/branches/lucene5666'
[ https://svn.apache.org/r1594445 ]

LUCENE-5666: still return missing count etc when there are no terms

 Add UninvertingReader
 -

 Key: LUCENE-5666
 URL: https://issues.apache.org/jira/browse/LUCENE-5666
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0


 Currently the fieldcache is not pluggable at all. It would be better if 
 everything used the docvalues apis.
 This would allow people to customize the implementation, extend the classes 
 with custom subclasses with additional stuff, etc etc.
 FieldCache can be accessed via the docvalues apis, using the FilterReader api.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6067) add buildAndRunCollectorChain method to reduce code duplication in SolrIndexSearcher

2014-05-14 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997831#comment-13997831
 ] 

Christine Poerschke commented on SOLR-6067:
---

Hi. Thanks for reviewing and testing. I'll look into the test failures also. 
Could you share the exact test commands for one of the failed ones? Thank you.

 add buildAndRunCollectorChain method to reduce code duplication in 
 SolrIndexSearcher
 

 Key: SOLR-6067
 URL: https://issues.apache.org/jira/browse/SOLR-6067
 Project: Solr
  Issue Type: Improvement
Reporter: Christine Poerschke
Priority: Minor
 Attachments: SOLR-6067.patch


 https://github.com/apache/lucene-solr/pull/48 has the proposed change. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 1539 - Failure!

2014-05-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1539/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 11292 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20140514_153436_157.syserr
   [junit4]  JVM J0: stderr (verbatim) 
   [junit4] java(295,0x14fa92000) malloc: *** error for object 0x14fe80e10: 
pointer being freed was not allocated
   [junit4] *** set a breakpoint in malloc_error_break to debug
   [junit4]  JVM J0: EOF 

[...truncated 1 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_55.jdk/Contents/Home/jre/bin/java 
-XX:-UseCompressedOops -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/heapdumps 
-Dtests.prefix=tests -Dtests.seed=CB6ED772E04227AA -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.9 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 
-DtempDir=. -Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.9-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.leaveTemporary=false -Dtests.filterstacks=true -Dtests.disableHdfs=true 
-classpath 

[jira] [Commented] (SOLR-5473) Make one state.json per collection

2014-05-14 Thread Jessica Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997910#comment-13997910
 ] 

Jessica Cheng commented on SOLR-5473:
-

{quote}
I'm not sure which one is right here. I see Thread.currentThread().interrupt() 
in other places but I feel it should be Thread.interrupted()
{quote}

Thread.currentThread().interrupt() is the right thing to do. This blog entry 
gives a brief explanation: 
http://michaelscharf.blogspot.com/2006/09/dont-swallow-interruptedexception-call.html.
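
In short, the idiom looks like this (a minimal sketch; latch stands in for 
any blocking call):

{code}
try {
  latch.await();
} catch (InterruptedException e) {
  // Restore the flag rather than swallow it, so code further up the
  // stack can still observe that the thread was interrupted.
  Thread.currentThread().interrupt();
  throw new RuntimeException("Interrupted while waiting", e);
}
{code}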

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-configname-fix.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 the /collections/<collectionname>/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4370) Let Collector know when all docs have been collected

2014-05-14 Thread Shikhar Bhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shikhar Bhushan updated LUCENE-4370:


Attachment: LUCENE-4370.patch

Attaching another version which adds a callback on both Collector ({{void 
done();}}) and LeafCollector ({{void leafDone();}}).

 Let Collector know when all docs have been collected
 

 Key: LUCENE-4370
 URL: https://issues.apache.org/jira/browse/LUCENE-4370
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 4.0-BETA
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Attachments: LUCENE-4370.patch, LUCENE-4370.patch


 Collectors are a good point for extension/customization of Lucene/Solr; 
 however, sometimes it's necessary to know when the last document has been 
 collected (for example, for flushing cached data).
 It would be nice to have a method that gets called after the last doc has 
 been collected.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4370) Let Collector know when all docs have been collected

2014-05-14 Thread Shikhar Bhushan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997666#comment-13997666
 ] 

Shikhar Bhushan commented on LUCENE-4370:
-

 On one hand I think a Collector.finish() would be nice, but the argument 
 could be made you could handle this yourself (it's done when 
 IndexSearcher.search returns).

Such a technique does not compose easily, e.g. when you want to wrap collectors 
in other collectors, unless you customize each and every one in the chain.
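
For illustration, here is a sketch against the 4.x Collector API (done() is 
the callback proposed in this issue, not an existing Lucene method):

{code}
import java.io.IOException;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// A delegating collector can only propagate an end-of-collection callback
// if every wrapper in the chain forwards it by hand.
public class ForwardingCollector extends Collector {
  private final Collector delegate;

  public ForwardingCollector(Collector delegate) { this.delegate = delegate; }

  @Override
  public void setScorer(Scorer scorer) throws IOException { delegate.setScorer(scorer); }

  @Override
  public void collect(int doc) throws IOException { delegate.collect(doc); }

  @Override
  public void setNextReader(AtomicReaderContext context) throws IOException {
    delegate.setNextReader(context);
  }

  @Override
  public boolean acceptsDocsOutOfOrder() { return delegate.acceptsDocsOutOfOrder(); }

  // Proposed callback: without a hook like this on the base class, only
  // wrappers that know about it can be notified.
  public void done() {
    if (delegate instanceof ForwardingCollector) {
      ((ForwardingCollector) delegate).done();
    }
  }
}
{code}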

 Let Collector know when all docs have been collected
 

 Key: LUCENE-4370
 URL: https://issues.apache.org/jira/browse/LUCENE-4370
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 4.0-BETA
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Attachments: LUCENE-4370.patch, LUCENE-4370.patch


 Collectors are a good point for extension/customization of Lucene/Solr; 
 however, sometimes it's necessary to know when the last document has been 
 collected (for example, for flushing cached data).
 It would be nice to have a method that gets called after the last doc has 
 been collected.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-05-14 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6066:
-

Fix Version/s: 4.9

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when using 
 CollapsingQParserPlugin with QueryElevationComponent, an additional fq has no 
 effect.
 I use the following test case to show this issue. (It will fail.)
 {code:java}
 String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 // Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6067) add buildAndRunCollectorChain method to reduce code duplication in SolrIndexSearcher

2014-05-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997793#comment-13997793
 ] 

Hoss Man commented on SOLR-6067:


Hmmm, with the patch I'm seeing lots of tests tripping an assert in 
SolrIndexSearcher...

{noformat}
   [junit4]   2 906349 T4720 oasc.SolrException.log ERROR 
java.lang.AssertionError
   [junit4]   2at 
org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1701)
   [junit4]   2at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1427)
   [junit4]   2at 
org.apache.solr.search.SolrIndexSearcher.access$100(SolrIndexSearcher.java:124)
   [junit4]   2at 
org.apache.solr.search.SolrIndexSearcher$3.regenerateItem(SolrIndexSearcher.java:503)
   [junit4]   2at 
org.apache.solr.search.LRUCache.warm(LRUCache.java:189)
   [junit4]   2at 
org.apache.solr.search.SolrIndexSearcher.warm(SolrIndexSearcher.java:2110)
   [junit4]   2at 
org.apache.solr.core.SolrCore$4.call(SolrCore.java:1718)
   [junit4]   2at 
java.util.concurrent.FutureTask.run(FutureTask.java:262)

...

   [junit4] Throwable #1: java.lang.AssertionError
   [junit4]at 
__randomizedtesting.SeedInfo.seed([6B71312FB526EB2B:CE319917776B3B23]:0)
   [junit4]at 
org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1701)
   [junit4]at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1427)
   [junit4]at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:512)
   [junit4]at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:461)
   [junit4]at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:221)
   [junit4]at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   [junit4]at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:1964)
   [junit4]at 
org.apache.solr.util.TestHarness.query(TestHarness.java:295)
   [junit4]at 
org.apache.solr.util.TestHarness.query(TestHarness.java:278)
   [junit4]at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:693)
   [junit4]at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:686)
   [junit4]at 
org.apache.solr.TestTrie.testFacetField(TestTrie.java:280)
   [junit4]at 
org.apache.solr.TestTrie.testTrieFacet_PrecisionStep(TestTrie.java:257)

...

   [junit4] Throwable #1: java.lang.AssertionError
   [junit4]at 
__randomizedtesting.SeedInfo.seed([6B71312FB526EB2B:741292EA5001AB22]:0)
   [junit4]at 
org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1701)
   [junit4]at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1427)
   [junit4]at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:512)
   [junit4]at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:461)
   [junit4]at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:221)
   [junit4]at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   [junit4]at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:1964)
   [junit4]at 
org.apache.solr.util.TestHarness.query(TestHarness.java:295)
   [junit4]at 
org.apache.solr.util.TestHarness.query(TestHarness.java:278)
   [junit4]at 
org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:785)
   [junit4]at 
org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:754)
   [junit4]at 
org.apache.solr.search.TestFiltering.testRandomFiltering(TestFiltering.java:323)

{noformat}

...I don't have time to dig in right now, but I'll try to circle back and 
review more closely later.

 add buildAndRunCollectorChain method to reduce code duplication in 
 SolrIndexSearcher
 

 Key: SOLR-6067
 URL: https://issues.apache.org/jira/browse/SOLR-6067
 Project: Solr
  Issue Type: Improvement
Reporter: Christine Poerschke
Priority: Minor
 Attachments: SOLR-6067.patch


 https://github.com/apache/lucene-solr/pull/48 has the proposed change. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-4396) BooleanScorer should sometimes be used for MUST clauses

2014-05-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997609#comment-13997609
 ] 

Michael McCandless commented on LUCENE-4396:


I like this tasks file!

But, maybe we could test on fewer terms, for the
Low/HighAndManyLow/High tasks?  I think it's more common to have a
handful (3-5 maybe) of terms.  But maybe keep your current category
and rename it to "Tons" instead of "Many"?

Thank you for adding the test case; it's always disturbing when
luceneutil finds a bug that "ant test" doesn't!  Maybe we can improve
the test so that it exercises BS and NBS?  E.g., toggle the "require
docs in order" via a custom collector?  We could commit this test today
to trunk/4x, right?
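
Something along these lines could toggle it (a sketch; "wrapped" stands for 
the collector the test already uses, assumed to be effectively final):

{code}
// Force in-order vs. out-of-order collection per run so both BooleanScorer
// (out-of-order bulk scoring) and BooleanScorer2 (in-order) get exercised.
final boolean outOfOrder = random().nextBoolean();
Collector toggling = new Collector() {
  @Override
  public void setScorer(Scorer scorer) throws IOException { wrapped.setScorer(scorer); }

  @Override
  public void collect(int doc) throws IOException { wrapped.collect(doc); }

  @Override
  public void setNextReader(AtomicReaderContext context) throws IOException {
    wrapped.setNextReader(context);
  }

  @Override
  public boolean acceptsDocsOutOfOrder() { return outOfOrder; }
};
{code}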

bq. A patch for luceneutil, which allows scores is different within a tolerance 
range.

Hmm do we know why the scores changed?  Are we comparing BS2 to
NovelBS?  (I think BS and BS2 already have different scores today?).

So, with these changes, BS (a BulkScorer) can handle required clauses
(but you commented this out in your patch in order to test NBS I
guess?), and NBS (a Scorer) can handle required too.

Do you have any perf results of BS w/ required clauses (as a
BulkScorer) vs BS2 (what trunk does today)?


 BooleanScorer should sometimes be used for MUST clauses
 ---

 Key: LUCENE-4396
 URL: https://issues.apache.org/jira/browse/LUCENE-4396
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Attachments: AndOr.tasks, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, luceneutil-score-equal.patch


 Today we only use BooleanScorer if the query consists of SHOULD and MUST_NOT.
 If there is one or more MUST clauses we always use BooleanScorer2.
 But I suspect that unless the MUST clauses have very low hit count compared 
 to the other clauses, that BooleanScorer would perform better than 
 BooleanScorer2.  BooleanScorer still has some vestiges from when it used to 
 handle MUST so it shouldn't be hard to bring back this capability ... I think 
 the challenging part might be the heuristics on when to use which (likely we 
 would have to use firstDocID as proxy for total hit count).
 Likely we should also have BooleanScorer sometimes use .advance() on the subs 
 in this case, e.g. if suddenly the MUST clause skips 100 docs then you want 
 to .advance() all the SHOULD clauses.
 I won't have near term time to work on this so feel free to take it if you 
 are inspired!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5973) Pluggable Ranking Collectors and Merge Strategies

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997975#comment-13997975
 ] 

ASF subversion and git services commented on SOLR-5973:
---

Commit 1594698 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1594698 ]

SOLR-5973: Pluggable Ranking Collectors and Merge Strategies

 Pluggable Ranking Collectors and Merge Strategies
 -

 Key: SOLR-5973
 URL: https://issues.apache.org/jira/browse/SOLR-5973
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.9

 Attachments: SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, 
 SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, 
 SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, 
 SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch, SOLR-5973.patch


 This ticket introduces a new RankQuery and MergeStrategy to Solr. By 
 extending the RankQuery class, and implementing its interface, you can 
 specify a custom ranking collector (TopDocsCollector) and distributed merge 
 strategy for a Solr query. 
 Sample syntax:
 {code}
 q={!customRank subquery=*:* param1=a param2=b}&wt=json&indent=true
 {code}
 In the sample above the param: {code}q={!customRank subquery=*:* param1=a 
 param2=b}{code} points to a QParserPlugin that returns a Query that extends 
 RankQuery.  The RankQuery defines the custom ranking and merge strategy for 
 its subquery.
 The RankQuery impl will have to do several things:
 1) Implement the RankQuery interface.
 2) Wrap the subquery and proxy all calls to the Query interface to the 
 subquery. Using local params syntax the subquery can be any valid Solr query. 
 The custom QParserPlugin is responsible for parsing the subquery.
 3) Implement hashCode() and equals() so the queryResultCache works properly 
 with the subquery and custom ranking algorithm. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.7.0_60-ea-b15) - Build # 3938 - Still Failing!

2014-05-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/3938/
Java: 64bit/jdk1.7.0_60-ea-b15 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 20647 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:467: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:92: The 
following files contain @author tags, tabs or nocommits:
* solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java

Total time: 113 minutes 11 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.7.0_60-ea-b15 -XX:-UseCompressedOops 
-XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20-ea-b11) - Build # 10263 - Failure!

2014-05-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10263/
Java: 64bit/jdk1.8.0_20-ea-b11 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  
org.apache.lucene.index.TestConcurrentMergeScheduler.testTotalBytesSize

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A9F78D2B657129D8:BC14DDB28AD90E01]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.index.TestConcurrentMergeScheduler.testTotalBytesSize(TestConcurrentMergeScheduler.java:369)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 1199 lines...]
   [junit4] Suite: org.apache.lucene.index.TestConcurrentMergeScheduler
   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestConcurrentMergeScheduler -Dtests.method=testTotalBytesSize 
-Dtests.seed=A9F78D2B657129D8 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=iw_IL 

[jira] [Commented] (SOLR-5681) Make the OverseerCollectionProcessor multi-threaded

2014-05-14 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13998005#comment-13998005
 ] 

Shalin Shekhar Mangar commented on SOLR-5681:
-

Some comments on the latest patch:

# The new createCollection method in CollectionAdminRequest is not required. In 
fact we should clean up the existing methods which hard-code the “implicit” 
router. I opened SOLR-6073 for it.
# There are some unrelated changes in CollectionHandler.handleRequestStatus()
# The added synchronisation in CoreAdminHandler.addTask is required. In fact it 
is a bug with the async work you did earlier and it should be fixed in 
trunk/branch_4x asap. We’re probably late for it to make it into 4.8.1 but we 
should still try for it.
# The DistributedMap.size() method needlessly fetches all children. It can be 
implemented more efficiently using:
{code}
Stat stat = new Stat();
zookeeper.getData(dir, null, stat, true);
stat.getNumChildren();
{code}
# The ‘excludeList’ param in DistributedQueue.peekTopN should be named 
‘excludeSet’.
# DistributedQueue.peekTopN has the following code. It checks for topN.isEmpty 
but it should actually check for orderedChildren.isEmpty instead. Otherwise the 
method will return null even if children were found in the second pass after 
waiting.
{code}
if (waitedEnough) {
  if (topN.isEmpty()) return null;
}
{code}
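A corrected check would look roughly like this (sketch):
{code}
if (waitedEnough) {
  // Give up only when the second pass found no children at all, not when
  // the children found were merely excluded from topN.
  if (orderedChildren.isEmpty()) return null;
}
{code}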
# DistributedQueue.peekTopN has the following. Here the counter should be 
incremented only after topN.add(queueEvent), otherwise it either returns fewer 
nodes than requested and available, or it waits more than required. For example, 
suppose the children are (1,2,3,4,5), n=2 and excludeList=(1,2): then an extra 
await is invoked; or if excludeList=(1,3) then only 2 is returned. In fact I 
think we should remove the counter and just use topN.size() in the if 
condition (see the sketch after the code). Also, is there any chance that 
headNode may be null?
{code}
for (String headNode : orderedChildren.values()) {
  if (headNode != null && counter++ < n) {
    try {
      String id = dir + "/" + headNode;
      if (excludeList != null && excludeList.contains(id)) continue;
      QueueEvent queueEvent = new QueueEvent(id,
          zookeeper.getData(dir + "/" + headNode, null, null, true), null);
      topN.add(queueEvent);
    } catch (KeeperException.NoNodeException e) {
      // Another client removed the node first, try next
    }
  } else {
    if (topN.size() >= 1) {
      return topN;
    }
  }
}
if (topN.size() >= 1) {
  return topN;
} else {
  childWatcher.await(wait == Long.MAX_VALUE ? DEFAULT_TIMEOUT : wait);
  waitedEnough = wait != Long.MAX_VALUE;
  continue;
}
{code}
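A restructured loop along the lines suggested above might look like this 
(a sketch, reusing the variable names from the patch):
{code}
// Drop the separate counter and bound the loop on topN.size(), so an
// excluded or vanished child never counts against the n requested entries.
for (String headNode : orderedChildren.values()) {
  if (topN.size() >= n) break; // collected enough
  try {
    String id = dir + "/" + headNode;
    if (excludeList != null && excludeList.contains(id)) continue;
    topN.add(new QueueEvent(id,
        zookeeper.getData(id, null, null, true), null));
  } catch (KeeperException.NoNodeException e) {
    // Another client removed the node first, try next
  }
}
if (!topN.isEmpty()) return topN;
childWatcher.await(wait == Long.MAX_VALUE ? DEFAULT_TIMEOUT : wait);
waitedEnough = wait != Long.MAX_VALUE;
continue;
{code}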
# The DistributedQueue.peekTopN method catches and swallows the 
InterruptedException. We should just declare that it throws 
InterruptedException and let the caller deal with it.
# Remove the e.printStackTrace() calls in DistributedQueue.getLastElementId()
# Do not swallow InterruptedException in DistributedQueue.getLastElementId()
# overseerCollectionProcessor.shutdown(); in Overseer.close() is not required 
because that is done by ccThread.close() already 
# There are formatting errors in success, error, time and storeFailureDetails 
methods in Overseer.Stats
# If maxParallelThreads is supposed to be a constant then it should be 
renamed accordingly as MAX_PARALLEL_THREADS.
# The maxParallelThreads=10 is not actually used while creating the 
ThreadPoolExecutor. Instead it is initialised with 5-100 threads!
# Use this.processedZKTasks = Collections.synchronizedSet(new 
HashSet<String>()); to remove the unchecked cast warning in OCP constructor.
# Instead of passing a shardHandler to OCP constructor, why not just pass a 
shardHandlerFactory?
# Remove the e.printStackTrace in catch clauses in OCP.run()
# Do not swallow InterruptedException in OCP.run()
# In OCP.cleanupWorkQueue, the synchronization on a ConcurrentHashMap is not 
required
# What is the reason behind cleaning work queue twice and sleeping for 20ms in 
this code:
{code}
cleanUpWorkQueue();

while (runningTasks.size() > maxParallelThreads) {
  Thread.sleep(20);
}

cleanUpWorkQueue();

{code}
# There are unrelated changes in OCP.prioritizeOverseerNodes
# There are formatting problems in run(), checkExclusivity and cleanUpWorkQueue 
methods in OCP.
# We should check for asyncId != null in if (completedMap.contains(asyncId) || 
failureMap.contains(asyncId)) to avoid two unnecessary calls to ZK.
# KeeperException.NodeExistsException thrown from markTaskAsRunning is ignored 
- Why would that happen? If it happens, why is it okay to ignore it? Shouldn’t 
we fail loudly or log a warning?

 Make the OverseerCollectionProcessor multi-threaded
 ---

[jira] [Resolved] (LUCENE-5656) IndexWriter leaks CFS handles in some exceptional cases

2014-05-14 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5656.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.9
   4.8.1

Shai spotted this easily, I committed!

 IndexWriter leaks CFS handles in some exceptional cases
 ---

 Key: LUCENE-5656
 URL: https://issues.apache.org/jira/browse/LUCENE-5656
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Robert Muir
 Fix For: 4.8.1, 4.9, 5.0


 in trunk:
 ant test  -Dtestcase=TestIndexWriterOutOfMemory -Dtests.method=testBasics 
 -Dtests.seed=3D485DE153FCA22D -Dtests.nightly=true -Dtests.locale=no_NO 
 -Dtests.timezone=CAT -Dtests.file.encoding=US-ASCII
 Seems to happen when an exception is thrown here:
 {noformat}
[junit4]   1 java.lang.OutOfMemoryError: Fake OutOfMemoryError
[junit4]   1  at 
 org.apache.lucene.index.TestIndexWriterOutOfMemory$2.eval(TestIndexWriterOutOfMemory.java:117)
[junit4]   1  at 
 org.apache.lucene.store.MockDirectoryWrapper.maybeThrowDeterministicException(MockDirectoryWrapper.java:888)
[junit4]   1  at 
 org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:575)
[junit4]   1  at 
 org.apache.lucene.store.Directory.openChecksumInput(Directory.java:107)
[junit4]   1  at 
 org.apache.lucene.codecs.lucene45.Lucene45DocValuesProducer.init(Lucene45DocValuesProducer.java:84)
[junit4]   1  at 
 org.apache.lucene.codecs.lucene45.Lucene45DocValuesFormat.fieldsProducer(Lucene45DocValuesFormat.java:178)
[junit4]   1  at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.init(PerFieldDocValuesFormat.java:232)
[junit4]   1  at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat.fieldsProducer(PerFieldDocValuesFormat.java:324)
[junit4]   1  at 
 org.apache.lucene.index.SegmentDocValues.newDocValuesProducer(SegmentDocValues.java:51)
[junit4]   1  at 
 org.apache.lucene.index.SegmentDocValues.getDocValuesProducer(SegmentDocValues.java:68)
[junit4]   1  at 
 org.apache.lucene.index.SegmentReader.initDocValuesProducers(SegmentReader.java:189)
[junit4]   1  at 
 org.apache.lucene.index.SegmentReader.init(SegmentReader.java:166)
[junit4]   1  at 
 org.apache.lucene.index.ReadersAndUpdates.writeFieldUpdates(ReadersAndUpdates.java:553)
[junit4]   1  at 
 org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:230)
[junit4]   1  at 
 org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3086)
[junit4]   1  at 
 org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3077)
[junit4]   1  at 
 org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2791)
[junit4]   1  at 
 org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2940)
[junit4]   1  at 
 org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2907)
 {noformat}
 and the leak is from here:
 {noformat}
[junit4] Caused by: java.lang.RuntimeException: unclosed IndexInput: 
 _0_Asserting_0.dvd
[junit4]  at 
 org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:560)
[junit4]  at 
 org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:604)
[junit4]  at 
 org.apache.lucene.codecs.lucene45.Lucene45DocValuesProducer.<init>(Lucene45DocValuesProducer.java:116)
[junit4]  at 
 org.apache.lucene.codecs.lucene45.Lucene45DocValuesFormat.fieldsProducer(Lucene45DocValuesFormat.java:178)
[junit4]  at 
 org.apache.lucene.codecs.asserting.AssertingDocValuesFormat.fieldsProducer(AssertingDocValuesFormat.java:61)
[junit4]  at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.<init>(PerFieldDocValuesFormat.java:232)
[junit4]  at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat.fieldsProducer(PerFieldDocValuesFormat.java:324)
[junit4]  at 
 org.apache.lucene.index.SegmentDocValues.newDocValuesProducer(SegmentDocValues.java:51)
[junit4]  at 
 org.apache.lucene.index.SegmentDocValues.getDocValuesProducer(SegmentDocValues.java:68)
[junit4]  at 
 org.apache.lucene.index.SegmentReader.initDocValuesProducers(SegmentReader.java:189)
[junit4]  at 
 org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:116)
[junit4]  at 
 org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:133)
[junit4]  at 
 org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:211)
[junit4]

[jira] [Updated] (SOLR-5681) Make the OverseerCollectionProcessor multi-threaded

2014-05-14 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5681:
-

Attachment: (was: SOLR-5681.patch)

 Make the OverseerCollectionProcessor multi-threaded
 ---

 Key: SOLR-5681
 URL: https://issues.apache.org/jira/browse/SOLR-5681
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-5681-2.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, SOLR-5681.patch, 
 SOLR-5681.patch


 Right now, the OverseerCollectionProcessor is single-threaded, i.e. submitting 
 anything long-running would have it block processing of other mutually 
 exclusive tasks.
 When OCP tasks become optionally async (SOLR-5477), it'd be good to have 
 truly non-blocking behavior by multi-threading the OCP itself.
 For example, a ShardSplit call on Collection1 would block the thread and 
 thereby not process a create-collection task (which would stay queued in 
 zk) though both tasks are mutually exclusive.
 Here are a few of the challenges:
 * Mutual exclusivity: Only let mutually exclusive tasks run in parallel. An 
 easy way to handle that is to only let 1 task per collection run at a time.
 * ZK Distributed Queue to feed tasks: The OCP consumes tasks from a queue. 
 The task from the workQueue is only removed on completion so that in case of 
 a failure, the new Overseer can re-consume the same task and retry. A queue 
 is not the right data structure in the first place to look ahead i.e. get the 
 2nd task from the queue when the 1st one is in process. Also, deleting tasks 
 which are not at the head of a queue is not really an 'intuitive' thing.
 Proposed solutions for task management:
 * Task funnel and peekAfter(): The parent thread is responsible for getting 
 and passing the request to a new thread (or one from the pool). The parent 
 method uses a peekAfter(last element) instead of a peek(). The peekAfter 
 returns the task after the 'last element'. Maintain this request information 
 and use it for deleting/cleaning up the workQueue.
 * Another (almost duplicate) queue: While offering tasks to workQueue, also 
 offer them to a new queue (call it volatileWorkQueue?). The difference is, as 
 soon as a task from this is picked up for processing by the thread, it's 
 removed from the queue. At the end, the cleanup is done from the workQueue.
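
A rough sketch of the per-collection exclusivity idea from the list above 
(class and method names are illustrative, not from the patch):

{code:java}
// Hedged sketch: allow at most one running task per collection; callers
// re-peek the queue later for collections that are busy.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;

class ExclusivityGuard {
  private final Set<String> runningCollections = ConcurrentHashMap.newKeySet();

  /** Returns true if submitted; false if a task for this collection is in flight. */
  boolean trySubmit(String collection, Runnable task, ExecutorService pool) {
    if (!runningCollections.add(collection)) {
      return false;
    }
    pool.submit(() -> {
      try {
        task.run();
      } finally {
        runningCollections.remove(collection); // free the collection for the next task
      }
    });
    return true;
  }
}
{code}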



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.8-Linux (64bit/jdk1.8.0_20-ea-b11) - Build # 124 - Failure!

2014-05-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.8-Linux/124/
Java: 64bit/jdk1.8.0_20-ea-b11 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  
org.apache.lucene.util.automaton.TestSpecialOperations.testRandomFiniteStrings1

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([73523B7516EE202D:2E7EE6C4EF518B83]:0)
at java.util.Arrays.copyOf(Arrays.java:3181)
at java.util.ArrayList.grow(ArrayList.java:246)
at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:220)
at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:212)
at java.util.ArrayList.add(ArrayList.java:443)
at 
org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:108)
at 
org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:54)
at 
org.apache.lucene.util.automaton.Automaton.minimize(Automaton.java:774)
at 
org.apache.lucene.util.automaton.TestSpecialOperations.testRandomFiniteStrings1(TestSpecialOperations.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)




Build Log:
[...truncated 631 lines...]
   [junit4] Suite: org.apache.lucene.util.automaton.TestSpecialOperations
   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestSpecialOperations -Dtests.method=testRandomFiniteStrings1 
-Dtests.seed=73523B7516EE202D -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=ar_IQ -Dtests.timezone=America/Santo_Domingo 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   26.3s J0 | TestSpecialOperations.testRandomFiniteStrings1 

   [junit4] Throwable #1: java.lang.OutOfMemoryError: Java heap space
   [junit4]at 
__randomizedtesting.SeedInfo.seed([73523B7516EE202D:2E7EE6C4EF518B83]:0)
   [junit4]at java.util.Arrays.copyOf(Arrays.java:3181)
   [junit4]at java.util.ArrayList.grow(ArrayList.java:246)
   [junit4]at 
java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:220)
   [junit4]at 
java.util.ArrayList.ensureCapacityInternal(ArrayList.java:212)
   [junit4]at java.util.ArrayList.add(ArrayList.java:443)
   [junit4]at 
org.apache.lucene.util.automaton.MinimizationOperations.minimizeHopcroft(MinimizationOperations.java:108)
   [junit4]at 
org.apache.lucene.util.automaton.MinimizationOperations.minimize(MinimizationOperations.java:54)
   [junit4]at 
org.apache.lucene.util.automaton.Automaton.minimize(Automaton.java:774)
   [junit4]at 

[jira] [Reopened] (LUCENE-5283) Fail the build if ant test didn't execute any tests (everything filtered out).

2014-05-14 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss reopened LUCENE-5283:
-


Someone has changed something in the build system so that it no longer works 
(false alarms about no tests in a submodule).

All of this code is too fragile I think and should be removed. The value added 
is minimal and the headaches with maintenance are huge. The problem is that any 
<subant> or <ant> call which doesn't pass the required properties to detect the 
top-level build will break it. I don't think there's a hook in Ant to allow 
detection of the top-level build file.

 Fail the build if ant test didn't execute any tests (everything filtered out).
 --

 Key: LUCENE-5283
 URL: https://issues.apache.org/jira/browse/LUCENE-5283
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5283-permgen.patch, LUCENE-5283.patch, 
 LUCENE-5283.patch, LUCENE-5283.patch


 This should be an optional setting that defaults to 'false' (the build 
 proceeds).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5658) IllegalArgumentException from ByteBufferIndexInput.buildSlice

2014-05-14 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13993133#comment-13993133
 ] 

Uwe Schindler commented on LUCENE-5658:
---

bq. This can only happen if the underlying JVM has a bug (J9???) or the 
underlying filesystem reports another file length after opening than it really 
is (maybe file was truncated).

The first is more obviously the problem. If the file length() reported by the 
OS is incorrect, the original mmap should have failed already. 
openFullSlice/clone just clones the ByteBuffers, and this cannot change the 
length, unless the JVM corrupts the clones.

 IllegalArgumentException from ByteBufferIndexInput.buildSlice
 -

 Key: LUCENE-5658
 URL: https://issues.apache.org/jira/browse/LUCENE-5658
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/store
Affects Versions: 4.8
Reporter: Shai Erera

 I've received an email with the following stacktrace:
 {noformat}
 Exception in thread Lucene Merge Thread #73 
 org.apache.lucene.index.MergePolicy$MergeException: 
 java.lang.IllegalArgumentException
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
 Caused by: java.lang.IllegalArgumentException
   at java.nio.Buffer.limit(Buffer.java:278)
   at 
 org.apache.lucene.store.ByteBufferIndexInput.buildSlice(ByteBufferIndexInput.java:259)
   at 
 org.apache.lucene.store.ByteBufferIndexInput.buildSlice(ByteBufferIndexInput.java:230)
   at 
 org.apache.lucene.store.ByteBufferIndexInput.clone(ByteBufferIndexInput.java:187)
   at 
 org.apache.lucene.store.MMapDirectory$1.openFullSlice(MMapDirectory.java:211)
   at 
 org.apache.lucene.store.CompoundFileDirectory.readEntries(CompoundFileDirectory.java:138)
   at 
 org.apache.lucene.store.CompoundFileDirectory.<init>(CompoundFileDirectory.java:105)
   at 
 org.apache.lucene.index.SegmentReader.readFieldInfos(SegmentReader.java:209)
   at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:99)
   at 
 org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:142)
   at 
 org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:624)
   at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4068)
   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3728)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
 {noformat}
 According to the email, it happens randomly while indexing Wikipedia, on 
 4.8.0. As far as I understood, the indexer creates 4 indexes in parallel, by 
 a total of 48 threads. Each index is created in a separate directory, and 
 there's no sharing of MP or MS instances between the writers (in fact, 
 default settings are used). This could explain the {{Lucene Merge Thread 
 #73}}. The indexing process ends w/ a {{forceMerge(1)}}. If that call is 
 omitted, the exception doesn't reproduce. Also, since it doesn't always 
 happen, there's no simple testcase which reproduces it.
 I've asked the reporter to add more info to the issue, but opening the issue 
 now so we could check and hopefully fix before 4.8.1.
 I checked the stacktrace against trunk, but not all the lines align (e.g. 
 {{at 
 org.apache.lucene.store.MMapDirectory$1.openFullSlice(MMapDirectory.java:211)}}
  is only in 4.8), but I haven't dug deeper yet...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6056) Zookeeper crash JVM stack OOM because of recover strategy

2014-05-14 Thread Raintung Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raintung Li updated SOLR-6056:
--

Description: 
Some errors ("org.apache.solr.common.SolrException: Error opening new searcher. 
exceeded limit of maxWarmingSearchers=2, try again later") that occur in 
DistributedUpdateProcessor trigger the core admin recover process.
That means every update request will send a core admin recover request.
(see the code in DistributedUpdateProcessor.java, doFinish())

The terrible thing is that CoreAdminHandler will start a new thread to publish 
the recover status and start recovery. Threads increase very quickly and the 
stack OOMs; the Overseer can't handle that many status updates, and zookeeper 
nodes like /overseer/queue/qn-125553 increased to more than 40 thousand in two 
minutes.

In the end, zookeeper crashed. 
The worse part is that the queue has too many nodes in zookeeper and the 
cluster can't publish the right status because only one overseer works; I had 
to start three threads to clear the queue nodes. The cluster didn't work 
normally for nearly 30 minutes...



  was:
Some errors ("org.apache.solr.common.SolrException: Error opening new searcher. 
exceeded limit of maxWarmingSearchers=2, try again later") that occur 
distributedupdateprocessor trig the core admin recover process.
That means every update request will send the core admin recover request.
(see the code DistributedUpdateProcessor.java doFinish())

The terrible thing is CoreAdminHandler will start a new thread to publish the 
recover status and start recovery. Threads increase very quickly, and stack OOM 
, Overseer can't handle a lot of status update , zookeeper node for  
/overseer/queue/qn-125553 increase more than 40 thousand in two minutes.

At the last zookeeper crash. 
The worse thing is queue has to much nodes in the zookeeper, the cluster can't 
publish the right status because only one overseer work, I have to start three 
threads to clear the queue nodes. The cluster doesn't work normal near 30 
minutes...




 Zookeeper crash JVM stack OOM because of recover strategy 
 --

 Key: SOLR-6056
 URL: https://issues.apache.org/jira/browse/SOLR-6056
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6
 Environment: Two linux servers, 65G memory, 16 core cpu
 20 collections, every collection has one shard two replica 
 one zookeeper
Reporter: Raintung Li
Priority: Critical
  Labels: cluster, crash, recover

 Some errors ("org.apache.solr.common.SolrException: Error opening new 
 searcher. exceeded limit of maxWarmingSearchers=2, try again later") that 
 occur in DistributedUpdateProcessor trigger the core admin recover process.
 That means every update request will send a core admin recover request.
 (see the code in DistributedUpdateProcessor.java, doFinish())
 The terrible thing is that CoreAdminHandler will start a new thread to publish 
 the recover status and start recovery. Threads increase very quickly and the 
 stack OOMs; the Overseer can't handle that many status updates, and zookeeper 
 nodes like /overseer/queue/qn-125553 increased to more than 40 thousand in 
 two minutes.
 In the end, zookeeper crashed. 
 The worse part is that the queue has too many nodes in zookeeper and the 
 cluster can't publish the right status because only one overseer works; I had 
 to start three threads to clear the queue nodes. The cluster didn't work 
 normally for nearly 30 minutes...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6062) a phrase query is created for each field supplied through edismax's pf, pf2 and pf3 parameters (rather them being combined in a single dismax query)

2014-05-14 Thread Michael Dodsworth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13996085#comment-13996085
 ] 

Michael Dodsworth commented on SOLR-6062:
-

As was mentioned on this issue, the behavioral change was not desirable.

 a phrase query is created for each field supplied through edismax's pf, pf2 
 and pf3 parameters (rather them being combined in a single dismax query)
 

 Key: SOLR-6062
 URL: https://issues.apache.org/jira/browse/SOLR-6062
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.0
Reporter: Michael Dodsworth
Priority: Minor

 https://issues.apache.org/jira/browse/SOLR-2058 subtly changed how phrase 
 queries, created through the pf, pf2 and pf3 parameters, are merged into the 
 main user query.
 For the query 'term1 term2' with pf2:[field1, field2, field3] we now get 
 (omitting the non-phrase query section for clarity):
 {code:java}
 main query
 DisjunctionMaxQuery((field1:"term1 term2"^1.0)~0.1)
 DisjunctionMaxQuery((field2:"term1 term2"^1.0)~0.1)
 DisjunctionMaxQuery((field3:"term1 term2"^1.0)~0.1)
 {code}
 Prior to this change, we had:
 {code:java}
 main query 
 DisjunctionMaxQuery((field1:"term1 term2"^1.0 | field2:"term1 term2"^1.0 | 
 field3:"term1 term2"^1.0)~0.1)
 {code}
 The upshot is that if the phrase query "term1 term2" appears in multiple 
 fields, it will get a significant boost over the previous implementation.
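
For reference, a hedged Lucene 4.x sketch of the two shapes (field and term 
names are illustrative; this is not the edismax code itself):

{code:java}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.DisjunctionMaxQuery;
import org.apache.lucene.search.PhraseQuery;

// Pre-SOLR-2058 shape: all per-field phrase queries under one dismax, so only
// the best-scoring field counts (plus the 0.1 tiebreaker).
PhraseQuery f1 = new PhraseQuery();
f1.add(new Term("field1", "term1"));
f1.add(new Term("field1", "term2"));
PhraseQuery f2 = new PhraseQuery();
f2.add(new Term("field2", "term1"));
f2.add(new Term("field2", "term2"));

DisjunctionMaxQuery merged = new DisjunctionMaxQuery(0.1f);
merged.add(f1);
merged.add(f2);
// Post-SOLR-2058, each field instead gets its own single-clause
// DisjunctionMaxQuery in the main query, so matches in several fields add up.
{code}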



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5650) createTempDir and associated functions no longer create java.io.tmpdir

2014-05-14 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5650:


Attachment: LUCENE-5650.patch

This is an adjustment to Ryan's patch. I moved a lot of the temp-file related 
code out of LuceneTestCase (leaving appropriate delegate calls to the rule code 
and its logic).

A side-effect of this is that the temp dir gets created before any test code 
below the rule is executed. This helps hunspell tests. 

All Lucene tests pass. Solr still has a few offenders; I haven't looked at 
them yet.

 createTempDir and associated functions no longer create java.io.tmpdir
 --

 Key: LUCENE-5650
 URL: https://issues.apache.org/jira/browse/LUCENE-5650
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/test
Reporter: Ryan Ernst
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: LUCENE-5650.patch, LUCENE-5650.patch


 The recent refactoring to all the create temp file/dir functions (which is 
 great!) has a minor regression from what existed before.  With the old 
 {{LuceneTestCase.TEMP_DIR}}, the directory was created if it did not exist.  
 So, if you set {{java.io.tmpdir}} to {{./temp}}, then it would create that 
 dir within the per jvm working dir.  However, {{getBaseTempDirForClass()}} 
 now does asserts that check the dir exists, is a dir, and is writeable.
 Lucene uses {{.}} as {{java.io.tmpdir}}.  Then in the test security 
 manager, the per jvm cwd has read/write/execute permissions.  However, this 
 allows tests to write to their cwd, which I'm trying to protect against (by 
 setting cwd to read/execute in my test security manager).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5666) Add UninvertingReader

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997030#comment-13997030
 ] 

ASF subversion and git services commented on LUCENE-5666:
-

Commit 1594417 from [~rcmuir] in branch 'dev/branches/lucene5666'
[ https://svn.apache.org/r1594417 ]

LUCENE-5666: fix test failures

 Add UninvertingReader
 -

 Key: LUCENE-5666
 URL: https://issues.apache.org/jira/browse/LUCENE-5666
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0


 Currently the fieldcache is not pluggable at all. It would be better if 
 everything used the docvalues apis.
 This would allow people to customize the implementation, extend the classes 
 with custom subclasses with additional stuff, etc etc.
 FieldCache can be accessed via the docvalues apis, using the FilterReader api.
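
At the time of this comment the API lives on the lucene5666 branch, so names 
may shift; a sketch of what the wrapping could look like:

{code:java}
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.uninverting.UninvertingReader;

// Hedged sketch: expose FieldCache-style uninversion through the DocValues
// APIs by wrapping the reader. The path and field mapping are illustrative.
Directory dir = FSDirectory.open(new File("/path/to/index"));
Map<String, UninvertingReader.Type> mapping = new HashMap<>();
mapping.put("popularity", UninvertingReader.Type.INTEGER); // indexed numeric field
mapping.put("category", UninvertingReader.Type.SORTED);    // single-valued string field
DirectoryReader wrapped = UninvertingReader.wrap(DirectoryReader.open(dir), mapping);
// Leaves of 'wrapped' now answer getNumericDocValues("popularity") even though
// the field was indexed without docvalues.
{code}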



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4465) Configurable Collectors

2014-05-14 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13995249#comment-13995249
 ] 

Otis Gospodnetic commented on SOLR-4465:


[~joel.bernstein] maybe this should be closed so it's not confusing people 
(because there have been a LOT of JIRAs in this post-filter/configurable 
collector/pluggable ranking collector space)

 Configurable Collectors
 ---

 Key: SOLR-4465
 URL: https://issues.apache.org/jira/browse/SOLR-4465
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 4.1
Reporter: Joel Bernstein
 Fix For: 4.8

 Attachments: SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, SOLR-4465.patch, 
 SOLR-4465.patch, SOLR-4465.patch


 This ticket provides a patch to add pluggable collectors to Solr. This patch 
 was generated and tested with Solr 4.1.
 This is how the patch functions:
 Collectors are plugged into Solr in the solrconfig.xml using the new 
 collectorFactory element. For example:
 <collectorFactory name="default" class="solr.CollectorFactory"/>
 <collectorFactory name="sum" class="solr.SumCollectorFactory"/>
 The elements above define two collector factories. The first one is the 
 default collectorFactory. The class attribute points to 
 org.apache.solr.handler.component.CollectorFactory, which implements logic 
 that returns the default TopScoreDocCollector and TopFieldCollector. 
 To create your own collectorFactory you must subclass the default 
 CollectorFactory and at a minimum override the getCollector method to return 
 your new collector. 
 The parameter cl turns on pluggable collectors:
 cl=true
 If cl is not in the parameters, Solr will automatically use the default 
 collectorFactory.
 *Pluggable Doclist Sorting With the Docs Collector*
 You can specify two types of pluggable collectors. The first type is the docs 
 collector. For example:
 cl.docs=name
 The above param points to a named collectorFactory in the solrconfig.xml to 
 construct the collector. The docs collectorFactorys must return a collector 
 that extends the TopDocsCollector base class. Docs collectors are responsible 
 for collecting the doclist.
 You can specify only one docs collector per query.
 You can pass parameters to the docs collector using local params syntax. For 
 example:
 cl.docs=\{! sort=mycustomesort\}mycollector
 If cl=true and a docs collector is not specified, Solr will use the default 
 collectorFactory to create the docs collector.
 *Pluggable Custom Analytics With Delegating Collectors*
 You can also specify any number of custom analytic collectors with the 
 cl.analytic parameter. Analytic collectors are designed to collect 
 something else besides the doclist. Typically this would be some type of 
 custom analytic. For example:
 cl.analytic=sum
 The parameter above specifies an analytic collector named "sum". Like the docs 
 collectors, "sum" points to a named collectorFactory in the solrconfig.xml. 
 You can specify any number of analytic collectors by adding additional 
 cl.analytic parameters.
 Analytic collector factories must return Collector instances that extend 
 DelegatingCollector. 
 A sample analytic collector is provided in the patch through the 
 org.apache.solr.handler.component.SumCollectorFactory.
 This collectorFactory provides a very simple DelegatingCollector that groups 
 by a field and sums a column of floats. The sum collector is not designed to 
 be a fully functional sum function but to be a proof of concept for pluggable 
 analytics through delegating collectors.
 You can send parameters to analytic collectors with solr local param syntax.
 For example:
 cl.analytic=\{! id=1 groupby=field1 column=field2\}sum
 The id parameter is mandatory for analytic collectors and is used to 
 identify the output from the collector. In this example the groupby and 
 column params tell the sum collector which field to group by and sum.
 Analytic collectors are passed a reference to the ResponseBuilder and can 
 place maps with analytic output directly into the SolrQueryResponse with the 
 add() method.
 Maps that are placed in the SolrQueryResponse are automatically added to the 
 outgoing response. The response will include a list named cl.analytic.id, 
 where id is specified in the local param.
 *Distributed Search*
 The CollectorFactory also has a method called merge(). This method aggregates 
 the results from each of the shards during distributed search. The default 
 CollectoryFactory implements the default merge logic for merging documents 
 from each shard. If you define a different docs collector you can override 
 the default merge 

[jira] [Commented] (SOLR-4962) Allow for analytic functions to be performed through altered collectors

2014-05-14 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13995257#comment-13995257
 ] 

Otis Gospodnetic commented on SOLR-4962:


Greg - the parent issue is Closed.  Should this be Closed, too?  Not sure how 
it relates to SOLR-5073.

 Allow for analytic functions to be performed through altered collectors
 ---

 Key: SOLR-4962
 URL: https://issues.apache.org/jira/browse/SOLR-4962
 Project: Solr
  Issue Type: Sub-task
  Components: search
Reporter: Greg Bowyer
 Fix For: 4.9, 5.0


 This is a split from SOLR-4465, in that issue the ability to create 
 customised collectors that allow for aggregate functions was born, but 
 suffers from being unable to work well with queryResultCaching and grouping.
 Migrating out this functionality into a collector component within solr, and 
 perhaps pushing down some of the logic towards lucene seems to be the way to 
 go.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6045) atomic updates w/ solrj + BinaryRequestWriter aren't working when adding multiple fields w/ same name in a single SolrInputDocument

2014-05-14 Thread Scott Lindner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13992784#comment-13992784
 ] 

Scott Lindner commented on SOLR-6045:
-

Maybe the problem is actually the fact that the OOB RequestWriter works when it 
shouldn't?  It must be taking the multiple actions and combining them into a 
single Map - and while that actually sounded good to me at first, the code in 
the distributed update processor is basically unpredictable if you combine 
multiple actions.

For instance a "set" followed by an "add" is OK but if they are processed in 
the other order then the "set" will overwrite the "add".  I think logically 
that's fine but then the actions need to have a predictable precedence and I 
think the only logical ordering would be something like:

remove --> incr --> set --> add

In any case I think the point here is that using the OOB RequestWriter or the 
BinaryRequestWriter shouldn't impact behavior and should be consistent.

 atomic updates w/ solrj + BinaryRequestWriter aren't working when adding 
 multiple fields w/ same name in a single SolrInputDocument
 ---

 Key: SOLR-6045
 URL: https://issues.apache.org/jira/browse/SOLR-6045
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
 Environment: client  server both on 4.8
Reporter: Scott Lindner

 I'm using the following code snippet:
 {code}
 HttpSolrServer srvr = new HttpSolrServer("HOST:8983/solr/foo-test");
 SolrInputDocument sid = new SolrInputDocument();
 sid.addField("id", "some_id");
 Map<String, String> fieldModifier = Maps.newHashMap();
 fieldModifier.put("set", "new_value1");
 sid.addField("field1", fieldModifier);
 Map<String, Object> fieldModifier2 = Maps.newHashMap();
 fieldModifier2.put("set", "new_value2");
 sid.addField("field1", fieldModifier2);
 srvr.add(sid);
 srvr.commit();
 {code}
 *NOTE*: the important part here is that I am using the same field name and 
 adding 2 values separately to the same solr document.
 This produces the correct values in the index.  Here is the output from 
 searching from the admin console:
 {noformat}
 "field1": [
   "new_value1",
   "new_value2"
 ]
 {noformat}
 However if I modify the above code to have the following lines after creating 
 the SolrServer:
 {code}
 srvr.setRequestWriter(new BinaryRequestWriter());
 srvr.setParser(new BinaryResponseParser());
 {code}
 Then the values that are returned are incorrect:
 {noformat}
 "field1": [
   "{set=new_value1}",
   "{set=new_value2}"
 ]
 {noformat}
 This also behaves the same if I use the CloudSolrServer as well.
 If I modify my code to look like the following:
 {code}
 Map<String, List<String>> fieldModifier = Maps.newHashMap();
 fieldModifier.put("set", Lists.newArrayList("new_value1", 
 "new_value2"));
 sid.addField("field1", fieldModifier);
 {code}
 Then this *does* work with the BinaryRequestWriter.  So this seems to be an 
 issue when calling addField() with the same name multiple times.
 In the process of debugging this I think I also uncovered a few other similar 
 issues but I will file separate bugs for those.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6072) The 'deletereplica' API should remove the data and instance directory by default

2014-05-14 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6072:
---

 Summary: The 'deletereplica' API should remove the data and 
instance directory by default
 Key: SOLR-6072
 URL: https://issues.apache.org/jira/browse/SOLR-6072
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.8
Reporter: Shalin Shekhar Mangar
 Fix For: 4.9, 5.0


The 'deletereplica' collection API should clean up the data and instance 
directory automatically. Not doing that is a bug even if it's a back-compat 
break because if we don't do that then there is no way to free up the disk 
space except manual intervention.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5661) LiveIndexWriterConfig has setters that require magical order

2014-05-14 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13993585#comment-13993585
 ] 

Shai Erera commented on LUCENE-5661:


I hit this confusing magic order myself a couple days ago, while writing a 
test. +1 for unifying both settings, perhaps stuff them under a 
setFlushConditions or something?

 LiveIndexWriterConfig has setters that require magical order
 

 Key: LUCENE-5661
 URL: https://issues.apache.org/jira/browse/LUCENE-5661
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 Specifically setRamBufferSizeMB and setMaxBufferedDocs.
 Furthermore these are live settings on IWC, so there are potential race 
 conditions.
  It would be good if there were a better API, even if that just means 
  documenting "if both X and Y are set, X takes precedence".
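
For illustration, the intent can already be made explicit by disabling the 
flush trigger you don't want (a sketch using the existing 4.8 API):

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.util.Version;

// Hedged sketch: flush by RAM only, and say so explicitly, so the result does
// not depend on the order in which the two setters were called.
IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_48,
    new StandardAnalyzer(Version.LUCENE_48));
iwc.setRAMBufferSizeMB(64.0);
iwc.setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH);
{code}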



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5661) LiveIndexWriterConfig has setters that require magical order

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13993553#comment-13993553
 ] 

ASF subversion and git services commented on LUCENE-5661:
-

Commit 1593527 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1593527 ]

LUCENE-5661: add workaround for race conditions in the LiveIWC api

 LiveIndexWriterConfig has setters that require magical order
 

 Key: LUCENE-5661
 URL: https://issues.apache.org/jira/browse/LUCENE-5661
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 Specifically setRamBufferSizeMB and setMaxBufferedDocs.
 Furthermore these are live settings on IWC, so there are potential race 
 conditions.
  It would be good if there were a better API, even if that just means 
  documenting "if both X and Y are set, X takes precedence".



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5670) org.apache.lucene.util.fst.FST should skip over outputs it is not interested in

2014-05-14 Thread Christian Ziech (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Ziech updated LUCENE-5670:


Attachment: LUCENE-5670.patch

Attached an (untested) patch where a skipOutput method is added to Outputs 
which doesn't create excess objects. The default implementation keeps the 
current behavior by invoking the read() method.

Also, a skipBytes(int) method was added to DataInput, which defaults to 
reading the data as before. Several implementations of DataInput already 
had a skipBytes() method and now effectively implement it.
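
Roughly, the default described above might look like this (a sketch of the 
patch's idea, not the patch itself):

{code:java}
import java.io.IOException;
import org.apache.lucene.store.DataInput;

// Hedged sketch: the default skipOutput() preserves current behavior by
// reading and discarding; subclasses with fixed-size or length-prefixed
// outputs can override it to call DataInput.skipBytes() and avoid the
// throwaway object.
public abstract class Outputs<T> {
  public abstract T read(DataInput in) throws IOException;

  public void skipOutput(DataInput in) throws IOException {
    read(in); // default: read and drop the result
  }
}
{code}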

 org.apache.lucene.util.fst.FST should skip over outputs it is not interested 
 in
 ---

 Key: LUCENE-5670
 URL: https://issues.apache.org/jira/browse/LUCENE-5670
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.7
Reporter: Christian Ziech
Priority: Minor
 Attachments: LUCENE-5670.patch


 Currently the FST uses the read(DataInput) method from the Outputs class to 
 skip over outputs it actually is not interested in. For most use cases this 
 just creates some additional objects that are immediately destroyed again.
 When traversing an FST with non-trivial data however this can easily add up 
 to several excess objects that nobody actually ever read.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5671) Upgrade ICU version

2014-05-14 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5671:
---

 Summary: Upgrade ICU version
 Key: LUCENE-5671
 URL: https://issues.apache.org/jira/browse/LUCENE-5671
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Robert Muir
 Attachments: LUCENE-5671.patch

This has a bugfix for a concurrency issue, reported on our users list. I think 
this is bad because it will strike users randomly while indexing/querying.

See http://bugs.icu-project.org/trac/ticket/10767

Apparently there is a better fix in the future, but the existing sync is enough 
to prevent the bug (my test passes 100% of the time with 53.1 whereas it fails 
30% of the time with 52.1)





--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6075) CoreAdminHandler should synchronize while adding a task to the tracking map

2014-05-14 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6075:
---

Attachment: SOLR-6075.patch

Here's a fix.

 CoreAdminHandler should synchronize while adding a task to the tracking map
 ---

 Key: SOLR-6075
 URL: https://issues.apache.org/jira/browse/SOLR-6075
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-6075.patch


 CoreAdminHandler should synchronize on the tracker maps when adding a task. 
 It's a rather nasty bug and we should get this in asap.
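
A minimal sketch of the kind of guard involved, assuming a map-backed tracker 
(names are illustrative, not the actual patch):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: make the check-then-put atomic so two requests with the same
// task id can't both pass the containsKey() check.
class TaskTracker {
  private final Map<String, Object> runningTasks = new HashMap<>();

  void addTask(String taskId, Object task) {
    synchronized (runningTasks) {
      if (runningTasks.containsKey(taskId)) {
        throw new IllegalStateException("Duplicate task id: " + taskId);
      }
      runningTasks.put(taskId, task);
    }
  }
}
{code}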



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5644) ThreadAffinityDocumentsWriterThreadPool should clear the bindings on flush

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13994052#comment-13994052
 ] 

ASF subversion and git services commented on LUCENE-5644:
-

Commit 1593649 from [~mikemccand] in branch 'dev/branches/lucene_solr_4_8'
[ https://svn.apache.org/r1593649 ]

LUCENE-5644: favor an already initialized ThreadState

 ThreadAffinityDocumentsWriterThreadPool should clear the bindings on flush
 --

 Key: LUCENE-5644
 URL: https://issues.apache.org/jira/browse/LUCENE-5644
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.8.1, 4.9, 5.0

 Attachments: LUCENE-5644.patch, LUCENE-5644.patch, LUCENE-5644.patch, 
 LUCENE-5644.patch, LUCENE-5644.patch


 This class remembers which thread used which DWPT, but it never clears
 this affinity.  It really should clear it on flush, this way if the
 number of threads doing indexing has changed we only use as many DWPTs
 as there are incoming threads.
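
Per the commit messages on this issue, the fix moved to a simpler LIFO thread 
to ThreadState allocation; a hedged sketch of the LIFO idea (names are 
illustrative):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Hedged sketch: a LIFO free list hands back the most recently released state
// first, so when fewer threads index concurrently the surplus states go idle
// instead of staying pinned to stale thread bindings.
class ThreadStatePool<S> {
  private final Deque<S> freeList = new ArrayDeque<>();

  synchronized S acquire(Supplier<S> factory) {
    return freeList.isEmpty() ? factory.get() : freeList.pop();
  }

  synchronized void release(S state) {
    freeList.push(state); // most recently used goes back on top
  }
}
{code}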



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5666) Add UninvertingReader

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997343#comment-13997343
 ] 

ASF subversion and git services commented on LUCENE-5666:
-

Commit 1594492 from [~rcmuir] in branch 'dev/branches/lucene5666'
[ https://svn.apache.org/r1594492 ]

LUCENE-5666: support the 2 crazy instances of insanity that are too hard for me 
to fix :(

 Add UninvertingReader
 -

 Key: LUCENE-5666
 URL: https://issues.apache.org/jira/browse/LUCENE-5666
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 5.0


 Currently the fieldcache is not pluggable at all. It would be better if 
 everything used the docvalues apis.
 This would allow people to customize the implementation, extend the classes 
 with custom subclasses with additional stuff, etc etc.
 FieldCache can be accessed via the docvalues apis, using the FilterReader api.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6054) Log progress of transaction log replays

2014-05-14 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6054:
---

 Summary: Log progress of transaction log replays
 Key: SOLR-6054
 URL: https://issues.apache.org/jira/browse/SOLR-6054
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.9, 5.0


There is zero logging of how a transaction log replay is progressing. We should 
add some simple checkpoint based progress information. Logging the size of the 
log file at the beginning would also be useful.
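
A hedged sketch of what checkpoint-based progress logging could look like 
(hasNextLogEntry/applyLogEntry and the interval are illustrative, not the 
actual replay loop):

{code:java}
// Log the tlog size up front, then a checkpoint every N applied entries.
long totalBytes = tlogFile.length();
log.info("Replaying tlog {} ({} bytes)", tlogFile.getName(), totalBytes);

long replayed = 0;
final long checkpointInterval = 10000; // illustrative
while (hasNextLogEntry()) {            // hypothetical iteration over the tlog
  applyLogEntry();                     // hypothetical replay of one update
  if (++replayed % checkpointInterval == 0) {
    log.info("tlog replay progress: {} entries applied", replayed);
  }
}
log.info("tlog replay finished: {} entries applied", replayed);
{code}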



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5283) Fail the build if ant test didn't execute any tests (everything filtered out).

2014-05-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13994270#comment-13994270
 ] 

Michael McCandless commented on LUCENE-5283:


Please don't remove this!  This catches me all the time.

Hang on, when did it break?  I swear I've seen it working recently...

 Fail the build if ant test didn't execute any tests (everything filtered out).
 --

 Key: LUCENE-5283
 URL: https://issues.apache.org/jira/browse/LUCENE-5283
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial
 Fix For: 4.6, 5.0

 Attachments: LUCENE-5283-permgen.patch, LUCENE-5283.patch, 
 LUCENE-5283.patch, LUCENE-5283.patch


 This should be an optional setting that defaults to 'false' (the build 
 proceeds).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6070) Cannot use multiple highlighting components in a single solrconfig

2014-05-14 Thread Elaine Cario (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13996766#comment-13996766
 ] 

Elaine Cario commented on SOLR-6070:


If I commented out the code in question, it all worked fine.  I was going to 
submit a patch, but noticed there is some complexity around handling the 
hard-coded internal COMPONENT_NAME, which is static, so the patch is delayed 
while I work through that (unless of course someone more familiar than me can 
work through it - this is my first attempt at changing Solr!).

 Cannot use multiple highlighting components in a single solrconfig
 --

 Key: SOLR-6070
 URL: https://issues.apache.org/jira/browse/SOLR-6070
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.7.2, 4.8
Reporter: Elaine Cario
  Labels: highlighting

 I'm trying to use both the PostingsHighlighter and the FastVectorHighlighter 
 in the same solrconfig (selection driven by different request handlers), but 
 once I define 2 search components in the config, it always picks the Postings 
 Highlighter (even if I never reference it in any request handler).
 I think the culprit is some specific code in SolrCore.loadSearchComponents(), 
 which overwrites the highlighting component with the contents of the 
 postingshighlight component - so the components map has 2 entries, but they 
 both point to the same highlighting class (the PostingsHighlighter).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5619) TestBackwardsCompatibility needs updatable docvalues

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997504#comment-13997504
 ] 

ASF subversion and git services commented on LUCENE-5619:
-

Commit 1594561 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r1594561 ]

LUCENE-5619: add back-compat index+test for doc-values updates

 TestBackwardsCompatibility needs updatable docvalues
 

 Key: LUCENE-5619
 URL: https://issues.apache.org/jira/browse/LUCENE-5619
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5619.patch, dvupdates.48.zip


 We don't test this at all in TestBackCompat. This is scary!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 10261 - Still Failing!

2014-05-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10261/
Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

All tests passed

Build Log:
[...truncated 20412 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:92: The following 
files contain @author tags, tabs or nocommits:
* solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java

Total time: 53 minutes 12 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.8.0) - Build # 1528 - Still Failing!

2014-05-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1528/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 20538 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:467: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-4.x-MacOSX/build.xml:92: The following 
files contain @author tags, tabs or nocommits:
* solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java

Total time: 109 minutes 6 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 4728 - Failure!

2014-05-14 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/4728/

1 tests failed.
REGRESSION:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

Error Message:
startOffset must be non-negative, and endOffset must be >= startOffset, 
startOffset=8,endOffset=3

Stack Trace:
java.lang.IllegalArgumentException: startOffset must be non-negative, and 
endOffset must be >= startOffset, startOffset=8,endOffset=3
at 
__randomizedtesting.SeedInfo.seed([FB93398492208C31:C67210E5D53291F1]:0)
at 
org.apache.lucene.analysis.tokenattributes.PackedTokenAttributeImpl.setOffset(PackedTokenAttributeImpl.java:107)
at 
org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:68)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:704)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:615)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:513)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:906)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (LUCENE-5644) ThreadAffinityDocumentsWriterThreadPool should clear the bindings on flush

2014-05-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13992647#comment-13992647
 ] 

ASF subversion and git services commented on LUCENE-5644:
-

Commit 1593226 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1593226 ]

LUCENE-5644: switch to simpler LIFO thread to ThreadState allocator during 
indexing

 ThreadAffinityDocumentsWriterThreadPool should clear the bindings on flush
 --

 Key: LUCENE-5644
 URL: https://issues.apache.org/jira/browse/LUCENE-5644
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.8.1, 4.9, 5.0

 Attachments: LUCENE-5644.patch, LUCENE-5644.patch, LUCENE-5644.patch, 
 LUCENE-5644.patch


 This class remembers which thread used which DWPT, but it never clears
 this affinity.  It really should clear it on flush; that way, if the
 number of threads doing indexing has changed, we only use as many DWPTs
 as there are incoming threads.
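
A minimal sketch of the LIFO allocation idea behind the commit, not the actual DocumentsWriterPerThreadPool code; ThreadState here is a hypothetical stand-in for Lucene's real class:

{code}
import java.util.ArrayDeque;
import java.util.Deque;

class LifoThreadStatePool {
  // Hypothetical stand-in for Lucene's ThreadState (a lockable DWPT holder).
  static final class ThreadState {}

  private final Deque<ThreadState> free = new ArrayDeque<ThreadState>();

  // Hand out the most recently released state first (LIFO). If fewer
  // threads are indexing now than before, the states at the bottom of
  // the stack simply go unused, so only as many DWPTs stay active as
  // there are incoming threads.
  synchronized ThreadState obtain() {
    ThreadState state = free.pollFirst();
    return state != null ? state : new ThreadState();
  }

  synchronized void release(ThreadState state) {
    free.addFirst(state);
  }
}
{code}

By contrast, an affinity map that binds each thread to "its" DWPT and survives flushes means a temporary burst of indexing threads can leave extra DWPTs pinned indefinitely.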



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6057) Duplicate background-color in #content #analysis #analysis-result .match (analysis.css)

2014-05-14 Thread Al Krinker (JIRA)
Al Krinker created SOLR-6057:


 Summary: Duplicate background-color in #content #analysis 
#analysis-result .match (analysis.css)
 Key: SOLR-6057
 URL: https://issues.apache.org/jira/browse/SOLR-6057
 Project: Solr
  Issue Type: Bug
Reporter: Al Krinker
Priority: Trivial


Inside solr/webapp/web/css/styles/analysis.css, the #content #analysis 
#analysis-result .match rule has the following content:

#content #analysis #analysis-result .match
{
background-color: #e9eff7;
background-color: #f2f2ff;
}

The background-color property is listed twice; under CSS cascading rules the 
second declaration wins, so the first value (#e9eff7) never takes effect.

Also, the highlight was very hard for me to see. I recommend changing it to 
background-color: #FF;



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6061) Exception when sorting on a date field when using cursorMark parameter.

2014-05-14 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-6061.
--

Resolution: Invalid
  Assignee: Steve Rowe

Whew, good to hear, thanks for bringing closure.

No need to delete, I've just marked the issue as Invalid.

 Exception when sorting on a date field when using cursorMark parameter.
 ---

 Key: SOLR-6061
 URL: https://issues.apache.org/jira/browse/SOLR-6061
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Ramon Salla
Assignee: Steve Rowe
   Labels: cursorMark, datefield

 I get the following exception when using the new cursorMark pagination and 
 sorting on a date field (in this case a TrieDate).
 {code}
 "error": {
   "msg": "java.lang.Long cannot be cast to java.lang.String",
   "trace": "java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String
     at org.apache.solr.schema.FieldType.unmarshalStringSortValue(FieldType.java:993)
     at org.apache.solr.schema.StrField.unmarshalSortValue(StrField.java:92)
     at org.apache.solr.search.CursorMark.parseSerializedTotem(CursorMark.java:232)
     at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:158)
     at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:197)
     at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
     at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
     at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
     at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
     at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
     at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
     at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
     at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
     at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
     at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
     at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
     at org.eclipse.jetty.server.Server.handle(Server.java:368)
     at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
     at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
     at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
     at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
     at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
     at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
     at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
     at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
     at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
     at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
     at java.lang.Thread.run(Thread.java:745)",
   "code": 500
 }
 {code}
 The query is like this:
 {code}
 select?q=*%3A*&cursorMark=AoI%2FITAwNjZmNzA1N2UzZTVjZWY2NDQyMGY5NmY2ZDQ2ZDE0ZGUwODJiMTVkZTBmZWI3ZTk5NGNkZjZmNWViMDEzZDJ4v%2BDu%2Br0C&rows=10&sort=createdAt+asc%2Cid+asc
 {code}
 A related issue may be:
 https://issues.apache.org/jira/browse/SOLR-5920
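
For context (not the confirmed root cause of this report): cursorMark requires a fully deterministic sort ending with the uniqueKey field, and the totem must be regenerated from the start value whenever the sort spec or schema changes; a stale totem produced under a different sort or field type is one way to end up with an unmarshalling ClassCastException like the one above. A minimal SolrJ sketch of the documented cursor protocol, assuming a hypothetical collection at localhost with a createdAt date field and an id uniqueKey:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CursorMarkParams;

public class CursorWalk {
  public static void main(String[] args) throws Exception {
    HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(10);
    // The sort must be deterministic, so it has to end with the uniqueKey.
    q.setSort("createdAt", SolrQuery.ORDER.asc);
    q.addSort("id", SolrQuery.ORDER.asc);

    String cursor = CursorMarkParams.CURSOR_MARK_START; // "*"
    while (true) {
      q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
      QueryResponse rsp = solr.query(q);
      // ... process rsp.getResults() here ...
      String next = rsp.getNextCursorMark();
      if (cursor.equals(next)) {
        break; // cursor did not advance: no more results
      }
      cursor = next;
    }
    solr.shutdown();
  }
}
{code}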



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org