[jira] [Commented] (LUCENE-3459) Change ChainedFilter to use FixedBitSet

2014-02-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896308#comment-13896308
 ] 

Uwe Schindler commented on LUCENE-3459:
---

Cool, thanks! I wanted to do this since 4.0 :( Many thanks!

 Change ChainedFilter to use FixedBitSet
 ---

 Key: LUCENE-3459
 URL: https://issues.apache.org/jira/browse/LUCENE-3459
 Project: Lucene - Core
  Issue Type: Task
  Components: modules/other
Affects Versions: 3.4, 4.0-ALPHA
Reporter: Uwe Schindler
Assignee: Uwe Schindler

 ChainedFilter also uses OpenBitSet(DISI) at the moment. It should be 
 changed to use FixedBitSet as well. There are two issues:
 - It sometimes exposes OpenBitSetDISI in its public API - we should remove 
 those methods, like in BooleanFilter, and break backwards compatibility
 - It allows an XOR operation. This is not yet supported by FixedBitSet, but 
 it's easy to add (like for BooleanFilter). On the other hand, this XOR 
 operation is bogus, as it may mark documents in the BitSet that are deleted, 
 breaking new features like applying Filters down-low (LUCENE-1536). We should 
 maybe remove the XOR operation, or force it to use IR.validDocs() (trunk) or 
 IR.isDeleted()
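
For illustration, chaining filter clauses onto a FixedBitSet could look roughly like the sketch below. This is not the committed patch; the class name and op constants are invented, and XOR (which the issue notes FixedBitSet did not support at the time) is done by per-document toggling.

{code:java}
import java.io.IOException;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

// Sketch: apply one chained clause to an accumulator FixedBitSet.
public final class ChainedBitsSketch {
  public static final int OR = 0, AND = 1, ANDNOT = 2, XOR = 3;

  public static void apply(int op, FixedBitSet result, DocIdSetIterator disi) throws IOException {
    switch (op) {
      case OR:     result.or(disi); break;      // set every doc the iterator matches
      case AND:    result.and(disi); break;     // keep only docs the iterator matches
      case ANDNOT: result.andNot(disi); break;  // clear docs the iterator matches
      case XOR:                                 // toggle each matching doc
        for (int doc = disi.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = disi.nextDoc()) {
          if (result.get(doc)) result.clear(doc); else result.set(doc);
        }
        break;
    }
  }
}
{code}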






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_60-ea-b04) - Build # 3764 - Still Failing!

2014-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3764/
Java: 64bit/jdk1.7.0_60-ea-b04 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([D19514919F7771AE:FE3CB08048F83EAA]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused(BasicHttpSolrServerTest.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 11345 lines...]
   [junit4] Suite: 

[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896358#comment-13896358
 ] 

Uwe Schindler commented on LUCENE-5440:
---

+1 to commit, the @lucene.internal problems can be solved separately!

 Add LongFixedBitSet and replace usage of OpenBitSet
 ---

 Key: LUCENE-5440
 URL: https://issues.apache.org/jira/browse/LUCENE-5440
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, 
 LUCENE-5440.patch, LUCENE-5440.patch


 Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
 wrote a LongFixedBitSet which behaves like FixedBitSet, only it allows managing 
 more than 2.1B bits. It overcomes some issues I've encountered with 
 OpenBitSet, such as the use of set/fastSet as well as the implementation of 
 DocIdSet. I'll post a patch shortly and describe it in more detail.
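
For readers unfamiliar with the idea, the only real difference from FixedBitSet is that bit indices are long while the backing words stay a long[]. A minimal sketch (illustrative only, not the committed LongBitSet code):

{code:java}
// Illustrative long-indexed bit set; not the committed implementation.
public final class LongIndexedBitSetSketch {
  private final long[] words;
  private final long numBits;

  public LongIndexedBitSetSketch(long numBits) {
    this.numBits = numBits;
    // number of 64-bit words needed; the array length itself must still fit in an int
    this.words = new long[(int) ((numBits + 63) >>> 6)];
  }

  public long length() {
    return numBits;
  }

  public boolean get(long index) {
    int word = (int) (index >> 6);    // which 64-bit word holds this bit
    long mask = 1L << (index & 63);   // which bit inside that word
    return (words[word] & mask) != 0;
  }

  public void set(long index) {
    int word = (int) (index >> 6);
    words[word] |= 1L << (index & 63);
  }
}
{code}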






[jira] [Commented] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-02-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896366#comment-13896366
 ] 

Adrien Grand commented on LUCENE-5441:
--

+1 on decoupling DocIdSet from our bit sets. The current patch looks good to me 
but I would also be happy with a dedicated class instead of the anonymous 
wrapper.

bq. I would call it maybe BitsDocIdSet

We have a {{Bits}} interface that provides random access to boolean values. 
Since this class would only work with FixedBitSet, I think Uwe's proposition 
would be more appropriate?
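
For illustration, a dedicated wrapper (rather than the anonymous DocIdSet) could look roughly like this. The class name is invented, since the naming is exactly what is under discussion here, and the sketch assumes FixedBitSet keeps exposing a DocIdSetIterator:

{code:java}
import java.io.IOException;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.FixedBitSet;

// Hypothetical dedicated wrapper; the name is illustrative only.
final class FixedBitSetDocIdSet extends DocIdSet {
  private final FixedBitSet bits;

  FixedBitSetDocIdSet(FixedBitSet bits) {
    this.bits = bits;
  }

  @Override
  public DocIdSetIterator iterator() throws IOException {
    return bits.iterator();   // assumes the bit set still provides an iterator
  }

  @Override
  public Bits bits() {
    return bits;              // random-access view for down-low filtering
  }

  @Override
  public boolean isCacheable() {
    return true;              // an in-memory bit set is always cacheable
  }
}
{code}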

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: 5.0

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" idea in the code. So lots 
 of Filters return just a FixedBitSet, because DocIdSet is the superclass 
 (ideally it would be an interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 in FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, assuming they always return bit sets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on, or check, whether the implementation behind it is a FixedBitSet.






[jira] [Commented] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-02-10 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896393#comment-13896393
 ] 

Shai Erera commented on LUCENE-5441:


OK, but I prefer a shorter name. I see that we have DocIdBitSet, which works on 
top of Java's BitSet. But looks like it's used only in tests today, so maybe we 
hijack it to use FixedBitSet? Why do we need to offer something on top of 
Java's when we have our own?

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: 5.0

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" idea in the code. So lots 
 of Filters return just a FixedBitSet, because DocIdSet is the superclass 
 (ideally it would be an interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 in FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, assuming they always return bit sets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on, or check, whether the implementation behind it is a FixedBitSet.






[jira] [Commented] (SOLR-5609) Don't let cores create slices/named replicas

2014-02-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896397#comment-13896397
 ] 

Noble Paul commented on SOLR-5609:
--

Takeaway from my discussion with [~markrmil...@gmail.com]:

 If legacyCloud=false, delete the collection1 or defaultcol cores when 
they send the STATE commands


 Don't let cores create slices/named replicas
 

 Key: SOLR-5609
 URL: https://issues.apache.org/jira/browse/SOLR-5609
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
 Fix For: 5.0, 4.7


 In SolrCloud, it is possible for a core to come up in any node and register 
 itself with an arbitrary slice/coreNodeName. This is a legacy requirement, and 
 we would like to make it only possible for the Overseer to initiate creation of 
 slices/replicas.
 We plan to introduce cluster-level properties at the top level in 
 /cluster-props.json
 {code:javascript}
 {
 "noSliceOrReplicaByCores":true
 }
 {code}
 If this property is set to true, cores won't be able to send STATE commands 
 with an unknown slice/coreNodeName. Those commands will fail at the Overseer. This 
 is useful for SOLR-5310 / SOLR-5311, where a core/replica is deleted by a 
 command and then comes up later and tries to create a replica/slice.






[jira] [Updated] (SOLR-5704) solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) apis

2014-02-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-5704:


Attachment: SOLR-5704.patch

Updated patch, fixing a couple of test failures due to TestHarness creating a 
ConfigSolr object with a null resource loader.

 solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) 
 apis
 

 Key: SOLR-5704
 URL: https://issues.apache.org/jira/browse/SOLR-5704
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
 Environment: x86_64 Linux
 x86_64 Sun Java 7_u21
Reporter: Jesse Sipprell
Assignee: Alan Woodward
Priority: Minor
  Labels: solr.xml
 Attachments: SOLR-5704.patch, SOLR-5704.patch


 New style core.properties auto-configuration works correctly at startup 
 when {{$\{coreRootDirectory\}}} is specified in {{$\{solr.home\}/solr.xml}}, 
 however it does not work if a core is later created dynamically via either 
 (indirectly) the collection API or (directly) the core API.
 Core creation is always attempted in {{$\{solr.home\}}}.






[jira] [Commented] (SOLR-5704) solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) apis

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896448#comment-13896448
 ] 

ASF subversion and git services commented on SOLR-5704:
---

Commit 1566598 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1566598 ]

SOLR-5704: new cores should be created under coreRootDirectory

 solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) 
 apis
 

 Key: SOLR-5704
 URL: https://issues.apache.org/jira/browse/SOLR-5704
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
 Environment: x86_64 Linux
 x86_64 Sun Java 7_u21
Reporter: Jesse Sipprell
Assignee: Alan Woodward
Priority: Minor
  Labels: solr.xml
 Attachments: SOLR-5704.patch, SOLR-5704.patch


 New style core.properties auto-configuration works correctly at startup 
 when {{$\{coreRootDirectory\}}} is specified in {{$\{solr.home\}/solr.xml}}, 
 however it does not work if a core is later created dynamically via either 
 (indirectly) the collection API or (directly) the core API.
 Core creation is always attempted in {{$\{solr.home\}}}.






[jira] [Commented] (SOLR-5704) solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) apis

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896450#comment-13896450
 ] 

ASF subversion and git services commented on SOLR-5704:
---

Commit 1566600 from [~romseygeek] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1566600 ]

SOLR-5704: new cores should be created under coreRootDirectory

 solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) 
 apis
 

 Key: SOLR-5704
 URL: https://issues.apache.org/jira/browse/SOLR-5704
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
 Environment: x86_64 Linux
 x86_64 Sun Java 7_u21
Reporter: Jesse Sipprell
Assignee: Alan Woodward
Priority: Minor
  Labels: solr.xml
 Attachments: SOLR-5704.patch, SOLR-5704.patch


 New style core.properties auto-configuration works correctly at startup 
 when {{$\{coreRootDirectory\}}} is specified in {{$\{solr.home\}/solr.xml}}, 
 however it does not work if a core is later created dynamically via either 
 (indirectly) the collection API or (directly) the core API.
 Core creation is always attempted in {{$\{solr.home\}}}.






[jira] [Resolved] (SOLR-5704) solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) apis

2014-02-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-5704.
-

   Resolution: Fixed
Fix Version/s: 4.7
   5.0

Thanks Jesse!

 solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) 
 apis
 

 Key: SOLR-5704
 URL: https://issues.apache.org/jira/browse/SOLR-5704
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
 Environment: x86_64 Linux
 x86_64 Sun Java 7_u21
Reporter: Jesse Sipprell
Assignee: Alan Woodward
Priority: Minor
  Labels: solr.xml
 Fix For: 5.0, 4.7

 Attachments: SOLR-5704.patch, SOLR-5704.patch


 New style core.properties auto-configuration works correctly at startup 
 when {{$\{coreRootDirectory\}}} is specified in {{$\{solr.home\}/solr.xml}}, 
 however it does not work if a core is later created dynamically via either 
 (indirectly) the collection API or (directly) the core API.
 Core creation is always attempted in {{$\{solr.home\}}}.






[jira] [Created] (SOLR-5712) Nullpointer while using TermVectorComponent when all the defined 'termVectors' fields of a document are empty.

2014-02-10 Thread Ron van der Vegt (JIRA)
Ron van der Vegt created SOLR-5712:
--

 Summary: Nullpointer while using TermVectorComponent when all the 
defined 'termVectors' fields of a document are empty.
 Key: SOLR-5712
 URL: https://issues.apache.org/jira/browse/SOLR-5712
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6
Reporter: Ron van der Vegt


I defined three fields in my schema:

<field name="l3_keywords" type="text" indexed="true" stored="true"
 multiValued="true" termVectors="true" termPositions="true"
 termOffsets="true" />
<field name="verified_keywords" type="text" indexed="true" stored="true"
 multiValued="true" termVectors="true" termPositions="true"
 termOffsets="true" />
<field name="type" type="text" indexed="true" stored="true"
 multiValued="true" termVectors="true" termPositions="true"
 termOffsets="true" />

I indexed around 3000 documents. In one of the documents all three fields 
above are empty. At index time there is no issue, but when I tried to access my 
term vector component request handler I got a NullPointerException. I removed the 
document where all three fields are empty, indexed again, and the exception 
is gone.
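
For context, a minimal sketch of the failure mode on the Lucene side (illustrative only, not the actual TermVectorComponent code): a document that stored no term vectors at all yields null from the reader, so both the Fields and the per-field Terms need a null check.

{code:java}
import java.io.IOException;
import org.apache.lucene.index.Fields;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

// Illustrative null-safe term vector access; not the Solr handler code.
public final class TermVectorNullCheckSketch {
  public static void dumpVector(IndexReader reader, int docId, String field) throws IOException {
    Fields vectors = reader.getTermVectors(docId);
    if (vectors == null) {
      return;                            // the document stored no term vectors at all
    }
    Terms terms = vectors.terms(field);
    if (terms == null) {
      return;                            // this particular field had no vector
    }
    TermsEnum te = terms.iterator(null); // 4.x signature takes a reuse argument
    for (BytesRef term = te.next(); term != null; term = te.next()) {
      System.out.println(field + ": " + term.utf8ToString());
    }
  }
}
{code}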

My request handler:

<searchComponent name="tvComponent"
 class="org.apache.solr.handler.component.TermVectorComponent"/>
<requestHandler name="/tvrh" class="solr.SearchHandler">
  <lst name="defaults">
    <bool name="tv">true</bool>
  </lst>
  <arr name="last-components">
    <str>tvComponent</str>
  </arr>
</requestHandler>


http://localhost:8983/solr/tvrh?q=*:*

257414 [qtp1106570297-16] ERROR org.apache.solr.servlet.SolrDispatchFilter  – 
null:java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:322)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:710)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:413)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:744)






[jira] [Updated] (SOLR-5476) Overseer Role for nodes

2014-02-10 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5476:
-

Attachment: SOLR-5476.patch

Another test case that kills an existing overseer and checks that another one 
takes over.

 Overseer Role for nodes
 ---

 Key: SOLR-5476
 URL: https://issues.apache.org/jira/browse/SOLR-5476
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.7

 Attachments: SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, 
 SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch


 In a very large cluster the Overseer is likely to be overloaded. If the same 
 node is serving a few other shards it can lead to the Overseer getting slowed 
 down due to GC pauses, or simply too much work. If the cluster is 
 really large, it is possible to dedicate high-end h/w for Overseers.
 It works as a new collection admin command
 command=addrole&role=overseer&node=192.168.1.5:8983_solr
 This results in the creation of an entry in /roles.json in ZK which would 
 look like the following
 {code:javascript}
 {
 "overseer" : ["192.168.1.5:8983_solr"]
 }
 {code}
 If a node is designated for overseer it gets preference over others when 
 overseer election takes place. If no designated servers are available, another 
 random node would become the Overseer.
 Later on, if one of the designated nodes is brought up, it would take over 
 the Overseer role from the current Overseer to become the Overseer of the 
 system.






[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896496#comment-13896496
 ] 

ASF subversion and git services commented on SOLR-5476:
---

Commit 1566620 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1566620 ]

SOLR-5476 added testcase

 Overseer Role for nodes
 ---

 Key: SOLR-5476
 URL: https://issues.apache.org/jira/browse/SOLR-5476
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.7

 Attachments: SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, 
 SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch


 In a very large cluster the Overseer is likely to be overloaded. If the same 
 node is serving a few other shards it can lead to the Overseer getting slowed 
 down due to GC pauses, or simply too much work. If the cluster is 
 really large, it is possible to dedicate high-end h/w for Overseers.
 It works as a new collection admin command
 command=addrole&role=overseer&node=192.168.1.5:8983_solr
 This results in the creation of an entry in /roles.json in ZK which would 
 look like the following
 {code:javascript}
 {
 "overseer" : ["192.168.1.5:8983_solr"]
 }
 {code}
 If a node is designated for overseer it gets preference over others when 
 overseer election takes place. If no designated servers are available, another 
 random node would become the Overseer.
 Later on, if one of the designated nodes is brought up, it would take over 
 the Overseer role from the current Overseer to become the Overseer of the 
 system.






[jira] [Commented] (SOLR-5476) Overseer Role for nodes

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896513#comment-13896513
 ] 

ASF subversion and git services commented on SOLR-5476:
---

Commit 1566624 from [~noble.paul] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1566624 ]

SOLR-5476 added testcase

 Overseer Role for nodes
 ---

 Key: SOLR-5476
 URL: https://issues.apache.org/jira/browse/SOLR-5476
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.7

 Attachments: SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, 
 SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch, SOLR-5476.patch


 In a very large cluster the Overseer is likely to be overloaded. If the same 
 node is serving a few other shards it can lead to the Overseer getting slowed 
 down due to GC pauses, or simply too much work. If the cluster is 
 really large, it is possible to dedicate high-end h/w for Overseers.
 It works as a new collection admin command
 command=addrole&role=overseer&node=192.168.1.5:8983_solr
 This results in the creation of an entry in /roles.json in ZK which would 
 look like the following
 {code:javascript}
 {
 "overseer" : ["192.168.1.5:8983_solr"]
 }
 {code}
 If a node is designated for overseer it gets preference over others when 
 overseer election takes place. If no designated servers are available, another 
 random node would become the Overseer.
 Later on, if one of the designated nodes is brought up, it would take over 
 the Overseer role from the current Overseer to become the Overseer of the 
 system.






[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896700#comment-13896700
 ] 

ASF subversion and git services commented on LUCENE-5440:
-

Commit 152 from [~shaie] in branch 'dev/trunk'
[ https://svn.apache.org/r152 ]

LUCENE-5440: Add LongBitSet to handle large number of bits; replace usage of 
OpenBitSet by FixedBitSet/LongBitSet

 Add LongFixedBitSet and replace usage of OpenBitSet
 ---

 Key: LUCENE-5440
 URL: https://issues.apache.org/jira/browse/LUCENE-5440
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, 
 LUCENE-5440.patch, LUCENE-5440.patch


 Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
 wrote a LongFixedBitSet which behaves like FixedBitSet, only it allows managing 
 more than 2.1B bits. It overcomes some issues I've encountered with 
 OpenBitSet, such as the use of set/fastSet as well as the implementation of 
 DocIdSet. I'll post a patch shortly and describe it in more detail.






[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896716#comment-13896716
 ] 

ASF subversion and git services commented on LUCENE-5440:
-

Commit 1566670 from [~shaie] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1566670 ]

LUCENE-5440: Add LongBitSet to handle large number of bits; replace usage of 
OpenBitSet by FixedBitSet/LongBitSet

 Add LongFixedBitSet and replace usage of OpenBitSet
 ---

 Key: LUCENE-5440
 URL: https://issues.apache.org/jira/browse/LUCENE-5440
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, 
 LUCENE-5440.patch, LUCENE-5440.patch


 Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
 wrote a LongFixedBitSet which behaves like FixedBitSet, only it allows managing 
 more than 2.1B bits. It overcomes some issues I've encountered with 
 OpenBitSet, such as the use of set/fastSet as well as the implementation of 
 DocIdSet. I'll post a patch shortly and describe it in more detail.






[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-10 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896717#comment-13896717
 ] 

Shai Erera commented on LUCENE-5440:


Committed this patch to trunk and 4x. I will work on the solr/ code ... at 
least, I'll make a best effort :).

 Add LongFixedBitSet and replace usage of OpenBitSet
 ---

 Key: LUCENE-5440
 URL: https://issues.apache.org/jira/browse/LUCENE-5440
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, 
 LUCENE-5440.patch, LUCENE-5440.patch


 Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
 wrote a LongFixedBitSet which behaves like FixedBitSet, only it allows managing 
 more than 2.1B bits. It overcomes some issues I've encountered with 
 OpenBitSet, such as the use of set/fastSet as well as the implementation of 
 DocIdSet. I'll post a patch shortly and describe it in more detail.






[jira] [Commented] (SOLR-5704) solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) apis

2014-02-10 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896715#comment-13896715
 ] 

Erick Erickson commented on SOLR-5704:
--

Thanks guys!

 solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) 
 apis
 

 Key: SOLR-5704
 URL: https://issues.apache.org/jira/browse/SOLR-5704
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
 Environment: x86_64 Linux
 x86_64 Sun Java 7_u21
Reporter: Jesse Sipprell
Assignee: Alan Woodward
Priority: Minor
  Labels: solr.xml
 Fix For: 5.0, 4.7

 Attachments: SOLR-5704.patch, SOLR-5704.patch


 New style core.properties auto-configuration works correctly at startup 
 when {{$\{coreRootDirectory\}}} is specified in {{$\{solr.home\}/solr.xml}}, 
 however it does not work if a core is later created dynamically via either 
 (indirectly) the collection API or (directly) the core API.
 Core creation is always attempted in {{$\{solr.home\}}}.






[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.7.0_51) - Build # 9325 - Failure!

2014-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9325/
Java: 64bit/jdk1.7.0_51 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 7350 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/test/temp/junit4-J1-20140210_171831_397.sysout
   [junit4]  JVM J1: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7fed8569efab, pid=15145, 
tid=140657468221184
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 
1.7.0_51-b13)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03 mixed mode 
linux-amd64 )
   [junit4] # Problematic frame:
   [junit4] # J  
org.apache.lucene.codecs.compressing.CompressingTermVectorsReader.readPositions(IILorg/apache/lucene/util/packed/PackedInts$Reader;Lorg/apache/lucene/util/packed/PackedInts$Reader;[III[[I)[[I
   [junit4] #
   [junit4] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try ulimit -c unlimited before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/test/J1/hs_err_pid15145.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.sun.com/bugreport/crash.jsp
   [junit4] #
   [junit4]  JVM J1: EOF 

[...truncated 20 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/var/lib/jenkins/tools/java/64bit/jdk1.7.0_51/jre/bin/java 
-XX:-UseCompressedOops -XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps 
-Dtests.prefix=tests -Dtests.seed=F5E506995138D958 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.7 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/test/temp
 
-Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.7-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.disableHdfs=true -Dfile.encoding=UTF-8 -classpath 

[jira] [Updated] (LUCENE-5438) add near-real-time replication

2014-02-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5438:
---

Attachment: LUCENE-5438.patch

New patch, improving the test case.  I added some abstraction (Replica class), 
and the test now randomly commits/stops/starts replicas.

 add near-real-time replication
 --

 Key: LUCENE-5438
 URL: https://issues.apache.org/jira/browse/LUCENE-5438
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/replicator
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.7

 Attachments: LUCENE-5438.patch, LUCENE-5438.patch


 Lucene's replication module makes it easy to incrementally sync index
 changes from a master index to any number of replicas, and it
 handles/abstracts all the underlying complexity of holding a
 time-expiring snapshot, finding which files need copying, syncing more
 than one index (e.g., taxo + index), etc.
 But today you must first commit on the master, and then again the
 replica's copied files are fsync'd, because the code operates on
 commit points.  But this isn't technically necessary, and it mixes
 up durability and fast turnaround time.
 Long ago we added near-real-time readers to Lucene, for the same
 reason: you shouldn't have to commit just to see the new index
 changes.
 I think we should do the same for replication: allow the new segments
 to be copied out to replica(s), and new NRT readers to be opened, to
 fully decouple committing from visibility.  This way apps can then
 separately choose when to replicate (for freshness), and when to
 commit (for durability).
 I think for some apps this could be a compelling alternative to the
 "re-index all documents on each shard" approach that Solr Cloud /
 ElasticSearch implement today, and it may also mean that the
 transaction log can remain external to / above the cluster.






[jira] [Commented] (SOLR-5671) Heisenbug #2 in DistribCursorPagingTest: full walk returns one fewer doc than expected

2014-02-10 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896840#comment-13896840
 ] 

Hoss Man commented on SOLR-5671:


Given this...

bq. All the failures were using either Lucene41 or Lucene42 codec

... and the fact that (as far as I can tell) we haven't seen any more failures 
of this type since the codec constraints were added to the test in SOLR-5652, 
my hunch is that this is the same test bug manifesting itself in a slightly 
different way: just as the non-deterministic behavior of the docvalues w/missing 
value caused a doc to appear twice in SOLR-5652, shifting from its original 
position to a later position on a subsequent request, it's easily conceivable 
that the same non-deterministic behavior could cause a doc to shift to an earlier 
position -- prior to the current cursor page -- on subsequent requests, causing 
that doc to be skipped entirely.

I think we can go ahead and resolve this -- if it does pop up again we can 
re-open (and we'll have the better assertion failure message Steve added to 
help diagnose).

 Heisenbug #2 in DistribCursorPagingTest: full walk returns one fewer doc than 
 expected 
 ---

 Key: SOLR-5671
 URL: https://issues.apache.org/jira/browse/SOLR-5671
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Steve Rowe
 Fix For: 5.0, 4.7


 Twice on Uwe's Jenkins, DistribCursorPagingTest has paged through a small 
 number of indexed docs and retrieved one fewer doc than the number of indexed 
 docs.  Both of these failures were on trunk on Windows:
 http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3708/
 http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3713/  
 I've also seen this twice on trunk on my OS X laptop (out of 875 trials).
 None of the seeds have reproduced for me.
 All the failures were using either Lucene41 or Lucene42 codec






[jira] [Resolved] (SOLR-5671) Heisenbug #2 in DistribCursorPagingTest: full walk returns one fewer doc than expected

2014-02-10 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5671.


   Resolution: Fixed
Fix Version/s: 4.7
   5.0

 Heisenbug #2 in DistribCursorPagingTest: full walk returns one fewer doc than 
 expected 
 ---

 Key: SOLR-5671
 URL: https://issues.apache.org/jira/browse/SOLR-5671
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Steve Rowe
 Fix For: 5.0, 4.7


 Twice on Uwe's Jenkins, DistribCursorPagingTest has paged through a small 
 number of indexed docs and retrieved one fewer doc than the number of indexed 
 docs.  Both of these failures were on trunk on Windows:
 http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3708/
 http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3713/  
 I've also seen this twice on trunk on my OS X laptop (out of 875 trials).
 None of the seeds have reproduced for me.
 All the failures were using either Lucene41 or Lucene42 codec






[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1101: POMs out of sync

2014-02-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1101/

All tests passed

Build Log:
[...truncated 48822 lines...]
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -

[...truncated 373 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:476: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:176: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/extra-targets.xml:77:
 Java returned: 1

Total time: 46 minutes 12 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Assigned] (SOLR-5561) DefaultSimilarity 'init' method is not called, if the similarity plugin is not explicitly declared in the schema

2014-02-10 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-5561:
--

Assignee: Hoss Man

 DefaultSimilarity 'init' method is not called, if the similarity plugin is 
 not explicitly declared in the schema
 

 Key: SOLR-5561
 URL: https://issues.apache.org/jira/browse/SOLR-5561
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.6
Reporter: Isaac Hebsh
Assignee: Hoss Man
  Labels: similarity
 Fix For: 5.0, 4.7

 Attachments: SOLR-5561.patch, SOLR-5561.patch, SOLR-5561.patch


 As a result, discountOverlaps is not initialized to true, and the default 
 behavior is that multiple terms at the same position DO affect the fieldNorm. 
 This is not the intended default.
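
For illustration (a sketch, not the actual Solr fix): the flag in question lives on DefaultSimilarity, and the intended default is that overlap tokens (tokens at the same position, e.g. synonyms) do not count towards the length norm.

{code:java}
import org.apache.lucene.search.similarities.DefaultSimilarity;

// Illustrative only: roughly the state the similarity factory's init is expected to arrange.
public final class DiscountOverlapsSketch {
  public static DefaultSimilarity intendedDefault() {
    DefaultSimilarity sim = new DefaultSimilarity();
    sim.setDiscountOverlaps(true);   // overlap tokens no longer contribute to the fieldNorm
    return sim;
  }
}
{code}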






[jira] [Updated] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-02-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5441:
--

Attachment: LUCENE-5441.patch

New patch after LUCENE-5440 was committed. This includes the following extra 
features:
- FixedBitDocIdSet was added instead of the asDocIdSet() method.
- Removed the OpenBitSetDISI optimizations.
- The and/or/andNot/xor(DISI) methods are no longer available in FixedBitSet. 
Those are only available in FixedBitDocIdSet. This leads to some additional 
wrapping, or less wrapping, depending on where it was used before. I did not review 
all the automatic changes I made; surely some private method 
signatures could be changed in ChainedFilter/BooleanFilter to reduce wrapping.
- I also optimized FixedBitDocIdSet.xor(DISI) to use bitwise XOR if the 
iterator is a FixedBitSet one. This was missing in Shai's patch.

The current code does not change the DocIdSet abstract interface to support 
in-place and/or/... (especially as this is only supported by bitsets, but not 
the other DIS impls?!). I am also not yet happy with the current state of this 
DIS wrapping. In any case - FixedBitSet is now free of any DocIdSet uses! 
It's just a BitSet, nothing more - like the Long one.
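
As a rough illustration of the bitwise-XOR shortcut mentioned above (a sketch only; the class and method names are invented, not the patch code):

{code:java}
import java.io.IOException;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

// Sketch of the two XOR paths: word-wise when the other side is also a
// FixedBitSet, per-document toggling when only an iterator is available.
public final class XorSketch {
  // fast path: XOR the underlying long[] words directly
  public static void xorBits(FixedBitSet dest, FixedBitSet other) {
    long[] a = dest.getBits();
    long[] b = other.getBits();
    int numWords = Math.min(a.length, b.length);
    for (int i = 0; i < numWords; i++) {
      a[i] ^= b[i];
    }
  }

  // fallback: toggle each document the iterator returns
  public static void xorDisi(FixedBitSet dest, DocIdSetIterator disi) throws IOException {
    for (int doc = disi.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = disi.nextDoc()) {
      if (dest.get(doc)) {
        dest.clear(doc);
      } else {
        dest.set(doc);
      }
    }
  }
}
{code}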

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: 5.0

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" idea in the code. So lots 
 of Filters return just a FixedBitSet, because DocIdSet is the superclass 
 (ideally it would be an interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 in FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, assuming they always return bit sets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on, or check, whether the implementation behind it is a FixedBitSet.






[jira] [Commented] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-02-10 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896944#comment-13896944
 ] 

Shai Erera commented on LUCENE-5441:


Uwe, just a wild thought -- since you already break back-compat by making FBS 
not extend DocIdSet, can we also rename it to IntBitSet? Migrating your code 
following those two changes is equally trivial... if not, then how about we 
keep FBS as-is, deprecated, and do all this work on a new IntBitSet? I prefer 
the first approach since it means less work (and also I think that writing a 
Filter is quite an expert task).

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: 5.0

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" idea in the code. So lots 
 of Filters return just a FixedBitSet, because DocIdSet is the superclass 
 (ideally it would be an interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 in FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, assuming they always return bit sets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on, or check, whether the implementation behind it is a FixedBitSet.






[jira] [Commented] (SOLR-5652) Heisenbug in DistribCursorPagingTest: walk already seen ...

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896992#comment-13896992
 ] 

ASF subversion and git services commented on SOLR-5652:
---

Commit 1566741 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1566741 ]

SOLR-5652: in cursor tests, don't sort on 'plain' docvalue fields (i.e., those 
using standard Lucene sorting for missing values) when the codec doesn't 
support missing docvalues.

 Heisenbug in DistribCursorPagingTest: walk already seen ...
 -

 Key: SOLR-5652
 URL: https://issues.apache.org/jira/browse/SOLR-5652
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: 129.log, 372.log, 
 SOLR-5652-dont-sort-on-any-dv-fields-when-codec-doesnt-support-missing-dvs.patch,
  SOLR-5652.codec.skip.dv.patch, SOLR-5652.nocommit.patch, SOLR-5652.patch, 
 bin_dv.post-patch.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1200.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1217.log.txt, 
 str_dv.post-patch.log.txt


 Several times now, Uwe's jenkins has encountered a "walk already seen ..." 
 assertion failure from DistribCursorPagingTest that I've been unable to 
 fathom, let alone reproduce (although sarowe was able to trigger a similar, 
 non-reproducible-seed failure on his machine).
 Using this as a tracking issue to try and make sense of it.
 Summary of things noticed so far:
 * So far only seen on http://jenkins.thetaphi.de & sarowe's mac
 * So far seen on MacOSX and Linux
 * So far seen on branch 4x and trunk
 * So far seen on Java6, Java7, and Java8
 * fails occurred in first block of randomized testing: 
 ** we've indexed a small number of randomized docs
 ** we're explicitly looping over every field and sorting in both directions
 * fails were sorting on one of the \*_dv_last or \*_dv_first fields 
 (docValues=true, either sortMissingLast=true OR sortMissingFirst=true) 
 ** for desc sorts, sort on same field asc has worked fine just before this 
 (fields are in arbitrary order, but asc always tried before desc)
 ** sorting on some other random fields has sometimes been tried before this 
 and worked
 (specifics of each failure seen in the wild recorded in comments)






[jira] [Commented] (SOLR-5652) Heisenbug in DistribCursorPagingTest: walk already seen ...

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896996#comment-13896996
 ] 

ASF subversion and git services commented on SOLR-5652:
---

Commit 1566742 from [~steve_rowe] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1566742 ]

SOLR-5652: in cursor tests, don't sort on 'plain' docvalue fields (i.e., those 
using standard Lucene sorting for missing values) when the codec doesn't 
support missing docvalues. (merged trunk r1566741)

 Heisenbug in DistribCursorPagingTest: walk already seen ...
 -

 Key: SOLR-5652
 URL: https://issues.apache.org/jira/browse/SOLR-5652
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Attachments: 129.log, 372.log, 
 SOLR-5652-dont-sort-on-any-dv-fields-when-codec-doesnt-support-missing-dvs.patch,
  SOLR-5652.codec.skip.dv.patch, SOLR-5652.nocommit.patch, SOLR-5652.patch, 
 bin_dv.post-patch.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1200.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1217.log.txt, 
 str_dv.post-patch.log.txt


 Several times now, Uwe's jenkins has encountered a "walk already seen ..." 
 assertion failure from DistribCursorPagingTest that I've been unable to 
 fathom, let alone reproduce (although sarowe was able to trigger a similar, 
 non-reproducible-seed failure on his machine).
 Using this as a tracking issue to try and make sense of it.
 Summary of things noticed so far:
 * So far only seen on http://jenkins.thetaphi.de & sarowe's mac
 * So far seen on MacOSX and Linux
 * So far seen on branch 4x and trunk
 * So far seen on Java6, Java7, and Java8
 * fails occurred in first block of randomized testing: 
 ** we've indexed a small number of randomized docs
 ** we're explicitly looping over every field and sorting in both directions
 * fails were sorting on one of the \*_dv_last or \*_dv_first fields 
 (docValues=true, either sortMissingLast=true OR sortMissingFirst=true) 
 ** for desc sorts, sort on same field asc has worked fine just before this 
 (fields are in arbitrary order, but asc always tried before desc)
 ** sorting on some other random fields has sometimes been tried before this 
 and worked
 (specifics of each failure seen in the wild recorded in comments)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5652) Heisenbug in DistribCursorPagingTest: walk already seen ...

2014-02-10 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-5652.
--

   Resolution: Fixed
Fix Version/s: 4.7

I committed my patch to also skip sorting on {{\*_dv}} fields when the codec 
doesn't support missing docvalues, to trunk and branch_4x.

Resolving, as I think we have now addressed all of the 
missing-docvalues-related failures we've seen.


 Heisenbug in DistribCursorPagingTest: walk already seen ...
 -

 Key: SOLR-5652
 URL: https://issues.apache.org/jira/browse/SOLR-5652
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.7

 Attachments: 129.log, 372.log, 
 SOLR-5652-dont-sort-on-any-dv-fields-when-codec-doesnt-support-missing-dvs.patch,
  SOLR-5652.codec.skip.dv.patch, SOLR-5652.nocommit.patch, SOLR-5652.patch, 
 bin_dv.post-patch.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1200.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1217.log.txt, 
 str_dv.post-patch.log.txt


 Several times now, Uwe's jenkins has encountered a walk already seen ... 
 assertion failure from DistribCursorPagingTest that I've been unable to 
 fathom, let alone reproduce (although sarowe was able to trigger a similar, 
 non-reproducible seed, failure on his machine)
 Using this as a tracking issue to try and make sense of it.
 Summary of things noticed so far:
 * So far only seen on http://jenkins.thetaphi.de & sarowe's mac
 * So far seen on MacOSX and Linux
 * So far seen on branch 4x and trunk
 * So far seen on Java6, Java7, and Java8
 * fails occurred in first block of randomized testing: 
 ** we've indexed a small number of randomized docs
 ** we're explicitly looping over every field and sorting in both directions
 * fails were sorting on one of the \*_dv_last or \*_dv_first fields 
 (docValues=true, either sortMissingLast=true OR sortMissingFirst=true) 
 ** for desc sorts, sort on same field asc has worked fine just before this 
 (fields are in arbitrary order, but asc always tried before desc)
 ** sorting on some other random fields has sometimes been tried before this 
 and worked
 (specifics of each failure seen in the wild recorded in comments)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5709) Highlighting grouped duplicate docs from different shards with group.limit > 1 throws ArrayIndexOutOfBoundsException

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897007#comment-13897007
 ] 

ASF subversion and git services commented on SOLR-5709:
---

Commit 1566743 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1566743 ]

SOLR-5709: Highlighting grouped duplicate docs from different shards with 
group.limit > 1 throws ArrayIndexOutOfBoundsException

 Highlighting grouped duplicate docs from different shards with group.limit > 
 1 throws ArrayIndexOutOfBoundsException
 

 Key: SOLR-5709
 URL: https://issues.apache.org/jira/browse/SOLR-5709
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.3, 4.4, 4.5, 4.6, 5.0
Reporter: Steve Rowe
Assignee: Steve Rowe
 Attachments: SOLR-5709.patch


 In a sharded (non-SolrCloud) deployment, if you index a document with the 
 same unique key value into more than one shard, and then try to highlight 
 grouped docs with more than one doc per group, where the grouped docs contain 
 at least one duplicate doc pair, you get an AIOOBE.
 Here's the stack trace I got from such a situation, with 1 doc indexed into 
 each shard in a 2-shard index, with {{group.limit=2}}:
 {noformat}
 ERROR null:java.lang.ArrayIndexOutOfBoundsException: 1
   at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:185)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:328)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:758)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:412)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:136)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at 
 org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
   at java.lang.Thread.run(Thread.java:724)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For 

[jira] [Commented] (SOLR-5709) Highlighting grouped duplicate docs from different shards with group.limit > 1 throws ArrayIndexOutOfBoundsException

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897014#comment-13897014
 ] 

ASF subversion and git services commented on SOLR-5709:
---

Commit 1566746 from [~steve_rowe] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1566746 ]

SOLR-5709: Highlighting grouped duplicate docs from different shards with 
group.limit > 1 throws ArrayIndexOutOfBoundsException (merged trunk r1566743)

 Highlighting grouped duplicate docs from different shards with group.limit > 
 1 throws ArrayIndexOutOfBoundsException
 

 Key: SOLR-5709
 URL: https://issues.apache.org/jira/browse/SOLR-5709
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.3, 4.4, 4.5, 4.6, 5.0
Reporter: Steve Rowe
Assignee: Steve Rowe
 Fix For: 5.0, 4.7

 Attachments: SOLR-5709.patch


 In a sharded (non-SolrCloud) deployment, if you index a document with the 
 same unique key value into more than one shard, and then try to highlight 
 grouped docs with more than one doc per group, where the grouped docs contain 
 at least one duplicate doc pair, you get an AIOOBE.
 Here's the stack trace I got from such a situation, with 1 doc indexed into 
 each shard in a 2-shard index, with {{group.limit=2}}:
 {noformat}
 ERROR null:java.lang.ArrayIndexOutOfBoundsException: 1
   at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:185)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:328)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:758)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:412)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:136)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at 
 org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
   at java.lang.Thread.run(Thread.java:724)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-

[jira] [Resolved] (SOLR-5709) Highlighting grouped duplicate docs from different shards with group.limit > 1 throws ArrayIndexOutOfBoundsException

2014-02-10 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-5709.
--

   Resolution: Fixed
Fix Version/s: 4.7
   5.0

Committed to branch_4x and trunk.

 Highlighting grouped duplicate docs from different shards with group.limit > 
 1 throws ArrayIndexOutOfBoundsException
 

 Key: SOLR-5709
 URL: https://issues.apache.org/jira/browse/SOLR-5709
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.3, 4.4, 4.5, 4.6, 5.0
Reporter: Steve Rowe
Assignee: Steve Rowe
 Fix For: 5.0, 4.7

 Attachments: SOLR-5709.patch


 In a sharded (non-SolrCloud) deployment, if you index a document with the 
 same unique key value into more than one shard, and then try to highlight 
 grouped docs with more than one doc per group, where the grouped docs contain 
 at least one duplicate doc pair, you get an AIOOBE.
 Here's the stack trace I got from such a situation, with 1 doc indexed into 
 each shard in a 2-shard index, with {{group.limit=2}}:
 {noformat}
 ERROR null:java.lang.ArrayIndexOutOfBoundsException: 1
   at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:185)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:328)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:758)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:412)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:136)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at 
 org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
   at java.lang.Thread.run(Thread.java:724)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-5652) Heisenbug in DistribCursorPagingTest: walk already seen ...

2014-02-10 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-5652:



bq. I committed my patch to also skip sorting on *_dv fields when the codec 
doesn't support missing docvalues, to trunk and branch_4x.

thanks steve.

bq. Resolving, as I think we have now addressed all of the 
missing-docvalues-related failures we've seen.

I actually want to keep this one open a bit ... give it another week or two, 
and assuming no more failures i want to dial back on some of the extra logging 
we added here to be less verbose in the non-failure case.

 Heisenbug in DistribCursorPagingTest: walk already seen ...
 -

 Key: SOLR-5652
 URL: https://issues.apache.org/jira/browse/SOLR-5652
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.7

 Attachments: 129.log, 372.log, 
 SOLR-5652-dont-sort-on-any-dv-fields-when-codec-doesnt-support-missing-dvs.patch,
  SOLR-5652.codec.skip.dv.patch, SOLR-5652.nocommit.patch, SOLR-5652.patch, 
 bin_dv.post-patch.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1200.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1217.log.txt, 
 str_dv.post-patch.log.txt


 Several times now, Uwe's jenkins has encountered a walk already seen ... 
 assertion failure from DistribCursorPagingTest that I've been unable to 
 fathom, let alone reproduce (although sarowe was able to trigger a similar, 
 non-reproducible seed, failure on his machine)
 Using this as a tracking issue to try and make sense of it.
 Summary of things noticed so far:
 * So far only seen on http://jenkins.thetaphi.de & sarowe's mac
 * So far seen on MacOSX and Linux
 * So far seen on branch 4x and trunk
 * So far seen on Java6, Java7, and Java8
 * fails occurred in first block of randomized testing: 
 ** we've indexed a small number of randomized docs
 ** we're explicitly looping over every field and sorting in both directions
 * fails were sorting on one of the \*_dv_last or \*_dv_first fields 
 (docValues=true, either sortMissingLast=true OR sortMissingFirst=true) 
 ** for desc sorts, sort on same field asc has worked fine just before this 
 (fields are in arbitrary order, but asc always tried before desc)
 ** sorting on some other random fields has sometimes been tried before this 
 and worked
 (specifics of each failure seen in the wild recorded in comments)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5652) Heisenbug in DistribCursorPagingTest: walk already seen ...

2014-02-10 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897043#comment-13897043
 ] 

Steve Rowe commented on SOLR-5652:
--

{quote}
bq. Resolving, as I think we have now addressed all of the 
missing-docvalues-related failures we've seen.

I actually want to keep this one open a bit ... give it another week or two, 
and assuming no more failures i want to dial back on some of the extra logging 
we added here to be less verbose in the non-failure case.
{quote}

Sure, make sense.

 Heisenbug in DistribCursorPagingTest: walk already seen ...
 -

 Key: SOLR-5652
 URL: https://issues.apache.org/jira/browse/SOLR-5652
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.7

 Attachments: 129.log, 372.log, 
 SOLR-5652-dont-sort-on-any-dv-fields-when-codec-doesnt-support-missing-dvs.patch,
  SOLR-5652.codec.skip.dv.patch, SOLR-5652.nocommit.patch, SOLR-5652.patch, 
 bin_dv.post-patch.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1200.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1217.log.txt, 
 str_dv.post-patch.log.txt


 Several times now, Uwe's jenkins has encountered a walk already seen ... 
 assertion failure from DistribCursorPagingTest that I've been unable to 
 fathom, let alone reproduce (although sarowe was able to trigger a similar, 
 non-reproducible seed, failure on his machine)
 Using this as a tracking issue to try and make sense of it.
 Summary of things noticed so far:
 * So far only seen on http://jenkins.thetaphi.de & sarowe's mac
 * So far seen on MacOSX and Linux
 * So far seen on branch 4x and trunk
 * So far seen on Java6, Java7, and Java8
 * fails occurred in first block of randomized testing: 
 ** we've indexed a small number of randomized docs
 ** we're explicitly looping over every field and sorting in both directions
 * fails were sorting on one of the \*_dv_last or \*_dv_first fields 
 (docValues=true, either sortMissingLast=true OR sortMissingFirst=true) 
 ** for desc sorts, sort on same field asc has worked fine just before this 
 (fields are in arbitrary order, but asc always tried before desc)
 ** sorting on some other random fields has sometimes been tried before this 
 and worked
 (specifics of each failure seen in the wild recorded in comments)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5652) Heisenbug in DistribCursorPagingTest: walk already seen ...

2014-02-10 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897043#comment-13897043
 ] 

Steve Rowe edited comment on SOLR-5652 at 2/10/14 9:31 PM:
---

{quote}
bq. Resolving, as I think we have now addressed all of the 
missing-docvalues-related failures we've seen.

I actually want to keep this one open a bit ... give it another week or two, 
and assuming no more failures i want to dial back on some of the extra logging 
we added here to be less verbose in the non-failure case.
{quote}

Sure, makes sense.


was (Author: steve_rowe):
{quote}
bq. Resolving, as I think we have now addressed all of the 
missing-docvalues-related failures we've seen.

I actually want to keep this one open a bit ... give it another week or two, 
and assuming no more failures i want to dial back on some of the extra logging 
we added here to be less verbose in the non-failure case.
{quote}

Sure, make sense.

 Heisenbug in DistribCursorPagingTest: walk already seen ...
 -

 Key: SOLR-5652
 URL: https://issues.apache.org/jira/browse/SOLR-5652
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.7

 Attachments: 129.log, 372.log, 
 SOLR-5652-dont-sort-on-any-dv-fields-when-codec-doesnt-support-missing-dvs.patch,
  SOLR-5652.codec.skip.dv.patch, SOLR-5652.nocommit.patch, SOLR-5652.patch, 
 bin_dv.post-patch.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1200.log.txt, 
 jenkins.thetaphi.de_Lucene-Solr-4.x-MacOSX_1217.log.txt, 
 str_dv.post-patch.log.txt


 Several times now, Uwe's jenkins has encountered a walk already seen ... 
 assertion failure from DistribCursorPagingTest that I've been unable to 
 fathom, let alone reproduce (although sarowe was able to trigger a similar, 
 non-reproducible seed, failure on his machine)
 Using this as a tracking issue to try and make sense of it.
 Summary of things noticed so far:
 * So far only seen on http://jenkins.thetaphi.de & sarowe's mac
 * So far seen on MacOSX and Linux
 * So far seen on branch 4x and trunk
 * So far seen on Java6, Java7, and Java8
 * fails occurred in first block of randomized testing: 
 ** we've indexed a small number of randomized docs
 ** we're explicitly looping over every field and sorting in both directions
 * fails were sorting on one of the \*_dv_last or \*_dv_first fields 
 (docValues=true, either sortMissingLast=true OR sortMissingFirst=true) 
 ** for desc sorts, sort on same field asc has worked fine just before this 
 (fields are in arbitrary order, but asc always tried before desc)
 ** sorting on some other random fields has sometimes been tried before this 
 and worked
 (specifics of each failure seen in the wild recorded in comments)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5536) Add ValueSource collapse criteria to CollapsingQParsingPlugin

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897047#comment-13897047
 ] 

ASF subversion and git services commented on SOLR-5536:
---

Commit 1566754 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1566754 ]

SOLR-5536: Added a proper ValueSource context

 Add ValueSource collapse criteria to CollapsingQParsingPlugin
 -

 Key: SOLR-5536
 URL: https://issues.apache.org/jira/browse/SOLR-5536
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.6
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: SOLR-5536.patch, SOLR-5536.patch, SOLR-5536.patch, 
 SOLR-5536.patch


 It would be useful for the CollapsingQParserPlugin to support ValueSource 
 collapse criteria.
 Proposed syntax:
 {code}
 fq={!collapse field=collapse_field max=value_source}
 {code}
 This ticket will also introduce a function query called cscore,  which will 
 return the score of the current document being collapsed. This will allow 
 score to be incorporated into collapse criteria functions.
 A simple example of the cscore usage:
 {code}
 fq={!collapse field=collapse_field max=sum(cscore(), field(x))}
 {code}
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5437) ASCIIFoldingFilter that emits both unfolded and folded tokens

2014-02-10 Thread Nik Everett (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nik Everett updated LUCENE-5437:


Attachment: (was: LUCENE-5437.patch)

 ASCIIFoldingFilter that emits both unfolded and folded tokens
 -

 Key: LUCENE-5437
 URL: https://issues.apache.org/jira/browse/LUCENE-5437
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Nik Everett
Assignee: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5437.patch


 I've found myself wanting an ASCIIFoldingFilter that emits both the folded 
 tokens and the original, unfolded tokens.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5437) ASCIIFoldingFilter that emits both unfolded and folded tokens

2014-02-10 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897083#comment-13897083
 ] 

Simon Willnauer commented on LUCENE-5437:
-

I like this patch much better. I wonder if we can come up with a better naming 
for the variables / options. In _PatternCaptureGroupTokenFilter_ we use 
_preserveOriginal_ which I kind of like better.  For the tests I wonder if you 
can make _assertNextTerms_ accept an _ASCIIFoldingFilter_ instead of a 
_TokenFilter_ and add a getter to it so we can randomly set if the orig should 
be preserved? That would also prevent the subclass?
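
To make the preserveOriginal discussion concrete, here is a minimal sketch of a filter 
that keeps the original token and stacks a folded copy on the same position whenever 
folding would change it. The class name, the folding-detection helper, and the token 
ordering are arbitrary choices for the sketch, not the attached patch:

{code}
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.util.AttributeSource;

// Sketch: emit the original token, then (if folding changes it) a folded copy
// with positionIncrement=0 so both land on the same position.
public final class FoldedAndOriginalFilterSketch extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final PositionIncrementAttribute posIncAtt = addAttribute(PositionIncrementAttribute.class);
  private AttributeSource.State savedOriginal;

  public FoldedAndOriginalFilterSketch(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (savedOriginal != null) {
      // second half of the pair: restore the original's attributes, then fold the term
      restoreState(savedOriginal);
      savedOriginal = null;
      char[] folded = new char[termAtt.length() * 4]; // folding can expand up to 4x
      int len = ASCIIFoldingFilter.foldToASCII(termAtt.buffer(), 0, folded, 0, termAtt.length());
      termAtt.copyBuffer(folded, 0, len);
      posIncAtt.setPositionIncrement(0); // stack the folded form on the original's position
      return true;
    }
    if (!input.incrementToken()) {
      return false;
    }
    if (needsFolding(termAtt.buffer(), termAtt.length())) {
      savedOriginal = captureState(); // remember the original so the folded copy comes next
    }
    return true;
  }

  private static boolean needsFolding(char[] buffer, int length) {
    for (int i = 0; i < length; i++) {
      if (buffer[i] >= '\u0080') { // only non-ASCII chars can fold to something different
        return true;
      }
    }
    return false;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    savedOriginal = null;
  }
}
{code}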

 ASCIIFoldingFilter that emits both unfolded and folded tokens
 -

 Key: LUCENE-5437
 URL: https://issues.apache.org/jira/browse/LUCENE-5437
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Nik Everett
Assignee: Simon Willnauer
Priority: Minor
 Attachments: LUCENE-5437.patch


 I've found myself wanting an ASCIIFoldingFilter that emits both the folded 
 tokens and the original, unfolded tokens.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-10 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5440:
---

Attachment: LUCENE-5440-solr.patch

Patch moves solr/ code to use FixedBitSet instead of OpenBitSet. I ran into a few 
issues, e.g. w/ bulk operations such as or/union, where OpenBitSet grew the 
underlying long[]. Now it needs to grow on the outside. I think I'll add a 
check to FBS.or() to make sure the given bitset is not bigger than the current 
one.

Anyway, I still didn't run tests, just wanted to checkpoint. There are no more 
uses of OBS in solr/.
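
In code, "growing on the outside" before a union looks roughly like this (a hedged 
sketch with a made-up helper name, not the attached patch):

{code}
import org.apache.lucene.util.FixedBitSet;

// Sketch: unlike OpenBitSet.or(), FixedBitSet never resizes itself, so the caller
// reallocates a larger bitset before the union when the other set is longer.
public final class FixedBitSetUnionSketch {

  static FixedBitSet union(FixedBitSet dest, FixedBitSet other) {
    FixedBitSet result = dest;
    if (dest.length() < other.length()) {
      result = new FixedBitSet(other.length()); // allocate the larger size up front
      result.or(dest);                          // copy the existing bits over
    }
    result.or(other); // safe now: 'other' is no longer than 'result'
    return result;
  }
}
{code}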

 Add LongFixedBitSet and replace usage of OpenBitSet
 ---

 Key: LUCENE-5440
 URL: https://issues.apache.org/jira/browse/LUCENE-5440
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: LUCENE-5440-solr.patch, LUCENE-5440.patch, 
 LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch


 Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
 wrote a LongFixedBitSet which behaves like FixedBitSet, only it allows managing 
 more than 2.1B bits. It overcomes some issues I've encountered with 
 OpenBitSet, such as the use of set/fastSet as well as the implementation of 
 DocIdSet. I'll post a patch shortly and describe it in more detail.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5561) DefaultSimilarity 'init' method is not called, if the similarity plugin is not explicitly declared in the schema

2014-02-10 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-5561:
---

Attachment: SOLR-5561.patch

Ahmet & Vitaliy - thanks for your tests & improvements!

Here's a slightly updated patch...

* introduce a String constant for the discountOverlaps param name (since it's 
now used outside the class in IndexSchema, a constant is appropriate)
* consolidated the two test classes into one to reduce some duplication & 
leverage some existing BaseSimilarityTestCase helper methods
* change the tests to leverage the Version enum values instead of just version 
string constants (this will make it easier to find/remove tests referring to 4.6 
and 4.7 once those constants are no longer explicitly supported)
* added some javadocs

I'll commit soon unless anyone sees any problems?
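
For readers skimming the thread, the shape of the change is roughly the following 
hedged sketch (the class name is illustrative, and the constant and default follow 
the comments above rather than the patch itself):

{code}
import org.apache.lucene.search.similarities.DefaultSimilarity;
import org.apache.lucene.search.similarities.Similarity;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.schema.SimilarityFactory;

// Sketch of a factory that honors discountOverlaps, so the intended default (true)
// also applies when the factory is created implicitly (no <similarity/> in the schema).
public class ExampleSimilarityFactory extends SimilarityFactory {
  /** Param name as a constant, since IndexSchema also needs it for the implicit default. */
  public static final String DISCOUNT_OVERLAPS = "discountOverlaps";

  private boolean discountOverlaps = true; // the intended default

  @Override
  public void init(SolrParams params) {
    super.init(params);
    discountOverlaps = params.getBool(DISCOUNT_OVERLAPS, true);
  }

  @Override
  public Similarity getSimilarity() {
    DefaultSimilarity sim = new DefaultSimilarity();
    sim.setDiscountOverlaps(discountOverlaps); // skipped entirely when init() never runs
    return sim;
  }
}
{code}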

 DefaultSimilarity 'init' method is not called, if the similarity plugin is 
 not explicitly declared in the schema
 

 Key: SOLR-5561
 URL: https://issues.apache.org/jira/browse/SOLR-5561
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.6
Reporter: Isaac Hebsh
Assignee: Hoss Man
  Labels: similarity
 Fix For: 5.0, 4.7

 Attachments: SOLR-5561.patch, SOLR-5561.patch, SOLR-5561.patch, 
 SOLR-5561.patch


 As a result, discountOverlap is not initialized to true, and the default 
 behavior is that multiple terms on the same position DO affect the fieldNorm. 
 This is not the intended default.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5440) Add LongFixedBitSet and replace usage of OpenBitSet

2014-02-10 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897118#comment-13897118
 ] 

Yonik Seeley commented on LUCENE-5440:
--

OpenBitSet is part of the Solr APIs in a number of places, so if we make these 
changes, I guess it should be trunk only?

 Add LongFixedBitSet and replace usage of OpenBitSet
 ---

 Key: LUCENE-5440
 URL: https://issues.apache.org/jira/browse/LUCENE-5440
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Shai Erera
Assignee: Shai Erera
 Attachments: LUCENE-5440-solr.patch, LUCENE-5440.patch, 
 LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch, LUCENE-5440.patch


 Spinoff from here: http://lucene.markmail.org/thread/35gw3amo53dsqsqj. I 
 wrote a LongFixedBitSet which behaves like FixedBitSet, only it allows managing 
 more than 2.1B bits. It overcomes some issues I've encountered with 
 OpenBitSet, such as the use of set/fastSet as well as the implementation of 
 DocIdSet. I'll post a patch shortly and describe it in more detail.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_51) - Build # 3766 - Failure!

2014-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3766/
Java: 32bit/jdk1.7.0_51 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([E370FFF9EA15F741:CCD95BE83D9AB845]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused(BasicHttpSolrServerTest.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 11584 lines...]
   [junit4] Suite: 

[jira] [Commented] (LUCENE-5400) Long text matching email local-part rule in UAX29URLEmailTokenizer causes extremely slow tokenization

2014-02-10 Thread Edu Garcia (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897182#comment-13897182
 ] 

Edu Garcia commented on LUCENE-5400:


Hi.

We've hit this bug in Atlassian Confluence 
(https://jira.atlassian.com/browse/CONF-32566) and it's causing a bit of 
customer pain.

Is [~steve_rowe]'s solution a viable one, or is someone working on a better one?

Thank you!

 Long text matching email local-part rule in UAX29URLEmailTokenizer causes 
 extremely slow tokenization
 -

 Key: LUCENE-5400
 URL: https://issues.apache.org/jira/browse/LUCENE-5400
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.5
Reporter: Chris Geeringh
Assignee: Steve Rowe

 This is a pretty nasty bug, and causes the cluster to stop accepting updates. 
 I'm not sure how to consistently reproduce it but I have done so numerous 
 times. Switching to a whitespace tokenizer improved indexing speed, and I 
 never got the issue again.
 I'm running a 4.6 Snapshot - I had issues with deadlocks with numerous 
 versions of Solr, and have finally narrowed down the problem to this code, 
 which affects many/all(?) versions of Solr.
 When the thread hits this issue it uses 100% CPU, restarting the node which 
 has the error allows indexing to continue until hit again. Here is thread 
 dump:
 http-bio-8080-exec-45 (201)
 
 org.apache.lucene.analysis.standard.UAX29URLEmailTokenizerImpl.getNextToken​(UAX29URLEmailTokenizerImpl.java:4343)
 
 org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer.incrementToken​(UAX29URLEmailTokenizer.java:147)
 
 org.apache.lucene.analysis.util.FilteringTokenFilter.incrementToken​(FilteringTokenFilter.java:82)
 
 org.apache.lucene.analysis.core.LowerCaseFilter.incrementToken​(LowerCaseFilter.java:54)
 
 org.apache.lucene.index.DocInverterPerField.processFields​(DocInverterPerField.java:174)
 
 org.apache.lucene.index.DocFieldProcessor.processDocument​(DocFieldProcessor.java:248)
 
 org.apache.lucene.index.DocumentsWriterPerThread.updateDocument​(DocumentsWriterPerThread.java:253)
 
 org.apache.lucene.index.DocumentsWriter.updateDocument​(DocumentsWriter.java:453)
 org.apache.lucene.index.IndexWriter.updateDocument​(IndexWriter.java:1517)
 
 org.apache.solr.update.DirectUpdateHandler2.addDoc​(DirectUpdateHandler2.java:217)
 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd​(RunUpdateProcessorFactory.java:69)
 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd​(UpdateRequestProcessor.java:51)
 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd​(DistributedUpdateProcessor.java:583)
 
 org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd​(DistributedUpdateProcessor.java:719)
 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd​(DistributedUpdateProcessor.java:449)
 
 org.apache.solr.handler.loader.JavabinLoader$1.update​(JavabinLoader.java:89)
 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator​(JavaBinUpdateRequestCodec.java:151)
 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator​(JavaBinUpdateRequestCodec.java:131)
 org.apache.solr.common.util.JavaBinCodec.readVal​(JavaBinCodec.java:221)
 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList​(JavaBinUpdateRequestCodec.java:116)
 org.apache.solr.common.util.JavaBinCodec.readVal​(JavaBinCodec.java:186)
 org.apache.solr.common.util.JavaBinCodec.unmarshal​(JavaBinCodec.java:112)
 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal​(JavaBinUpdateRequestCodec.java:158)
 
 org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs​(JavabinLoader.java:99)
 org.apache.solr.handler.loader.JavabinLoader.load​(JavabinLoader.java:58)
 
 org.apache.solr.handler.UpdateRequestHandler$1.load​(UpdateRequestHandler.java:92)
 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody​(ContentStreamHandlerBase.java:74)
 
 org.apache.solr.handler.RequestHandlerBase.handleRequest​(RequestHandlerBase.java:135)
 org.apache.solr.core.SolrCore.execute​(SolrCore.java:1859)
 
 org.apache.solr.servlet.SolrDispatchFilter.execute​(SolrDispatchFilter.java:703)
 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter​(SolrDispatchFilter.java:406)
 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter​(SolrDispatchFilter.java:195)
 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter​(ApplicationFilterChain.java:243)
 
 org.apache.catalina.core.ApplicationFilterChain.doFilter​(ApplicationFilterChain.java:210)
 
 

[jira] [Commented] (SOLR-5561) DefaultSimilarity 'init' method is not called, if the similarity plugin is not explicitly declared in the schema

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897275#comment-13897275
 ] 

ASF subversion and git services commented on SOLR-5561:
---

Commit 1566842 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1566842 ]

SOLR-5561: Fix implicit DefaultSimilarityFactory initialization in IndexSchema 
to properly specify discountOverlap option

 DefaultSimilarity 'init' method is not called, if the similarity plugin is 
 not explicitly declared in the schema
 

 Key: SOLR-5561
 URL: https://issues.apache.org/jira/browse/SOLR-5561
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.6
Reporter: Isaac Hebsh
Assignee: Hoss Man
  Labels: similarity
 Fix For: 5.0, 4.7

 Attachments: SOLR-5561.patch, SOLR-5561.patch, SOLR-5561.patch, 
 SOLR-5561.patch


 As a result, discountOverlap is not initialized to true, and the default 
 behavior is that multiple terms on the same position DO affect the fieldNorm. 
 This is not the intended default.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5536) Add ValueSource collapse criteria to CollapsingQParsingPlugin

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897277#comment-13897277
 ] 

ASF subversion and git services commented on SOLR-5536:
---

Commit 1566844 from [~joel.bernstein] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1566844 ]

SOLR-5536: Added a proper ValueSource context

 Add ValueSource collapse criteria to CollapsingQParsingPlugin
 -

 Key: SOLR-5536
 URL: https://issues.apache.org/jira/browse/SOLR-5536
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.6
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: SOLR-5536.patch, SOLR-5536.patch, SOLR-5536.patch, 
 SOLR-5536.patch


 It would be useful for the CollapsingQParserPlugin to support ValueSource 
 collapse criteria.
 Proposed syntax:
 {code}
 fq={!collapse field=collapse_field max=value_source}
 {code}
 This ticket will also introduce a function query called cscore,  which will 
 return the score of the current document being collapsed. This will allow 
 score to be incorporated into collapse criteria functions.
 A simple example of the cscore usage:
 {code}
 fq={!collapse field=collapse_field max=sum(cscore(), field(x))}
 {code}
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5561) DefaultSimilarity 'init' method is not called, if the similarity plugin is not explicitly declared in the schema

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897302#comment-13897302
 ] 

ASF subversion and git services commented on SOLR-5561:
---

Commit 1566871 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1566871 ]

SOLR-5561: Fix implicit DefaultSimilarityFactory initialization in IndexSchema 
to properly specify discountOverlap option (merge r1566842)

 DefaultSimilarity 'init' method is not called, if the similarity plugin is 
 not explicitly declared in the schema
 

 Key: SOLR-5561
 URL: https://issues.apache.org/jira/browse/SOLR-5561
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.6
Reporter: Isaac Hebsh
Assignee: Hoss Man
  Labels: similarity
 Fix For: 5.0, 4.7

 Attachments: SOLR-5561.patch, SOLR-5561.patch, SOLR-5561.patch, 
 SOLR-5561.patch


 As a result, discountOverlap is not initialized to true, and the default 
 behavior is that multiple terms on the same position DO affect the fieldNorm. 
 This is not the intended default.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-10 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897305#comment-13897305
 ] 

Steve Davids commented on SOLR-3854:


Looks like the ALLOW_SSL property doesn't get reset to true in the afterClass 
method (should replace sslConfig = null; in SolrTestCaseJ4). Other than that, I 
don't see any glaring issues - have a timeline for pushing this to the 4.x 
branch? If there are any outstanding issues you would like me to look at just 
let me know.
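
In other words, the suggested cleanup is roughly the following (the field names follow 
the comment text above and are not verified against the actual SolrTestCaseJ4 source):

{code}
import org.junit.AfterClass;

// Sketch of the afterClass reset being suggested; the fields here only stand in
// for the real static state in the test base class.
public abstract class SslTestDefaultsSketch {
  protected static Object sslConfig;          // stand-in for the SSL config field
  protected static boolean ALLOW_SSL = true;  // stand-in for the "allow SSL" switch

  @AfterClass
  public static void restoreSslDefaults() {
    sslConfig = null;   // the existing reset
    ALLOW_SSL = true;   // proposed: restore the default so later suites may use SSL again
  }
}
{code}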

 SolrCloud does not work with https
 --

 Key: SOLR-3854
 URL: https://issues.apache.org/jira/browse/SOLR-3854
 Project: Solr
  Issue Type: Bug
Reporter: Sami Siren
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854v2.patch


 There are a few places in current codebase that assume http is used. This 
 prevents using https when running solr in cloud mode.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5561) DefaultSimilarity 'init' method is not called, if the similarity plugin is not explicitly declared in the schema

2014-02-10 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-5561.


Resolution: Fixed

Thanks everyone!

 DefaultSimilarity 'init' method is not called, if the similarity plugin is 
 not explicitly declared in the schema
 

 Key: SOLR-5561
 URL: https://issues.apache.org/jira/browse/SOLR-5561
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.6
Reporter: Isaac Hebsh
Assignee: Hoss Man
  Labels: similarity
 Fix For: 5.0, 4.7

 Attachments: SOLR-5561.patch, SOLR-5561.patch, SOLR-5561.patch, 
 SOLR-5561.patch


 As a result, discountOverlap is not initialized to true, and the default 
 behavior is that multiple terms on the same position DO affect the fieldNorm. 
 This is not the intended default.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5477) Async execution of OverseerCollectionProcessor tasks

2014-02-10 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5477:
---

Attachment: SOLR-5477-updated.patch

Updated patch. There were quite a few conflicts and I think I fixed them.
It'd be good if whoever commits this has a final look at it.

 Async execution of OverseerCollectionProcessor tasks
 

 Key: SOLR-5477
 URL: https://issues.apache.org/jira/browse/SOLR-5477
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Anshum Gupta
 Attachments: SOLR-5477-CoreAdminStatus.patch, 
 SOLR-5477-updated.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch, 
 SOLR-5477.patch, SOLR-5477.patch, SOLR-5477.patch


 Typical collection admin commands are long running and it is very common to 
 have the requests get timed out. It is more of a problem if the cluster is 
 very large. Add an option to run these commands asynchronously:
 add an extra param async=true for all collection commands;
 the task is written to ZK and the caller is returned a task id. 
 A separate collection admin command will be added to poll the status of the 
 task:
 command=status&id=7657668909
 if the id is not passed, all running async tasks should be listed.
 A separate queue is created to store in-process tasks. After the tasks are 
 completed the queue entry is removed. OverseerCollectionProcessor will perform 
 these tasks in multiple threads.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5649) Small ConnectionManager improvements

2014-02-10 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5649:
--

 Priority: Minor  (was: Trivial)
Affects Version/s: (was: 4.7)
   4.6.1
Fix Version/s: 5.0
 Assignee: Mark Miller
   Issue Type: Bug  (was: Improvement)

 Small ConnectionManager improvements
 

 Key: SOLR-5649
 URL: https://issues.apache.org/jira/browse/SOLR-5649
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Gregory Chanan
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.7


 I was just looking through the ConnectionManager and want to jot these down 
 before I forget them.  I'm happy to make a patch if someone thinks it's 
 valuable as well.
 - clientConnected doesn't seem to be read, can be eliminated
 - state is a private volatile variable, but only used in one function -- 
 seems unlikely private volatile is what is wanted
 - A comment explaining why disconnected() is not called in the case of 
 Expired would be helpful (Expired means we have already waited the timeout 
 period so we want to reject updates right away)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5649) Small ConnectionManager improvements

2014-02-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897312#comment-13897312
 ] 

Mark Miller commented on SOLR-5649:
---

bq. clientConnected doesn't seem to be read, can be eliminated

Yup, a relic from the original code.

bq. state is a private volatile variable, but only used in one function – 
seems unlikely private volatile is what is wanted

Yeah, there used to be a getState method that we inherited but it is gone now.

bq. A comment explaining why disconnected() is not called in the case of 
Expired would be helpful (Expired means we have already waited the timeout 
period so we want to reject updates right away)

The current comment already says:

{quote}  // we don't call disconnected because there
  // is no need to start the timer - if we are expired
  // likelyExpired can just be set to true{quote}

 Small ConnectionManager improvements
 

 Key: SOLR-5649
 URL: https://issues.apache.org/jira/browse/SOLR-5649
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Gregory Chanan
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.7


 I was just looking through the ConnectionManager and want to jot these down 
 before I forget them.  I'm happy to make a patch if someone thinks it's 
 valuable as well.
 - clientConnected doesn't seem to be read, can be eliminated
 - state is a private volatile variable, but only used in one function -- 
 seems unlikely private volatile is what is wanted
 - A comment explaining why disconnected() is not called in the case of 
 Expired would be helpful (Expired means we have already waited the timeout 
 period so we want to reject updates right away)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5649) Small ConnectionManager improvements

2014-02-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897313#comment-13897313
 ] 

Mark Miller commented on SOLR-5649:
---

A couple additional items:

* We don't want to call connected() after updating the SolrZooKeeper instance 
in process - the new event thread will call on the connect event. We just want 
to start the reconnected thread and then that thread will process the thread 
death event and die.

* Greg mentioned to me he also noticed the connected status variable was now in 
an synchronized block - it should be synchronized or changed to a volatile.
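
A quick illustration of the two alternatives being weighed for that flag (the names 
are placeholders, not the ConnectionManager source):

{code}
// Option 1: make the flag volatile -- every reader sees the latest write, no lock needed.
// Option 2: keep it plain but guard every read and write with the same monitor.
public class ConnectionFlagSketch {
  private volatile boolean connectedVolatile;

  public void markConnected(boolean v) { connectedVolatile = v; }
  public boolean isConnected()         { return connectedVolatile; }

  private boolean connectedGuarded;

  public synchronized void markConnectedGuarded(boolean v) { connectedGuarded = v; }
  public synchronized boolean isConnectedGuarded()         { return connectedGuarded; }
}
{code}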

 Small ConnectionManager improvements
 

 Key: SOLR-5649
 URL: https://issues.apache.org/jira/browse/SOLR-5649
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Gregory Chanan
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.7


 I was just looking through the ConnectionManager and want to jot these down 
 before I forget them.  I'm happy to make a patch if someone thinks it's 
 valuable as well.
 - clientConnected doesn't seem to be read, can be eliminated
 - state is a private volatile variable, but only used in one function -- 
 seems unlikely private volatile is what is wanted
 - A comment explaining why disconnected() is not called in the case of 
 Expired would be helpful (Expired means we have already waited the timeout 
 period so we want to reject updates right away)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5649) Small ConnectionManager improvements

2014-02-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897313#comment-13897313
 ] 

Mark Miller edited comment on SOLR-5649 at 2/11/14 12:06 AM:
-

A couple additional items:

* We don't want to call connected() after updating the SolrZooKeeper instance 
in process - the new event thread will call it on the connect event. We just want 
to start the reconnect thread, and then that thread will process the thread 
death event and die.

* Greg mentioned to me he also noticed the connected status variable was now in 
a non-synchronized block - it should be synchronized or changed to a volatile.


was (Author: markrmil...@gmail.com):
A couple additional items:

* We don't want to call connected() after updating the SolrZooKeeper instance 
in process - the new event thread will call on the connect event. We just want 
to start the reconnected thread and then that thread will process the thread 
death event and die.

* Greg mentioned to me he also noticed the connected status variable was now in 
an synchronized block - it should be synchronized or changed to a volatile.

 Small ConnectionManager improvements
 

 Key: SOLR-5649
 URL: https://issues.apache.org/jira/browse/SOLR-5649
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Gregory Chanan
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.7


 I was just looking through the ConnectionManager and want to jot these down 
 before I forget them.  I'm happy to make a patch if someone thinks it's 
 valuable as well.
 - clientConnected doesn't seem to be read, can be eliminated
 - state is a private volatile variable, but only used in one function -- 
 seems unlikely private volatile is what is wanted
 - A comment explaining why disconnected() is not called in the case of 
 Expired would be helpful (Expired means we have already waited the timeout 
 period so we want to reject updates right away)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5649) Small ConnectionManager improvements

2014-02-10 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5649:
--

Attachment: SOLR-5649.patch

A patch addressing the above issues.

 Small ConnectionManager improvements
 

 Key: SOLR-5649
 URL: https://issues.apache.org/jira/browse/SOLR-5649
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Gregory Chanan
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.7

 Attachments: SOLR-5649.patch


 I was just looking through the ConnectionManager and want to jot these down 
 before I forget them.  I'm happy to make a patch if someone thinks it's 
 valuable as well.
 - clientConnected doesn't seem to be read, can be eliminated
 - state is a private volatile variable, but only used in one function -- 
 seems unlikely private volatile is what is wanted
 - A comment explaining why disconnected() is not called in the case of 
 Expired would be helpful (Expired means we have already waited the timeout 
 period so we want to reject updates right away)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897324#comment-13897324
 ] 

ASF subversion and git services commented on SOLR-3854:
---

Commit 1566883 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1566883 ]

SOLR-3854 : Reset ALLOW_SSL in afterClass.

 SolrCloud does not work with https
 --

 Key: SOLR-3854
 URL: https://issues.apache.org/jira/browse/SOLR-3854
 Project: Solr
  Issue Type: Bug
Reporter: Sami Siren
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854v2.patch


 There are a few places in current codebase that assume http is used. This 
 prevents using https when running solr in cloud mode.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 9328 - Failure!

2014-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9328/
Java: 32bit/jdk1.6.0_45 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 28714 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:459: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:64: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build.xml:283: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:1734: 
Rat problems were found!

Total time: 53 minutes 46 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.6.0_45 -client -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5709) Highlighting grouped duplicate docs from different shards with group.limit > 1 throws ArrayIndexOutOfBoundsException

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897359#comment-13897359
 ] 

ASF subversion and git services commented on SOLR-5709:
---

Commit 1566914 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1566914 ]

SOLR-5709: add missing license

 Highlighting grouped duplicate docs from different shards with group.limit > 1 
 throws ArrayIndexOutOfBoundsException
 

 Key: SOLR-5709
 URL: https://issues.apache.org/jira/browse/SOLR-5709
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.3, 4.4, 4.5, 4.6, 5.0
Reporter: Steve Rowe
Assignee: Steve Rowe
 Fix For: 5.0, 4.7

 Attachments: SOLR-5709.patch


 In a sharded (non-SolrCloud) deployment, if you index a document with the 
 same unique key value into more than one shard, and then try to highlight 
 grouped docs with more than one doc per group, where the grouped docs contain 
 at least one duplicate doc pair, you get an AIOOBE.
 Here's the stack trace I got from such a situation, with 1 doc indexed into 
 each shard in a 2-shard index, with {{group.limit=2}}:
 {noformat}
 ERROR null:java.lang.ArrayIndexOutOfBoundsException: 1
   at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:185)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:328)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:758)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:412)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:136)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at 
 org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
   at java.lang.Thread.run(Thread.java:724)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_51) - Build # 3691 - Failure!

2014-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/3691/
Java: 32bit/jdk1.7.0_51 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 29503 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:459: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\build.xml:64: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build.xml:283: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\common-build.xml:1734:
 Rat problems were found!

Total time: 101 minutes 15 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_51 -client -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5709) Highlighting grouped duplicate docs from different shards with group.limit > 1 throws ArrayIndexOutOfBoundsException

2014-02-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897362#comment-13897362
 ] 

ASF subversion and git services commented on SOLR-5709:
---

Commit 1566918 from [~steve_rowe] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1566918 ]

SOLR-5709: add missing license (merged trunk r1566914)

 Highlighting grouped duplicate docs from different shards with group.limit > 1 
 throws ArrayIndexOutOfBoundsException
 

 Key: SOLR-5709
 URL: https://issues.apache.org/jira/browse/SOLR-5709
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 4.3, 4.4, 4.5, 4.6, 5.0
Reporter: Steve Rowe
Assignee: Steve Rowe
 Fix For: 5.0, 4.7

 Attachments: SOLR-5709.patch


 In a sharded (non-SolrCloud) deployment, if you index a document with the 
 same unique key value into more than one shard, and then try to highlight 
 grouped docs with more than one doc per group, where the grouped docs contain 
 at least one duplicate doc pair, you get an AIOOBE.
 Here's the stack trace I got from such a situation, with 1 doc indexed into 
 each shard in a 2-shard index, with {{group.limit=2}}:
 {noformat}
 ERROR null:java.lang.ArrayIndexOutOfBoundsException: 1
   at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:185)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:328)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:758)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:412)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:136)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at 
 org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
   at 
 org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
   at java.lang.Thread.run(Thread.java:724)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: 

[jira] [Updated] (SOLR-5653) Create a RESTManager to provide REST API endpoints for reconfigurable plugins

2014-02-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-5653:
-

Attachment: SOLR-5653.patch

Here is the first attempt at a solution for the RestManager and implementations 
for managing stop words and synonyms via a REST API.

A few things to notice about this implementation:

1) The RestManager needs to be able to read/write data from/to ZooKeeper if in 
cloud mode or the local FS if in standalone mode. This is the purpose of the 
ManagedResourceStorage.StorageIO interface. The idea here is that the 
RestManager receives the StorageIO in its constructor from the SolrCore during 
initialization. Currently, this is done in the SolrCore object, which has to do 
an instanceof on the SolrResourceLoader to determine if it is ZK aware. This is 
a bit hacky but I didn't see a better way to determine if a core is running in 
ZK mode from within the SolrCore object. Currently, I provide 3 implementations 
of StorageIO: ZooKeeperStorageIO, FileStorageIO, and InMemoryStorageIO.

2) A ManagedResource should be able to choose its own storage format, with the 
storageIO being determined by the container.
This gives the ManagedResource developer flexibility in how they store data 
without having to fuss with knowing how to load/store bytes to ZooKeeper or the 
local FS. Currently, the only provided storage format is JSON, see: 
ManagedResourceStorage.JsonStorage. (A rough sketch of this storage abstraction 
appears after the known issues below.)

3) I'm using a registry object that is available from the SolrResourceLoader 
to capture Solr components that declare themselves as being managed. This is 
needed because parsing the solrconfig.xml may encounter managed components 
before it parses and initializes the RestManager. Basically, I wanted to 
separate the registration of managed components from the initialization of the 
RestManager and those components, as I didn't want to force the position of the 
{{restManager}} element in solrconfig.xml to be before all other components.

4) The design is based around the concept that there may be many different Solr 
components that share a single ManagedResource. For instance, there may be many 
ManagedStopFilterFactory instances declared in schema.xml that share a common 
set of managed English stop words. Thus, I'm using the observer pattern which 
allows Solr components to register as an observer of a shared ManagedResource. 
This way we don't end up with 10 different managers of the same stop word list.

5) ManagedResourceObserver instances are notified once during core 
initialization (load or reload) when the managed data is available. This is 
their signal to internalize the managed data, such as the 
ManagedStopFilterFactory converting the managed set of terms into a 
CharArraySet used for creating StopFilters. This is a critical part of the 
design in that updates to the managed data are not applied until a core is 
reloaded. This is to avoid having analysis components with different views of 
managed data, i.e. we don't want some of the replicas for a shard to have a 
different set of stop words than the others.

6) I've provided one concrete ManagedResource implementation for managing a 
word set, which is useful for stop words and protected words 
(KeywordMarkerFilter). This implementation shows how to handle initArgs and a 
managedList of words.

Known Issues:

a. The current RestManager attaches its registered endpoints using SolrRestApi, 
which is configured to process requests to /collection/schema. While this path 
works for stop words and synonyms, it doesn't work in the general case of any 
type of ManagedResource. We need to figure out a better path under which to 
configure the RestManager, but re-working that should be minor.

b. I had to make a few things public in the BaseSchemaResource class and 
extended the RestManager.ManagedEndpoint class from it. We should refactor 
BaseSchemaResource into a BaseSolrResource as it has usefulness beyond schema 
related resources.

c. Deletes - the ManagedResource framework supports deletes but I wasn't sure 
how to enable them in Restlet; again probably a minor issue in the restlet 
config / setup.
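
As referenced in point 2 above, here is a rough sketch of the kind of storage 
abstraction being described (the interface and method names are guesses for 
illustration, not the patch's exact API):

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: the real patch's StorageIO may differ in names and signatures.
interface StorageIO {
  InputStream openInput(String resourceId) throws IOException;
  OutputStream openOutput(String resourceId) throws IOException;
}

// In-memory variant, analogous to the InMemoryStorageIO mentioned above (handy for tests).
class InMemoryStorage implements StorageIO {
  private final Map<String, byte[]> blobs = new HashMap<String, byte[]>();

  @Override
  public InputStream openInput(String resourceId) throws IOException {
    byte[] data = blobs.get(resourceId);
    if (data == null) throw new IOException("no such resource: " + resourceId);
    return new ByteArrayInputStream(data);
  }

  @Override
  public OutputStream openOutput(final String resourceId) {
    // Buffer writes in memory and "commit" them when the caller closes the stream.
    return new ByteArrayOutputStream() {
      @Override
      public void close() throws IOException {
        super.close();
        blobs.put(resourceId, toByteArray());
      }
    };
  }
}
{code}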

 Create a RESTManager to provide REST API endpoints for reconfigurable plugins
 -

 Key: SOLR-5653
 URL: https://issues.apache.org/jira/browse/SOLR-5653
 Project: Solr
  Issue Type: Sub-task
Reporter: Steve Rowe
 Attachments: SOLR-5653.patch


 It should be possible to reconfigure Solr plugins' resources and init params 
 without directly editing the serialized schema or {{solrconfig.xml}} (see 
 Hoss's arguments about this in the context of the schema, which also apply to 
 {{solrconfig.xml}}, in the description of SOLR-4658)
 The RESTManager should allow plugins declared in either the schema or in 
 {{solrconfig.xml}} to register 

[jira] [Commented] (SOLR-5653) Create a RESTManager to provide REST API endpoints for reconfigurable plugins

2014-02-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897383#comment-13897383
 ] 

Mark Miller commented on SOLR-5653:
---

bq. 1. but I didn't see a better way to determine if a core is running in ZK 
mode from within the SolrCore object. 

You can look at the descriptor for the core, get the CoreContainer, and call 
isZooKeeperAware().
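
Roughly, in code (a sketch assuming the 4.x API; the exact accessors are worth 
double-checking against trunk):

{code:java}
import org.apache.solr.core.SolrCore;

final class ZkModeCheck {
  // Sketch: assumes the 4.x API where the CoreDescriptor exposes its CoreContainer.
  static boolean isCloudMode(SolrCore core) {
    return core.getCoreDescriptor().getCoreContainer().isZooKeeperAware();
  }
}
{code}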

 Create a RESTManager to provide REST API endpoints for reconfigurable plugins
 -

 Key: SOLR-5653
 URL: https://issues.apache.org/jira/browse/SOLR-5653
 Project: Solr
  Issue Type: Sub-task
Reporter: Steve Rowe
 Attachments: SOLR-5653.patch


 It should be possible to reconfigure Solr plugins' resources and init params 
 without directly editing the serialized schema or {{solrconfig.xml}} (see 
 Hoss's arguments about this in the context of the schema, which also apply to 
 {{solrconfig.xml}}, in the description of SOLR-4658)
 The RESTManager should allow plugins declared in either the schema or in 
 {{solrconfig.xml}} to register one or more REST endpoints, one endpoint per 
 reconfigurable resource, including init params.  To allow for multiple plugin 
 instances, registering plugins will need to provide a handle of some form to 
 distinguish the instances.
 This RESTManager should also be able to create new instances of plugins that 
 it has been configured to allow.  The RESTManager will need its own 
 serialized configuration to remember these plugin declarations.
 Example endpoints:
 * SynonymFilterFactory
 ** init params: {{/solr/collection1/config/syns/myinstance/options}}
 ** synonyms resource: 
 {{/solr/collection1/config/syns/myinstance/synonyms-list}}
 * /select request handler
 ** init params: {{/solr/collection1/config/requestHandlers/select/options}}
 We should aim for full CRUD over init params and structured resources.  The 
 plugins will bear responsibility for handling resource modification requests, 
 though we should provide utility methods to make this easy.
 However, since we won't be directly modifying the serialized schema and 
 {{solrconfig.xml}}, anything configured in those two places can't be 
 invalidated by configuration serialized elsewhere.  As a result, it won't be 
 possible to remove plugins declared in the serialized schema or 
 {{solrconfig.xml}}.  Similarly, any init params declared in either place 
 won't be modifiable.  Instead, there should be some form of init param that 
 declares that the plugin is reconfigurable, maybe using something like a 
 {{managed}} element - note that request handlers already provide a handle - the 
 request handler name - and so don't need that to be separately specified:
 {code:xml}
 <requestHandler name="/select" class="solr.SearchHandler">
   <managed/>
 </requestHandler>
 {code}
 and in the serialized schema - a handle needs to be specified here:
 {code:xml}
 <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
 ...
   <analyzer type="query">
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.SynonymFilterFactory" managed="english-synonyms"/>
 ...
 {code}
 All of the above examples use the existing plugin factory class names, but 
 we'll have to create new RESTManager-aware classes to handle registration 
 with RESTManager.
 Core/collection reloading should not be performed automatically when a REST 
 API call is made to one of these RESTManager-mediated REST endpoints, since 
 for batched config modifications, that could take way too long.  But maybe 
 reloading could be a query parameter to these REST API calls. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5655) Create a stopword filter factory that is (re)configurable, and capable of reporting its configuration, via REST API

2014-02-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-5655:
-

Attachment: SOLR-5655.patch

Depends on the patch posted for SOLR-5653.

Deletes are implemented but not active from the REST API yet ... coming soon.

 Create a stopword filter factory that is (re)configurable, and capable of 
 reporting its configuration, via REST API
 ---

 Key: SOLR-5655
 URL: https://issues.apache.org/jira/browse/SOLR-5655
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Steve Rowe
 Attachments: SOLR-5655.patch


 A stopword filter factory could be (re)configurable via REST API by 
 registering with the RESTManager described in SOLR-5653, and then responding 
 to REST API calls to modify its init params and its stopwords resource file.
 Read-only (GET) REST API calls should also be provided, both for init params 
 and the stopwords resource file.
 It should be possible to add/remove one or more entries in the stopwords 
 resource file.
 We should probably use JSON for the REST request body, as is done in the 
 Schema REST API methods.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5654) Create a synonym filter factory that is (re)configurable, and capable of reporting its configuration, via REST API

2014-02-10 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-5654:
-

Attachment: SOLR-5654.patch

Basic implementation which depends on my patch for SOLR-5653.

It only supports the solr format for now and basically uses an adapter to 
provide a SolrResourceLoader to the existing SynonymFilterFactory which is 
backed by the managed synonym mappings.
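
A hypothetical sketch of that adapter idea (illustrative names, not the patch's 
actual classes): wrap the managed, in-memory rules behind the ResourceLoader API 
that the stock SynonymFilterFactory already consumes.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.apache.lucene.analysis.util.ResourceLoader;

// Illustrative adapter: serves managed synonym rules (solr format) as a "resource"
// so the existing SynonymFilterFactory can parse them unchanged.
class ManagedSynonymsLoader implements ResourceLoader {
  private final String solrFormatRules;   // e.g. "GB,gib,gigabyte => gigabyte"
  private final ResourceLoader delegate;  // for class loading and any other resources

  ManagedSynonymsLoader(String solrFormatRules, ResourceLoader delegate) {
    this.solrFormatRules = solrFormatRules;
    this.delegate = delegate;
  }

  @Override
  public InputStream openResource(String resource) {
    // Whatever file name the factory asks for, hand back the managed rules.
    return new ByteArrayInputStream(solrFormatRules.getBytes(StandardCharsets.UTF_8));
  }

  @Override
  public <T> Class<? extends T> findClass(String cname, Class<T> expectedType) {
    return delegate.findClass(cname, expectedType);
  }

  @Override
  public <T> T newInstance(String cname, Class<T> expectedType) {
    return delegate.newInstance(cname, expectedType);
  }
}
{code}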

 Create a synonym filter factory that is (re)configurable, and capable of 
 reporting its configuration, via REST API
 --

 Key: SOLR-5654
 URL: https://issues.apache.org/jira/browse/SOLR-5654
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Steve Rowe
 Attachments: SOLR-5654.patch


 A synonym filter factory could be (re)configurable via REST API by 
 registering with the RESTManager described in SOLR-5653, and then responding 
 to REST API calls to modify its init params and its synonyms resource file.
 Read-only (GET) REST API calls should also be provided, both for init params 
 and the synonyms resource file.
 It should be possible to add/remove/modify one or more entries in the 
 synonyms resource file.
 We should probably use JSON for the REST request body, as is done in the 
 Schema REST API methods.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5654) Create a synonym filter factory that is (re)configurable, and capable of reporting its configuration, via REST API

2014-02-10 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897386#comment-13897386
 ] 

Timothy Potter edited comment on SOLR-5654 at 2/11/14 1:04 AM:
---

Basic implementation which depends on my patch for SOLR-5653.

It only supports the solr format for now and basically uses an adapter to 
provide a SolrResourceLoader to the existing SynonymFilterFactory which is 
backed by the managed synonym mappings.

Also, I should mention that I'm not thrilled with how I handle ignoreCase 
changes right now, so will probably clean that up a bit in a subsequent patch.


was (Author: tim.potter):
Basic implementation which depends on my patch for SOLR-5653.

It only supports the solr format for now and basically uses an adapter to 
provide a SolrResourceLoader to the existing SynonymFilterFactory which is 
backed by the managed synonym mappings.

 Create a synonym filter factory that is (re)configurable, and capable of 
 reporting its configuration, via REST API
 --

 Key: SOLR-5654
 URL: https://issues.apache.org/jira/browse/SOLR-5654
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Steve Rowe
 Attachments: SOLR-5654.patch


 A synonym filter factory could be (re)configurable via REST API by 
 registering with the RESTManager described in SOLR-5653, and then responding 
 to REST API calls to modify its init params and its synonyms resource file.
 Read-only (GET) REST API calls should also be provided, both for init params 
 and the synonyms resource file.
 It should be possible to add/remove/modify one or more entries in the 
 synonyms resource file.
 We should probably use JSON for the REST request body, as is done in the 
 Schema REST API methods.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897420#comment-13897420
 ] 

Mark Miller commented on SOLR-3854:
---

Thanks Steve - I'll merge back to 4x fairly soon.

 SolrCloud does not work with https
 --

 Key: SOLR-3854
 URL: https://issues.apache.org/jira/browse/SOLR-3854
 Project: Solr
  Issue Type: Bug
Reporter: Sami Siren
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
 SOLR-3854v2.patch


 There are a few places in current codebase that assume http is used. This 
 prevents using https when running solr in cloud mode.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0-fcs-b128) - Build # 3767 - Still Failing!

2014-02-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3767/
Java: 32bit/jdk1.8.0-fcs-b128 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([DED6B40E1C42D981:F17F101FCBCD9685]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.impl.BasicHttpSolrServerTest.testConnectionRefused(BasicHttpSolrServerTest.java:159)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)




Build Log:
[...truncated 11439 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-64) strict hierarchical facets

2014-02-10 Thread Ales Perme (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-64?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13897613#comment-13897613
 ] 

Ales Perme commented on SOLR-64:


Hi guys. Is there any news on this patch/functionality? It keeps being pushed from 
one version to the next, even though it is marked as Major and has a lot of votes. 
Is there any viable, performance-efficient alternative to this kind of 
hierarchical facet? Thank you for your reply...

 strict hierarchical facets
 --

 Key: SOLR-64
 URL: https://issues.apache.org/jira/browse/SOLR-64
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Yonik Seeley
 Fix For: 4.7

 Attachments: SOLR-64.patch, SOLR-64.patch, SOLR-64.patch, 
 SOLR-64.patch, SOLR-64_3.1.0.patch


 Strict Facet Hierarchies... each tag has at most one parent (a tree).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org