[jira] [Updated] (SOLR-7039) First collection created with stateFormat=2 results in a weird /clusterstate.json

2015-01-27 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7039:
-
Attachment: SOLR-7039.patch

The bug was in the clusterstate serialization logic.


 First collection created with stateFormat=2 results in a weird 
 /clusterstate.json
 -

 Key: SOLR-7039
 URL: https://issues.apache.org/jira/browse/SOLR-7039
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.0

 Attachments: SOLR-7039.patch


 With the 5.0 branch, when I do:
 {code}
 bin/solr -c && bin/solr create -c foo
 {code}
 The {{/clusterstate.json}} in ZK has an invalid definition of the foo 
 collection:
 {code}
 {foo:{
 replicationFactor:1,
 router:{name:compositeId},
 maxShardsPerNode:1,
 autoAddReplicas:false,
 shards:{shard1:{
 range:8000-7fff,
 state:active,
 replicas:{}
 {code}
 To verify this isn't the UI sending back the wrong data, I went into the 
 zkCli.sh command-line and got:
 {code}
 [zk: localhost:9983(CONNECTED) 2] get /clusterstate.json
 {foo:{
 replicationFactor:1,
 router:{name:compositeId},
 maxShardsPerNode:1,
 autoAddReplicas:false,
 shards:{shard1:{
 range:8000-7fff,
 state:active,
 replicas:{}
 cZxid = 0x20
 ctime = Mon Jan 26 14:56:44 MST 2015
 mZxid = 0x65
 mtime = Mon Jan 26 14:57:16 MST 2015
 pZxid = 0x20
 cversion = 0
 dataVersion = 1
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 247
 numChildren = 0
 {code}
 The {{/collections/foo/state.json}} looks correct:
 {code}
 {foo:{
 replicationFactor:1,
 router:{name:compositeId},
 maxShardsPerNode:1,
 autoAddReplicas:false,
 shards:{shard1:{
 range:8000-7fff,
 state:active,
 replicas:{core_node1:{
 core:foo_shard1_replica1,
 base_url:http://192.168.1.2:8983/solr,
 node_name:192.168.1.2:8983_solr,
 state:active,
 leader:true}}
 {code}
 Here's the weird thing ... If I create a second collection using the same 
 script, all is well and /clusterstate.json is empty
 {code}
 bin/solr create -c foo2
 {code}
 Calling this a blocker because 5.0 can't be released with this happening.
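A quick way to see that the snippet above is truncated (a hedged sketch, not from the issue: the quotes are re-added by hand, since JIRA's rendering strips them, and the fragment is shortened for illustration):

```shell
# Hypothetical re-quoted version of the truncated clusterstate fragment;
# it is missing its closing braces, so any JSON parser rejects it.
printf '%s' '{"foo":{"shards":{"shard1":{"replicas":{}' \
  | python3 -c 'import json,sys; json.load(sys.stdin)' 2>/dev/null \
  && echo valid || echo invalid   # prints: invalid
```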



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7041) Nuke defaultSearchField and solrQueryParser from schema

2015-01-27 Thread Mike Murphy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293484#comment-14293484
 ] 

Mike Murphy commented on SOLR-7041:
---

defaultSearchField is very useful; can we please keep it?

 Nuke defaultSearchField and solrQueryParser from schema
 ---

 Key: SOLR-7041
 URL: https://issues.apache.org/jira/browse/SOLR-7041
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Jan Høydahl
 Fix For: 5.0, Trunk


 The two tags {{defaultSearchField}} and {{solrQueryParser}} were deprecated 
 in Solr 3.6 (SOLR-2724). Time to nuke them from code and {{schema.xml}} in 5.0?






[jira] [Commented] (SOLR-7041) Nuke defaultSearchField and solrQueryParser from schema

2015-01-27 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293518#comment-14293518
 ] 

David Smiley commented on SOLR-7041:


Mike, just use 'df' or 'qf' as appropriate.  defaultSearchField in schema.xml 
is trappy.
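For readers following along, the request-time equivalents look like this (a hedged sketch; the collection name "foo" and the field names are made up for illustration):

```shell
# 'df' sets the default search field per request for the standard parser:
DF_QUERY='http://localhost:8983/solr/foo/select?q=lucene&df=text'
# 'qf' spreads the query across weighted fields with (e)dismax instead:
QF_QUERY='http://localhost:8983/solr/foo/select?q=lucene&defType=edismax&qf=title^2+body'
echo "$DF_QUERY"
echo "$QF_QUERY"
```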







[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_31) - Build # 4442 - Still Failing!

2015-01-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4442/
Java: 64bit/jdk1.8.0_31 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:264)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (LUCENE-4524) Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum

2015-01-27 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293555#comment-14293555
 ] 

Alan Woodward commented on LUCENE-4524:
---

In an attempt to make LUCENE-2878 a bit more manageable, I'm trying to split 
this patch back out again.  In addition to merging DocsEnum and 
DocsAndPositionsEnum, I've removed TermsEnum.docsAndPositions(), moving all the 
functionality into TermsEnum.docs().  However, I'm bumping into an API issue 
here, because our previous guarantee was that docs() would never return null, 
while docsAndPositions() returns null if the relevant postings information 
wasn't indexed.

One option would be to add a postings() method to DocsEnum which returns what 
postings details are available.  So instead of returning null, we return a 
DocsEnum that contains whatever postings the index supports, and clients then 
check postings() to see if it supports what they want to do.

 Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum
 -

 Key: LUCENE-4524
 URL: https://issues.apache.org/jira/browse/LUCENE-4524
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/codecs, core/index, core/search
Affects Versions: 4.0
Reporter: Simon Willnauer
 Fix For: 4.9, Trunk

 Attachments: LUCENE-4524.patch, LUCENE-4524.patch


 spinoff from http://www.gossamer-threads.com/lists/lucene/java-dev/172261
 {noformat}
 hey folks, 
 I have spend a hell lot of time on the positions branch to make 
 positions and offsets working on all queries if needed. The one thing 
 that bugged me the most is the distinction between DocsEnum and 
 DocsAndPositionsEnum. Really when you look at it closer DocsEnum is a 
 DocsAndFreqsEnum and if we omit Freqs we should return a DocIdSetIter. 
 Same is true for 
 DocsAndPostionsAndPayloadsAndOffsets*YourFancyFeatureHere*Enum. I 
 don't really see the benefits from this. We should rather make the 
 interface simple and call it something like PostingsEnum where you 
 have to specify flags on the TermsIterator and if we can't provide the 
 sufficient enum we throw an exception? 
 I just want to bring up the idea here since it might simplify a lot 
 for users as well for us when improving our positions / offset etc. 
 support. 
 thoughts? Ideas? 
 simon 
 {noformat}






[jira] [Commented] (SOLR-7039) First collection created with stateFormat=2 results in a weird /clusterstate.json

2015-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293529#comment-14293529
 ] 

ASF subversion and git services commented on SOLR-7039:
---

Commit 1655039 from [~noble.paul] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1655039 ]

SOLR-7039 First collection created with stateFormat=2 writes to 
clusterstate.json also







[jira] [Commented] (SOLR-4586) Eliminate the maxBooleanClauses limit

2015-01-27 Thread Mike Murphy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293532#comment-14293532
 ] 

Mike Murphy commented on SOLR-4586:
---

bq. My point is merely that it would be quite inappropriate for employees of 
competing search platforms to vote on Solr matters, even if we assume the best 
technical intentions/motivations of everyone. Surely we can agree on this 
point. 

+1
The presence of a conflict of interest is independent of the occurrence of 
impropriety.
http://en.wikipedia.org/wiki/Conflict_of_interest


 Eliminate the maxBooleanClauses limit
 -

 Key: SOLR-4586
 URL: https://issues.apache.org/jira/browse/SOLR-4586
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.2
 Environment: 4.3-SNAPSHOT 1456767M - ncindex - 2013-03-15 13:11:50
Reporter: Shawn Heisey
 Attachments: SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586_verify_maxClauses.patch


 In the #solr IRC channel, I mentioned the maxBooleanClauses limitation to 
 someone asking a question about queries.  Mark Miller told me that 
 maxBooleanClauses no longer applies, that the limitation was removed from 
 Lucene sometime in the 3.x series.  The config still shows up in the example 
 even in the just-released 4.2.
 Checking through the source code, I found that the config option is parsed 
 and the value stored in objects, but does not actually seem to be used by 
 anything.  I removed every trace of it that I could find, and all tests still 
 pass.
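The "removed every trace" step amounts to a source-wide search. A minimal sketch of that audit, demonstrated on a scratch directory rather than a real checkout (paths and file contents are illustrative):

```shell
# Create a scratch tree containing the deprecated setting, then list every
# file that still references it -- the same check one would run on a checkout.
DEMO=$(mktemp -d)
printf '<maxBooleanClauses>1024</maxBooleanClauses>\n' > "$DEMO/solrconfig.xml"
grep -rl 'maxBooleanClauses' "$DEMO"
```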






[jira] [Resolved] (SOLR-7039) First collection created with stateFormat=2 results in a weird /clusterstate.json

2015-01-27 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-7039.
--
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

 First collection created with stateFormat=2 results in a weird 
 /clusterstate.json
 -

 Key: SOLR-7039
 URL: https://issues.apache.org/jira/browse/SOLR-7039
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7039.patch








[jira] [Updated] (LUCENE-4524) Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum

2015-01-27 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-4524:
--
Attachment: LUCENE-4524.patch

Here's what I've got so far.  Warning: tests fail, due to some things returning 
null when they're not expected to.

 Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum
 -

 Key: LUCENE-4524
 URL: https://issues.apache.org/jira/browse/LUCENE-4524
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/codecs, core/index, core/search
Affects Versions: 4.0
Reporter: Simon Willnauer
 Fix For: 4.9, Trunk

 Attachments: LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch








[jira] [Updated] (SOLR-7045) Add german Solr book to the book list

2015-01-27 Thread Markus Klose (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Klose updated SOLR-7045:
---
Attachment: eias.jpg
eias.txt

 Add german Solr book to the book list
 -

 Key: SOLR-7045
 URL: https://issues.apache.org/jira/browse/SOLR-7045
 Project: Solr
  Issue Type: Wish
Reporter: Markus Klose
Priority: Trivial
 Attachments: eias.jpg, eias.txt


 Providing the image (eias.jpg) and the formatted text (eias.txt).






[JENKINS] Lucene-Solr-SmokeRelease-5.0 - Build # 18 - Failure

2015-01-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.0/18/

No tests ran.

Build Log:
[...truncated 51656 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.0/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.0/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.0/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL 
file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.0/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (14.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.0.0-src.tgz...
   [smoker] 27.9 MB in 0.04 sec (693.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.tgz...
   [smoker] 64.0 MB in 0.09 sec (726.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.zip...
   [smoker] 73.6 MB in 0.10 sec (711.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5645 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5645 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
-Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 207 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (69.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.0.0-src.tgz...
   [smoker] 34.9 MB in 0.05 sec (730.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.tgz...
   [smoker] 121.4 MB in 0.19 sec (651.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.zip...
   [smoker] 127.6 MB in 0.18 sec (702.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.0/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.0/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* 
classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.0/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   starting Solr on port 8983 from 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.0/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7
   [smoker]   startup done
   [smoker] 
   [smoker] Setup new core instance directory:
   [smoker] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.0/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/server/solr/techproducts
   [smoker] 
   [smoker] Creating new core 'techproducts' using command:
   [smoker] 

[jira] [Updated] (SOLR-7034) Consider allowing any node to become leader, regardless of their last published state.

2015-01-27 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7034:
--
Fix Version/s: 5.1
   Trunk
 Assignee: Mark Miller

 Consider allowing any node to become leader, regardless of their last 
 published state.
 --

 Key: SOLR-7034
 URL: https://issues.apache.org/jira/browse/SOLR-7034
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.1


 Now that we allow a min replication param for updates, I think it's time to 
 loosen this up. Currently, you can end up in a state where no node in a shard 
 thinks it can be leader, and so you get this fast, ugly infinite loop trying 
 to pick one.
 We should let any node that is able to properly sync with the available 
 replicas become leader if that process succeeds.
 The previous strategy was to account for the case of not having enough 
 replicas after a machine loss to ensure you don't lose the data. The idea was 
 that you should stop the cluster to avoid losing data and repair and get all 
 your replicas involved in a leadership election. Instead, we should favor 
 carrying on, and those that want to ensure they don't lose data due to major 
 replica loss should use the min replication update param.






[jira] [Comment Edited] (LUCENE-2562) Make Luke a Lucene/Solr Module

2015-01-27 Thread Tomoko Uchida (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293926#comment-14293926
 ] 

Tomoko Uchida edited comment on LUCENE-2562 at 1/27/15 6:32 PM:


Patch updated. I've modified Overview tab only.

Progress and Status :
- Missing values in upper panel (index info) were all filled. 
- Fields table is now sortable by field name and term counts.

Pending tasks to be done:
- Decoders (last pending task for Overview tab)

I'm working on decoders. They might need some sort of pluggable design (I 
believe Solr's decoders should be pluggable rather than a built-in feature). 
Suggestions / ideas welcome.







 Make Luke a Lucene/Solr Module
 --

 Key: LUCENE-2562
 URL: https://issues.apache.org/jira/browse/LUCENE-2562
 Project: Lucene - Core
  Issue Type: Task
Reporter: Mark Miller
  Labels: gsoc2014
 Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, 
 LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, 
 Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke1.jpg, luke2.jpg, 
 luke3.jpg


 see
 RE: Luke - in need of maintainer: 
 http://markmail.org/message/m4gsto7giltvrpuf
 Web-based Luke: http://markmail.org/message/4xwps7p7ifltme5q
 I think it would be great if there was a version of Luke that always worked 
 with trunk - and it would also be great if it was easier to match Luke jars 
 with Lucene versions.
 While I'd like to get GWT Luke into the mix as well, I think the easiest 
 starting point is to straight port Luke to another UI toolkit before 
 abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
 I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
 haven't/don't have a lot of time for this at the moment, but I've plugged 
 away here and there over the past week or two. There is still a *lot* to do.






[jira] [Created] (SOLR-7045) Add german Solr book to the book list

2015-01-27 Thread Markus Klose (JIRA)
Markus Klose created SOLR-7045:
--

 Summary: Add german Solr book to the book list
 Key: SOLR-7045
 URL: https://issues.apache.org/jira/browse/SOLR-7045
 Project: Solr
  Issue Type: Wish
Reporter: Markus Klose
Priority: Trivial


Providing the image (eias.jpg) and the formatted text (eias.txt).






[jira] [Commented] (SOLR-7034) Consider allowing any node to become leader, regardless of their last published state.

2015-01-27 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294015#comment-14294015
 ] 

Mark Miller commented on SOLR-7034:
---

A good first step might be: if all replicas in a shard participate in a leader 
sync, don't consult the last published state. This would at least deal with cases 
where replicas 'blink' at the same time (gc, network interrupt, etc.) but 
everyone gets back together and is ready to move on.

 Consider allowing any node to become leader, regardless of their last 
 published state.
 --

 Key: SOLR-7034
 URL: https://issues.apache.org/jira/browse/SOLR-7034
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.1


 Now that we allow a min replication param for updates, I think it's time to 
 loosen this up. Currently, you can end up in a state where no one in a shard 
 thinks they can be leader, and so you get this fast, ugly infinite loop trying 
 to pick the leader.
 We should let anyone that is able to properly sync with the available 
 replicas become leader if that process succeeds.
 The previous strategy was to account for the case of not having enough 
 replicas after a machine loss to ensure you don't lose the data. The idea was 
 that you should stop the cluster to avoid losing data and repair and get all 
 your replicas involved in a leadership election. Instead, we should favor 
 carrying on, and those that want to ensure they don't lose data due to major 
 replica loss should use the min replication update param.






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b47) - Build # 11696 - Failure!

2015-01-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11696/
Java: 64bit/jdk1.9.0-ea-b47 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:55072/blw

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:55072/blw
at 
__randomizedtesting.SeedInfo.seed([751CF87768202D0A:FD48C7ADC6DC40F2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
at 
org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:512)
at 
org.apache.solr.cloud.ShardSplitTest.incompleteOrOverlappingCustomRangeTest(ShardSplitTest.java:117)
at org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-7033) RecoveryStrategy should not publish any state when closed / cancelled.

2015-01-27 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7033:
--
 Priority: Critical  (was: Major)
Fix Version/s: 5.1
   Trunk

 RecoveryStrategy should not publish any state when closed / cancelled.
 --

 Key: SOLR-7033
 URL: https://issues.apache.org/jira/browse/SOLR-7033
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
 Fix For: Trunk, 5.1

 Attachments: SOLR-7033.patch


 Currently, when closed / cancelled, RecoveryStrategy can publish a recovery 
 failed state. In a bad loop (like when no one can become leader because no 
 one had a last state of active) this can cause very fast looped publishing of 
 this state to zk.
 It's an outstanding item to improve that specific scenario anyway, but 
 regardless, we should fix the close / cancel path to never publish any state 
 to zk.
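
A minimal sketch of the guard the issue calls for (illustrative names only, not 
the actual RecoveryStrategy code): once close() is called, any in-flight publish 
becomes a no-op instead of writing a failed state to ZooKeeper.

```java
// Illustrative sketch, not Solr's RecoveryStrategy. The proposed fix:
// a volatile closed flag checked before every state publish, so a
// cancelled recovery can never push e.g. "recovery_failed" to ZK.
public class RecoverySketch {
    private volatile boolean closed = false;

    public void close() {
        closed = true;
    }

    /** Returns true if the state was published, false if suppressed. */
    public boolean publishState(String state) {
        if (closed) {
            // Closed / cancelled: swallow the publish instead of
            // looping fast publishes of a failed state to ZooKeeper.
            return false;
        }
        System.out.println("publishing state: " + state);
        return true;
    }

    public static void main(String[] args) {
        RecoverySketch r = new RecoverySketch();
        System.out.println(r.publishState("recovering"));      // true
        r.close();
        System.out.println(r.publishState("recovery_failed")); // false
    }
}
```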






[jira] [Commented] (SOLR-6758) Solr node to node communication errors w/ SSL + client auth

2015-01-27 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294054#comment-14294054
 ] 

Steve Rowe commented on SOLR-6758:
--

The process to set this up has been simplified in Solr 5 - see the Solr 
Reference Guide page [Enabling 
SSL|https://cwiki.apache.org/confluence/display/solr/Enabling+SSL] for a 
description.  While Solr 5 hasn't been released yet, you can test with the 
latest release candidate - see [this 
thread|http://markmail.org/message/gejj5647aasldoun] for download links.  
Here's a direct link to the first release candidate (and most recent as of this 
writing - there will be at least one more RC before the final release): 
http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC1-rev1654615/solr/solr-5.0.0.tgz

 Solr node to node communication errors w/ SSL + client auth
 ---

 Key: SOLR-6758
 URL: https://issues.apache.org/jira/browse/SOLR-6758
 Project: Solr
  Issue Type: Bug
Reporter: liuqibj

 Enabled SSL with client auth (JSSE) on two Solr servers, then changed solr.xml 
 to use port 8443 and changed ZooKeeper to use https instead of http. 
 Then I started the two Solr servers and found these errors. Any suggestions?
 ERROR - 2014-11-19 12:55:31.125; org.apache.solr.common.SolrException; 
 null:org.apache.solr.common.SolrException: 
 org.apache.solr.client.solrj.SolrServerException: IOException occured when 
 talking to server at: https://9.12.11.9:8443/solr/collection2_shard1_replica2
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:308)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
   at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
   at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
   at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
   at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
   at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
   at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
   at 
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
   at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
   at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
   at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
   at 
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
   at 
 org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1156)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:626)
   at 
 org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
   at java.lang.Thread.run(Thread.java:804)
 Caused by: org.apache.solr.client.solrj.SolrServerException: IOException 
 occured when talking to server at: 
 https://9.12.11.9::8443/solr/collection2_shard1_replica2
   at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:507)
   at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:199)
   at 
 org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:156)
   at 
 org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:118)
   at java.util.concurrent.FutureTask.run(FutureTask.java:273)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:482)
   at java.util.concurrent.FutureTask.run(FutureTask.java:273)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1156)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:626)
   ... 1 more
 Caused by: javax.net.ssl.SSLHandshakeException: com.ibm.jsse2.util.j: PKIX 
 path building failed: java.security.cert.CertPathBuilderException: 
 PKIXCertPathBuilderImpl could not build a valid CertPath.; internal cause is: 
   java.security.cert.CertPathValidatorException: The certificate issued 
 by CN=testing, L=lt, 

[jira] [Commented] (SOLR-7045) Add german Solr book to the book list

2015-01-27 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294057#comment-14294057
 ] 

Erick Erickson commented on SOLR-7045:
--

Markus:

The Solr Wiki is freely editable... after you create a logon. So you can add 
the book to: http://wiki.apache.org/solr/SolrResources. Just create a logon and 
ping the user's list to be added to authors. We had a problem with spam at one 
point, so we had to add this extra step...

As far as the Apache website is concerned, I'm not quite sure, anyone?

 Add german Solr book to the book list
 -

 Key: SOLR-7045
 URL: https://issues.apache.org/jira/browse/SOLR-7045
 Project: Solr
  Issue Type: Wish
Reporter: Markus Klose
Priority: Trivial
 Attachments: eias.jpg, eias.txt


 Providing the image (eias.jpg) and the formatted text (eias.txt).






[jira] [Commented] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-01-27 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294093#comment-14294093
 ] 

Karl Wright commented on LUCENE-6196:
-

I'm completely out of time to work on this this week, and probably next week as 
well.  Where it is left:
- I have a correct formula for min and max latitude for plane/sphere intersect
- I have what appears to be a workable approach for min/max longitude for 
plane/sphere intersect, but there are at least a half dozen corner cases, and I 
have yet to come up with a formula that checks out.

Picking this up again sometime mid February...

 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: ShapeImpl.java, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, and for limiting the results of 
 those queries to those within the exact shape in highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 the computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.
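
To make "only multiplications and additions" concrete, here is a minimal sketch 
(not the geo3d API; all names are invented) of a plane-sidedness membership test 
for points on the unit sphere:

```java
// Illustrative only -- not the geo3d API. Once a bounding plane
// A*x + B*y + C*z + D = 0 is built, membership reduces to evaluating
// the left-hand side and checking its sign: multiplies and adds only.
public class PlaneSideSketch {
    static double evaluate(double a, double b, double c, double d,
                           double x, double y, double z) {
        return a * x + b * y + c * z + d;
    }

    // Convert (lat, lon) in radians to a point on the unit sphere.
    static double[] toUnitVector(double lat, double lon) {
        return new double[] {
            Math.cos(lat) * Math.cos(lon),
            Math.cos(lat) * Math.sin(lon),
            Math.sin(lat)
        };
    }

    public static void main(String[] args) {
        // The plane z = 0 (the equator): A=0, B=0, C=1, D=0.
        double[] north = toUnitVector(Math.toRadians(45), 0);
        double[] south = toUnitVector(Math.toRadians(-45), 0);
        System.out.println(evaluate(0, 0, 1, 0, north[0], north[1], north[2]) > 0); // true
        System.out.println(evaluate(0, 0, 1, 0, south[0], south[1], south[2]) > 0); // false
    }
}
```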






[jira] [Commented] (SOLR-7012) add an ant target to package a plugin into a jar

2015-01-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294099#comment-14294099
 ] 

Noble Paul commented on SOLR-7012:
--

What would be the steps involved in writing and packaging a user's code?

Will it be easier than the patch submitted by Ishan?

 add an ant target to package a plugin into a jar
 

 Key: SOLR-7012
 URL: https://issues.apache.org/jira/browse/SOLR-7012
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7012.patch, SOLR-7012.patch, SOLR-7012.patch


 Right now it is extremely hard to create a plugin because users do not know 
 the exact dependencies and their POMs.
 We will add a target to solr/build.xml called plugin-jar.
 Invoke it as follows:
 {code}
 ant -Dplugin.package=my.package -Djar.location=/tmp/my.jar plugin-jar
 {code}






[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2015-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293946#comment-14293946
 ] 

ASF subversion and git services commented on LUCENE-2562:
-

Commit 1655103 from [~markrmil...@gmail.com]
[ https://svn.apache.org/r1655103 ]

LUCENE-2562: Add some Ivy support, support Lucene, Solr 4.10.3, Missing values 
in upper panel (index info) were all filled, Fields table are now sortable by 
field name and term counts. Tomoko Uchida.

 Make Luke a Lucene/Solr Module
 --

 Key: LUCENE-2562
 URL: https://issues.apache.org/jira/browse/LUCENE-2562
 Project: Lucene - Core
  Issue Type: Task
Reporter: Mark Miller
  Labels: gsoc2014
 Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, 
 LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, 
 Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke1.jpg, luke2.jpg, 
 luke3.jpg


 see
 RE: Luke - in need of maintainer: 
 http://markmail.org/message/m4gsto7giltvrpuf
 Web-based Luke: http://markmail.org/message/4xwps7p7ifltme5q
 I think it would be great if there was a version of Luke that always worked 
 with trunk - and it would also be great if it was easier to match Luke jars 
 with Lucene versions.
 While I'd like to get GWT Luke into the mix as well, I think the easiest 
 starting point is to straight port Luke to another UI toolkit before 
 abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
 I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
 haven't/don't have a lot of time for this at the moment, but I've plugged 
 away here and there over the past week or two. There is still a *lot* to do.






[jira] [Commented] (LUCENE-6201) MinShouldMatchSumScorer should advance less and score lazily

2015-01-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293286#comment-14293286
 ] 

Adrien Grand commented on LUCENE-6201:
--

bq. An alternative way to implement this, where there is less advance()'ing is 
to implement it in BS1 with the new range api. 

It's actually what I had started working on until I noticed this intriguing 
TODO in MinShouldMatchSumScorer that it should score lazily. :-) I actually 
think we need both approaches; I'll see if I can merge my two patches to 
get global speedups on these queries.

 MinShouldMatchSumScorer should advance less and score lazily
 

 Key: LUCENE-6201
 URL: https://issues.apache.org/jira/browse/LUCENE-6201
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6201.patch


 MinShouldMatchSumScorer currently computes the score eagerly, even on 
 documents that do not eventually match if it cannot find {{minShouldMatch}} 
 matches on the same document.






[jira] [Created] (LUCENE-6201) MinShouldMatchSumScorer should advance less and score lazily

2015-01-27 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6201:


 Summary: MinShouldMatchSumScorer should advance less and score 
lazily
 Key: LUCENE-6201
 URL: https://issues.apache.org/jira/browse/LUCENE-6201
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1


MinShouldMatchSumScorer currently computes the score eagerly, even on documents 
that do not eventually match if it cannot find {{minShouldMatch}} matches on 
the same document.






[jira] [Updated] (SOLR-6920) During replication use checksums to verify if files are the same

2015-01-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6920:

Attachment: SOLR-6920.patch

Updated patch. This handles the back-compat check correctly.
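
As a hedged illustration of the issue's core idea: two files with the same name 
and length can still differ, which a checksum comparison catches. The sketch 
below uses java.util.zip.CRC32 on byte arrays; Lucene itself stores a CRC32 in 
each file's codec footer (CodecUtil.retrieveChecksum, as seen in the trace above).

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Illustrative sketch, not the patch itself: deciding whether a master
// and slave file are "the same" by checksum instead of name + length.
public class ChecksumCompareSketch {
    static long crc(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] master = "segment data v1".getBytes(StandardCharsets.UTF_8);
        byte[] slave  = "segment data v2".getBytes(StandardCharsets.UTF_8);
        // Same length, different content: a length check would wrongly
        // skip the download; the checksum comparison catches the change.
        System.out.println(master.length == slave.length); // true
        System.out.println(crc(master) == crc(slave));     // false
    }
}
```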

While running the tests I got a failure which can be reproduced with 
{noformat}ant test  -Dtestcase=SyncSliceTest -Dtests.method=test 
-Dtests.seed=588DD6F3A8F57A44 -Dtests.slow=true -Dtests.locale=no_NO_NY 
-Dtests.timezone=America/Bahia -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8{noformat}

The exception thrown is - 
{code}
131990 T79 C17 P63349 oasc.SolrException.log ERROR 
java.lang.ArrayIndexOutOfBoundsException: -8
at 
org.apache.lucene.store.RAMInputStream.readByte(RAMInputStream.java:73)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:98)
at 
org.apache.lucene.store.MockIndexInputWrapper.readInt(MockIndexInputWrapper.java:159)
at 
org.apache.lucene.codecs.CodecUtil.validateFooter(CodecUtil.java:414)
at 
org.apache.lucene.codecs.CodecUtil.retrieveChecksum(CodecUtil.java:401)
at 
org.apache.solr.handler.ReplicationHandler.getFileList(ReplicationHandler.java:445)
at 
org.apache.solr.handler.ReplicationHandler.handleRequestBody(ReplicationHandler.java:212)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2006)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:413)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:204)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:142)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at 
org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at 
org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
{code}

On debugging I found the file that was causing it: {{_0_MockRandom_0.sd}}. This 
is a MockRandomPostingsFormat.SEED_EXT file.

Adding {noformat}@LuceneTestCase.SuppressCodecs({"MockRandom"}){noformat} to 
SyncSliceTest fixed the failure, but any other test could end up using the 
codec and failing. Any ideas on how to tackle it?

 During replication use checksums to verify if files are the same
 

 Key: SOLR-6920
 URL: https://issues.apache.org/jira/browse/SOLR-6920
 Project: Solr
 

[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2015-01-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293270#comment-14293270
 ] 

Adrien Grand commented on LUCENE-5569:
--

Actually I think this 5.0 break is one of the easiest ones to fix when 
upgrading. Although it might generate lots of compilation errors, especially if 
you use experimental APIs like oal.search.Collector, it's very easy to fix 
using e.g. sed to replace all occurrences of AtomicReader with LeafReader (which 
will also rename AtomicReaderContext to LeafReaderContext).
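
A hedged sketch of that sed-based rename (the demo file and paths here are made 
up; note that BSD/macOS sed needs `-i ''` instead of `-i`):

```shell
# Create a throwaway file that uses the old 4.x class name.
mkdir -p /tmp/rename-demo
cat > /tmp/rename-demo/MyCollector.java <<'EOF'
import org.apache.lucene.index.AtomicReaderContext;
class MyCollector {
  void setNextReader(AtomicReaderContext ctx) {}
}
EOF
# Rewrite every occurrence in place; AtomicReaderContext becomes
# LeafReaderContext as a side effect of the prefix rename.
sed -i 's/AtomicReader/LeafReader/g' /tmp/rename-demo/MyCollector.java
grep -c LeafReaderContext /tmp/rename-demo/MyCollector.java
```

In a real upgrade you would run the sed over every source file, e.g. via 
`grep -rl AtomicReader src/ | xargs sed -i 's/AtomicReader/LeafReader/g'`.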

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Ryan Ernst
Priority: Blocker
 Fix For: 5.0

 Attachments: LUCENE-5569.patch, LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?






[jira] [Commented] (LUCENE-6201) MinShouldMatchSumScorer should advance less and score lazily

2015-01-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293274#comment-14293274
 ] 

Robert Muir commented on LUCENE-6201:
-

One concern I have is that this seems to make the fast queries faster and the 
slow ones slower. The bottleneck for many apps will be the behavior on 
high-frequency queries (e.g. HighMinShouldMatch4).

An alternative way to implement this, where there is less advance()'ing is to 
implement it in BS1 with the new range api. 

 MinShouldMatchSumScorer should advance less and score lazily
 

 Key: LUCENE-6201
 URL: https://issues.apache.org/jira/browse/LUCENE-6201
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6201.patch


 MinShouldMatchSumScorer currently computes the score eagerly, even on 
 documents that do not eventually match if it cannot find {{minShouldMatch}} 
 matches on the same document.






[jira] [Created] (SOLR-7041) Nuke defaultSearchField and solrQueryParser from schema

2015-01-27 Thread JIRA
Jan Høydahl created SOLR-7041:
-

 Summary: Nuke defaultSearchField and solrQueryParser from schema
 Key: SOLR-7041
 URL: https://issues.apache.org/jira/browse/SOLR-7041
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Jan Høydahl
 Fix For: 5.0, Trunk


The two tags {{defaultSearchField}} and {{solrQueryParser}} were deprecated 
in Solr 3.6 (SOLR-2724). Time to nuke them from the code and {{schema.xml}} in 5.0?






[jira] [Updated] (SOLR-7012) add an ant target to package a plugin into a jar

2015-01-27 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7012:
---
Attachment: SOLR-7012.patch

Added a patch which creates a plugin jar for a user, provided his plugin code 
resides in the solr/core/src/java folder.

Example:
{code}
ant -Dplugin.package=com.my.plugin -Djar.location=abcd.jar plugin-jar
{code}

 add an ant target to package a plugin into a jar
 

 Key: SOLR-7012
 URL: https://issues.apache.org/jira/browse/SOLR-7012
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7012.patch


 Right now it is extremely hard to create a plugin because users do not know 
 the exact dependencies and their POMs.
 We will add a target to solr/build.xml called plugin-jar.
 Invoke it as follows:
 {code}
 ant -Dplugin.package=my.package -Djar.location=/tmp/my.jar plugin-jar
 {code}






[jira] [Updated] (LUCENE-6201) MinShouldMatchSumScorer should advance less and score lazily

2015-01-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6201:
-
Attachment: LUCENE-6201.patch

I have been working on an alternative implementation that tries to generalize 
how our disjunction and conjunction scorers work. It keeps track of scorers in 
3 different data structures:
 - a linked list of scorers that are positioned on the next potential match, 
called 'lead' since they are used to lead the iteration
 - a heap of scorers that are beyond the next potential match, called 'head', 
ordered by doc ID (like DisjunctionScorer)
 - a heap of scorers that are behind the next potential match, called 'tail', 
ordered by cost (like ConjunctionScorer, although the ConjunctionScorer case is 
simpler: since its set of scorers does not change, it can just use a sorted 
array). This heap has a size of at most {{minShouldMatch - 1}}, which 
guarantees that its scorers can't form a match on their own (since we need at 
least {{minShouldMatch}} matching clauses).

When you want to move to the next document, you first move scorers from 'lead' 
to 'tail'. If 'tail' overflows, you advance its least-costly scorers and move 
them to 'head'. The next potential match is then the doc ID at the top of 
'head', and we pop from 'head' all scorers that are on this doc ID. Finally, we 
advance the least-costly scorer from 'tail' until there is a match.
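
A deliberately naive counting baseline helps make the invariant concrete: any 
combination of the 'tail' scorers alone (at most {{minShouldMatch - 1}} of them) 
can never produce a match. The sketch below is illustrative only — it counts 
clause hits eagerly over materialized postings, exactly the eager work the 
lead/head/tail design avoids; class and method names are invented.

```java
import java.util.*;

// Naive baseline for a minShouldMatch disjunction: walk all postings
// and count how many clauses match each candidate doc. The patched
// scorer avoids this by parking up to minShouldMatch-1 cheap clauses
// in a 'tail' heap and only advancing them when a match is possible.
public class MinShouldMatchSketch {
    static List<Integer> matches(int[][] postings, int minShouldMatch) {
        // Union of all doc IDs (sorted), then count clause hits per doc.
        Map<Integer, Integer> counts = new TreeMap<>();
        for (int[] clause : postings) {
            for (int doc : clause) {
                counts.merge(doc, 1, Integer::sum);
            }
        }
        List<Integer> result = new ArrayList<>();
        for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
            if (e.getValue() >= minShouldMatch) {
                result.add(e.getKey());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        int[][] postings = {
            {1, 3, 5, 7},
            {3, 5, 8},
            {5, 7, 8},
        };
        // Docs matching at least 2 of the 3 clauses: 3, 5, 7, 8.
        System.out.println(matches(postings, 2));
    }
}
```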

I ran benchmarks with the current implementation using the tasks file from 
LUCENE-4571. Some queries are slower, other queries are faster:

{noformat}
                Task    QPS baseline  StdDev   QPS patch  StdDev   Pct diff
 Low3MinShouldMatch2   41.11  (7.3%)   34.29  (3.6%)  -16.6% ( -25% -   -6%)
 Low2MinShouldMatch2   24.18  (7.4%)   21.28  (2.8%)  -12.0% ( -20% -   -1%)
 Low1MinShouldMatch2   17.92  (7.1%)   16.26  (3.1%)   -9.3% ( -18% -    0%)
 HighMinShouldMatch4   23.14  (6.3%)   21.13  (3.4%)   -8.7% ( -17% -    1%)
 HighMinShouldMatch3   17.01  (6.9%)   15.73  (2.9%)   -7.5% ( -16% -    2%)
 Low1MinShouldMatch3   23.20  (6.9%)   21.49  (3.1%)   -7.4% ( -16% -    2%)
 HighMinShouldMatch2   14.48  (6.9%)   13.63  (3.4%)   -5.8% ( -15% -    4%)
 Low4MinShouldMatch2  327.94  (2.6%)  312.58  (2.4%)   -4.7% (  -9% -    0%)
 Low2MinShouldMatch3   39.53  (7.1%)   38.77  (3.2%)   -1.9% ( -11% -    9%)
 Low4MinShouldMatch0   73.34  (3.3%)   72.17  (2.1%)   -1.6% (  -6% -    3%)
 Low2MinShouldMatch0   36.11  (2.1%)   35.62  (1.5%)   -1.4% (  -4% -    2%)
 Low1MinShouldMatch4   41.57  (6.1%)   41.01  (3.3%)   -1.4% ( -10% -    8%)
 Low3MinShouldMatch0   48.49  (2.1%)   47.90  (1.7%)   -1.2% (  -4% -    2%)
 Low3MinShouldMatch3  311.34  (8.0%)  309.54  (2.4%)   -0.6% ( -10% -   10%)
 Low1MinShouldMatch0   30.28  (2.0%)   30.14  (1.1%)   -0.5% (  -3% -    2%)
 HighMinShouldMatch0   26.09  (1.7%)   25.99  (1.1%)   -0.4% (  -3% -    2%)
            PKLookup  322.05  (3.3%)  323.17  (3.3%)    0.3% (  -6% -    7%)
 Low2MinShouldMatch4  362.28  (5.7%)  366.96  (3.0%)    1.3% (  -6% -   10%)
 Low4MinShouldMatch4 1380.17  (6.7%) 1541.42 (11.0%)   11.7% (  -5% -   31%)
 Low3MinShouldMatch4 1299.86  (6.4%) 1506.99  (4.7%)   15.9% (   4% -   28%)
 Low4MinShouldMatch3 1060.15  (6.0%) 1233.64  (3.7%)   16.4% (   6% -   27%)
{noformat}

This implementation is very careful about not advancing more than needed, which 
is sometimes not the right trade-off for term queries since they are so fast. I 
tried to measure how many times nextDoc and advance are called for the two 
extreme queries from this benchmark: Low3MinShouldMatch2 (-16.6%) and 
Low4MinShouldMatch3 (+16.4%).

|| Low3MinShouldMatch2 || trunk || patch || diff ||
| nextDoc | 3317417 | 2385754 | -28% |
| advance | 2565471 | 3196711 | +25% |
| total | 5882888 | 5582465 | -5% |

|| Low4MinShouldMatch3 || trunk || patch || diff ||
| nextDoc | 86588 | 320 | -99% |
| advance | 20644 | 74305 | +260% |
| total | 107232 | 74625 | -30% |

Overall this new implementation seems to especially help on queries that have 
low-frequency clauses and high values of minShouldMatch, where its logic helps 
save calls to nextDoc/advance. When it does not save many nextDoc/advance 
calls, as in Low3MinShouldMatch2, its constant overhead makes it slower.

The other interesting part is that it scores lazily, so I hacked luceneutil to 
wrap the parsed boolean queries into constant-score queries, and this time the 
difference is even better since the current 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2551 - Still Failing

2015-01-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2551/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:42903/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:42903/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([AE2E4FAF117A8B64:267A7075BF86E69C]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:468)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:189)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-27 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293247#comment-14293247
 ] 

Emmanuel Lecharny commented on SOLR-6915:
-

I would suggest you switch to ApacheDS M19. M15 is quite ancient, and depends 
on LDAP API 1.0.0-M20, which is 9 versions behind already.

Although the GeneralizedTimeSyntaxChecker has not changed for years... FTR, the 
date 270126230030Z is perfectly valid, and I don't see how it could possibly 
fail. Here is the code:

http://svn.apache.org/viewvc/directory/shared/trunk/ldap/model/src/main/java/org/apache/directory/api/ldap/model/schema/syntaxCheckers/GeneralizedTimeSyntaxChecker.java?revision=1002871&view=markup

 SaslZkACLProvider and Kerberos Test Using MiniKdc
 -

 Key: SOLR-6915
 URL: https://issues.apache.org/jira/browse/SOLR-6915
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 5.0, Trunk

 Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
 tests-failures.txt


 We should provide a ZkACLProvider that requires SASL authentication.  This 
 provider will be useful for administration in a kerberos environment.   In 
 such an environment, the administrator wants solr to authenticate to 
 zookeeper using SASL, since this is the only way to authenticate with 
 zookeeper via kerberos.
 The authorization model in such a setup can vary, e.g. you can imagine a 
 scenario where solr owns (is the only writer of) the non-config znodes, but 
 some set of trusted users are allowed to modify the configs.  It's hard to 
 predict all the possibilities here, but one model that seems generally useful 
 is one where solr itself owns all the znodes and all actions that require 
 changing the znodes are routed to Solr APIs.  That seems simple and 
 reasonable as a first version.
 As for testing, I noticed while working on SOLR-6625 that we don't really 
 have any infrastructure for testing kerberos integration in unit tests.  
 Internally, I've been testing using kerberos-enabled VM clusters, but this 
 isn't great since we won't notice any breakages until someone actually spins 
 up a VM.  So part of this JIRA is to provide some infrastructure for testing 
 kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2554 - Still Failing

2015-01-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2554/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([5CC92AAC31978B6:8D98AD706DE5154E]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:468)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:189)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Updated] (SOLR-7046) NullPointerException when group.function uses query() function

2015-01-27 Thread Jim Musil (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Musil updated SOLR-7046:

Description: 
When you attempt to group by function using the query() function, it results in 
a NullPointerException.

Using the example webapp loaded with the hd.xml file in exampledocs, you can 
recreate it by issuing the following query:

http://localhost:8983/solr/select/?q=*:*&group=true&group.func=ceil(query({!type=edismax%20v=$q}))

This appears to be due to a small bug in the following file:

{code:title=Grouping.java}
protected void prepare() throws IOException {
  Map context = ValueSource.newContext(searcher);
  groupBy.createWeight(context, searcher);
  actualGroupsToFind = getMax(offset, numGroups, maxDoc);
}
{code}

The instance variable context is always null because the {{Map}} qualifier 
makes this a local declaration that shadows the field, and it is the field that 
gets passed on to another function later.

The patch simply removes the {{Map}} qualifier from the instantiation so that 
the field is assigned.
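For readers unfamiliar with the failure mode, the reason the field stays null is Java variable shadowing. This minimal, hypothetical reduction (names simplified; this is not the actual Grouping.java) shows how keeping vs. dropping the type qualifier changes which variable gets assigned.

```java
import java.util.HashMap;
import java.util.Map;

// Writing "Map context = ..." inside a method declares a NEW local variable
// that shadows the field of the same name, so the field stays null for any
// later code that reads it.
public class ShadowingDemo {
  private Map<String, Object> context; // field read later by other methods

  public void prepareBuggy() {
    Map<String, Object> context = new HashMap<>(); // local declaration: field untouched
    context.put("searcher", "dummy");
  }

  public void prepareFixed() {
    context = new HashMap<>(); // no type qualifier: assigns the field
    context.put("searcher", "dummy");
  }

  public boolean fieldIsNull() {
    return context == null;
  }

  public static void main(String[] args) {
    ShadowingDemo buggy = new ShadowingDemo();
    buggy.prepareBuggy();
    System.out.println(buggy.fieldIsNull()); // true: later code would NPE on the field

    ShadowingDemo fixed = new ShadowingDemo();
    fixed.prepareFixed();
    System.out.println(fixed.fieldIsNull()); // false
  }
}
```

Dropping the type qualifier turns the declaration into a plain assignment to the field, which is the essence of the fix described above.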


  was:
When you attempt to group by function using the query() function, it results in 
a NullPointerException.

Using the example webapp loaded with the hd.xml file in exampledocs, you can 
recreate it by issuing the following query:

http://localhost:8983/solr/select/?q=*:*&group=true&group.func=ceil(query({!type=edismax%20v=$q}))

This appears to be due to a small bug in the following file:

{code:title=Grouping.java}
protected void prepare() throws IOException {
  Map context = ValueSource.newContext(searcher);
  groupBy.createWeight(context, searcher);
  actualGroupsToFind = getMax(offset, numGroups, maxDoc);
}
{code}

The variable context is always null because its scope is local to this 
function, but it gets passed on to another function later.

The patch simply removes the Map qualifier from the instantiation.



 NullPointerException when group.function uses query() function
 --

 Key: SOLR-7046
 URL: https://issues.apache.org/jira/browse/SOLR-7046
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.3
Reporter: Jim Musil

 When you attempt to group by function using the query() function, it results 
 in a NullPointerException.
 Using the example webapp loaded with the hd.xml file in exampledocs, you can 
 recreate it by issuing the following query:
 http://localhost:8983/solr/select/?q=*:*&group=true&group.func=ceil(query({!type=edismax%20v=$q}))
 This appears to be due to a small bug in the following file:
 {code:title=Grouping.java}
 protected void prepare() throws IOException {
   Map context = ValueSource.newContext(searcher);
   groupBy.createWeight(context, searcher);
   actualGroupsToFind = getMax(offset, numGroups, maxDoc);
 }
 {code}
 The variable context is always null because its scope is local to this 
 function, but it gets passed on to another function later.
 The patch simply removes the Map qualifier from the instantiation.






[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-27 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294226#comment-14294226
 ] 

Gregory Chanan commented on SOLR-6915:
--

[~elecharny] thanks for the suggestion. I'll look into it, but I may not be 
able to do anything because I'm relying on hadoop MiniKDC, so they would 
likely have to upgrade the dependency first.

About the date 270126230030Z, I think you are right; that comment refers to an 
error coming from bouncycastle, not from apacheds.  I believe the errors coming 
from apacheds occur only for the two locales:
th_TH_TH_#u-nu-thai
hi_IN

 SaslZkACLProvider and Kerberos Test Using MiniKdc
 -

 Key: SOLR-6915
 URL: https://issues.apache.org/jira/browse/SOLR-6915
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 5.0, Trunk

 Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
 tests-failures.txt


 We should provide a ZkACLProvider that requires SASL authentication.  This 
 provider will be useful for administration in a kerberos environment.   In 
 such an environment, the administrator wants solr to authenticate to 
 zookeeper using SASL, since this is the only way to authenticate with 
 zookeeper via kerberos.
 The authorization model in such a setup can vary, e.g. you can imagine a 
 scenario where solr owns (is the only writer of) the non-config znodes, but 
 some set of trusted users are allowed to modify the configs.  It's hard to 
 predict all the possibilities here, but one model that seems generally useful 
 is one where solr itself owns all the znodes and all actions that require 
 changing the znodes are routed to Solr APIs.  That seems simple and 
 reasonable as a first version.
 As for testing, I noticed while working on SOLR-6625 that we don't really 
 have any infrastructure for testing kerberos integration in unit tests.  
 Internally, I've been testing using kerberos-enabled VM clusters, but this 
 isn't great since we won't notice any breakages until someone actually spins 
 up a VM.  So part of this JIRA is to provide some infrastructure for testing 
 kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).






[jira] [Updated] (SOLR-7046) NullPointerException when group.function uses query() function

2015-01-27 Thread Jim Musil (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Musil updated SOLR-7046:

Attachment: SOLR-7046.patch

 NullPointerException when group.function uses query() function
 --

 Key: SOLR-7046
 URL: https://issues.apache.org/jira/browse/SOLR-7046
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.3
Reporter: Jim Musil
 Attachments: SOLR-7046.patch


 When you attempt to group by function using the query() function, it results 
 in a NullPointerException.
 Using the example webapp loaded with the hd.xml file in exampledocs, you can 
 recreate it by issuing the following query:
 http://localhost:8983/solr/select/?q=*:*&group=true&group.func=ceil(query({!type=edismax%20v=$q}))
 This appears to be due to a small bug in the following file:
 {code:title=Grouping.java}
 protected void prepare() throws IOException {
   Map context = ValueSource.newContext(searcher);
   groupBy.createWeight(context, searcher);
   actualGroupsToFind = getMax(offset, numGroups, maxDoc);
 }
 {code}
 The variable context is always null because its scope is local to this 
 function, but it gets passed on to another function later.
 The patch simply removes the Map qualifier from the instantiation.






[jira] [Commented] (SOLR-7045) Add german Solr book to the book list

2015-01-27 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294248#comment-14294248
 ] 

Ahmet Arslan commented on SOLR-7045:


Please see the recently added book

 Add german Solr book to the book list
 -

 Key: SOLR-7045
 URL: https://issues.apache.org/jira/browse/SOLR-7045
 Project: Solr
  Issue Type: Wish
Reporter: Markus Klose
Assignee: Erik Hatcher
Priority: Trivial
 Attachments: eias.jpg, eias.txt


 Providing the image (eias.jpg) and the formatted text (eias.txt)






[jira] [Commented] (SOLR-6954) Considering changing SolrClient#shutdown to SolrClient#close.

2015-01-27 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294197#comment-14294197
 ] 

Alan Woodward commented on SOLR-6954:
-

I plan on committing this tomorrow.

 Considering changing SolrClient#shutdown to SolrClient#close.
 -

 Key: SOLR-6954
 URL: https://issues.apache.org/jira/browse/SOLR-6954
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6954.patch, SOLR-6954.patch


 SolrClient#shutdown is not as odd as SolrServer#shutdown, but since we want 
 users to release these objects, close is more standard, and if we implement 
 Closeable, tools can help point out leaks.






[jira] [Created] (SOLR-7046) NullPointerException when group.function uses query() function

2015-01-27 Thread Jim Musil (JIRA)
Jim Musil created SOLR-7046:
---

 Summary: NullPointerException when group.function uses query() 
function
 Key: SOLR-7046
 URL: https://issues.apache.org/jira/browse/SOLR-7046
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.3
Reporter: Jim Musil


When you attempt to group by function using the query() function, it results in 
a NullPointerException.

Using the example webapp loaded with the hd.xml file in exampledocs, you can 
recreate it by issuing the following query:

http://localhost:8983/solr/select/?q=*:*&group=true&group.func=ceil(query({!type=edismax%20v=$q}))

This appears to be due to a small bug in the following file:

{code:title=Grouping.java}
protected void prepare() throws IOException {
  Map context = ValueSource.newContext(searcher);
  groupBy.createWeight(context, searcher);
  actualGroupsToFind = getMax(offset, numGroups, maxDoc);
}
{code}

The variable context is always null because its scope is local to this 
function, but it gets passed on to another function later.

The patch simply removes the Map qualifier from the instantiation.







[jira] [Assigned] (SOLR-7045) Add german Solr book to the book list

2015-01-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-7045:
--

Assignee: Erik Hatcher

 Add german Solr book to the book list
 -

 Key: SOLR-7045
 URL: https://issues.apache.org/jira/browse/SOLR-7045
 Project: Solr
  Issue Type: Wish
Reporter: Markus Klose
Assignee: Erik Hatcher
Priority: Trivial
 Attachments: eias.jpg, eias.txt


 Providing the image (eias.jpg) and the formatted text (eias.txt)






[jira] [Updated] (SOLR-5890) Delete silently fails if not sent to shard where document was added

2015-01-27 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-5890:
---
Attachment: SOLR-5890.patch

Updated the patch, now with the hash-based router also honouring the __route__ 
param.

1. The deleteById command now has a __route__ parameter. With the implicit 
router, the target shard can be specified directly. With the compositeId 
router, the route parameter is hashed to obtain the target slice (useful for 
collections that use router.field).
2. commitWithin wasn't working. Added a fix in SolrCmdDistributor.
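To make the routing behaviour concrete, here is a self-contained Java sketch. It is illustrative only, not Solr code: the hash is a plain hashCode rather than Solr's murmur-based compositeId hashing, and all names are invented. It shows why a delete sent to a non-owning shard is a silent no-op and how resolving a route key to the owning shard fixes it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only (not Solr code): a delete that is not routed to the owning
// shard removes nothing, while resolving a route key first (the idea behind
// the route parameter) forwards the delete to the right shard.
public class RoutingDemo {
  public static final int NUM_SHARDS = 2;
  public static final List<Map<String, String>> SHARDS = new ArrayList<>();
  static {
    for (int i = 0; i < NUM_SHARDS; i++) {
      SHARDS.add(new HashMap<>());
    }
  }

  // Stand-in for a router's hash function.
  public static int shardFor(String key) {
    return Math.floorMod(key.hashCode(), NUM_SHARDS);
  }

  public static void add(String id, String doc) {
    SHARDS.get(shardFor(id)).put(id, doc);
  }

  // A delete applied to the wrong shard removes nothing yet still "succeeds".
  public static boolean deleteOnShard(int shard, String id) {
    return SHARDS.get(shard).remove(id) != null;
  }

  // With a route key, the delete is forwarded to the shard that owns the doc.
  public static boolean deleteWithRoute(String id, String route) {
    return deleteOnShard(shardFor(route), id);
  }

  public static void main(String[] args) {
    add("doc1", "hello");
    int owner = shardFor("doc1");
    int other = (owner + 1) % NUM_SHARDS;

    System.out.println(deleteOnShard(other, "doc1"));    // false: silent no-op
    System.out.println(deleteWithRoute("doc1", "doc1")); // true: routed to owner
  }
}
```

The false-then-success pattern mirrors the bug report below: the misrouted delete returns a normal status while leaving the document in place.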

 Delete silently fails if not sent to shard where document was added
 ---

 Key: SOLR-5890
 URL: https://issues.apache.org/jira/browse/SOLR-5890
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.7
 Environment: Debian 7.4.
Reporter: Peter Inglesby
Assignee: Noble Paul
  Labels: difficulty-medium, impact-medium, workaround-exists
 Fix For: 5.0, Trunk

 Attachments: 5890_tests.patch, SOLR-5890-without-broadcast.patch, 
 SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, 
 SOLR-5980.patch


 We have SolrCloud set up with two shards, each with a leader and a replica.  
 We use haproxy to distribute requests between the four nodes.
 Regardless of which node we send an add request to, following a commit, the 
 newly-added document is returned in a search, as expected.
 However, we can only delete a document if the delete request is sent to a 
 node in the shard where the document was added.  If we send the delete 
 request to a node in the other shard (and then send a commit) the document is 
 not deleted.  Such a delete request will get a 200 response, with the 
 following body:
   {'responseHeader'={'status'=0,'QTime'=7}}
 Apart from the very low QTime, this is indistinguishable from a 
 successful delete.






[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294369#comment-14294369
 ] 

ASF subversion and git services commented on SOLR-6915:
---

Commit 1655188 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1655188 ]

SOLR-6915: Avoid broken Locales and skip IBM J9

 SaslZkACLProvider and Kerberos Test Using MiniKdc
 -

 Key: SOLR-6915
 URL: https://issues.apache.org/jira/browse/SOLR-6915
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 5.0, Trunk

 Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
 tests-failures.txt


 We should provide a ZkACLProvider that requires SASL authentication.  This 
 provider will be useful for administration in a kerberos environment.   In 
 such an environment, the administrator wants solr to authenticate to 
 zookeeper using SASL, since this is the only way to authenticate with 
 zookeeper via kerberos.
 The authorization model in such a setup can vary, e.g. you can imagine a 
 scenario where solr owns (is the only writer of) the non-config znodes, but 
 some set of trusted users are allowed to modify the configs.  It's hard to 
 predict all the possibilities here, but one model that seems generally useful 
 is one where solr itself owns all the znodes and all actions that require 
 changing the znodes are routed to Solr APIs.  That seems simple and 
 reasonable as a first version.
 As for testing, I noticed while working on SOLR-6625 that we don't really 
 have any infrastructure for testing kerberos integration in unit tests.  
 Internally, I've been testing using kerberos-enabled VM clusters, but this 
 isn't great since we won't notice any breakages until someone actually spins 
 up a VM.  So part of this JIRA is to provide some infrastructure for testing 
 kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).






[jira] [Created] (SOLR-7047) solr.cmd fails if Solr installation path contains parenthesis

2015-01-27 Thread JIRA
Jan Høydahl created SOLR-7047:
-

 Summary: solr.cmd fails if Solr installation path contains 
parenthesis
 Key: SOLR-7047
 URL: https://issues.apache.org/jira/browse/SOLR-7047
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
 Environment: 32 bit Windows
Reporter: Jan Høydahl
 Fix For: 5.1


Steps to reproduce
{code}
  jar xvf solr-5.0.0.zip
  rename solr-5.0.0 solr (5)
  cd solr (5)\bin
  solr.cmd start
{code}

The script fails when trying to assign an environment variable using 
{{SOLR_TIP}}, which contains parentheses.

This is more or less the same root issue as SOLR-6693, where {{SOLR_HOME}} 
contains parentheses in the case of 32 bit Windows, i.e. {{C:\Program Files 
(x86)}}.






[jira] [Resolved] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-27 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-6915.
--
   Resolution: Fixed
Fix Version/s: (was: 5.0)
   5.1

Committed a change to 5.1 and trunk that skips IBM J9 and avoids the broken 
locales.

 SaslZkACLProvider and Kerberos Test Using MiniKdc
 -

 Key: SOLR-6915
 URL: https://issues.apache.org/jira/browse/SOLR-6915
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: Trunk, 5.1

 Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
 tests-failures.txt


 We should provide a ZkACLProvider that requires SASL authentication.  This 
 provider will be useful for administration in a kerberos environment.   In 
 such an environment, the administrator wants solr to authenticate to 
 zookeeper using SASL, since this is the only way to authenticate with 
 zookeeper via kerberos.
 The authorization model in such a setup can vary, e.g. you can imagine a 
 scenario where solr owns (is the only writer of) the non-config znodes, but 
 some set of trusted users are allowed to modify the configs.  It's hard to 
 predict all the possibilities here, but one model that seems generally useful 
 is to have a model where solr itself owns all the znodes and all actions that 
 require changing the znodes are routed to Solr APIs.  That seems simple and 
 reasonable as a first version.
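 The "solr owns all the znodes" model above can be sketched as a toy ACL 
 policy. To be clear, the class and method names below are illustrative, 
 not Solr's actual ZkACLProvider API or the patch attached here:

 {code}
# Toy model of a SASL-only ACL policy: every znode gets a single ACL
# granting full rights to the SASL-authenticated "solr" principal and
# nothing to anyone else.  Names are illustrative, not Solr's API.

from dataclasses import dataclass

ALL_PERMS = frozenset({"read", "write", "create", "delete", "admin"})

@dataclass(frozen=True)
class Acl:
    scheme: str      # e.g. "sasl"
    principal: str   # e.g. "solr"
    perms: frozenset

class SaslOnlyAclPolicy:
    """Solr owns all znodes; all mutations are routed through Solr APIs."""

    def __init__(self, principal: str = "solr"):
        self.principal = principal

    def acls_for(self, znode_path: str) -> list:
        # Same answer for every path in this first, simple version.
        return [Acl("sasl", self.principal, ALL_PERMS)]

policy = SaslOnlyAclPolicy()
acls = policy.acls_for("/collections/foo/state.json")
print(acls[0].scheme, acls[0].principal)  # sasl solr
 {code}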
 As for testing, I noticed while working on SOLR-6625 that we don't really 
 have any infrastructure for testing kerberos integration in unit tests.  
 Internally, I've been testing using kerberos-enabled VM clusters, but this 
 isn't great since we won't notice any breakages until someone actually spins 
 up a VM.  So part of this JIRA is to provide some infrastructure for testing 
 kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).






[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-27 Thread Emmanuel Lecharny (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294394#comment-14294394
 ] 

Emmanuel Lecharny commented on SOLR-6915:
-

Can you be a bit more explicit about what you are doing that breaks in 
ApacheDS when using the Thai locale?

 SaslZkACLProvider and Kerberos Test Using MiniKdc
 -

 Key: SOLR-6915
 URL: https://issues.apache.org/jira/browse/SOLR-6915
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: Trunk, 5.1

 Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
 tests-failures.txt








[jira] [Commented] (SOLR-7047) solr.cmd fails if Solr installation path contains parenthesis

2015-01-27 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294470#comment-14294470
 ] 

Anshum Gupta commented on SOLR-7047:


Is that still the case? I thought that the fix for path with spaces would also 
handle this. Can you confirm if this fails on trunk/5x?

 solr.cmd fails if Solr installation path contains parenthesis
 -

 Key: SOLR-7047
 URL: https://issues.apache.org/jira/browse/SOLR-7047
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
  Environment: 32-bit Windows
Reporter: Jan Høydahl
 Fix For: 5.1








[jira] [Updated] (SOLR-7048) warning:org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException,this warn's problem take place when searching and indexing

2015-01-27 Thread kelo2015 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kelo2015 updated SOLR-7048:
---
Summary: 
warning:org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException,this
 warn's problem take place when searching and indexing  (was: this problem take 
place when searching and indexing)

 warning:org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException,this
  warn's problem take place when searching and indexing
 -

 Key: SOLR-7048
 URL: https://issues.apache.org/jira/browse/SOLR-7048
 Project: Solr
  Issue Type: Bug
  Components: hdfs, SolrCloud
Affects Versions: 4.8.1
 Environment: hadoop cluster:HDP2.1,solr version:4.8.1
Reporter: kelo2015

 indexing:
 org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: 
 access control error while attempting to set up short-circuit access to 
 /user/solr/sub_2014_s08/data/index/_1k1.fdtBlock token with 
 block_token_identifier (expiryDate=1422394876052, keyId=-280715669, 
 userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
 blockId=1117922079, access modes=[READ]) is expired.
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:898)
   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1154)
   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:76)
   at 
 org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:95)
   at 
 org.apache.solr.store.hdfs.HdfsDirectory$HdfsIndexInput.readInternal(HdfsDirectory.java:212)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
   at 
 org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readIntoCacheAndResult(BlockDirectory.java:210)
   at 
 org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.fetchBlock(BlockDirectory.java:197)
   at 
 org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readInternal(BlockDirectory.java:181)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
   at 
 org.apache.lucene.store.BufferedChecksumIndexInput.readBytes(BufferedChecksumIndexInput.java:49)
   at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:101)
   at 
 org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
   at 
 org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$ChunkIterator.decompress(CompressingStoredFieldsReader.java:501)
   at 
 org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:387)
   at 
 org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:322)
   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:100)
   at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3728)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
 searching :
 org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: 
 access control error while attempting to set up short-circuit access to 
 /user/solr/data/index/_s5d_Lucene41_0.timBlock token with 
 block_token_identifier (expiryDate=1422357860644, keyId=-280715670, 
 userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
 blockId=1086137313, access modes=[READ]) is expired.
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
  

[jira] [Updated] (SOLR-7048) this problem take place when searching and indexing,

2015-01-27 Thread kelo2015 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kelo2015 updated SOLR-7048:
---
Description: 
indexing:
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: access 
control error while attempting to set up short-circuit access to 
/user/solr/sub_2014_s08/data/index/_1k1.fdtBlock token with 
block_token_identifier (expiryDate=1422394876052, keyId=-280715669, 
userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
blockId=1117922079, access modes=[READ]) is expired.
at 
org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:898)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1154)
at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:76)
at 
org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:95)
at 
org.apache.solr.store.hdfs.HdfsDirectory$HdfsIndexInput.readInternal(HdfsDirectory.java:212)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
at 
org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readIntoCacheAndResult(BlockDirectory.java:210)
at 
org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.fetchBlock(BlockDirectory.java:197)
at 
org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readInternal(BlockDirectory.java:181)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
at 
org.apache.lucene.store.BufferedChecksumIndexInput.readBytes(BufferedChecksumIndexInput.java:49)
at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:101)
at 
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$ChunkIterator.decompress(CompressingStoredFieldsReader.java:501)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:387)
at 
org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:322)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:100)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3728)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

searching :
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: access 
control error while attempting to set up short-circuit access to 
/user/solr/data/index/_s5d_Lucene41_0.timBlock token with 
block_token_identifier (expiryDate=1422357860644, keyId=-280715670, 
userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
blockId=1086137313, access modes=[READ]) is expired.
at 
org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:898)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1154)
at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:76)
at 
org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:95)
at 
org.apache.solr.store.hdfs.HdfsDirectory$HdfsIndexInput.readInternal(HdfsDirectory.java:212)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
at 

[jira] [Updated] (SOLR-5147) Support child documents in DIH

2015-01-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-5147:
---
Fix Version/s: 5.1

 Support child documents in DIH
 --

 Key: SOLR-5147
 URL: https://issues.apache.org/jira/browse/SOLR-5147
 Project: Solr
  Issue Type: Sub-task
Reporter: Vadim Kirilchuk
Assignee: Noble Paul
 Fix For: Trunk, 5.1

 Attachments: SOLR-5147-5x.patch, SOLR-5147.patch, SOLR-5147.patch, 
 dih-oome-fix.patch


 DIH should be able to index hierarchical documents, i.e. it should be able to 
 work with SolrInputDocuments#addChildDocument.
 There was patch in SOLR-3076: 
 https://issues.apache.org/jira/secure/attachment/12576960/dih-3076.patch
 But it is not uptodate and far from being complete.






[jira] [Created] (SOLR-7048) this problem take place when searching and indexing,

2015-01-27 Thread kelo2015 (JIRA)
kelo2015 created SOLR-7048:
--

 Summary: this problem take place when searching and indexing,
 Key: SOLR-7048
 URL: https://issues.apache.org/jira/browse/SOLR-7048
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Affects Versions: 4.8.1
 Environment: hadoop cluster: HDP2.1, solr version: 4.8.1
Reporter: kelo2015


indexing:
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: access 
control error while attempting to set up short-circuit access to 
/user/solr/esearchcloud/coll_crmyun_756_sub_2014/core_756_sub_2014_s08/data/index/_1k1.fdtBlock
 token with block_token_identifier (expiryDate=1422394876052, keyId=-280715669, 
userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
blockId=1117922079, access modes=[READ]) is expired.
at 
org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:898)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1154)
at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:76)
at 
org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:95)
at 
org.apache.solr.store.hdfs.HdfsDirectory$HdfsIndexInput.readInternal(HdfsDirectory.java:212)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
at 
org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readIntoCacheAndResult(BlockDirectory.java:210)
at 
org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.fetchBlock(BlockDirectory.java:197)
at 
org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readInternal(BlockDirectory.java:181)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
at 
org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
at 
org.apache.lucene.store.BufferedChecksumIndexInput.readBytes(BufferedChecksumIndexInput.java:49)
at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:101)
at 
org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$ChunkIterator.decompress(CompressingStoredFieldsReader.java:501)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:387)
at 
org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:322)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:100)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3728)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

searching :
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: access 
control error while attempting to set up short-circuit access to 
/user/solr/esearchcloud/coll_crmyun_755_sub_history/core_755_sub_history_s08/data/index/_s5d_Lucene41_0.timBlock
 token with block_token_identifier (expiryDate=1422357860644, keyId=-280715670, 
userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
blockId=1086137313, access modes=[READ]) is expired.
at 
org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:898)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1154)
at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:76)
at 
org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:95)
at 
org.apache.solr.store.hdfs.HdfsDirectory$HdfsIndexInput.readInternal(HdfsDirectory.java:212)
   

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2555 - Still Failing

2015-01-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2555/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([4A55D4392924CEFD:C201EBE387D8A305]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:468)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:189)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Updated] (SOLR-7048) this problem take place when searching and indexing

2015-01-27 Thread kelo2015 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kelo2015 updated SOLR-7048:
---
Summary: this problem take place when searching and indexing  (was: this 
problem take place when searching and indexing,)

 this problem take place when searching and indexing
 ---

 Key: SOLR-7048
 URL: https://issues.apache.org/jira/browse/SOLR-7048
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Affects Versions: 4.8.1
  Environment: hadoop cluster: HDP2.1, solr version: 4.8.1
Reporter: kelo2015


[jira] [Updated] (SOLR-7048) this problem take place when searching and indexing

2015-01-27 Thread kelo2015 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kelo2015 updated SOLR-7048:
---
Component/s: SolrCloud

 this problem take place when searching and indexing
 ---

 Key: SOLR-7048
 URL: https://issues.apache.org/jira/browse/SOLR-7048
 Project: Solr
  Issue Type: Bug
  Components: hdfs, SolrCloud
Affects Versions: 4.8.1
 Environment: hadoop cluster:HDP2.1,solr version:4.8.1
Reporter: kelo2015

 searching :
 org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: 
 access control error while attempting to set up short-circuit access to 
 /user/solr/data/index/_s5d_Lucene41_0.timBlock token with 
 block_token_identifier (expiryDate=1422357860644, keyId=-280715670, 
 userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
 blockId=1086137313, access modes=[READ]) is expired.
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:898)
   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1154)
   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:76)
   at 
 org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:95)
   at 
 

[jira] [Updated] (SOLR-7048) this problem take place when searching and indexing

2015-01-27 Thread kelo2015 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kelo2015 updated SOLR-7048:
---
Environment: hadoop cluster:HDP2.1,solr version:4.8.1  (was: hadoop 
cluster:HDP2.1,solr verstion:4.8.1)

 this problem take place when searching and indexing
 ---

 Key: SOLR-7048
 URL: https://issues.apache.org/jira/browse/SOLR-7048
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Affects Versions: 4.8.1
 Environment: hadoop cluster:HDP2.1,solr version:4.8.1
Reporter: kelo2015

 indexing:
 org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: 
 access control error while attempting to set up short-circuit access to 
 /user/solr/sub_2014_s08/data/index/_1k1.fdtBlock token with 
 block_token_identifier (expiryDate=1422394876052, keyId=-280715669, 
 userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
 blockId=1117922079, access modes=[READ]) is expired.
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:898)
   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1154)
   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:76)
   at 
 org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:95)
   at 
 org.apache.solr.store.hdfs.HdfsDirectory$HdfsIndexInput.readInternal(HdfsDirectory.java:212)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
   at 
 org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readIntoCacheAndResult(BlockDirectory.java:210)
   at 
 org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.fetchBlock(BlockDirectory.java:197)
   at 
 org.apache.solr.store.blockcache.BlockDirectory$CachedIndexInput.readInternal(BlockDirectory.java:181)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:93)
   at 
 org.apache.solr.store.blockcache.CustomBufferedIndexInput.readBytes(CustomBufferedIndexInput.java:67)
   at 
 org.apache.lucene.store.BufferedChecksumIndexInput.readBytes(BufferedChecksumIndexInput.java:49)
   at org.apache.lucene.codecs.compressing.LZ4.decompress(LZ4.java:101)
   at 
 org.apache.lucene.codecs.compressing.CompressionMode$4.decompress(CompressionMode.java:135)
   at 
 org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader$ChunkIterator.decompress(CompressingStoredFieldsReader.java:501)
   at 
 org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:387)
   at 
 org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:322)
   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:100)
   at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4132)
   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3728)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
   at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
 searching :
 org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: 
 access control error while attempting to set up short-circuit access to 
 /user/solr/data/index/_s5d_Lucene41_0.timBlock token with 
 block_token_identifier (expiryDate=1422357860644, keyId=-280715670, 
 userId=solr, blockPoolId=BP-2117321730-132.121.94.119-1395990208332, 
 blockId=1086137313, access modes=[READ]) is expired.
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newShortCircuitBlockReader(BlockReaderFactory.java:217)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:99)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1064)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:898)
   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1154)
   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:76)
   at 
 

[jira] [Assigned] (SOLR-7046) NullPointerException when group.function uses query() function

2015-01-27 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-7046:


Assignee: Erick Erickson

 NullPointerException when group.function uses query() function
 --

 Key: SOLR-7046
 URL: https://issues.apache.org/jira/browse/SOLR-7046
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.3
Reporter: Jim Musil
Assignee: Erick Erickson
 Attachments: SOLR-7046.patch


 When you attempt to group by function using the query() function, it results 
 in a NullPointerException.
 Using the example webapp loaded with the hd.xml file in exampledocs, you can 
 recreate by issuing the following query:
 http://localhost:8983/solr/select/?q=*:*&group=true&group.func=ceil(query({!type=edismax%20v=$q}))
 This appears to be due to a small bug in the following file:
 {code:title=Grouping.java}
 protected void prepare() throws IOException {
   Map context = ValueSource.newContext(searcher);
   groupBy.createWeight(context, searcher);
   actualGroupsToFind = getMax(offset, numGroups, maxDoc);
 }
 {code}
 The variable context is always null because its scope is local to this 
 function, but it gets passed on to another function later.
 The patch simply removes the Map qualifier from the instantiation, so the 
 assignment targets the member field instead of declaring a new local.
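
A toy illustration of the shadowing pattern described above (this is not the actual Solr code; the class and method names here are made up for the sketch): re-declaring the type in front of an assignment creates a fresh local variable, so the field of the same name stays null and a later consumer of the field hits a NullPointerException.

```java
import java.util.HashMap;
import java.util.Map;

public class ShadowDemo {
    // the field a later method expects to be initialized
    Map<String, Object> context;

    void prepareBuggy() {
        // the 'Map<String, Object>' qualifier declares a NEW local variable;
        // the field named 'context' is never assigned
        Map<String, Object> context = new HashMap<>();
        context.put("searcher", "dummy");
    }

    void prepareFixed() {
        // without the type qualifier, the assignment targets the field
        context = new HashMap<>();
        context.put("searcher", "dummy");
    }

    public static void main(String[] args) {
        ShadowDemo d = new ShadowDemo();
        d.prepareBuggy();
        if (d.context != null) throw new AssertionError("field should still be null");
        d.prepareFixed();
        if (d.context == null) throw new AssertionError("field should now be set");
        System.out.println("ok");
    }
}
```

The attached patch is the second variant: dropping the type keeps the statement as a plain assignment to the member field.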



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_31) - Build # 4443 - Still Failing!

2015-01-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4443/
Java: 64bit/jdk1.8.0_31 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1\conf
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process.

   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1\conf
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010\collection1
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001\tempDir-010
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 21FBB361FD8489A1-001: java.nio.file.DirectoryNotEmptyException: 

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_31) - Build # 11698 - Failure!

2015-01-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11698/
Java: 64bit/jdk1.8.0_31 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.MultiThreadedOCPTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.MultiThreadedOCPTest: 
1) Thread[id=8976, name=OverseerThreadFactory-4615-thread-5, 
state=TIMED_WAITING, group=Overseer collection creation process.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1841)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1723)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:622)
 at 
org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2880)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.MultiThreadedOCPTest: 
   1) Thread[id=8976, name=OverseerThreadFactory-4615-thread-5, 
state=TIMED_WAITING, group=Overseer collection creation process.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.waitForCoreNodeName(OverseerCollectionProcessor.java:1841)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1723)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:622)
at 
org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2880)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([F9C450863DE111ED]:0)




Build Log:
[...truncated 9930 lines...]
   [junit4] Suite: org.apache.solr.cloud.MultiThreadedOCPTest
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.MultiThreadedOCPTest
 F9C450863DE111ED-001/init-core-data-001
   [junit4]   2 1518183 T8748 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (true)
   [junit4]   2 1518184 T8748 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2 1518187 T8748 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1 client port:0.0.0.0/0.0.0.0:0
   [junit4]   2 1518188 T8749 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2 1518288 T8748 oasc.ZkTestServer.run start zk server on 
port:37606
   [junit4]   2 1518288 T8748 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 1518290 T8748 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 1518292 T8756 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@2af01b1d 
name:ZooKeeperConnection Watcher:127.0.0.1:37606 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 1518292 T8748 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 1518293 T8748 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 1518293 T8748 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2 1518297 T8748 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2 1518298 T8748 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2 1518299 T8759 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@b49fdc9 name:ZooKeeperConnection 
Watcher:127.0.0.1:37606/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2 1518299 T8748 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2 1518300 T8748 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2 1518300 T8748 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2 1518301 T8748 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2 1518302 T8748 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2 1518303 T8748 oascc.SolrZkClient.makePath makePath: 

[jira] [Commented] (LUCENE-4835) Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE

2015-01-27 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293611#comment-14293611
 ] 

Shawn Heisey commented on LUCENE-4835:
--

I think the discussion here does leave room for an increase in the default 
maxBooleanClauses value, just not to Integer.MAX_VALUE.  Rob's objections to 
that setting do have technical merit.  My initial WAG as to a new value is 
16384 ... that would satisfy the requirements of every situation I've actually 
seen when Solr users must increase the value, but it would still be low enough 
to catch seriously abnormal code/config.

I'm still pursuing SOLR-4586 to remove the limit entirely in Solr, though if we 
increase the default in Lucene, the default in Solr should also get a bump.


 Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE
 -

 Key: LUCENE-4835
 URL: https://issues.apache.org/jira/browse/LUCENE-4835
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.2
Reporter: Shawn Heisey
 Fix For: 4.9, Trunk


 Discussion on SOLR-4586 raised the idea of raising the limit on boolean 
 clauses from 1024 to Integer.MAX_VALUE.  This should be a safe change.  It 
 will change the nature of help requests from "Why can't I do 2000 clauses?" 
 to "Why is my 5000-clause query slow?"
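
A minimal sketch of the kind of guard the maxClauseCount setting implements (this is a standalone toy, not Lucene's BooleanQuery code): building a query with more clauses than the cap fails fast, and raising the cap to Integer.MAX_VALUE — the change this issue proposes — lets the same query through.

```java
import java.util.ArrayList;
import java.util.List;

public class ClauseLimitSketch {
    // Lucene's historical default clause cap
    static int maxClauseCount = 1024;

    static List<String> buildQuery(List<String> clauses) {
        // reject oversized queries up front rather than executing them
        if (clauses.size() > maxClauseCount)
            throw new IllegalStateException("too many clauses: " + clauses.size());
        return new ArrayList<>(clauses);
    }

    public static void main(String[] args) {
        List<String> clauses = new ArrayList<>();
        for (int i = 0; i < 2000; i++) clauses.add("term" + i);

        // 2000 clauses exceeds the default cap of 1024
        try {
            buildQuery(clauses);
            throw new AssertionError("2000 clauses should have been rejected");
        } catch (IllegalStateException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }

        // with the cap effectively removed, the query builds fine
        maxClauseCount = Integer.MAX_VALUE;
        System.out.println("accepted: " + buildQuery(clauses).size());
    }
}
```

The trade-off discussed in this thread is exactly the one the sketch surfaces: with the cap removed, the 2000-clause query is no longer an error, only (potentially) slow.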



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1280) Fields used update processor

2015-01-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-1280:
---
Priority: Minor  (was: Trivial)

 Fields used update processor
 

 Key: SOLR-1280
 URL: https://issues.apache.org/jira/browse/SOLR-1280
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: FieldsUsedUpdateProcessorFactory.java, 
 FieldsUsedUpdateProcessorFactory.java, SOLR-1280.patch


 When dealing with highly heterogeneous documents with different fields per 
 document, it can be very useful to know what fields are present on the result 
 documents from a search.  For example, this could be used to determine which 
 fields make the best facets for a given query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-1280) Fields used update processor

2015-01-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reopened SOLR-1280:


Re-opening this issue to add this as a general-purpose (Java-based) update 
processor.  As it stands, what is in Solr proper now is a techproducts-specific, 
commented-out snippet in update-script.js that achieves the same goal, but more 
crudely.

 Fields used update processor
 

 Key: SOLR-1280
 URL: https://issues.apache.org/jira/browse/SOLR-1280
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Trivial
 Fix For: Trunk, 5.1

 Attachments: FieldsUsedUpdateProcessorFactory.java, 
 FieldsUsedUpdateProcessorFactory.java, SOLR-1280.patch


 When dealing with highly heterogeneous documents with different fields per 
 document, it can be very useful to know what fields are present on the result 
 documents from a search.  For example, this could be used to determine which 
 fields make the best facets for a given query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1280) Fields used update processor

2015-01-27 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-1280:
---
Fix Version/s: (was: 4.0-BETA)
   5.1

 Fields used update processor
 

 Key: SOLR-1280
 URL: https://issues.apache.org/jira/browse/SOLR-1280
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Trivial
 Fix For: Trunk, 5.1

 Attachments: FieldsUsedUpdateProcessorFactory.java, 
 FieldsUsedUpdateProcessorFactory.java, SOLR-1280.patch


 When dealing with highly heterogeneous documents with different fields per 
 document, it can be very useful to know what fields are present on the result 
 documents from a search.  For example, this could be used to determine which 
 fields make the best facets for a given query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7037) bin/solr start -e techproducts -c fails to start Solr in cloud mode

2015-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293640#comment-14293640
 ] 

ASF subversion and git services commented on SOLR-7037:
---

Commit 1655059 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1655059 ]

SOLR-7037: bin/solr start -e techproducts -c fails to start Solr in cloud mode

 bin/solr start -e techproducts -c fails to start Solr in cloud mode
 ---

 Key: SOLR-7037
 URL: https://issues.apache.org/jira/browse/SOLR-7037
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Anshum Gupta
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0, Trunk


 bin/solr start -e techproducts -c should start Solr in cloud mode with the 
 techproducts example but it doesn't. Seems like it starts a standalone Solr 
 instance. We should fix that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4586) Eliminate the maxBooleanClauses limit

2015-01-27 Thread Mike Murphy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293652#comment-14293652
 ] 

Mike Murphy commented on SOLR-4586:
---

But Robert ignored a veto from Hoss Man and refused a call to revert based on 
conflict of interest.
He said "I'm not going to revert it. You just want to make Lucene harder to 
use, so more people will use apache solr instead."

https://issues.apache.org/jira/browse/LUCENE-5859?focusedCommentId=14080242page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14080242


 Eliminate the maxBooleanClauses limit
 -

 Key: SOLR-4586
 URL: https://issues.apache.org/jira/browse/SOLR-4586
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.2
 Environment: 4.3-SNAPSHOT 1456767M - ncindex - 2013-03-15 13:11:50
Reporter: Shawn Heisey
 Attachments: SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586_verify_maxClauses.patch


 In the #solr IRC channel, I mentioned the maxBooleanClauses limitation to 
 someone asking a question about queries.  Mark Miller told me that 
 maxBooleanClauses no longer applies, that the limitation was removed from 
 Lucene sometime in the 3.x series.  The config still shows up in the example 
 even in the just-released 4.2.
 Checking through the source code, I found that the config option is parsed 
 and the value stored in objects, but does not actually seem to be used by 
 anything.  I removed every trace of it that I could find, and all tests still 
 pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4835) Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE

2015-01-27 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293659#comment-14293659
 ] 

David Smiley commented on LUCENE-4835:
--

I whole-heartedly agree with Yonik's opinion.  I simultaneously had the idea of 
making the limit much lower.  How about 64?

 Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE
 -

 Key: LUCENE-4835
 URL: https://issues.apache.org/jira/browse/LUCENE-4835
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.2
Reporter: Shawn Heisey
 Fix For: 4.9, Trunk


 Discussion on SOLR-4586 raised the idea of raising the limit on boolean 
 clauses from 1024 to Integer.MAX_VALUE.  This should be a safe change.  It 
 will change the nature of help requests from "Why can't I do 2000 clauses?" 
 to "Why is my 5000-clause query slow?"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7037) bin/solr start -e techproducts -c fails to start Solr in cloud mode

2015-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293657#comment-14293657
 ] 

ASF subversion and git services commented on SOLR-7037:
---

Commit 1655061 from [~thelabdude] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1655061 ]

SOLR-7037: bin/solr start -e techproducts -c fails to start Solr in cloud mode

 bin/solr start -e techproducts -c fails to start Solr in cloud mode
 ---

 Key: SOLR-7037
 URL: https://issues.apache.org/jira/browse/SOLR-7037
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Anshum Gupta
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0, Trunk


 bin/solr start -e techproducts -c should start Solr in cloud mode with the 
 techproducts example but it doesn't. Seems like it starts a standalone Solr 
 instance. We should fix that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5743) Faceting with BlockJoin support

2015-01-27 Thread Dr Oleg Savrasov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293587#comment-14293587
 ] 

Dr Oleg Savrasov commented on SOLR-5743:


I created the files by copying and modifying existing configurations. It looks 
like my IDE processed the changes incorrectly. Sorry about that. Please find the 
updated patch attached. Should you have any issues, please let me know.

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch


 For a sample inventory (note: nested documents) like this -
 <doc>
   <field name="id">10</field>
   <field name="type_s">parent</field>
   <field name="BRAND_s">Nike</field>
   <doc>
     <field name="id">11</field>
     <field name="COLOR_s">Red</field>
     <field name="SIZE_s">XL</field>
   </doc>
   <doc>
     <field name="id">12</field>
     <field name="COLOR_s">Blue</field>
     <field name="SIZE_s">XL</field>
   </doc>
 </doc>
 Faceting results must contain - 
 Red(1)
 XL(1) 
 Blue(1) 
 for a q=* query. 
 PS : The inventory example has been taken from this blog - 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html
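
The expected counts can be sketched in plain Java (no Solr involved; the class and method names are invented for illustration): in block-join faceting, each child value is counted once per parent document, which is why XL yields 1 even though both children of parent 10 carry it.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class BlockJoinFacetSketch {
    // count distinct parents per child field value
    static Map<String, Integer> facet(Map<String, List<Map<String, String>>> parents) {
        Map<String, Set<String>> valueToParents = new HashMap<>();
        for (Map.Entry<String, List<Map<String, String>>> p : parents.entrySet()) {
            for (Map<String, String> child : p.getValue()) {
                for (String v : child.values()) {
                    valueToParents.computeIfAbsent(v, k -> new HashSet<>()).add(p.getKey());
                }
            }
        }
        Map<String, Integer> counts = new TreeMap<>();
        valueToParents.forEach((v, ps) -> counts.put(v, ps.size()));
        return counts;
    }

    public static void main(String[] args) {
        // parent 10 with two child docs, as in the inventory sample
        Map<String, List<Map<String, String>>> parents = new HashMap<>();
        parents.put("10", Arrays.asList(
                Map.of("COLOR_s", "Red", "SIZE_s", "XL"),
                Map.of("COLOR_s", "Blue", "SIZE_s", "XL")));

        Map<String, Integer> counts = facet(parents);
        // Red(1), XL(1), Blue(1) -- the counts the issue asks for
        if (!counts.equals(Map.of("Red", 1, "Blue", 1, "XL", 1)))
            throw new AssertionError(counts);
        System.out.println(counts);
    }
}
```

This is only the counting semantics; the actual patch has to compute the same thing inside Lucene's block-join machinery.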



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7016) techproducts example does not start on windows with solr.cmd -e techproducts if install dir contains whitespace

2015-01-27 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293614#comment-14293614
 ] 

Timothy Potter commented on SOLR-7016:
--

It's not a true revert because:

{code}
<sysproperty key="jetty.home" value="${server.dir}"/>
{code}

is now required for the Jetty 9 stuff on trunk.

 techproducts example does not start on windows with solr.cmd -e 
 techproducts if install dir contains whitespace
 -

 Key: SOLR-7016
 URL: https://issues.apache.org/jira/browse/SOLR-7016
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Reporter: Uwe Schindler
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7016.patch, SOLR-7016.patch


 If you try to start the techproducts example on windows with solr.cmd -e 
 techproducts it fails if install dir contains whitespace:
 {noformat}
 C:\Users\Uwe Schindler\Projects\lucene\trunk-lusolr1\solr\bin>solr.cmd -e 
 techproducts -p 8984
 A subdirectory or file named C:\Users\Uwe 
 Schindler\Projects\lucene\trunk-lusolr1\solr\example\techproducts\solr already exists.
 Backing up C:\Users\Uwe 
 Schindler\Projects\lucene\trunk-lusolr1\solr\example\techproducts\solr\..\logs\solr_gc.log
 1 file(s) moved.
 Starting Solr on port 8984 from C:\Users\Uwe 
 Schindler\Projects\lucene\trunk-lusolr1\solr\server
 Access denied
 Waiting 10 seconds. Press any key to continue... Error: Could not find or load 
 main class 
 Schindler\Projects\lucene\trunk-lusolr1\solr\example\resources\log4j.properties
0
 SLF4J: Class path contains multiple SLF4J bindings.
 SLF4J: Found binding in 
 [jar:file:/C:/Users/Uwe%20Schindler/Projects/lucene/trunk-lusolr1/solr/server/lib/ext/slf4j-log4j12-1.7.6.ja
 r!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
 [jar:file:/C:/Users/Uwe%20Schindler/Projects/lucene/trunk-lusolr1/solr/server/lib/ext/slf4j-log4j12-1.7.7.ja
 r!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
 explanation.
 SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 WARN  - 2015-01-22 12:40:23.866; org.apache.solr.util.SolrCLI; Request to 
 http://localhost:8984/solr/admin/info/system failed due to
 : Connection refused: connect, sleeping for 5 seconds before re-trying the 
 request ...
 Exception in thread main java.net.ConnectException: Connection refused: 
 connect
 at java.net.DualStackPlainSocketImpl.connect0(Native Method)
 at 
 java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
 at 
 java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
 at 
 java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 at 
 java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
 at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
 at java.net.Socket.connect(Socket.java:589)
 at 
 org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:117)
 at 
 org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:178)
 at 
 org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
 at 
 org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:214)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:160)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:136)
 at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:503)
 at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:456)
 at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:466)
 at 
 org.apache.solr.util.SolrCLI$CreateCoreTool.runTool(SolrCLI.java:1379)
 at org.apache.solr.util.SolrCLI.main(SolrCLI.java:203)
 Indexing tech product example docs from C:\Users\Uwe 
 Schindler\Projects\lucene\trunk-lusolr1\solr\example\exampledocs
 Error: Unable to access jarfile 

[jira] [Commented] (SOLR-4586) Eliminate the maxBooleanClauses limit

2015-01-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293616#comment-14293616
 ] 

Robert Muir commented on SOLR-4586:
---

My technical veto still stands as a member of the PMC.
It does not matter who I work for.

A million people can say +1, it's a conflict; that doesn't matter. Developing at 
Apache is a conflict of interest by definition.

http://www.apache.org/foundation/voting.html


 Eliminate the maxBooleanClauses limit
 -

 Key: SOLR-4586
 URL: https://issues.apache.org/jira/browse/SOLR-4586
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.2
 Environment: 4.3-SNAPSHOT 1456767M - ncindex - 2013-03-15 13:11:50
Reporter: Shawn Heisey
 Attachments: SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586_verify_maxClauses.patch


 In the #solr IRC channel, I mentioned the maxBooleanClauses limitation to 
 someone asking a question about queries.  Mark Miller told me that 
 maxBooleanClauses no longer applies, that the limitation was removed from 
 Lucene sometime in the 3.x series.  The config still shows up in the example 
 even in the just-released 4.2.
 Checking through the source code, I found that the config option is parsed 
 and the value stored in objects, but does not actually seem to be used by 
 anything.  I removed every trace of it that I could find, and all tests still 
 pass.






[jira] [Updated] (SOLR-5743) Faceting with BlockJoin support

2015-01-27 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated SOLR-5743:
---
Attachment: SOLR-5743.patch

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch


 For a sample inventory (note: nested documents) like this:
 <doc>
   <field name="id">10</field>
   <field name="type_s">parent</field>
   <field name="BRAND_s">Nike</field>
   <doc>
     <field name="id">11</field>
     <field name="COLOR_s">Red</field>
     <field name="SIZE_s">XL</field>
   </doc>
   <doc>
     <field name="id">12</field>
     <field name="COLOR_s">Blue</field>
     <field name="SIZE_s">XL</field>
   </doc>
 </doc>
 the faceting results must contain:
 Red(1)
 XL(1)
 Blue(1)
 for a q=* query.
 PS: The inventory example has been taken from this blog:
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html






[jira] [Commented] (SOLR-4586) Eliminate the maxBooleanClauses limit

2015-01-27 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293615#comment-14293615
 ] 

Shawn Heisey commented on SOLR-4586:


I believe that Rob's recent job change does present a conflict of interest when 
it comes to Solr, although he's a really intelligent person and when he's got a 
strong technical argument, I'm inclined to listen.  His concerns on LUCENE-4835 
stand, although I think the issue notes do leave room for further discussion on 
an increase in the default value, just not to Integer.MAX_VALUE, and I've 
brought it up there.

As the head of this project in Jira, my thinking is that [~ysee...@gmail.com] 
is the deciding vote.  Given that he's already removed the limit in 
heliosearch, I think I know where that vote would land.

I stand ready for the following changes, if/when we can reach consensus:

 * In Solr 5.1, default maxBooleanClauses to MAX_VALUE and ignore the config 
value.
 * In Solr 6.0, throw an error if maxBooleanClauses is found in solrconfig.xml. 
(not strongly tied to this one)

If that test still shows intermittent failures, I don't know if I can fix the 
problem, but I can definitely try.

If we don't remove the limit, the error message in Solr when the count is 
exceeded should probably say "consider using the terms query parser instead".  
I'm inclined to have it point at the wiki or reference guide as well.
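For illustration, the rewrite such an error message would steer users toward looks like this (syntax from Solr's standard terms query parser; the exact wording of any future error message is still open):

```
q=id:(1 OR 2 OR 3 OR ... OR 5000)    one boolean clause per term, hits the limit
q={!terms f=id}1,2,3,...,5000        single terms query, no clause limit
```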


 Eliminate the maxBooleanClauses limit
 -

 Key: SOLR-4586
 URL: https://issues.apache.org/jira/browse/SOLR-4586
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.2
 Environment: 4.3-SNAPSHOT 1456767M - ncindex - 2013-03-15 13:11:50
Reporter: Shawn Heisey
 Attachments: SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586_verify_maxClauses.patch


 In the #solr IRC channel, I mentioned the maxBooleanClauses limitation to 
 someone asking a question about queries.  Mark Miller told me that 
 maxBooleanClauses no longer applies, that the limitation was removed from 
 Lucene sometime in the 3.x series.  The config still shows up in the example 
 even in the just-released 4.2.
 Checking through the source code, I found that the config option is parsed 
 and the value stored in objects, but does not actually seem to be used by 
 anything.  I removed every trace of it that I could find, and all tests still 
 pass.






[jira] [Created] (SOLR-7042) Enhance bin/post's JSON handling

2015-01-27 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-7042:
--

 Summary: Enhance bin/post's JSON handling
 Key: SOLR-7042
 URL: https://issues.apache.org/jira/browse/SOLR-7042
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Erik Hatcher
Assignee: Erik Hatcher
 Fix For: 5.1


The current (5.0) version of bin/post assumes JSON (and XML) are in *Solr* 
command format, e.g. {{bin/post -c collection1 data.json}}, and that the URL to 
post to is /update.  

This issue is to improve/evolve bin/post so that it can post to /update when 
the data is in *Solr* XML or JSON format and to /update/json/docs for arbitrary 
JSON.






[jira] [Commented] (LUCENE-4835) Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE

2015-01-27 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293625#comment-14293625
 ] 

Yonik Seeley commented on LUCENE-4835:
--

If there is to be an arbitrary limit, I think it should be much lower, not 
higher.  That way people are more likely to hit it in testing rather 
than in production as their system grows.

But really, I disagree with having any arbitrary limit.  The performance curve 
as one adds terms is nice and smooth.  Adding an arbitrary limit creates a 
bug in *working* code (your system suddenly stops working when you cross a 
threshold) to try and prevent a hypothetical code bug (someone who has a bug 
in their code and accidentally keeps adding to the same BQ).

But this hypothetical code bug of continuously adding to the same BQ would 
lead to either an OOM error, an array store error, etc.: basically something 
that would be caught at test time.  And really, there are *hundreds* of places 
in code where you can accidentally continuously add to the same data 
structure: ArrayList, StringBuilder, etc.  It would be horrible to have 
arbitrary limits for all of these things.
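The "cliff" being argued against can be sketched in a few lines (a toy model, not Lucene's BooleanQuery; the names are invented for illustration): the builder works smoothly right up to a fixed threshold, then previously-working code suddenly throws.

```python
# Toy model of an arbitrary clause limit: working code breaks the moment
# the clause count crosses a fixed threshold.
MAX_CLAUSES = 1024  # Lucene's historical BooleanQuery default

class TooManyClauses(Exception):
    pass

def build_boolean_query(terms, max_clauses=MAX_CLAUSES):
    if len(terms) > max_clauses:
        raise TooManyClauses(f"{len(terms)} clauses > {max_clauses}")
    return " OR ".join(terms)

build_boolean_query([f"id:{i}" for i in range(1024)])  # fine
try:
    build_boolean_query([f"id:{i}" for i in range(1025)])  # one more term: rejected
except TooManyClauses as e:
    print("query rejected:", e)
```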

 Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE
 -

 Key: LUCENE-4835
 URL: https://issues.apache.org/jira/browse/LUCENE-4835
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.2
Reporter: Shawn Heisey
 Fix For: 4.9, Trunk


 Discussion on SOLR-4586 raised the idea of raising the limit on boolean 
 clauses from 1024 to Integer.MAX_VALUE.  This should be a safe change.  It 
 will change the nature of help requests from "Why can't I do 2000 clauses?" 
 to "Why is my 5000-clause query slow?"






[jira] [Commented] (SOLR-7042) Enhance bin/post's JSON handling

2015-01-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293626#comment-14293626
 ] 

Erik Hatcher commented on SOLR-7042:


It is possible to post to /update/json/docs currently with:
{code}
$ bin/post -url http://localhost:8983/solr/collection1/update/json/docs -c 
collection1 data.json
{code}
and then using {{-params}} to add mappings, etc.  But this should be 
streamlined, for one by not having to specify {{-url}}.

One suggestion is to reverse the assumption: assume JSON (and eventually 
XML) is in an arbitrary, non-Solr-specific format, and make it explicit when the data 
is in Solr format, such as:
{code}
$ bin/post -c collection1 data.json  # assumes arbitrary JSON, posts to 
/update/json/docs
$ bin/post -c collection1 example/films/films.json -format solr # posts to 
/update
{code}

 Enhance bin/post's JSON handling
 

 Key: SOLR-7042
 URL: https://issues.apache.org/jira/browse/SOLR-7042
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Erik Hatcher
Assignee: Erik Hatcher
 Fix For: 5.1


 The current (5.0) version of bin/post assumes JSON (and XML) are in *Solr* 
 command format, e.g. {{bin/post -c collection1 data.json}}, and that the URL to 
 post to is /update.  
 This issue is to improve/evolve bin/post so that it can post to /update when 
 the data is in *Solr* XML or JSON format and to /update/json/docs for 
 arbitrary JSON.






[jira] [Commented] (SOLR-7037) bin/solr start -e techproducts -c fails to start Solr in cloud mode

2015-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293634#comment-14293634
 ] 

ASF subversion and git services commented on SOLR-7037:
---

Commit 1655058 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1655058 ]

SOLR-7037: bin/solr start -e techproducts -c fails to start Solr in cloud mode

 bin/solr start -e techproducts -c fails to start Solr in cloud mode
 ---

 Key: SOLR-7037
 URL: https://issues.apache.org/jira/browse/SOLR-7037
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Anshum Gupta
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0, Trunk


 bin/solr start -e techproducts -c should start Solr in cloud mode with the 
 techproducts example but it doesn't. Seems like it starts a standalone Solr 
 instance. We should fix that.






[jira] [Commented] (SOLR-5850) Race condition in ConcurrentUpdateSolrServer

2015-01-27 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294449#comment-14294449
 ] 

Shawn Heisey commented on SOLR-5850:


This looks like a good change.  I'd argue that it's even safe for 5.0, but I'm 
guessing that ship has sailed and we'd need to put it in for 5.1.


 Race condition in ConcurrentUpdateSolrServer
 

 Key: SOLR-5850
 URL: https://issues.apache.org/jira/browse/SOLR-5850
 Project: Solr
  Issue Type: Bug
  Components: clients - java, search, SolrCloud, update
Affects Versions: 4.6
Reporter: Devansh Dhutia
Assignee: Timothy Potter
Priority: Critical
  Labels: 500, cloud, difficulty-medium, error, impact-medium, 
 update
 Attachments: SOLR-5850.patch, SOLR-5850.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Possibly related to SOLR-2308, we are seeing a Queue Full error message when 
 issuing writes to Solr Cloud
 Each Update has 200 documents, and a commit is issued after 2000 documents 
 have been added. 
 The writes are spread out to all the servers in the cloud (2 in this case) 
 and following is the stack trace from Solr: 
 {code:xml}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">500</int><int name="QTime">101</int></lst>
 <lst name="error"><str name="msg">Queue full</str><str name="trace">java.lang.IllegalStateException: Queue full
 at java.util.AbstractQueue.add(Unknown Source)
 at 
 org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner$1.writeTo(ConcurrentUpdateSolrServer.java:181)
 at 
 org.apache.http.entity.EntityTemplate.writeTo(EntityTemplate.java:72)
 at 
 org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:98)
 at 
 org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
 at 
 org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:122)
 at 
 org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:271)
 at 
 org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:197)
 at 
 org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:257)
 at 
 org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
 at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
 org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
 org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:232)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 </str><int name="code">500</int></lst>
 </response>
 {code}
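The trace maps onto the difference between a fail-fast add on a full bounded queue (Java's AbstractQueue.add(), which throws IllegalStateException) and a blocking put, which waits for a consumer. A minimal Python analogue of that behaviour (illustration only, not the SolrJ code):

```python
# Analogue of the race: put_nowait() on a full queue fails fast, like
# Java's AbstractQueue.add(); a blocking put() waits for capacity.
import queue

q = queue.Queue(maxsize=2)
q.put("doc-1")
q.put("doc-2")             # queue is now full

try:
    q.put_nowait("doc-3")  # fail-fast add on a full queue
except queue.Full:
    print("Queue full")    # the same condition the Solr trace reports

q.get()                    # a consumer drains one item...
q.put("doc-3")             # ...so a blocking put now succeeds
print(q.qsize())
```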






[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2015-01-27 Thread Tomoko Uchida (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293926#comment-14293926
 ] 

Tomoko Uchida commented on LUCENE-2562:
---

Patch updated. I've modified the Overview tab only.
Progress and status:
- Missing values in the upper panel (index info) have all been filled in.
- The fields table is now sortable by field name and term counts.
Pending tasks to be done:
- Decoders (last pending task for the Overview tab)

I'm working on decoders. It might need some sort of pluggable design (I believe 
Solr's decoders should be plugged in, not a built-in feature). Suggestions and 
ideas are welcome.
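One possible shape for such a pluggable design is a small decoder registry with an identity fallback; this is a hypothetical sketch (the class and method names are invented, not Luke's actual API):

```python
# Hypothetical sketch of a pluggable decoder registry: decoders are
# registered by name rather than built in, with an identity fallback.
class DecoderRegistry:
    def __init__(self):
        self._decoders = {}

    def register(self, name, fn):
        self._decoders[name] = fn

    def decode(self, name, raw):
        # Unknown names fall back to returning the raw bytes untouched.
        fn = self._decoders.get(name, lambda b: b)
        return fn(raw)

registry = DecoderRegistry()
registry.register("utf8", lambda b: b.decode("utf-8"))
registry.register("int_be", lambda b: int.from_bytes(b, "big"))

print(registry.decode("utf8", b"hello"))       # hello
print(registry.decode("int_be", b"\x00\x2a"))  # 42
```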



 Make Luke a Lucene/Solr Module
 --

 Key: LUCENE-2562
 URL: https://issues.apache.org/jira/browse/LUCENE-2562
 Project: Lucene - Core
  Issue Type: Task
Reporter: Mark Miller
  Labels: gsoc2014
 Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, 
 LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, 
 Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke1.jpg, luke2.jpg, 
 luke3.jpg


 see
 RE: Luke - in need of maintainer: 
 http://markmail.org/message/m4gsto7giltvrpuf
 Web-based Luke: http://markmail.org/message/4xwps7p7ifltme5q
 I think it would be great if there was a version of Luke that always worked 
 with trunk - and it would also be great if it was easier to match Luke jars 
 with Lucene versions.
 While I'd like to get GWT Luke into the mix as well, I think the easiest 
 starting point is to straight port Luke to another UI toolkit before 
 abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
 I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
 haven't/don't have a lot of time for this at the moment, but I've plugged 
 away here and there over the past week or two. There is still a *lot* to do.






[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294367#comment-14294367
 ] 

ASF subversion and git services commented on SOLR-6915:
---

Commit 1655187 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1655187 ]

SOLR-6915: Avoid broken Locales and skip IBM J9

 SaslZkACLProvider and Kerberos Test Using MiniKdc
 -

 Key: SOLR-6915
 URL: https://issues.apache.org/jira/browse/SOLR-6915
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 5.0, Trunk

 Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
 tests-failures.txt


 We should provide a ZkACLProvider that requires SASL authentication.  This 
 provider will be useful for administration in a kerberos environment.   In 
 such an environment, the administrator wants solr to authenticate to 
 zookeeper using SASL, since this is only way to authenticate with zookeeper 
 via kerberos.
 The authorization model in such a setup can vary, e.g. you can imagine a 
 scenario where solr owns (is the only writer of) the non-config znodes, but 
 some set of trusted users are allowed to modify the configs.  It's hard to 
 predict all the possibilities here, but one model that seems generally useful 
 is to have a model where solr itself owns all the znodes and all actions that 
 require changing the znodes are routed to Solr APIs.  That seems simple and 
 reasonable as a first version.
 As for testing, I noticed while working on SOLR-6625 that we don't really 
 have any infrastructure for testing kerberos integration in unit tests.  
 Internally, I've been testing using kerberos-enabled VM clusters, but this 
 isn't great since we won't notice any breakages until someone actually spins 
 up a VM.  So part of this JIRA is to provide some infrastructure for testing 
 kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1912 - Failure!

2015-01-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1912/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseParallelGC

75 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.test

Error Message:
Could not get the port for ZooKeeper server

Stack Trace:
java.lang.RuntimeException: Could not get the port for ZooKeeper server
at org.apache.solr.cloud.ZkTestServer.run(ZkTestServer.java:482)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.distribSetUp(AbstractDistribZkTestBase.java:62)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribSetUp(AbstractFullDistribZkTestBase.java:198)
at 
org.apache.solr.cloud.AliasIntegrationTest.distribSetUp(AliasIntegrationTest.java:66)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:910)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.AsyncMigrateRouteKeyTest.test

Error Message:
Could not get the port for ZooKeeper server

Stack Trace:
java.lang.RuntimeException: Could not get the port for ZooKeeper server
at org.apache.solr.cloud.ZkTestServer.run(ZkTestServer.java:482)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.distribSetUp(AbstractDistribZkTestBase.java:62)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribSetUp(AbstractFullDistribZkTestBase.java:198)
at 

Re: [VOTE] Release 5.0.0 RC1

2015-01-27 Thread Jan Høydahl
I just filed https://issues.apache.org/jira/browse/SOLR-7041 
"Nuke defaultSearchField and solrQueryParser from schema". Has it been 
discussed already?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

 On 25 Jan 2015 at 20:07, Uwe Schindler u...@thetaphi.de wrote:
 
 In addition,
  
 on most computers you extract to your Windows "Desktop", and for most users 
 this path also contains whitespace (the user name in most cases has whitespace 
 in it), so this is also a bad user experience.
  
 Uwe
  
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
  
 From: Anshum Gupta [mailto:ans...@anshumgupta.net] 
 Sent: Sunday, January 25, 2015 7:58 PM
 To: dev@lucene.apache.org
 Subject: Re: [VOTE] Release 5.0.0 RC1
  
 I'm not really a Windows user so I don't really know what a fix for the 
 paths with spaces would look like. Maybe we can either fix it or document the 
 way to use it so that this doesn't happen (put the path in quotes or 
 something?). Worst case, we should document that it doesn't work in such cases 
 and the steps from there on.
  
 Leaving it as-is would be a problem for users, as they wouldn't even have a 
 way to work around it, with all the documentation about the traditional way of 
 starting/stopping Solr yanked from the Ref Guide.
  
 P.S.: I'll do whatever is in my hands to get this out soon, but can't promise 
 on that. You would have an RC though :-).
  
  
 On Sun, Jan 25, 2015 at 10:33 AM, Uwe Schindler u...@thetaphi.de wrote:
 Hi,
  
 With both Java 7 and Java 8 (took much longer, of course):
  
 Java 1.7 JAVA_HOME=/home/thetaphi/jdk1.7.0_76
 Java 1.8 JAVA_HOME=/home/thetaphi/jdk1.8.0_31
 Ubuntu 14.04, 64bit
  
 SUCCESS! [2:21:52.553327]
  
 Unfortunately, Solr still has the problem that the Windows scripts to 
 start up/shut down and create collections don't work if you have spaces in the 
 installation path. This is quite common on Windows (e.g. the default 
 installation folder is named "C:\Program Files"). If you install Solr there 
 you cannot even start it without error messages (Access Denied), although it 
 still starts up; but running the techproducts demo fails horribly with crazy 
 error messages. I played around with some other install dirs, but to me this 
 is still a blocker, see https://issues.apache.org/jira/browse/SOLR-7016
  
 In addition, on Windows, shutting down *all* Solr servers does not work: 
 "solr.cmd stop -all" looks like a no-op. Shutting down a single Solr server 
 sometimes goes into a loop of "Waiting 10 seconds..." and then "cannot find 
 xxx.port file", so I had to shut down the whole cloud manually.
  
 So +/- 0 to release Solr as-is; I would like somebody to take care of the 
 above issues. This makes Solr unusable on Windows.
 I would give my +1 to release Lucene... (but this does not help)
  
 Uwe
  
 P.S.: If we get this out this week I can announce this release together with 
 my talk about Lucene/Solr 5 at FOSDEM 2015 in Brussels! :-)
  
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
  
From: Uwe Schindler [mailto:u...@thetaphi.de] 
 Sent: Sunday, January 25, 2015 3:53 PM
To: dev@lucene.apache.org
 Subject: RE: [VOTE] Release 5.0.0 RC1
  
 Hey,
  
 if you have Java 8 on your machine, too, you can smoke with both (Java 7 and 
 Java 8):
  
 python3 dev-tools/scripts/smokeTestRelease.py --test-java8 /path/to/jdk8 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC1-rev1654615
  
  
 I am doing that at the moment to ensure all is fine also with Java 8.
 Uwe
  
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
  
From: Anshum Gupta [mailto:ans...@anshumgupta.net] 
 Sent: Sunday, January 25, 2015 6:17 AM
To: dev@lucene.apache.org
 Subject: [VOTE] Release 5.0.0 RC1
  
 Please review and vote for the following RC:
  
 Artifacts:
  
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC1-rev1654615
  
  
 Smoke tester: 
  python3  dev-tools/scripts/smokeTestRelease.py 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC1-rev1654615
  
  
 Here's my +1
  
 SUCCESS! [0:39:42.513560] 
  
 --
 Anshum Gupta
http://about.me/anshumgupta
 
 
  
 --
 Anshum Gupta
http://about.me/anshumgupta


[jira] [Commented] (SOLR-7041) Nuke defaultSearchField and solrQueryParser from schema

2015-01-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293346#comment-14293346
 ] 

Jan Høydahl commented on SOLR-7041:
---

I see the two used in a bunch of test-schemas. Also, the methods 
{{getDefaultSearchFieldName()}} and {{getQueryParserDefaultOperator()}} in 
{{IndexSchema.java}} are *not* deprecated in current trunk.

If we don't take the time to rip it all out for 5.0, I propose we
* remove the commented-out parts from example schemas
* deprecate the two methods in IndexSchema
* remove mentions in RefGuide
* start logging a WARN if schema parser finds any of these in use

Another in-between option is to fail fast if {{luceneMatchVersion >= 5.0}} and 
log a warning if lower (which indicates people brought along their old config).

 Nuke defaultSearchField and solrQueryParser from schema
 ---

 Key: SOLR-7041
 URL: https://issues.apache.org/jira/browse/SOLR-7041
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Jan Høydahl
 Fix For: 5.0, Trunk


 The two tags {{defaultSearchField}} and {{solrQueryParser}} were deprecated 
 in Solr 3.6 (SOLR-2724). Time to nuke them from code and {{schema.xml}} in 5.0?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7012) add an ant target to package a plugin into a jar

2015-01-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293385#comment-14293385
 ] 

Noble Paul edited comment on SOLR-7012 at 1/27/15 11:32 AM:


use assert to ensure that those properties are set
 http://ant-contrib.sourceforge.net/tasks/tasks/assert_task.html

why was that script required?
 


was (Author: noble.paul):
use assert to ensure that those properties are set
 http://ant-contrib.sourceforge.net/tasks/tasks/assert_task.html

 add an ant target to package a plugin into a jar
 

 Key: SOLR-7012
 URL: https://issues.apache.org/jira/browse/SOLR-7012
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7012.patch, SOLR-7012.patch, SOLR-7012.patch


 Now it is extremely hard to create a plugin because the user does not know 
 about the exact dependencies and their poms.
 we will add a target to solr/build.xml called plugin-jar
 invoke it as follows
 {code}
 ant -Dplugin.package=my.package -Djar.location=/tmp/my.jar plugin-jar
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7012) add an ant target to package a plugin into a jar

2015-01-27 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293393#comment-14293393
 ] 

Ishan Chattopadhyaya commented on SOLR-7012:


Assert would need the ant-contrib jars, so it wouldn't work with a base ant 
installation.
The script was to convert something like my.plugin.pkg to my/plugin/pkg 
(replace dot with slash) so that it can be used in the fileset path for the 
jar. The alternative to that script is the propertyregex task, but that too 
needs the ant-contrib jars.
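As a sketch, the dot-to-slash conversion described above can be done with base Ant's built-in {{script}} task, with no ant-contrib dependency (the target and {{plugin.path}} property names here are hypothetical; the task needs a JVM JavaScript engine, which Java 7/8 bundle as Rhino/Nashorn):
{code}
<target name="resolve-plugin-path">
  <!-- Convert plugin.package (e.g. my.plugin.pkg) to a path (my/plugin/pkg)
       so it can be used in the jar's fileset includes -->
  <script language="javascript"><![CDATA[
    var pkg = String(project.getProperty("plugin.package"));
    project.setProperty("plugin.path", pkg.replace(/\./g, "/"));
  ]]></script>
  <echo message="fileset includes: ${plugin.path}/**"/>
</target>
{code}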

 add an ant target to package a plugin into a jar
 

 Key: SOLR-7012
 URL: https://issues.apache.org/jira/browse/SOLR-7012
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7012.patch, SOLR-7012.patch, SOLR-7012.patch


 Now it is extremely hard to create a plugin because the user does not know 
 about the exact dependencies and their poms.
 we will add a target to solr/build.xml called plugin-jar
 invoke it as follows
 {code}
 ant -Dplugin.package=my.package -Djar.location=/tmp/my.jar plugin-jar
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7012) add an ant target to package a plugin into a jar

2015-01-27 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293393#comment-14293393
 ] 

Ishan Chattopadhyaya edited comment on SOLR-7012 at 1/27/15 11:44 AM:
--

Assert would need the ant-contrib jars, so it wouldn't work with a base ant 
installation; a {{fail unless=...}} task should suffice.
The script was to convert something like my.plugin.pkg to my/plugin/pkg 
(replace dot with slash) so that it can be used in the fileset path for the 
jar. The alternative to that script is the propertyregex task, but that too 
needs the ant-contrib jars.
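A minimal sketch of the {{fail unless=...}} approach, using base Ant only (the target name is hypothetical; the property names are the ones from the proposed {{plugin-jar}} invocation):
{code}
<target name="plugin-jar-check">
  <!-- Fail fast when required properties are missing; no ant-contrib needed -->
  <fail unless="plugin.package"
        message="Specify the plugin package with -Dplugin.package=..."/>
  <fail unless="jar.location"
        message="Specify the output jar with -Djar.location=..."/>
</target>
{code}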


was (Author: ichattopadhyaya):
Assert would need the ant-contrib jars, so it wouldn't work with a base ant 
installation.
The script was to convert something like my.plugin.pkg to my/plugin/pkg 
(replace dot with slash) so that it can be used in the fileset path for the 
jar. The alternative to that script is the propertyregex task, but that too 
needs the ant-contrib jars.

 add an ant target to package a plugin into a jar
 

 Key: SOLR-7012
 URL: https://issues.apache.org/jira/browse/SOLR-7012
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7012.patch, SOLR-7012.patch, SOLR-7012.patch


 Now it is extremely hard to create a plugin because the user does not know 
 about the exact dependencies and their poms.
 we will add a target to solr/build.xml called plugin-jar
 invoke it as follows
 {code}
 ant -Dplugin.package=my.package -Djar.location=/tmp/my.jar plugin-jar
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7012) add an ant target to package a plugin into a jar

2015-01-27 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7012:
---
Attachment: SOLR-7012.patch

Updated the patch to use javascript to do the string replacement instead of the 
macrodef used in the previous patch.

 add an ant target to package a plugin into a jar
 

 Key: SOLR-7012
 URL: https://issues.apache.org/jira/browse/SOLR-7012
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7012.patch, SOLR-7012.patch


 Now it is extremely hard to create a plugin because the user does not know 
 about the exact dependencies and their poms.
 we will add a target to solr/build.xml called plugin-jar
 invoke it as follows
 {code}
 ant -Dplugin.package=my.package -Djar.location=/tmp/my.jar plugin-jar
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7041) Nuke defaultSearchField and solrQueryParser from schema

2015-01-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293288#comment-14293288
 ] 

Jan Høydahl commented on SOLR-7041:
---

I know 5.0.0 is on its way out the door - just want to record this JIRA as I 
cannot recall that nuking this has been discussed for 5.0, and it may be a 
simple patch.

See also https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements

 Nuke defaultSearchField and solrQueryParser from schema
 ---

 Key: SOLR-7041
 URL: https://issues.apache.org/jira/browse/SOLR-7041
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Jan Høydahl
 Fix For: 5.0, Trunk


 The two tags {{defaultSearchField}} and {{solrQueryParser}} were deprecated 
 in Solr 3.6 (SOLR-2724). Time to nuke them from code and {{schema.xml}} in 5.0?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7012) add an ant target to package a plugin into a jar

2015-01-27 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7012:
---
Attachment: SOLR-7012.patch

 add an ant target to package a plugin into a jar
 

 Key: SOLR-7012
 URL: https://issues.apache.org/jira/browse/SOLR-7012
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7012.patch, SOLR-7012.patch, SOLR-7012.patch


 Now it is extremely hard to create a plugin because the user does not know 
 about the exact dependencies and their poms.
 we will add a target to solr/build.xml called plugin-jar
 invoke it as follows
 {code}
 ant -Dplugin.package=my.package -Djar.location=/tmp/my.jar plugin-jar
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7012) add an ant target to package a plugin into a jar

2015-01-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293385#comment-14293385
 ] 

Noble Paul commented on SOLR-7012:
--

use assert to ensure that those properties are set
 http://ant-contrib.sourceforge.net/tasks/tasks/assert_task.html

 add an ant target to package a plugin into a jar
 

 Key: SOLR-7012
 URL: https://issues.apache.org/jira/browse/SOLR-7012
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7012.patch, SOLR-7012.patch, SOLR-7012.patch


 Now it is extremely hard to create a plugin because the user does not know 
 about the exact dependencies and their poms.
 we will add a target to solr/build.xml called plugin-jar
 invoke it as follows
 {code}
 ant -Dplugin.package=my.package -Djar.location=/tmp/my.jar plugin-jar
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1954 - Failure!

2015-01-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1954/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestReplicaProperties.test

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:50481/a, 
http://127.0.0.1:50471/a, http://127.0.0.1:50485/a, http://127.0.0.1:50475/a, 
http://127.0.0.1:50478/a]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:50481/a, http://127.0.0.1:50471/a, 
http://127.0.0.1:50485/a, http://127.0.0.1:50475/a, http://127.0.0.1:50478/a]
at 
__randomizedtesting.SeedInfo.seed([B3BDCA8256D2691B:3BE9F558F82E04E3]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:349)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1006)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.ReplicaPropertiesBase.doPropertyAction(ReplicaPropertiesBase.java:51)
at 
org.apache.solr.cloud.TestReplicaProperties.clusterAssignPropertyTest(TestReplicaProperties.java:189)
at 
org.apache.solr.cloud.TestReplicaProperties.test(TestReplicaProperties.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2556 - Still Failing

2015-01-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2556/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([A97723222C735241:21231CF8828F3FB9]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:468)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:189)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

Re: [VOTE] Release 5.0.0 RC1

2015-01-27 Thread Ryan Ernst

 I just filed https://issues.apache.org/jira/browse/SOLR-7041 "Nuke
 defaultSearchField and solrQueryParser from schema". Has it been discussed
 already?



  What about https://issues.apache.org/jira/browse/SOLR-4586
  I have hit this trappy magic 1024 limit myself and it would be great if
  it could be removed for 5.0.


A respin is not the time to cram in more changes (especially controversial
ones).

On Tue, Jan 27, 2015 at 5:28 AM, Mike Murphy mmurphy3...@gmail.com wrote:

 What about https://issues.apache.org/jira/browse/SOLR-4586
 I have hit this trappy magic 1024 limit myself and it would be great if
 it could be removed for 5.0.

 On Tue, Jan 27, 2015 at 5:28 AM, Jan Høydahl jan@cominvent.com
 wrote:
  I just filed https://issues.apache.org/jira/browse/SOLR-7041 "Nuke
  defaultSearchField and solrQueryParser from schema". Has it been
 discussed
  already?
 
  --
  Jan Høydahl, search solution architect
  Cominvent AS - www.cominvent.com
 
  25. jan. 2015 kl. 20.07 skrev Uwe Schindler u...@thetaphi.de:
 
  In addition,
 
  on most computers you extract to your Windows “Desktop” and for most
 users
  that path also contains white space (the user name in most cases contains
  white space), which is also a bad user experience.
 
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
  From: Anshum Gupta [mailto:ans...@anshumgupta.net]
  Sent: Sunday, January 25, 2015 7:58 PM
  To: dev@lucene.apache.org
  Subject: Re: [VOTE] Release 5.0.0 RC1
 
  I'm not really a windows user so I don't really know what's a fix for the
  paths with a space. May be we can either fix it or document the way to
 use
  it so that this doesn't happen (put the path in quotes or something?).
 The
  worst case, we should document that it doesn't work in such cases and the
  steps from there on.
 
  Leaving it as is would be a problem for users as they wouldn't even have
 a
  way to get around, with all the documentation about the traditional way
 of
  starting/stopping Solr yanked from the Ref guide.
 
  P.S.: I'll do whatever is in my hands to get this out soon, but can't
  promise on that. You would have an RC though :-).
 
 
  On Sun, Jan 25, 2015 at 10:33 AM, Uwe Schindler u...@thetaphi.de wrote:
 
  Hi,
 
 
 
  With both Java 7 and Java 8 (took much longer, of course):
 
 
 
  Java 1.7 JAVA_HOME=/home/thetaphi/jdk1.7.0_76
 
  Java 1.8 JAVA_HOME=/home/thetaphi/jdk1.8.0_31
 
  Ubuntu 14.04, 64bit
 
 
 
  SUCCESS! [2:21:52.553327]
 
 
 
  Unfortunately, Solr still has the problem that Windows scripts to
  startup/shutdown and create collections don’t work if you have spaces in
 the
  installation path. This is quite common on Windows (e.g. the default
  installation folder is named “c:\Program Files” – if you install Solr
 there
  you cannot even start it without error messages (Access Denied) – but it
  still starts up. But running techproducts demo fails horribly with crazy
  error messages). I played around with some other install dir, but to me
 this
  is still a blocker, see https://issues.apache.org/jira/browse/SOLR-7016
 
 
 
  In addition on Windows, shutting down *all* Solr servers does not work.
  "solr.cmd stop -all" looks like a no-op. Shutting down a single solr
 server
  goes sometimes into a loop of “Waiting 10 seconds…” and then “cannot find
  xxx.port file”. So I had to shutdown the whole cloud manually.
 
 
 
  So +/- 0 to release Solr as-is, I would like that somebody takes care of
  above issues. This makes Solr unusable on Windows.
 
  I would give my +1 to release Lucene... (but this does not help)
 
 
 
  Uwe
 
 
 
  P.S.: If we get this out this week I can announce this release together
 with
  my talk about Lucene/Solr 5 at FOSDEM 2015 in Brussels! :-)
 
 
 
  -
 
  Uwe Schindler
 
  H.-H.-Meier-Allee 63, D-28213 Bremen
 
  http://www.thetaphi.de
 
  eMail: u...@thetaphi.de
 
 
 
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Sunday, January 25, 2015 3:53 PM
  To: dev@lucene.apache.org
  Subject: RE: [VOTE] Release 5.0.0 RC1
 
 
 
  Hey,
 
 
 
  if you have Java 8 on your machine, too, you can smoke with both (Java 7
 and
  Java 8):
 
 
 
  python3 dev-tools/scripts/smokeTestRelease.py --test-java8 /path/to/jdk8
 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC1-rev1654615
 
 
 
  I am doing that at the moment to ensure all is fine also with Java 8.
 
  Uwe
 
 
 
  -
 
  Uwe Schindler
 
  H.-H.-Meier-Allee 63, D-28213 Bremen
 
  http://www.thetaphi.de
 
  eMail: u...@thetaphi.de
 
 
 
  From: Anshum Gupta [mailto:ans...@anshumgupta.net]
  Sent: Sunday, January 25, 2015 6:17 AM
  To: dev@lucene.apache.org
  Subject: [VOTE] Release 5.0.0 RC1
 
 
 
  Please review and vote for the following RC:
 
 
 
  Artifacts:
 
 
 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC1-rev1654615
 
 
 
  Smoke tester:
 
   python3  dev-tools/scripts/smokeTestRelease.py
 
 

[jira] [Commented] (LUCENE-6198) two phase intersection

2015-01-27 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293808#comment-14293808
 ] 

Yonik Seeley commented on LUCENE-6198:
--

Don't worry folks (if you are Solr users), I opened
https://issues.apache.org/jira/browse/SOLR-7044 
to optimize some of these cases while the right API at the lucene level is 
being sorted out.

 two phase intersection
 --

 Key: LUCENE-6198
 URL: https://issues.apache.org/jira/browse/LUCENE-6198
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6198.patch


 Currently some scorers have to do a lot of per-document work to determine if 
 a document is a match. The simplest example is a phrase scorer, but there are 
 others (spans, sloppy phrase, geospatial, etc).
 Imagine a conjunction with two MUST clauses, one that is a term that matches 
 all odd documents, another that is a phrase matching all even documents. 
 Today this conjunction will be very expensive, because the zig-zag 
 intersection is reading a ton of useless positions.
 The same problem happens with filteredQuery and anything else that acts like 
 a conjunction.
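The description above can be sketched as code. This is a toy model of the two-phase idea, not Lucene's actual Scorer API: phase one zig-zags over cheap doc-id approximations, and the costly per-document check (e.g. reading positions for a phrase) runs only on the survivors. All class, method, and variable names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.IntPredicate;

public class TwoPhaseSketch {
    // Intersect a cheap iterator with an expensive clause modeled as an
    // approximation (candidate doc ids) plus a costly confirmation check.
    static List<Integer> conjunction(Iterable<Integer> cheapDocs,
                                     Set<Integer> expensiveApprox,
                                     IntPredicate expensiveConfirm) {
        List<Integer> hits = new ArrayList<>();
        for (int doc : cheapDocs) {
            if (expensiveApprox.contains(doc)        // phase 1: doc-id zig-zag
                    && expensiveConfirm.test(doc)) { // phase 2: verify survivors only
                hits.add(doc);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // Term matches odd docs, phrase approximation matches even docs:
        // the costly confirm step is never invoked at all.
        Set<Integer> evens = new TreeSet<>();
        List<Integer> odds = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            if (i % 2 == 0) evens.add(i); else odds.add(i);
        }
        System.out.println(conjunction(odds, evens, d -> true)); // prints []
    }
}
```

With the zig-zag done on doc ids first, the expensive per-position work is skipped entirely for documents the cheap clause rules out.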



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6198) two phase intersection

2015-01-27 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293815#comment-14293815
 ] 

Mark Miller commented on LUCENE-6198:
-

bq. It looks like you were somehow upset by discussions and closed the issue 
because of that. Discussions should be a good thing!

Welcome to the current state of the Lucene community.

 two phase intersection
 --

 Key: LUCENE-6198
 URL: https://issues.apache.org/jira/browse/LUCENE-6198
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-6198.patch


 Currently some scorers have to do a lot of per-document work to determine if 
 a document is a match. The simplest example is a phrase scorer, but there are 
 others (spans, sloppy phrase, geospatial, etc).
 Imagine a conjunction with two MUST clauses, one that is a term that matches 
 all odd documents, another that is a phrase matching all even documents. 
 Today this conjunction will be very expensive, because the zig-zag 
 intersection is reading a ton of useless positions.
 The same problem happens with filteredQuery and anything else that acts like 
 a conjunction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7012) add an ant target to package a plugin into a jar

2015-01-27 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293697#comment-14293697
 ] 

Erik Hatcher edited comment on SOLR-7012 at 1/27/15 3:47 PM:
-

IMO, this SDK of sorts needs to be separate from Solr's own build.  But 
connected in that it needs to leverage the common-build infrastructure.  I've 
done this sort of thing like this 
https://gist.github.com/erikhatcher/3aa0b40a6a3547d5405c - I've morphed it 
slightly and pasted here:
{code}
<?xml version="1.0"?>

<project name="solr-development-kit" default="default">
  <description>
     Solr Development Kit
  </description>

  <property name="lucene.root" location="lucene-solr-5.0.0"/>
  <property name="solr.root" location="${lucene.root}/solr"/>

  <property name="build.dir" location="build"/>

  <import file="${solr.root}/contrib/contrib-build.xml"/>

</project>
{code}

A developer could have their own work directory with this build file, and 
benefit from ant test, and all the other ant targets that we provide for the 
contrib/ modules built into Solr.

I envision us shipping in future Solr distros an sdk directory that has just 
the build file infrastructure used by Lucene/Solr itself, including whatever it 
takes like this type of build file, to easily enable a developer to build 
plugins.

Further, the SDK build infrastructure could extend to uploading the built 
plugin to a running Solr cluster even too.


was (Author: ehatcher):
IMO, this SDK of sorts needs to be separate from Solr's own build.  But 
connected in that it needs to leverage the common-build infrastructure.  I've 
done this sort of thing like this 
https://gist.github.com/erikhatcher/3aa0b40a6a3547d5405c - I've morphed it 
slightly and pasted here:
{code}
<?xml version="1.0"?>

<project name="solr-development-kit" default="default">
  <description>
     Solr Development Kit
  </description>

  <property name="lucene.root" location="lucene-solr-5.0.0"/>
  <property name="solr.root" location="${lucene.root}/solr"/>

  <property name="build.dir" location="build"/>

  <import file="${solr.root}/contrib/contrib-build.xml"/>

</project>
{code}

 add an ant target to package a plugin into a jar
 

 Key: SOLR-7012
 URL: https://issues.apache.org/jira/browse/SOLR-7012
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7012.patch, SOLR-7012.patch, SOLR-7012.patch


 Currently it is extremely hard to create a plugin because the user does not 
 know the exact dependencies and their POMs.
 We will add a target to solr/build.xml called plugin-jar;
 invoke it as follows:
 {code}
 ant -Dplugin.package=my.package -Djar.location=/tmp/my.jar plugin-jar
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_76) - Build # 4338 - Still Failing!

2015-01-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4338/
Java: 64bit/jdk1.7.0_76 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:63051/repfacttest_c8n_1x3_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:63051/repfacttest_c8n_1x3_shard1_replica2
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:263)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-4835) Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE

2015-01-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293736#comment-14293736
 ] 

Robert Muir commented on LUCENE-4835:
-

-1 to lowering the limit in lucene, just because you guys have sour grapes 
about a solr issue.

 Raise maxClauseCount in BooleanQuery to Integer.MAX_VALUE
 -

 Key: LUCENE-4835
 URL: https://issues.apache.org/jira/browse/LUCENE-4835
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.2
Reporter: Shawn Heisey
 Fix For: 4.9, Trunk


 Discussion on SOLR-4586 raised the idea of raising the limit on boolean 
 clauses from 1024 to Integer.MAX_VALUE.  This should be a safe change.  It 
 will change the nature of help requests from "Why can't I do 2000 clauses?" 
 to "Why is my 5000-clause query slow?"
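To make the trade-off concrete, here is an illustrative Python sketch of a clause-count guard of the kind BooleanQuery enforces; the names (`add_clauses`, `TooManyClauses`, `MAX_CLAUSE_COUNT`) are hypothetical stand-ins, not Lucene's actual API:

```python
# Illustrative sketch only, NOT Lucene's BooleanQuery implementation:
# a configurable clause-count guard of the kind being discussed.
MAX_CLAUSE_COUNT = 1024  # mirrors the long-standing default limit

class TooManyClauses(Exception):
    """Stand-in for BooleanQuery.TooManyClauses."""

def add_clauses(clauses, new_clauses, max_clauses=MAX_CLAUSE_COUNT):
    """Append new_clauses, refusing to exceed the configured limit."""
    if len(clauses) + len(new_clauses) > max_clauses:
        raise TooManyClauses("maxClauseCount is set to %d" % max_clauses)
    clauses.extend(new_clauses)
    return clauses

# Exactly 1024 clauses is still allowed under the default limit.
query = add_clauses([], ["term%d" % i for i in range(1024)])
```

Raising the limit to Integer.MAX_VALUE amounts to passing `max_clauses=2**31 - 1`: the hard failure disappears, and a huge query becomes merely slow instead of rejected.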



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release 5.0.0 RC1

2015-01-27 Thread Anshum Gupta
I would say that this is not the time to push in stuff that we forgot, but to
get critical/blocker bug fixes in, to make sure that Lucene and Solr do not
break and work as documented when released. Let's not shoot ourselves in the
foot by changing more than that, as it'll be tough to track and manage if that
starts to happen.

About the respin, I'm just waiting for SOLR-6640. Everything else that was
reported against the earlier RC stands fixed. If anyone finds bugs that impact
the release of 5.0 as-is, please fix and commit them, but make no other changes.

Thanks for being patient.


On Tue, Jan 27, 2015 at 8:45 AM, Ryan Ernst r...@iernst.net wrote:

 I just filed https://issues.apache.org/jira/browse/SOLR-7041 “Nuke
 defaultSearchField and solrQueryParser from schema”. Has it been discussed
 already?



  What about https://issues.apache.org/jira/browse/SOLR-4586

 I have hit this trappy magic 1024 limit myself and it would be great if

 it could be removed for 5.0.


 A respin is not the time to cram in more changes (especially controversial
 ones).

 On Tue, Jan 27, 2015 at 5:28 AM, Mike Murphy mmurphy3...@gmail.com
 wrote:

 What about https://issues.apache.org/jira/browse/SOLR-4586
 I have hit this trappy magic 1024 limit myself and it would be great if
 it could be removed for 5.0.

 On Tue, Jan 27, 2015 at 5:28 AM, Jan Høydahl jan@cominvent.com
 wrote:
  I just filed https://issues.apache.org/jira/browse/SOLR-7041 “Nuke
  defaultSearchField and solrQueryParser from schema”. Has it been
 discussed
  already?
 
  --
  Jan Høydahl, search solution architect
  Cominvent AS - www.cominvent.com
 
  On 25 Jan 2015 at 20:07, Uwe Schindler u...@thetaphi.de wrote:
 
  In addition,
 
  on most computers you extract to your Windows “Desktop”, and for most users
  that path also contains whitespace (the user name in most cases has
  whitespace in it), so this is also a bad user experience.
 
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
  From: Anshum Gupta [mailto:ans...@anshumgupta.net]
  Sent: Sunday, January 25, 2015 7:58 PM
  To: dev@lucene.apache.org
  Subject: Re: [VOTE] Release 5.0.0 RC1
 
 I'm not really a windows user so I don't really know what's a fix for the
 paths with a space. Maybe we can either fix it or document the way to use
 it so that this doesn't happen (put the path in quotes or something?). The
 worst case, we should document that it doesn't work in such cases and the
 steps from there on.
 
 Leaving it as is would be a problem for users as they wouldn't even have a
 way to get around, with all the documentation about the traditional way of
 starting/stopping Solr yanked from the Ref guide.
 
  P.S.: I'll do whatever is in my hands to get this out soon, but can't
  promise on that. You would have an RC though :-).
 
 
  On Sun, Jan 25, 2015 at 10:33 AM, Uwe Schindler u...@thetaphi.de
 wrote:
 
  Hi,
 
 
 
  With both Java 7 and Java 8 (took much longer, of course):
 
 
 
  Java 1.7 JAVA_HOME=/home/thetaphi/jdk1.7.0_76
 
  Java 1.8 JAVA_HOME=/home/thetaphi/jdk1.8.0_31
 
  Ubuntu 14.04, 64bit
 
 
 
  SUCCESS! [2:21:52.553327]
 
 
 
  Unfortunately, Solr still has the problem that Windows scripts to
  startup/shutdown and create collections don’t work if you have spaces in the
  installation path. This is quite common on Windows (e.g. the default
  installation folder is named “c:\Program Files” – if you install Solr there
  you cannot even start it without error messages (Access Denied) – but it
  still starts up. But running the techproducts demo fails horribly with crazy
  error messages). I played around with some other install dir, but to me this
  is still a blocker, see https://issues.apache.org/jira/browse/SOLR-7016
 
 
 
  In addition, on Windows, shutting down *all* Solr servers does not work.
  “solr.cmd stop –all” looks like a no-op. Shutting down a single Solr server
  sometimes goes into a loop of “Waiting 10 seconds…” and then “cannot find
  xxx.port file”. So I had to shut down the whole cloud manually.
 
 
 
  So +/-0 on releasing Solr as-is; I would like somebody to take care of the
  above issues. This makes Solr unusable on Windows.
 
  I would give my +1 to release Lucene... (but this does not help)
 
 
 
  Uwe
 
 
 
  P.S.: If we get this out this week I can announce this release together
 with
  my talk about Lucene/Solr 5 at FOSDEM 2015 in Brussels! :-)
 
 
 
  -
 
  Uwe Schindler
 
  H.-H.-Meier-Allee 63, D-28213 Bremen
 
  http://www.thetaphi.de
 
  eMail: u...@thetaphi.de
 
 
 
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Sunday, January 25, 2015 3:53 PM
  To: dev@lucene.apache.org
  Subject: RE: [VOTE] Release 5.0.0 RC1
 
 
 
  Hey,
 
 
 
  if you have Java 8 on your machine, too, you can smoke with both (Java
 7 and
  Java 8):
 
 
 
  python3 dev-tools/scripts/smokeTestRelease.py --test-java8 /path/to/jdk8
 
 

[jira] [Commented] (SOLR-7041) Nuke defaultSearchField and solrQueryParser from schema

2015-01-27 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293465#comment-14293465
 ] 

Alan Woodward commented on SOLR-7041:
-

+1 to deprecating properly in 5.0, and removing in trunk.

 Nuke defaultSearchField and solrQueryParser from schema
 ---

 Key: SOLR-7041
 URL: https://issues.apache.org/jira/browse/SOLR-7041
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Jan Høydahl
 Fix For: 5.0, Trunk


 The two tags {{defaultSearchField}} and {{solrQueryParser}} were deprecated 
 in Solr 3.6 (SOLR-2724). Time to nuke them from code and {{schema.xml}} in 5.0?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7039) First collection created with stateFormat=2 results in a weird /clusterstate.json

2015-01-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293486#comment-14293486
 ] 

ASF subversion and git services commented on SOLR-7039:
---

Commit 1655032 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1655032 ]

SOLR-7039 First collection created with stateFormat=2 writes to 
clusterstate.json also

 First collection created with stateFormat=2 results in a weird 
 /clusterstate.json
 -

 Key: SOLR-7039
 URL: https://issues.apache.org/jira/browse/SOLR-7039
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Noble Paul
Priority: Blocker
 Fix For: 5.0

 Attachments: SOLR-7039.patch


 With the 5.0 branch, when I do:
 {code}
 bin/solr -c && bin/solr create -c foo
 {code}
 The {{/clusterstate.json}} in ZK has an invalid definition of the foo 
 collection:
 {code}
 {"foo":{
 "replicationFactor":"1",
 "router":{"name":"compositeId"},
 "maxShardsPerNode":"1",
 "autoAddReplicas":"false",
 "shards":{"shard1":{
 "range":"8000-7fff",
 "state":"active",
 "replicas":{}}}}}
 {code}
 To verify this isn't the UI sending back the wrong data, I went into the 
 zkCli.sh command-line and got:
 {code}
 [zk: localhost:9983(CONNECTED) 2] get /clusterstate.json
 {"foo":{
 "replicationFactor":"1",
 "router":{"name":"compositeId"},
 "maxShardsPerNode":"1",
 "autoAddReplicas":"false",
 "shards":{"shard1":{
 "range":"8000-7fff",
 "state":"active",
 "replicas":{}}}}}
 cZxid = 0x20
 ctime = Mon Jan 26 14:56:44 MST 2015
 mZxid = 0x65
 mtime = Mon Jan 26 14:57:16 MST 2015
 pZxid = 0x20
 cversion = 0
 dataVersion = 1
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 247
 numChildren = 0
 {code}
 The {{/collections/foo/state.json}} looks correct:
 {code}
 {"foo":{
 "replicationFactor":"1",
 "router":{"name":"compositeId"},
 "maxShardsPerNode":"1",
 "autoAddReplicas":"false",
 "shards":{"shard1":{
 "range":"8000-7fff",
 "state":"active",
 "replicas":{"core_node1":{
 "core":"foo_shard1_replica1",
 "base_url":"http://192.168.1.2:8983/solr",
 "node_name":"192.168.1.2:8983_solr",
 "state":"active",
 "leader":"true"}}}}}}
 {code}
 Here's the weird thing ... If I create a second collection using the same 
 script, all is well and /clusterstate.json is empty
 {code}
 bin/solr create -c foo2
 {code}
 Calling this a blocker because 5.0 can't be released with this happening.
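As a side note, the broken state is easy to detect mechanically: with stateFormat=2 a collection's state should live only under /collections/&lt;name&gt;/state.json, so its name appearing as a top-level key in /clusterstate.json is exactly the bug. A minimal sketch in Python (the payloads are inlined stand-ins for what ZK returns, and `misplaced_collections` is a hypothetical helper, not Solr code):

```python
import json

# Inlined sample payloads standing in for what ZK returns; in practice
# these would come from zkCli.sh or a ZooKeeper client.
clusterstate_json = '{"foo": {"shards": {"shard1": {"replicas": {}}}}}'
state_format2_collections = {"foo": "/collections/foo/state.json"}

def misplaced_collections(clusterstate, state_format2_names):
    """Return stateFormat=2 collections that wrongly appear as top-level
    keys of /clusterstate.json (there should be none)."""
    top_level = set(json.loads(clusterstate))
    return sorted(top_level & set(state_format2_names))

print(misplaced_collections(clusterstate_json, state_format2_collections))
# With the buggy payload above this prints ['foo']; a correct
# serialization would print [].
```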



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release 5.0.0 RC1

2015-01-27 Thread Mike Murphy
What about https://issues.apache.org/jira/browse/SOLR-4586
I have hit this trappy magic 1024 limit myself and it would be great if
it could be removed for 5.0.

On Tue, Jan 27, 2015 at 5:28 AM, Jan Høydahl jan@cominvent.com wrote:
 I just filed https://issues.apache.org/jira/browse/SOLR-7041 “Nuke
 defaultSearchField and solrQueryParser from schema”. Has it been discussed
 already?

 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com

 On 25 Jan 2015 at 20:07, Uwe Schindler u...@thetaphi.de wrote:

 In addition,

 on most computers you extract to your Windows “Desktop”, and for most users
 that path also contains whitespace (the user name in most cases has
 whitespace in it), so this is also a bad user experience.

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de

 From: Anshum Gupta [mailto:ans...@anshumgupta.net]
 Sent: Sunday, January 25, 2015 7:58 PM
 To: dev@lucene.apache.org
 Subject: Re: [VOTE] Release 5.0.0 RC1

 I'm not really a windows user so I don't really know what's a fix for the
 paths with a space. Maybe we can either fix it or document the way to use
 it so that this doesn't happen (put the path in quotes or something?). The
 worst case, we should document that it doesn't work in such cases and the
 steps from there on.

 Leaving it as is would be a problem for users as they wouldn't even have a
 way to get around, with all the documentation about the traditional way of
 starting/stopping Solr yanked from the Ref guide.

 P.S.: I'll do whatever is in my hands to get this out soon, but can't
 promise on that. You would have an RC though :-).


 On Sun, Jan 25, 2015 at 10:33 AM, Uwe Schindler u...@thetaphi.de wrote:

 Hi,



 With both Java 7 and Java 8 (took much longer, of course):



 Java 1.7 JAVA_HOME=/home/thetaphi/jdk1.7.0_76

 Java 1.8 JAVA_HOME=/home/thetaphi/jdk1.8.0_31

 Ubuntu 14.04, 64bit



 SUCCESS! [2:21:52.553327]



 Unfortunately, Solr still has the problem that Windows scripts to
 startup/shutdown and create collections don’t work if you have spaces in the
 installation path. This is quite common on Windows (e.g. the default
 installation folder is named “c:\Program Files” – if you install Solr there
 you cannot even start it without error messages (Access Denied) – but it
 still starts up. But running the techproducts demo fails horribly with crazy
 error messages). I played around with some other install dir, but to me this
 is still a blocker, see https://issues.apache.org/jira/browse/SOLR-7016



 In addition, on Windows, shutting down *all* Solr servers does not work.
 “solr.cmd stop –all” looks like a no-op. Shutting down a single Solr server
 sometimes goes into a loop of “Waiting 10 seconds…” and then “cannot find
 xxx.port file”. So I had to shut down the whole cloud manually.



 So +/-0 on releasing Solr as-is; I would like somebody to take care of the
 above issues. This makes Solr unusable on Windows.

 I would give my +1 to release Lucene... (but this does not help)



 Uwe



 P.S.: If we get this out this week I can announce this release together with
 my talk about Lucene/Solr 5 at FOSDEM 2015 in Brussels! :-)



 -

 Uwe Schindler

 H.-H.-Meier-Allee 63, D-28213 Bremen

 http://www.thetaphi.de

 eMail: u...@thetaphi.de



 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Sunday, January 25, 2015 3:53 PM
 To: dev@lucene.apache.org
 Subject: RE: [VOTE] Release 5.0.0 RC1



 Hey,



 if you have Java 8 on your machine, too, you can smoke with both (Java 7 and
 Java 8):



 python3 dev-tools/scripts/smokeTestRelease.py --test-java8 /path/to/jdk8
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC1-rev1654615



 I am doing that at the moment to ensure all is fine also with Java 8.

 Uwe



 -

 Uwe Schindler

 H.-H.-Meier-Allee 63, D-28213 Bremen

 http://www.thetaphi.de

 eMail: u...@thetaphi.de



 From: Anshum Gupta [mailto:ans...@anshumgupta.net]
 Sent: Sunday, January 25, 2015 6:17 AM
 To: dev@lucene.apache.org
 Subject: [VOTE] Release 5.0.0 RC1



 Please review and vote for the following RC:



 Artifacts:


 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC1-rev1654615



 Smoke tester:

  python3  dev-tools/scripts/smokeTestRelease.py
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC1-rev1654615



 Here's my +1



 SUCCESS! [0:39:42.513560]



 --

 Anshum Gupta

 http://about.me/anshumgupta




 --
 Anshum Gupta
 http://about.me/anshumgupta





-- 
--Mike

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6640) ChaosMonkeySafeLeaderTest failure with CorruptIndexException

2015-01-27 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293425#comment-14293425
 ] 

Shalin Shekhar Mangar commented on SOLR-6640:
-

Thanks Mark. I am trying to figure out the failure and create a smaller 
reproducible test case.

 ChaosMonkeySafeLeaderTest failure with CorruptIndexException
 

 Key: SOLR-6640
 URL: https://issues.apache.org/jira/browse/SOLR-6640
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 5.0
Reporter: Shalin Shekhar Mangar
Priority: Blocker
 Fix For: 5.0

 Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt, 
 SOLR-6640-test.patch, SOLR-6640.patch, SOLR-6640.patch, SOLR-6640.patch, 
 SOLR-6640.patch, SOLR-6640_new_index_dir.patch, corruptindex.log


 Test failure found on jenkins:
 http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 Stack Trace:
 java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 at 
 __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 {code}
 Cause of inconsistency is:
 {code}
 Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
 expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
 (resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
[junit4]   2> at 
 org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
[junit4]   2> at 
 org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
[junit4]   2> at 
 org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
[junit4]   2> at 
 org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:102)
 {code}
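The consistency check that fails here boils down to comparing per-replica document counts within each shard. A rough sketch of that comparison in Python (replica URLs and counts are invented to mirror the failure above; the real test queries each core over HTTP):

```python
def inconsistent_shards(shard_counts):
    """Given {shard: {replica_url: doc_count}}, return the shards whose
    replicas disagree on the document count."""
    return sorted(
        shard for shard, counts in shard_counts.items()
        if len(set(counts.values())) > 1
    )

# Made-up numbers mirroring the failure: shard2's replicas disagree (62 vs 24).
counts = {
    "shard1": {"http://127.0.0.1:57436/collection1": 62,
               "http://127.0.0.1:53065/collection1": 62},
    "shard2": {"http://127.0.0.1:57436/collection1": 62,
               "http://127.0.0.1:53065/collection1": 24},
}
print(inconsistent_shards(counts))  # -> ['shard2']
```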



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


