[jira] [Commented] (LUCENE-6226) Allow TermScorer to expose positions, offsets and payloads

2015-02-09 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311933#comment-14311933
 ] 

Alan Woodward commented on LUCENE-6226:
---

Good point on Collector.postingsFlags(); I'll change that back.

The PostingsEnum contract is that you should only call .nextPosition() up to 
.freq() times; it doesn't know anything about NO_MORE_POSITIONS.  I've gone 
back and forth a bit about where this should be dealt with, though.  I suppose 
the nicest solution would be for NO_MORE_POSITIONS to somehow be encoded 
directly into the positions data in the index, so that it's returned naturally 
when reading the postings and doesn't require any branching anywhere, but I 
need to look more carefully at the index format to see whether that's plausible.  
What do you think?
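To make the contract concrete, here is a minimal self-contained sketch (hypothetical stub classes, not the actual Lucene PostingsEnum API): the caller reads at most freq() positions for the current document, and the enum itself never signals exhaustion.

```java
import java.util.List;

// Hypothetical stand-in for the freq()-bounded position contract.
class StubPostings {
    private final List<Integer> positions; // positions for the current doc
    private int upto = 0;

    StubPostings(List<Integer> positions) { this.positions = positions; }

    int freq() { return positions.size(); }

    // Contract: callers must not invoke this more than freq() times
    // per document; there is no NO_MORE_POSITIONS sentinel here.
    int nextPosition() { return positions.get(upto++); }
}

public class PositionsDemo {
    public static void main(String[] args) {
        StubPostings postings = new StubPostings(List.of(3, 17, 42));
        int[] collected = new int[postings.freq()];
        for (int i = 0; i < postings.freq(); i++) {
            collected[i] = postings.nextPosition();
        }
        System.out.println(java.util.Arrays.toString(collected)); // [3, 17, 42]
    }
}
```

Calling nextPosition() a fourth time here would throw, which is the "branching" burden the comment wants to avoid pushing onto readers of the postings.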

 Allow TermScorer to expose positions, offsets and payloads
 --

 Key: LUCENE-6226
 URL: https://issues.apache.org/jira/browse/LUCENE-6226
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6226.patch, LUCENE-6226.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6226) Allow TermScorer to expose positions, offsets and payloads

2015-02-09 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311941#comment-14311941
 ] 

Adrien Grand commented on LUCENE-6226:
--

bq. The PostingsEnum contract is that you should only call .nextPosition() up 
to .freq() times

If this is the case, then TermScorer doesn't have to count how many times 
nextPosition() has been called, since Scorer extends PostingsEnum?

I have to admit I would need to think more about the pros/cons of either 
expecting nextPosition() not to be called more than freq() times, or lazily 
iterating over positions and returning NO_MORE_POSITIONS when finished. By the 
way, the documentation looks wrong today, since PostingsEnum.nextPosition says 
that it returns NO_MORE_POSITIONS when finished while e.g. BlockPostingsEnum 
does not seem to do it?
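For contrast, the lazy alternative Adrien mentions can be sketched like this (again a hypothetical stub, not the real BlockPostingsEnum): nextPosition() may be called any number of times and returns a sentinel once positions are exhausted, so callers never need to consult freq().

```java
// Hypothetical sketch of the sentinel-based contract.
class LazyPostings {
    static final int NO_MORE_POSITIONS = Integer.MAX_VALUE;
    private final int[] positions;
    private int upto = 0;

    LazyPostings(int... positions) { this.positions = positions; }

    // Safe to call repeatedly: returns the sentinel once exhausted.
    int nextPosition() {
        return upto < positions.length ? positions[upto++] : NO_MORE_POSITIONS;
    }
}

public class LazyDemo {
    public static void main(String[] args) {
        LazyPostings p = new LazyPostings(5, 9);
        int pos;
        while ((pos = p.nextPosition()) != LazyPostings.NO_MORE_POSITIONS) {
            System.out.println(pos); // prints 5, then 9
        }
    }
}
```

The trade-off is the extra bounds check on every call, which is exactly the branching cost discussed above.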



 Allow TermScorer to expose positions, offsets and payloads
 --

 Key: LUCENE-6226
 URL: https://issues.apache.org/jira/browse/LUCENE-6226
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6226.patch, LUCENE-6226.patch









[jira] [Commented] (LUCENE-6225) Clarify documentation of clone() in IndexInput

2015-02-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311972#comment-14311972
 ] 

Uwe Schindler commented on LUCENE-6225:
---

According to the java.io.Closeable docs, closing should not throw exceptions, 
because multiple closes are allowed (so if the clone is implicitly closed by 
the root object, an additional call to the clone's close() should not fail).

The comment means: if you access the cloned IndexInput after closing the 
original, the readXXX methods will throw AlreadyClosedException. For clones, 
the close() method is a no-op; that is intended.
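The semantics Uwe describes can be modeled in a short self-contained sketch (hypothetical stub, not the real IndexInput): close() is idempotent, a clone's close() is a no-op, and reading from a clone after the original is closed fails.

```java
// Hypothetical model of IndexInput clone/close semantics.
class StubInput implements Cloneable {
    private final boolean isClone;
    private final boolean[] sharedClosed; // shared flag: original + all clones

    StubInput() { this.isClone = false; this.sharedClosed = new boolean[]{false}; }
    private StubInput(boolean[] sharedClosed) { this.isClone = true; this.sharedClosed = sharedClosed; }

    @Override
    public StubInput clone() { return new StubInput(sharedClosed); }

    void close() {
        if (!isClone) sharedClosed[0] = true; // a clone's close() is a no-op
    }

    byte readByte() {
        if (sharedClosed[0]) throw new IllegalStateException("already closed");
        return 42;
    }
}

public class CloneCloseDemo {
    public static void main(String[] args) {
        StubInput original = new StubInput();
        StubInput clone = original.clone();
        clone.close();                           // no-op: original still readable
        System.out.println(original.readByte()); // 42
        original.close();
        original.close();                        // second close is allowed
        try {
            clone.readByte();                    // fails once the original is closed
        } catch (IllegalStateException e) {
            System.out.println("clone read failed: " + e.getMessage());
        }
    }
}
```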

 Clarify documentation of clone() in IndexInput
 --

 Key: LUCENE-6225
 URL: https://issues.apache.org/jira/browse/LUCENE-6225
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: Trunk


 Here is a snippet from IndexInput's documentation:
 {code}
 The original instance must take care that cloned instances throw 
 AlreadyClosedException when the original one is closed.
 {code}
 But concrete implementations don't throw this AlreadyClosedException (this 
 would break the contract on Closeable). For example, see NIOFSDirectory:
 {code}
 public void close() throws IOException {
   if (!isClone) {
    channel.close();
   }
 }
 {code}
 What trapped me was that the abstract class IndexInput overrides the default 
 implementation of clone(), but doesn't do anything useful... I guess you 
 could make it final and provide the tracking for cloned instances in this 
 class rather than reimplementing it everywhere else (isCloned() would be a 
 superclass method then too). Thoughts?
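The refactoring Dawid suggests can be sketched as follows (hypothetical code, not the actual Lucene classes): the abstract base class makes clone() final and tracks clone status itself, so subclasses stop re-implementing the bookkeeping.

```java
// Hypothetical base class that centralizes clone tracking.
abstract class BaseInput implements Cloneable {
    private boolean isClone = false;

    // Subclasses query this instead of maintaining their own flag.
    protected final boolean isClone() { return isClone; }

    @Override
    public final BaseInput clone() {
        try {
            BaseInput copy = (BaseInput) super.clone();
            copy.isClone = true; // tracked once, here
            return copy;
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}

class FileInput extends BaseInput {
    void close() {
        if (!isClone()) {
            // only the original would release the underlying channel
        }
    }
}

public class CloneTrackingDemo {
    public static void main(String[] args) {
        FileInput original = new FileInput();
        BaseInput copy = original.clone();
        System.out.println(copy != original); // true
    }
}
```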






[jira] [Updated] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6736:
-
Description: 
Managing Solr configuration files on zookeeper becomes cumbersome while using 
solr in cloud mode, especially while trying out changes in the configurations. 

It will be great if there is a request handler that can provide an API to 
manage the configurations similar to the collections handler that would allow 
actions like uploading new configurations, linking them to a collection, 
deleting configurations, etc.


example : 

{code}
#use the following command to upload a new configset called mynewconf. This 
will fail if there is already a conf called 'mynewconf'
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
@testconf.gz http://localhost:8983/solr/admin/configs/mynewconf
{code}
A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
available
A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the list 
of files in mynewconf

  was:
Managing Solr configuration files on zookeeper becomes cumbersome while using 
solr in cloud mode, especially while trying out changes in the configurations. 

It will be great if there is a request handler that can provide an API to 
manage the configurations similar to the collections handler that would allow 
actions like uploading new configurations, linking them to a collection, 
deleting configurations, etc.


example




 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.gz http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1943 - Still Failing!

2015-02-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1943/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DeleteInactiveReplicaTest.deleteLiveReplicaTest

Error Message:
Should have had a good message here

Stack Trace:
java.lang.AssertionError: Should have had a good message here
	at __randomizedtesting.SeedInfo.seed([1FCB3017A93F273D:B2AB841CB4008F48]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:125)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2613 - Still Failing

2015-02-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2613/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
	at __randomizedtesting.SeedInfo.seed([85F9D3CD1B2E502E:DADEC17B5D23DD6]:0)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
	at org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
	at org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
	at org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
	at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
	at

[jira] [Updated] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6736:
-
Description: 
Managing Solr configuration files on zookeeper becomes cumbersome while using 
solr in cloud mode, especially while trying out changes in the configurations. 

It will be great if there is a request handler that can provide an API to 
manage the configurations similar to the collections handler that would allow 
actions like uploading new configurations, linking them to a collection, 
deleting configurations, etc.


example



  was:
Managing Solr configuration files on zookeeper becomes cumbersome while using 
solr in cloud mode, especially while trying out changes in the configurations. 

It will be great if there is a request handler that can provide an API to 
manage the configurations similar to the collections handler that would allow 
actions like uploading new configurations, linking them to a collection, 
deleting configurations, etc.


 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example






[jira] [Assigned] (LUCENE-6225) Clarify documentation of clone() in IndexInput

2015-02-09 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss reassigned LUCENE-6225:
---

Assignee: Dawid Weiss

 Clarify documentation of clone() in IndexInput
 --

 Key: LUCENE-6225
 URL: https://issues.apache.org/jira/browse/LUCENE-6225
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: Trunk


 Here is a snippet from IndexInput's documentation:
 {code}
 The original instance must take care that cloned instances throw 
 AlreadyClosedException when the original one is closed.
 {code}
 But concrete implementations don't throw this AlreadyClosedException (this 
 would break the contract on Closeable). For example, see NIOFSDirectory:
 {code}
 public void close() throws IOException {
   if (!isClone) {
    channel.close();
   }
 }
 {code}
 What trapped me was that the abstract class IndexInput overrides the default 
 implementation of clone(), but doesn't do anything useful... I guess you 
 could make it final and provide the tracking for cloned instances in this 
 class rather than reimplementing it everywhere else (isCloned() would be a 
 superclass method then too). Thoughts?






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-09 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311966#comment-14311966
 ] 

Anshum Gupta commented on SOLR-6736:


Sure, makes sense.

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example






[jira] [Commented] (SOLR-7084) FreeTextSuggester Nullpointer when building dictionary

2015-02-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311968#comment-14311968
 ] 

Jan Høydahl commented on SOLR-7084:
---

Just throwing an exception is not very elegant. We could mask the exception and 
return an empty list without further notice, but that would render the API 
inconsistent. Here are two options:
A) Fail the whole request with a temporary error code (e.g. 503 Service 
Unavailable)
B) Fail only the request for this dictionary, returning an empty list and an 
error code (SuggesterResult)

I'm tempted to suggest A here for the sake of simplicity. If requesting 
multiple dictionaries in one request, you won't get a response before all 
dictionaries are available and working.
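Option A can be sketched in a few lines (hypothetical classes, not the actual Solr patch): while the dictionary is still building, the handler rejects the whole request with a 503-style error.

```java
// Hypothetical sketch of option A: fail fast with 503 until the
// suggester dictionary has been built.
class ServiceUnavailableException extends RuntimeException {
    final int code = 503;
    ServiceUnavailableException(String msg) { super(msg); }
}

class SuggestHandler {
    private volatile boolean dictionaryReady = false;

    void markReady() { dictionaryReady = true; }

    String suggest(String q) {
        if (!dictionaryReady) {
            throw new ServiceUnavailableException("suggester dictionary not built yet");
        }
        return "suggestions for " + q;
    }
}

public class SuggestDemo {
    public static void main(String[] args) {
        SuggestHandler h = new SuggestHandler();
        try {
            h.suggest("luc");
        } catch (ServiceUnavailableException e) {
            System.out.println(e.code + ": " + e.getMessage()); // 503: suggester dictionary not built yet
        }
        h.markReady();
        System.out.println(h.suggest("luc"))
;    }
}
```

The appeal of this design is that clients see a standard retryable status instead of a silently empty (and therefore misleading) result list.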

 FreeTextSuggester Nullpointer when building dictionary
 --

 Key: SOLR-7084
 URL: https://issues.apache.org/jira/browse/SOLR-7084
 Project: Solr
  Issue Type: Bug
  Components: Suggester
Affects Versions: 4.10.2
Reporter: Jan Høydahl
Assignee: Jan Høydahl
 Fix For: 4.10.4, Trunk, 5.1


 Using {{FreeTextSuggester}}. When starting Solr or reloading a core, all 
 suggest requests will fail with a NullPointerException:
 {code}
 java.lang.NullPointerException
 	at org.apache.lucene.search.suggest.analyzing.FreeTextSuggester.lookup(FreeTextSuggester.java:542)
 	at org.apache.lucene.search.suggest.analyzing.FreeTextSuggester.lookup(FreeTextSuggester.java:440)
 	at org.apache.lucene.search.suggest.analyzing.FreeTextSuggester.lookup(FreeTextSuggester.java:429)
 	at org.apache.solr.spelling.suggest.SolrSuggester.getSuggestions(SolrSuggester.java:199)
 	...
 {code}
 Offending line:
 {code}
   BytesReader bytesReader = fst.getBytesReader();
 {code}
 The fst is null at this time.
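One way to avoid dereferencing the null FST is a defensive guard at lookup time. This is only an illustrative sketch (hypothetical classes, not the real FreeTextSuggester, where `fst` stands in for the Lucene FST field); whether to return empty or throw is exactly the A/B question discussed above.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical suggester with a null-FST guard.
class Suggester {
    private Object fst = null; // stands in for the not-yet-built FST

    List<String> lookup(String key) {
        if (fst == null) {
            // Alternative: throw a retryable error here instead.
            return Collections.emptyList();
        }
        return List.of(key + "-suggestion");
    }

    void build() { fst = new Object(); } // stands in for dictionary building
}

public class GuardDemo {
    public static void main(String[] args) {
        Suggester s = new Suggester();
        System.out.println(s.lookup("foo")); // [] before the dictionary is built
        s.build();
        System.out.println(s.lookup("foo")); // [foo-suggestion]
    }
}
```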






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311973#comment-14311973
 ] 

Noble Paul commented on SOLR-6736:
--

[~varunrajput] I have updated the syntax and semantics of this API. Please 
update your patch to reflect the description.

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.gz http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[jira] [Commented] (LUCENE-6225) Clarify documentation of clone() in IndexInput

2015-02-09 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311975#comment-14311975
 ] 

Dawid Weiss commented on LUCENE-6225:
-

I think the comment should read exactly what your explanation of the comment 
was, it'd be clearer then... :)

 Clarify documentation of clone() in IndexInput
 --

 Key: LUCENE-6225
 URL: https://issues.apache.org/jira/browse/LUCENE-6225
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: Trunk


 Here is a snippet from IndexInput's documentation:
 {code}
 The original instance must take care that cloned instances throw 
 AlreadyClosedException when the original one is closed.
 {code}
 But concrete implementations don't throw this AlreadyClosedException (this 
 would break the contract on Closeable). For example, see NIOFSDirectory:
 {code}
 public void close() throws IOException {
   if (!isClone) {
    channel.close();
   }
 }
 {code}
 What trapped me was that the abstract class IndexInput overrides the default 
 implementation of clone(), but doesn't do anything useful... I guess you 
 could make it final and provide the tracking for cloned instances in this 
 class rather than reimplementing it everywhere else (isCloned() would be a 
 superclass method then too). Thoughts?






[jira] [Updated] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6736:
-
Description: 
Managing Solr configuration files on zookeeper becomes cumbersome while using 
solr in cloud mode, especially while trying out changes in the configurations. 

It will be great if there is a request handler that can provide an API to 
manage the configurations similar to the collections handler that would allow 
actions like uploading new configurations, linking them to a collection, 
deleting configurations, etc.

example : 

{code}
#use the following command to upload a new configset called mynewconf. This 
will fail if there is already a conf called 'mynewconf'. The file could be a 
jar, zip or tar file
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
@testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
{code}
A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
available
A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the list 
of files in mynewconf

  was:
Managing Solr configuration files on zookeeper becomes cumbersome while using 
solr in cloud mode, especially while trying out changes in the configurations. 

It will be great if there is a request handler that can provide an API to 
manage the configurations similar to the collections handler that would allow 
actions like uploading new configurations, linking them to a collection, 
deleting configurations, etc.


example : 

{code}
#use the following command to upload a new configset called mynewconf. This 
will fail if there is already a conf called 'mynewconf'
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
@testconf.gz http://localhost:8983/solr/admin/configs/mynewconf
{code}
A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
available
A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the list 
of files in mynewconf


 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'. The file could be a 
 jar, zip or tar file
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-09 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311983#comment-14311983
 ] 

Anshum Gupta commented on SOLR-6736:


A DELETE to remove the config too? Also, what I'm looking at here is supporting 
gz/zip/jar uploads.

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'. The file could be a 
 jar, zip or tar file which contains all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[jira] [Updated] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6736:
-
Description: 
Managing Solr configuration files on zookeeper becomes cumbersome while using 
solr in cloud mode, especially while trying out changes in the configurations. 

It will be great if there is a request handler that can provide an API to 
manage the configurations similar to the collections handler that would allow 
actions like uploading new configurations, linking them to a collection, 
deleting configurations, etc.

example : 

{code}
#use the following command to upload a new configset called mynewconf. This 
will fail if there is already a conf called 'mynewconf'. The file could be a 
jar, zip or tar file which contains all the files for this conf.
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
@testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
{code}
A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
available
A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the list 
of files in mynewconf

  was:
Managing Solr configuration files on zookeeper becomes cumbersome while using 
solr in cloud mode, especially while trying out changes in the configurations. 

It will be great if there is a request handler that can provide an API to 
manage the configurations similar to the collections handler that would allow 
actions like uploading new configurations, linking them to a collection, 
deleting configurations, etc.

example : 

{code}
#use the following command to upload a new configset called mynewconf. This 
will fail if there is already a conf called 'mynewconf'. The file could be a 
jar, zip or tar file
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
@testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
{code}
A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
available
A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the list 
of files in mynewconf


 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'. The file could be a 
 jar, zip or tar file which contains all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[jira] [Updated] (LUCENE-6226) Allow TermScorer to expose positions, offsets and payloads

2015-02-09 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6226:
--
Attachment: LUCENE-6226.patch

You're right, I'm getting ahead of myself here.  We don't need to worry about 
positions being exhausted until we have queries that use subscorer positions 
and can't call freq() up front.

Here's an amended patch that removes the upto tracking from TermScorer and 
reverts the changes to Collector.
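The contract under discussion (callers invoke nextPosition() at most freq() times per document, with no NO_MORE_POSITIONS sentinel) can be sketched as a small toy model; this is illustrative Python, not Lucene's PostingsEnum, and the raise is only there to make the contract visible:

```python
class ToyPostingsEnum:
    """Toy model of the positions contract: the caller, not the enum,
    bounds iteration by freq(); there is no NO_MORE_POSITIONS sentinel."""

    def __init__(self, positions):
        self._positions = positions
        self._upto = 0

    def freq(self):
        # Number of occurrences of the term in the current document.
        return len(self._positions)

    def next_position(self):
        # In the real API, calling this more than freq() times is
        # undefined behavior; the toy raises to surface the violation.
        if self._upto >= len(self._positions):
            raise RuntimeError("next_position() called more than freq() times")
        pos = self._positions[self._upto]
        self._upto += 1
        return pos


postings = ToyPostingsEnum([3, 17, 42])
positions = [postings.next_position() for _ in range(postings.freq())]
print(positions)  # [3, 17, 42]
```

With the caller bounding the loop by freq(), TermScorer itself needs no upto counter, which is exactly what the amended patch removes.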

 Allow TermScorer to expose positions, offsets and payloads
 --

 Key: LUCENE-6226
 URL: https://issues.apache.org/jira/browse/LUCENE-6226
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch









[jira] [Comment Edited] (LUCENE-6226) Allow TermScorer to expose positions, offsets and payloads

2015-02-09 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14311987#comment-14311987
 ] 

Alan Woodward edited comment on LUCENE-6226 at 2/9/15 9:46 AM:
---

You're right, I'm getting ahead of myself here.  We don't need to worry about 
positions being exhausted until we have queries that use subscorer positions 
and can't call freq() up front.

Here's an amended patch that removes the upto tracking from TermScorer and 
reverts the changes to Collector.

Edit: also fixes the PostingsEnum.nextPosition() javadocs


was (Author: romseygeek):
You're right, I'm getting ahead of myself here.  We don't need to worry about 
positions being exhausted until we have queries that use subscorer positions 
and can't call freq() up front.

Here's an amended patch that removes the upto tracking from TermScorer and 
reverts the changes to Collector.

 Allow TermScorer to expose positions, offsets and payloads
 --

 Key: LUCENE-6226
 URL: https://issues.apache.org/jira/browse/LUCENE-6226
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6226.patch, LUCENE-6226.patch, LUCENE-6226.patch









[jira] [Created] (LUCENE-6227) Add BooleanClause.Occur.FILTER

2015-02-09 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6227:


 Summary: Add BooleanClause.Occur.FILTER
 Key: LUCENE-6227
 URL: https://issues.apache.org/jira/browse/LUCENE-6227
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1


Now that we have weight-level control of whether scoring is needed or not, we 
could add a new clause type to BooleanQuery. It would behave like MUST except 
that it would not participate to scoring.

Why do we need it given that we already have FilteredQuery? The idea is that by 
having a single query that performs conjunctions, we could potentially take 
better decisions. It's not ready to replace FilteredQuery yet as FilteredQuery 
has handling of random-access filters that BooleanQuery doesn't, but it's a 
first step towards that direction and eventually FilteredQuery would just 
rewrite to a BooleanQuery.

I've been calling this new clause type FILTER so far, but feel free to propose 
a better name.
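The proposed semantics can be sketched as a toy scorer: a FILTER clause constrains matching exactly like MUST but contributes nothing to the score. This is a hypothetical Python model, not Lucene code; the clause tuples and weights are invented for illustration:

```python
def toy_boolean_score(doc_terms, clauses):
    """doc_terms: set of terms present in the document.
    clauses: list of (occur, term, weight), occur in {'MUST', 'FILTER'}.
    Both occur types must match for the document to match at all,
    but only MUST clauses add their weight to the score."""
    score = 0.0
    for occur, term, weight in clauses:
        if term not in doc_terms:
            return None  # conjunction fails: document does not match
        if occur == 'MUST':
            score += weight
    return score


doc = {"lucene", "fast", "filter"}
print(toy_boolean_score(doc, [("MUST", "lucene", 2.0),
                              ("FILTER", "filter", 5.0)]))   # 2.0
print(toy_boolean_score(doc, [("MUST", "lucene", 2.0),
                              ("FILTER", "missing", 5.0)]))  # None
```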






[jira] [Updated] (LUCENE-6227) Add BooleanClause.Occur.FILTER

2015-02-09 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6227:
-
Attachment: LUCENE-6227.patch

Patch. ConjunctionScorer now takes two sets of scorers: one containing the 
required clauses, used for advancing to the next match, and another containing 
only the scoring clauses, used when score() is called.
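The two-set arrangement can be modeled in a few lines: matching is driven by the full set of clauses, while score() reads only the scoring subset. A toy sketch, not the actual ConjunctionScorer; the term/doc data is invented:

```python
def toy_conjunction_scores(required, scoring_terms):
    """required: term -> {doc_id: term_freq} for ALL clauses (MUST + FILTER);
    scoring_terms: the subset of those terms that contribute to the score.
    A document matches only if it appears in every required posting list;
    its score sums frequencies from the scoring subset alone."""
    matching = set.intersection(*(set(p) for p in required.values()))
    return {doc: sum(required[t][doc] for t in scoring_terms)
            for doc in sorted(matching)}


required = {
    "lucene": {1: 2, 2: 1, 3: 4},  # MUST clause (scores)
    "filter": {1: 1, 3: 1},        # FILTER clause (matches, never scores)
}
print(toy_conjunction_scores(required, scoring_terms={"lucene"}))
# {1: 2, 3: 4} -- doc 2 dropped (no 'filter'); scores ignore 'filter' freqs
```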

 Add BooleanClause.Occur.FILTER
 --

 Key: LUCENE-6227
 URL: https://issues.apache.org/jira/browse/LUCENE-6227
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6227.patch


 Now that we have weight-level control of whether scoring is needed or not, we 
 could add a new clause type to BooleanQuery. It would behave like MUST except 
 that it would not participate to scoring.
 Why do we need it given that we already have FilteredQuery? The idea is that 
 by having a single query that performs conjunctions, we could potentially 
 take better decisions. It's not ready to replace FilteredQuery yet as 
 FilteredQuery has handling of random-access filters that BooleanQuery 
 doesn't, but it's a first step towards that direction and eventually 
 FilteredQuery would just rewrite to a BooleanQuery.
 I've been calling this new clause type FILTER so far, but feel free to 
 propose a better name.






[jira] [Commented] (LUCENE-6227) Add BooleanClause.Occur.FILTER

2015-02-09 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312043#comment-14312043
 ] 

Michael McCandless commented on LUCENE-6227:


This looks wonderful!  Does it mean we can remove BooleanFilter?  Maybe 
TermsFilter?  Or that can come later; this is already an awesome step.

Minor silly Englishism: participate to -> participate in

 Add BooleanClause.Occur.FILTER
 --

 Key: LUCENE-6227
 URL: https://issues.apache.org/jira/browse/LUCENE-6227
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6227.patch


 Now that we have weight-level control of whether scoring is needed or not, we 
 could add a new clause type to BooleanQuery. It would behave like MUST except 
 that it would not participate to scoring.
 Why do we need it given that we already have FilteredQuery? The idea is that 
 by having a single query that performs conjunctions, we could potentially 
 take better decisions. It's not ready to replace FilteredQuery yet as 
 FilteredQuery has handling of random-access filters that BooleanQuery 
 doesn't, but it's a first step towards that direction and eventually 
 FilteredQuery would just rewrite to a BooleanQuery.
 I've been calling this new clause type FILTER so far, but feel free to 
 propose a better name.






[jira] [Updated] (LUCENE-6227) Add BooleanClause.Occur.FILTER

2015-02-09 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6227:
-
Description: 
Now that we have weight-level control of whether scoring is needed or not, we 
could add a new clause type to BooleanQuery. It would behave like MUST except 
that it would not participate in scoring.

Why do we need it given that we already have FilteredQuery? The idea is that by 
having a single query that performs conjunctions, we could potentially take 
better decisions. It's not ready to replace FilteredQuery yet as FilteredQuery 
has handling of random-access filters that BooleanQuery doesn't, but it's a 
first step towards that direction and eventually FilteredQuery would just 
rewrite to a BooleanQuery.

I've been calling this new clause type FILTER so far, but feel free to propose 
a better name.

  was:
Now that we have weight-level control of whether scoring is needed or not, we 
could add a new clause type to BooleanQuery. It would behave like MUST except 
that it would not participate to scoring.

Why do we need it given that we already have FilteredQuery? The idea is that by 
having a single query that performs conjunctions, we could potentially take 
better decisions. It's not ready to replace FilteredQuery yet as FilteredQuery 
has handling of random-access filters that BooleanQuery doesn't, but it's a 
first step towards that direction and eventually FilteredQuery would just 
rewrite to a BooleanQuery.

I've been calling this new clause type FILTER so far, but feel free to propose 
a better name.


 Add BooleanClause.Occur.FILTER
 --

 Key: LUCENE-6227
 URL: https://issues.apache.org/jira/browse/LUCENE-6227
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6227.patch


 Now that we have weight-level control of whether scoring is needed or not, we 
 could add a new clause type to BooleanQuery. It would behave like MUST except 
 that it would not participate in scoring.
 Why do we need it given that we already have FilteredQuery? The idea is that 
 by having a single query that performs conjunctions, we could potentially 
 take better decisions. It's not ready to replace FilteredQuery yet as 
 FilteredQuery has handling of random-access filters that BooleanQuery 
 doesn't, but it's a first step towards that direction and eventually 
 FilteredQuery would just rewrite to a BooleanQuery.
 I've been calling this new clause type FILTER so far, but feel free to 
 propose a better name.






[jira] [Commented] (SOLR-6693) Start script for windows fails with 32bit JRE

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312326#comment-14312326
 ] 

ASF subversion and git services commented on SOLR-6693:
---

Commit 1658423 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1658423 ]

SOLR-6693: bin\solr.cmd doesn't support 32-bit JRE/JDK running on Windows due 
to parenthesis in JAVA_HOME

 Start script for windows fails with 32bit JRE
 -

 Key: SOLR-6693
 URL: https://issues.apache.org/jira/browse/SOLR-6693
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.2
 Environment: WINDOWS 8.1
Reporter: Jan Høydahl
Assignee: Timothy Potter
  Labels: bin\solr.cmd
 Fix For: 5.0, Trunk

 Attachments: SOLR-6693.patch, SOLR-6693.patch, SOLR-6693.patch, 
 solr.cmd, solr.cmd.patch


 *Reproduce:*
 # Install JRE8 from www.java.com (typically {{C:\Program Files 
 (x86)\Java\jre1.8.0_25}})
 # Run the command {{bin\solr start -V}}
 The result is:
 {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
 *Reason*
 This comes from bad quoting of the {{%SOLR%}} variable. I think it's because 
 of the parenthesis that it freaks out. I think the same would apply for a 
 32-bit JDK because of the (x86) in the path, but I have not tested.
 Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which is 
 the offending line
 *Solution*
 Quoting the lines where %JAVA% is printed, e.g. instead of
 {noformat}
   @echo Using Java: %JAVA%
 {noformat}
 then use
 {noformat}
   @echo Using Java: "%JAVA%"
 {noformat}
 This is needed several places.






[jira] [Commented] (LUCENE-6224) move package.htmls to package-info.java for better tooling support

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312367#comment-14312367
 ] 

ASF subversion and git services commented on LUCENE-6224:
-

Commit 1658440 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1658440 ]

LUCENE-6224: cut over more package.htmls

 move package.htmls to package-info.java for better tooling support
 --

 Key: LUCENE-6224
 URL: https://issues.apache.org/jira/browse/LUCENE-6224
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Today, on java8, if you typo a link in the package documentation of 
 org.apache.lucene.search (package.html) like this:
 {code}
 {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 then javadoc will silently do the wrong thing, it will generate a 
 <code>xxx</code> block with no link at all.
 On the other hand, if instead we do it as package-info.java, then it shows up 
 in big red letters as an error in my IDE, doclint catches it at compile time, 
 etc, and we ensure our links are doing what we want.
 {code}
 [javac] 
 /home/rmuir/workspace/trunk/lucene/core/src/java/org/apache/lucene/search/package-info.java:75:
  error: reference not found
 [javac] {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 I think we should cutover? this also helps us rely less on our own linting 
 scripts long term because now doclint is checking these files too.






[jira] [Commented] (LUCENE-6224) move package.htmls to package-info.java for better tooling support

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312433#comment-14312433
 ] 

ASF subversion and git services commented on LUCENE-6224:
-

Commit 1658455 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1658455 ]

LUCENE-6224: cut over more package.htmls

 move package.htmls to package-info.java for better tooling support
 --

 Key: LUCENE-6224
 URL: https://issues.apache.org/jira/browse/LUCENE-6224
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Today, on java8, if you typo a link in the package documentation of 
 org.apache.lucene.search (package.html) like this:
 {code}
 {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 then javadoc will silently do the wrong thing, it will generate a 
 <code>xxx</code> block with no link at all.
 On the other hand, if instead we do it as package-info.java, then it shows up 
 in big red letters as an error in my IDE, doclint catches it at compile time, 
 etc, and we ensure our links are doing what we want.
 {code}
 [javac] 
 /home/rmuir/workspace/trunk/lucene/core/src/java/org/apache/lucene/search/package-info.java:75:
  error: reference not found
 [javac] {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 I think we should cutover? this also helps us rely less on our own linting 
 scripts long term because now doclint is checking these files too.






[jira] [Commented] (LUCENE-6224) move package.htmls to package-info.java for better tooling support

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312476#comment-14312476
 ] 

ASF subversion and git services commented on LUCENE-6224:
-

Commit 1658467 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1658467 ]

LUCENE-6224: cut over more package.htmls

 move package.htmls to package-info.java for better tooling support
 --

 Key: LUCENE-6224
 URL: https://issues.apache.org/jira/browse/LUCENE-6224
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Today, on java8, if you typo a link in the package documentation of 
 org.apache.lucene.search (package.html) like this:
 {code}
 {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 then javadoc will silently do the wrong thing, it will generate a 
 <code>xxx</code> block with no link at all.
 On the other hand, if instead we do it as package-info.java, then it shows up 
 in big red letters as an error in my IDE, doclint catches it at compile time, 
 etc, and we ensure our links are doing what we want.
 {code}
 [javac] 
 /home/rmuir/workspace/trunk/lucene/core/src/java/org/apache/lucene/search/package-info.java:75:
  error: reference not found
 [javac] {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 I think we should cutover? this also helps us rely less on our own linting 
 scripts long term because now doclint is checking these files too.






[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40-ea-b22) - Build # 11606 - Failure!

2015-02-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11606/
Java: 32bit/jdk1.8.0_40-ea-b22 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.test

Error Message:
Could not get expected value 'CY val' for path 'params/c' full output: 
{"responseHeader":{"status":0,"QTime":0},
 "params":{"useParams":"","wt":"json"},
 "context":{"webapp":"/w_af/ez","path":"/dump","httpMethod":"GET"}}
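The failing check walks a slash-separated path such as 'params/c' into the parsed JSON response. A minimal model of that lookup (the get_path helper is hypothetical, not the Solr test utility):

```python
import json

def get_path(obj, path):
    """Walk a slash-separated path like 'params/c' through nested dicts;
    return None when any segment is missing."""
    for key in path.split("/"):
        if not isinstance(obj, dict) or key not in obj:
            return None
        obj = obj[key]
    return obj


response = json.loads('{"responseHeader": {"status": 0, "QTime": 0},'
                      ' "params": {"useParams": "", "wt": "json"},'
                      ' "context": {"webapp": "/w_af/ez", "path": "/dump",'
                      ' "httpMethod": "GET"}}')
print(get_path(response, "params/wt"))  # json
print(get_path(response, "params/c"))   # None -- the value the test expected
```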

Stack Trace:
java.lang.AssertionError: Could not get expected value 'CY val' for path 
'params/c' full output: {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "params":{
    "useParams":"",
    "wt":"json"},
  "context":{
    "webapp":"/w_af/ez",
    "path":"/dump",
    "httpMethod":"GET"}}
at 
__randomizedtesting.SeedInfo.seed([79A3CAA7FFC942CB:F1F7F57D51352F33]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:399)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:200)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.test(TestSolrConfigHandlerCloud.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6224) move package.htmls to package-info.java for better tooling support

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312457#comment-14312457
 ] 

ASF subversion and git services commented on LUCENE-6224:
-

Commit 1658464 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1658464 ]

LUCENE-6224: cut over more package.htmls

 move package.htmls to package-info.java for better tooling support
 --

 Key: LUCENE-6224
 URL: https://issues.apache.org/jira/browse/LUCENE-6224
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Today, on java8, if you typo a link in the package documentation of 
 org.apache.lucene.search (package.html) like this:
 {code}
 {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 then javadoc will silently do the wrong thing, it will generate a 
 <code>xxx</code> block with no link at all.
 On the other hand, if instead we do it as package-info.java, then it shows up 
 in big red letters as an error in my IDE, doclint catches it at compile time, 
 etc, and we ensure our links are doing what we want.
 {code}
 [javac] 
 /home/rmuir/workspace/trunk/lucene/core/src/java/org/apache/lucene/search/package-info.java:75:
  error: reference not found
 [javac] {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 I think we should cutover? this also helps us rely less on our own linting 
 scripts long term because now doclint is checking these files too.






[jira] [Created] (SOLR-7091) Data-driven schema and block-join style update requests don't play well together

2015-02-09 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-7091:


 Summary: Data-driven schema and block-join style update requests 
don't play well together
 Key: SOLR-7091
 URL: https://issues.apache.org/jira/browse/SOLR-7091
 Project: Solr
  Issue Type: Bug
Reporter: Timothy Potter


Tried to index the basic block-join example docs from the refguide (see link 
below) into the gettingstarted collection, which uses the data-driven schema 
configs:
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-BlockJoinQueryParsers

Here's the result:

{code}
$ bin/post -c gettingstarted block-join/docs.xml 
java -classpath dist/solr-core-6.0.0-SNAPSHOT.jar -Dauto=yes -Dc=gettingstarted 
-Ddata=files org.apache.solr.util.SimplePostTool block-join/docs.xml
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are 
xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file docs.xml (application/xml) to [base]
SimplePostTool: WARNING: Solr returned an error #400 (Bad Request) for url: 
http://localhost:8983/solr/gettingstarted/update
SimplePostTool: WARNING: Response: <?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int 
name="QTime">429</int></lst><lst name="error"><str name="msg">undefined field: 
comments</str><int name="code">400</int></lst>
</response>
SimplePostTool: WARNING: IOException while reading response: 
java.io.IOException: Server returned HTTP response code: 400 for URL: 
http://localhost:8983/solr/gettingstarted/update
1 files indexed.
COMMITting Solr index changes to 
http://localhost:8983/solr/gettingstarted/update...
Time spent: 0:00:00.481
{code}

Logs from the leader are:

{code}
INFO  - 2015-02-09 17:08:27.882; org.apache.solr.schema.ManagedIndexSchema; 
Persisted managed schema version 1 at /configs/gettingstarted/managed-schema
ERROR - 2015-02-09 17:08:27.882; org.apache.solr.schema.ManagedIndexSchema; Bad 
version when trying to persist schema using 0 due to: 
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = 
BadVersion for /configs/gettingstarted/managed-schema
INFO  - 2015-02-09 17:08:27.883; org.apache.solr.schema.ManagedIndexSchema; 
Failed to persist managed schema at /configs/gettingstarted/managed-schema - 
version mismatch
INFO  - 2015-02-09 17:08:27.882; org.apache.solr.schema.ZkIndexSchemaReader$1; 
A schema change: WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/configs/gettingstarted/managed-schema, has occurred - updating schema 
from ZooKeeper ...
INFO  - 2015-02-09 17:08:27.882; org.apache.solr.schema.ZkIndexSchemaReader$1; 
A schema change: WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/configs/gettingstarted/managed-schema, has occurred - updating schema 
from ZooKeeper ...
INFO  - 2015-02-09 17:08:27.884; org.apache.solr.schema.ZkIndexSchemaReader; 
Retrieved schema version 1 from ZooKeeper
ERROR - 2015-02-09 17:08:27.891; org.apache.solr.schema.ManagedIndexSchema; Bad 
version when trying to persist schema using 0 due to: 
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = 
BadVersion for /configs/gettingstarted/managed-schema
INFO  - 2015-02-09 17:08:27.892; org.apache.solr.schema.ManagedIndexSchema; 
Failed to persist managed schema at /configs/gettingstarted/managed-schema - 
version mismatch
ERROR - 2015-02-09 17:08:27.896; org.apache.solr.schema.ManagedIndexSchema; Bad 
version when trying to persist schema using 0 due to: 
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = 
BadVersion for /configs/gettingstarted/managed-schema
INFO  - 2015-02-09 17:08:27.896; org.apache.solr.schema.ManagedIndexSchema; 
Failed to persist managed schema at /configs/gettingstarted/managed-schema - 
version mismatch
INFO  - 2015-02-09 17:08:27.896; org.apache.solr.schema.ZkIndexSchemaReader; 
Retrieved schema version 1 from ZooKeeper
INFO  - 2015-02-09 17:08:27.947; 
org.apache.solr.update.processor.LogUpdateProcessor; 
[gettingstarted_shard1_replica2] webapp=/solr path=/update 
params={update.distrib=TOLEADERupdate.chain=add-unknown-fields-to-the-schemadistrib.from=http://192.168.1.2:8983/solr/gettingstarted_shard2_replica2/wt=javabinversion=2}
 {} 0 90
ERROR - 2015-02-09 17:08:27.949; org.apache.solr.common.SolrException; 
org.apache.solr.common.SolrException: undefined field: comments
at org.apache.solr.schema.IndexSchema.getField(IndexSchema.java:1221)
at 
org.apache.solr.schema.IndexSchema.getCopyFieldsList(IndexSchema.java:1338)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:110)
at 
org.apache.solr.update.AddUpdateCommand$1.next(AddUpdateCommand.java:187)
at 
org.apache.solr.update.AddUpdateCommand$1.next(AddUpdateCommand.java:162)
at 

[jira] [Commented] (SOLR-6693) Start script for windows fails with 32bit JRE

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312339#comment-14312339
 ] 

ASF subversion and git services commented on SOLR-6693:
---

Commit 1658428 from [~thelabdude] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1658428 ]

SOLR-6693: bin\solr.cmd doesn't support 32-bit JRE/JDK running on Windows due 
to parenthesis in JAVA_HOME

 Start script for windows fails with 32bit JRE
 -

 Key: SOLR-6693
 URL: https://issues.apache.org/jira/browse/SOLR-6693
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.2
 Environment: WINDOWS 8.1
Reporter: Jan Høydahl
Assignee: Timothy Potter
  Labels: bin\solr.cmd
 Fix For: 5.0, Trunk

 Attachments: SOLR-6693.patch, SOLR-6693.patch, SOLR-6693.patch, 
 solr.cmd, solr.cmd.patch


 *Reproduce:*
 # Install JRE8 from www.java.com (typically {{C:\Program Files 
 (x86)\Java\jre1.8.0_25}})
 # Run the command {{bin\solr start -V}}
 The result is:
 {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
 *Reason*
 This comes from bad quoting of the {{%SOLR%}} variable. I think it's because 
 of the parenthesis that it freaks out. I think the same would apply for a 
 32-bit JDK because of the (x86) in the path, but I have not tested.
 Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which is 
 the offending line
 *Solution*
 Quoting the lines where %JAVA% is printed, e.g. instead of
 {noformat}
   @echo Using Java: %JAVA%
 {noformat}
 then use
 {noformat}
   @echo Using Java: "%JAVA%"
 {noformat}
 This is needed several places.
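For illustration only (this is not part of the patch): the failure mode is that cmd.exe sees the unescaped `)` in `C:\Program Files (x86)\...` as closing a parenthesized block, and surrounding the expanded value with double quotes makes it literal text. A hypothetical helper sketching the same rule:

```python
def quote_windows_path(path: str) -> str:
    """Wrap a path in double quotes so cmd.exe treats characters
    like ( ) and & as literal text rather than batch syntax."""
    # Already quoted? Leave it alone to avoid doubled quotes.
    if path.startswith('"') and path.endswith('"'):
        return path
    return f'"{path}"'

# The path from the bug report: the "(x86)" part is what breaks unquoted echo
java_home = r"C:\Program Files (x86)\Java\jre1.8.0_25"
print(quote_windows_path(java_home))
```

The same reasoning applies anywhere `%JAVA%` or `%SOLR%` is expanded unquoted, which is why the fix is needed in several places in solr.cmd.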



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6224) move package.htmls to package-info.java for better tooling support

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312440#comment-14312440
 ] 

ASF subversion and git services commented on LUCENE-6224:
-

Commit 1658460 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1658460 ]

LUCENE-6224: cut over more package.htmls

 move package.htmls to package-info.java for better tooling support
 --

 Key: LUCENE-6224
 URL: https://issues.apache.org/jira/browse/LUCENE-6224
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Today, on java8, if you typo a link in the package documentation of 
 org.apache.lucene.search (package.html) like this:
 {code}
 {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 then javadoc will silently do the wrong thing: it will generate a 
 <code>xxx</code> block with no link at all.
 On the other hand, if instead we do it as package-info.java, then it shows up 
 in big red letters as an error in my IDE, doclint catches it at compile time, 
 etc, and we ensure our links are doing what we want.
 {code}
 [javac] 
 /home/rmuir/workspace/trunk/lucene/core/src/java/org/apache/lucene/search/package-info.java:75:
  error: reference not found
 [javac] {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 I think we should cutover? this also helps us rely less on our own linting 
 scripts long term because now doclint is checking these files too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b22) - Build # 11767 - Failure!

2015-02-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11767/
Java: 64bit/jdk1.8.0_40-ea-b22 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
{"responseHeader":{"status":404,"QTime":3},"error":{"msg":"no such blob or version available: test/1","code":404}}

Stack Trace:
java.lang.AssertionError: {
  "responseHeader":{
    "status":404,
    "QTime":3},
  "error":{
    "msg":"no such blob or version available: test/1",
    "code":404}}
at 
__randomizedtesting.SeedInfo.seed([A4ACF6659668D699:7CE1DB3261B57339]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Updated] (SOLR-5507) Admin UI - Refactoring using AngularJS

2015-02-09 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-5507:

Attachment: SOLR5507.patch.gz

Patch that keeps RAT happy - I have executed (cd solr/webapp; ant rat-sources) 
without complaint.

To achieve this I added license headers to all of the library files. I also 
opted to add licenses to each AngularJS library file, rather than modify 
lucene/common-build.

This patch also includes a functionally correct cloud/tree page.

 Admin UI - Refactoring using AngularJS
 --

 Key: SOLR-5507
 URL: https://issues.apache.org/jira/browse/SOLR-5507
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Stefan Matheis (steffkes)
Assignee: Erick Erickson
Priority: Minor
 Attachments: SOLR-5507.patch, SOLR5507.patch, SOLR5507.patch, 
 SOLR5507.patch, SOLR5507.patch, SOLR5507.patch, SOLR5507.patch, 
 SOLR5507.patch.gz


 At the LSR in Dublin, I talked again to [~upayavira], and this time we 
 talked about refactoring the existing UI using AngularJS: providing (more, 
 internal) structure and what not ;
 He already started working on the Refactoring, so this is more a 'tracking' 
 issue about the progress he/we do there.
 Will extend this issue with a bit more context & additional information, w/ 
 thoughts about the possible integration in the existing UI and more (:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5743) Faceting with BlockJoin support

2015-02-09 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated SOLR-5743:
---
Attachment: SOLR-5743.patch

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
 SOLR-5743.patch


 For a sample inventory (note - nested documents) like this -
 <doc>
 <field name="id">10</field>
 <field name="type_s">parent</field>
 <field name="BRAND_s">Nike</field>
 <doc>
 <field name="id">11</field>
 <field name="COLOR_s">Red</field>
 <field name="SIZE_s">XL</field>
 </doc>
 <doc>
 <field name="id">12</field>
 <field name="COLOR_s">Blue</field>
 <field name="SIZE_s">XL</field>
 </doc>
 </doc>
 Faceting results must contain - 
 Red(1)
 XL(1) 
 Blue(1) 
 for a q=* query. 
 PS : The inventory example has been taken from this blog - 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html
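The nested parent/child structure above can be generated programmatically rather than hand-written. A minimal sketch using only Python's standard library, with the field names taken from the example (building the update payload is all it shows; posting it to a live Solr instance is out of scope):

```python
import xml.etree.ElementTree as ET

def make_doc(fields, children=()):
    """Build a Solr <doc> element with <field> children and optional
    nested child <doc> elements, as used by block-join indexing."""
    doc = ET.Element("doc")
    for name, value in fields.items():
        field = ET.SubElement(doc, "field", name=name)
        field.text = value
    for child in children:
        doc.append(child)
    return doc

# Reproduce the inventory from the issue: one parent, two child SKUs
parent = make_doc(
    {"id": "10", "type_s": "parent", "BRAND_s": "Nike"},
    children=[
        make_doc({"id": "11", "COLOR_s": "Red", "SIZE_s": "XL"}),
        make_doc({"id": "12", "COLOR_s": "Blue", "SIZE_s": "XL"}),
    ],
)
print(ET.tostring(parent, encoding="unicode"))
```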



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6693) Start script for windows fails with 32bit JRE

2015-02-09 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6693.
--
Resolution: Fixed

 Start script for windows fails with 32bit JRE
 -

 Key: SOLR-6693
 URL: https://issues.apache.org/jira/browse/SOLR-6693
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.2
 Environment: WINDOWS 8.1
Reporter: Jan Høydahl
Assignee: Timothy Potter
  Labels: bin\solr.cmd
 Fix For: 5.0, Trunk

 Attachments: SOLR-6693.patch, SOLR-6693.patch, SOLR-6693.patch, 
 solr.cmd, solr.cmd.patch


 *Reproduce:*
 # Install JRE8 from www.java.com (typically {{C:\Program Files 
 (x86)\Java\jre1.8.0_25}})
 # Run the command {{bin\solr start -V}}
 The result is:
 {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
 *Reason*
 This comes from bad quoting of the {{%SOLR%}} variable. I think it's because 
 of the parenthesis that it freaks out. I think the same would apply for a 
 32-bit JDK because of the (x86) in the path, but I have not tested.
 Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which is 
 the offending line
 *Solution*
 Quoting the lines where %JAVA% is printed, e.g. instead of
 {noformat}
   @echo Using Java: %JAVA%
 {noformat}
 then use
 {noformat}
   @echo Using Java: "%JAVA%"
 {noformat}
 This is needed several places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4479) TermVectorComponent NPE when running Solr Cloud

2015-02-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312362#comment-14312362
 ] 

Noble Paul commented on SOLR-4479:
--

If {{RequestHandler instanceof SearchHandler}} we can make {{shards.qt=qt}} by 
default. For others, let the requesthandler set the value or let the user 
configure it explicitly  

 TermVectorComponent NPE when running Solr Cloud
 ---

 Key: SOLR-4479
 URL: https://issues.apache.org/jira/browse/SOLR-4479
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1
Reporter: Vitali Kviatkouski
Assignee: Timothy Potter

 When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
 java.lang.NullPointerException
   at 
 org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at 
 org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 . Skipped
 To reproduce, follow the guide in wiki 
 (http://wiki.apache.org/solr/SolrCloud), add some documents and then request 
 http://localhost:8983/solr/collection1/tvrh?q=*%3A*
 If I include term search vector component in search handler, I get (on second 
 shard):
 SEVERE: null:java.lang.NullPointerException
 at 
 org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:321)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6066) Collector that manages diversity in search results

2015-02-09 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6066:
-
Attachment: LUCENE-6066.patch

Hi Mark,

I played with your patch to see if removing the code duplication of 
PriorityQueue would hurt the benchmark and everything looks ok:

{code}
Task                QPS baseline  StdDev    QPS patch  StdDev    Pct diff
LowSloppyPhrase        69.77  (4.5%)     69.25  (3.8%)   -0.7% (  -8% -    7%)
PKLookup              259.92  (3.2%)    258.50  (2.0%)   -0.5% (  -5% -    4%)
HighSloppyPhrase       13.96  (5.1%)     13.92  (4.8%)   -0.3% (  -9% -   10%)
OrNotHighLow         1135.87  (6.6%)   1132.89  (5.5%)   -0.3% ( -11% -   12%)
AndHighLow           1075.94  (5.2%)   1073.63  (4.6%)   -0.2% (  -9% -   10%)
LowPhrase             124.58  (2.0%)    124.49  (1.8%)   -0.1% (  -3% -    3%)
MedPhrase              78.58  (1.5%)     78.56  (1.6%)   -0.0% (  -3% -    3%)
Prefix3                77.58  (4.7%)     77.59  (3.6%)    0.0% (  -7% -    8%)
HighPhrase             14.14  (1.4%)     14.16  (1.6%)    0.1% (  -2% -    3%)
AndHighMed            248.72  (3.9%)    249.23  (3.5%)    0.2% (  -6% -    7%)
Fuzzy1                 72.16  (5.6%)     72.32  (6.2%)    0.2% ( -10% -   12%)
HighTerm               71.70  (5.3%)     71.91  (5.1%)    0.3% (  -9% -   11%)
OrHighLow              68.70  (5.4%)     68.91  (5.7%)    0.3% ( -10% -   11%)
MedTerm               220.94  (5.8%)    221.62  (5.4%)    0.3% ( -10% -   12%)
Wildcard               20.86  (1.6%)     20.92  (1.4%)    0.3% (  -2% -    3%)
LowSpanNear            16.46  (2.3%)     16.51  (2.3%)    0.3% (  -4% -    5%)
MedSpanNear            18.46  (2.4%)     18.52  (2.1%)    0.3% (  -3% -    4%)
IntNRQ                  6.63  (4.0%)      6.65  (4.0%)    0.4% (  -7% -    8%)
OrHighNotHigh          38.52  (1.7%)     38.65  (1.5%)    0.4% (  -2% -    3%)
OrNotHighHigh          79.04  (2.3%)     79.33  (1.8%)    0.4% (  -3% -    4%)
OrHighNotMed           52.77  (1.9%)     52.97  (1.5%)    0.4% (  -2% -    3%)
MedSloppyPhrase        44.24  (2.9%)     44.42  (2.5%)    0.4% (  -4% -    5%)
OrHighMed              47.19  (5.2%)     47.37  (5.4%)    0.4% (  -9% -   11%)
OrHighNotLow           85.13  (2.7%)     85.48  (2.1%)    0.4% (  -4% -    5%)
OrHighHigh             26.42  (5.1%)     26.55  (5.0%)    0.5% (  -9% -   11%)
AndHighHigh            84.14  (3.6%)     84.67  (3.0%)    0.6% (  -5% -    7%)
HighSpanNear           50.80  (1.8%)     51.18  (1.4%)    0.7% (  -2% -    4%)
Fuzzy2                 38.02  (8.4%)     38.54  (7.5%)    1.3% ( -13% -   18%)
LowTerm              1395.69  (8.9%)   1420.90  (8.4%)    1.8% ( -14% -   20%)
OrNotHighMed          310.39  (4.4%)    316.65  (3.8%)    2.0% (  -5% -   10%)
Respell                82.66  (4.7%)     84.39  (4.4%)    2.1% (  -6% -   11%)
{code}
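For reference, the Pct diff column is just the relative change of the patch's QPS over the baseline's; a one-line sketch (assuming the usual luceneutil convention):

```python
def pct_diff(baseline_qps: float, patch_qps: float) -> float:
    """Relative QPS change of the patch versus the baseline, in percent."""
    return (patch_qps - baseline_qps) / baseline_qps * 100.0

# e.g. the Respell row: 82.66 QPS baseline vs. 84.39 QPS with the patch
print(round(pct_diff(82.66, 84.39), 1))
```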

I attached the patch that I tested with.

+1 to commit

 Collector that manages diversity in search results
 --

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-6066.patch, LUCENE-PQRemoveV8.patch, 
 LUCENE-PQRemoveV9.patch


 This issue provides a new collector for situations where a client doesn't 
 want more than N matches for any given key (e.g. no more than 5 products from 
 any one retailer in a marketplace). In these circumstances a document that 
 was previously thought of as competitive during collection has to be removed 
 from the final PQ and replaced with another doc (e.g. a retailer who already 
 has 5 matches in the PQ receives a 6th match which is better than his 
 previous ones). This requires a new remove method on the existing 
 PriorityQueue class.
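A rough sketch of the collection policy described above: keep a global top-N, but once a key has hit its per-key cap, a better match for that key evicts the key's weakest entry. The class and method names here are illustrative, not the patch's actual API:

```python
class DiversifiedTopN:
    """Keep the best `size` (score, key, doc) entries overall, allowing
    at most `max_per_key` entries for any single key."""

    def __init__(self, size, max_per_key):
        self.size = size
        self.max_per_key = max_per_key
        self.entries = []  # small top-N list, so linear scans are acceptable

    def insert(self, score, key, doc):
        per_key = [e for e in self.entries if e[1] == key]
        if len(per_key) >= self.max_per_key:
            worst = min(per_key)        # the key's weakest competitive entry
            if score <= worst[0]:
                return                  # not competitive within its own key
            self.entries.remove(worst)  # the "remove" the issue asks for
        self.entries.append((score, key, doc))
        if len(self.entries) > self.size:
            self.entries.remove(min(self.entries))  # evict the global worst

    def top(self):
        return sorted(self.entries, reverse=True)
```

The key difference from a plain priority queue is the mid-queue removal: a heap alone cannot do this cheaply, which is why the issue proposes adding a remove method to Lucene's PriorityQueue.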



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6224) move package.htmls to package-info.java for better tooling support

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312470#comment-14312470
 ] 

ASF subversion and git services commented on LUCENE-6224:
-

Commit 1658465 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1658465 ]

LUCENE-6224: cut over more package.htmls

 move package.htmls to package-info.java for better tooling support
 --

 Key: LUCENE-6224
 URL: https://issues.apache.org/jira/browse/LUCENE-6224
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Today, on java8, if you typo a link in the package documentation of 
 org.apache.lucene.search (package.html) like this:
 {code}
 {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 then javadoc will silently do the wrong thing: it will generate a 
 <code>xxx</code> block with no link at all.
 On the other hand, if instead we do it as package-info.java, then it shows up 
 in big red letters as an error in my IDE, doclint catches it at compile time, 
 etc, and we ensure our links are doing what we want.
 {code}
 [javac] 
 /home/rmuir/workspace/trunk/lucene/core/src/java/org/apache/lucene/search/package-info.java:75:
  error: reference not found
 [javac] {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 I think we should cutover? this also helps us rely less on our own linting 
 scripts long term because now doclint is checking these files too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5743) Faceting with BlockJoin support

2015-02-09 Thread Dr Oleg Savrasov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312419#comment-14312419
 ] 

Dr Oleg Savrasov commented on SOLR-5743:


After investigating it, I've found that float and int types work fine for 
multivalued fields, i.e. they should be configured like

<field name="RETAILER_ID" type="int" indexed="true" stored="true" 
docValues="true" multiValued="true"/>
<field name="PRICE" type="float" indexed="true" stored="true" docValues="true" 
multiValued="true"/>

Unit test in the patch is extended to cover int and float types.
I'll try to find out if it's possible to make it work for 
multiValued=false.

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch, 
 SOLR-5743.patch


 For a sample inventory (note - nested documents) like this -
 <doc>
 <field name="id">10</field>
 <field name="type_s">parent</field>
 <field name="BRAND_s">Nike</field>
 <doc>
 <field name="id">11</field>
 <field name="COLOR_s">Red</field>
 <field name="SIZE_s">XL</field>
 </doc>
 <doc>
 <field name="id">12</field>
 <field name="COLOR_s">Blue</field>
 <field name="SIZE_s">XL</field>
 </doc>
 </doc>
 Faceting results must contain - 
 Red(1)
 XL(1) 
 Blue(1) 
 for a q=* query. 
 PS : The inventory example has been taken from this blog - 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 758 - Still Failing

2015-02-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/758/

7 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
shard4 is not consistent.  Got 324 from 
http://127.0.0.1:59280/yo/bk/collection1lastClient and got 297 from 
http://127.0.0.1:59362/yo/bk/collection1

Stack Trace:
java.lang.AssertionError: shard4 is not consistent.  Got 324 from 
http://127.0.0.1:59280/yo/bk/collection1lastClient and got 297 from 
http://127.0.0.1:59362/yo/bk/collection1
at 
__randomizedtesting.SeedInfo.seed([B33638437AC7E87F:3B620799D43B8587]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1246)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1225)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 

[jira] [Commented] (LUCENE-3973) Incorporate PMD / FindBugs

2015-02-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313404#comment-14313404
 ] 

Robert Muir commented on LUCENE-3973:
-

Here's my patch for tooling for javac warnings. I don't think more tooling is 
needed on the javac side, no fancy ant macros, no bikesheds, instead just code 
fixing. Folks can either fix the warnings, or add \@SuppressWarnings and fix 
them later. This way, any new warnings will fail the build if introduced.

If this is too much to handle at once, and we want to do it per-module, then 
put a javac.args=-Xlint:all before common-build.xml is imported, in each 
module that fails.

{noformat}
Index: lucene/common-build.xml
===================================================================
--- lucene/common-build.xml (revision 1658537)
+++ lucene/common-build.xml (working copy)
@@ -164,7 +164,8 @@
   <property name="javac.debug" value="on"/>
   <property name="javac.source" value="1.8"/>
   <property name="javac.target" value="1.8"/>
-  <property name="javac.args" value="-Xlint -Xlint:-deprecation -Xlint:-serial -Xlint:-options"/>
+  <!-- all warnings, except deprecation. -->
+  <property name="javac.args" value="-Werror -Xlint:auxiliaryclass -Xlint:cast -Xlint:classfile -Xlint:-deprecation -Xlint:dep-ann -Xlint:divzero -Xlint:empty -Xlint:fallthrough -Xlint:finally -Xlint:options -Xlint:overloads -Xlint:overrides -Xlint:path -Xlint:processing -Xlint:rawtypes -Xlint:static -Xlint:try -Xlint:unchecked -Xlint:varargs"/>
   <property name="javadoc.link" value="http://download.oracle.com/javase/8/docs/api/"/>
   <property name="javadoc.link.junit" value="http://junit.sourceforge.net/javadoc/"/>
   <property name="javadoc.packagelist.dir" location="${common.dir}/tools/javadoc"/>
{noformat}
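As an illustration of the suppress-and-fix-later option mentioned above, a hypothetical class with an unchecked cast shows how {{@SuppressWarnings}} defers one warning locally while -Werror keeps failing the build on any new ones (this example is not from the patch):

```java
import java.util.ArrayList;
import java.util.List;

public class LegacyAdapter {
  // Hypothetical example: an unchecked cast that -Xlint:unchecked flags and
  // -Werror would turn into a build failure. Annotating the method defers
  // the cleanup without breaking the build.
  @SuppressWarnings("unchecked")
  static List<String> asStringList(Object o) {
    return (List<String>) o; // unchecked cast, suppressed above
  }

  public static void main(String[] args) {
    List<String> names = new ArrayList<>();
    names.add("lucene");
    System.out.println(asStringList((Object) names).get(0)); // lucene
  }
}
```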

 Incorporate PMD / FindBugs
 --

 Key: LUCENE-3973
 URL: https://issues.apache.org/jira/browse/LUCENE-3973
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Chris Male
  Labels: newdev
 Attachments: LUCENE-3973.patch, LUCENE-3973.patch, LUCENE-3973.patch, 
 LUCENE-3973.patch, LUCENE-3973.patch, LUCENE-3973.patch, LUCENE-3973.patch, 
 LUCENE-3973.patch, core.html, solr-core.html


 This has been touched on a few times over the years.  Having static analysis 
 as part of our build seems like a big win.  For example, we could use PMD to 
 look at {{System.out.println}} statements like discussed in LUCENE-3877 and 
 we could possibly incorporate the nocommit / @author checks as well.
 There are a few things to work out as part of this:
 - Should we use both PMD and FindBugs or just one of them? They look at code 
 from different perspectives (bytecode vs source code) and target different 
 issues.  At the moment I'm in favour of trying both but that might be too 
 heavy handed for our needs.
 - What checks should we use? There's no point having the analysis if it's 
 going to raise too many false-positives or problems we don't deem 
 problematic.  
 - How should the analysis be integrated in our build? Need to work out when 
 the analysis should run, how it should be incorporated in Ant and/or Maven, 
 what impact errors should have.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6231) smokeTestRelease.py should retry failed downloads

2015-02-09 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-6231:
---
Attachment: LUCENE-6231.patch

Patch adding auto-retry to smoke tester downloads.

I successfully used this patch to run the smoke tester against the 5.0 RC2.
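The auto-retry idea can be sketched as a bounded-retry loop with backoff; the helper below is a hypothetical illustration in Java (the actual patch modifies the Python download code), not the committed change:

```java
import java.util.concurrent.Callable;

/** Hypothetical helper illustrating the bounded-retry idea behind the patch. */
public class Retry {
  static <T> T withRetries(Callable<T> task, int maxAttempts) throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return task.call();             // success: return immediately
      } catch (Exception e) {
        last = e;                       // e.g. a connect timeout
        if (attempt < maxAttempts) {
          Thread.sleep(100L * attempt); // linear backoff before retrying
        }
      }
    }
    throw last;                         // all attempts failed: rethrow last error
  }

  public static void main(String[] args) throws Exception {
    int[] calls = {0};
    // Simulate a download that times out twice, then succeeds.
    String result = withRetries(() -> {
      if (++calls[0] < 3) throw new java.io.IOException("Operation timed out");
      return "downloaded";
    }, 5);
    System.out.println(result + " after " + calls[0] + " attempts"); // downloaded after 3 attempts
  }
}
```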

 smokeTestRelease.py should retry failed downloads
 -

 Key: LUCENE-6231
 URL: https://issues.apache.org/jira/browse/LUCENE-6231
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe
Assignee: Steve Rowe
 Attachments: LUCENE-6231.patch


 In the 5.0 RC2 vote thread, [~anshumg] mentioned that 6 attempts at running 
 the smoke tester against the people.apache.org RC URL all failed because of 
 download failures.
 I had the same problem - my first two attempts also failed because of failed 
 downloads - here's the trace from one of them:
 {noformat}
 Traceback (most recent call last):
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
  line 1248, in do_open
 h.request(req.get_method(), req.selector, req.data, headers)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
  line 1061, in request
 self._send_request(method, url, body, headers)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
  line 1099, in _send_request
 self.endheaders(body)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
  line 1057, in endheaders
 self._send_output(message_body)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
  line 902, in _send_output
 self.send(msg)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
  line 840, in send
 self.connect()
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
  line 818, in connect
 self.timeout, self.source_address)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/socket.py,
  line 435, in create_connection
 raise err
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/socket.py,
  line 426, in create_connection
 sock.connect(sa)
 TimeoutError: [Errno 60] Operation timed out
 During handling of the above exception, another exception occurred:
 Traceback (most recent call last):
   File dev-tools/scripts/smokeTestRelease.py, line 117, in download
 fIn = urllib.request.urlopen(urlString)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
  line 156, in urlopen
 return opener.open(url, data, timeout)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
  line 469, in open
 response = self._open(req, data)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
  line 487, in _open
 '_open', req)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
  line 447, in _call_chain
 result = func(*args)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
  line 1268, in http_open
 return self.do_open(http.client.HTTPConnection, req)
   File 
 /Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
  line 1251, in do_open
 raise URLError(err)
 urllib.error.URLError: urlopen error [Errno 60] Operation timed out
 The above exception was the direct cause of the following exception:
 Traceback (most recent call last):
   File dev-tools/scripts/smokeTestRelease.py, line 1523, in module
 main()
   File dev-tools/scripts/smokeTestRelease.py, line 1468, in main
 smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' 
 '.join(c.test_args))
   File dev-tools/scripts/smokeTestRelease.py, line 1517, in smokeTest
 checkMaven(baseURL, tmpDir, svnRevision, version, isSigned)
   File dev-tools/scripts/smokeTestRelease.py, line 1012, in checkMaven
 crawl(artifacts[project], artifactsURL, targetDir)
   File dev-tools/scripts/smokeTestRelease.py, line 1280, in crawl
 crawl(downloadedFiles, subURL, path, exclusions)
   File dev-tools/scripts/smokeTestRelease.py, line 1280, in 

Re: [VOTE] 5.0.0 RC2

2015-02-09 Thread Steve Rowe
+1

SUCCESS! [0:54:06.294759]

Steve

 On Feb 9, 2015, at 6:16 PM, Anshum Gupta ans...@anshumgupta.net wrote:
 
 Please vote for the second release candidate for Lucene/Solr 5.0.0.
 
 The artifacts can be downloaded here:
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
 
 Or you can run the smoke tester directly with this command:
 python3.2 dev-tools/scripts/smokeTestRelease.py 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
 
 
 I could not get the above command to work as downloading some file or the 
 other timed out for me (over 6 attempts) so I instead downloaded the entire 
 RC as a tgz. I still have it here:
 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469.tgz
 
 Untar the above folder at a location of choice. Do not change the name of the 
 folder as the smokeTestRelease.py extracts information from that.
 
 and then instead of using http, used file://. Here's the command:
 
 python3.2 dev-tools/scripts/smokeTestRelease.py 
 file://path_to_the_extracted_folder
 
 and finally, here's my +1:
 
  SUCCESS! [0:30:50.246761]
 
 
 -- 
 Anshum Gupta
 http://about.me/anshumgupta


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6231) smokeTestRelease.py should retry failed downloads

2015-02-09 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313466#comment-14313466
 ] 

Steve Rowe commented on LUCENE-6231:


I forgot to mention that the smoke tester had to retry downloading one file 
when I ran it against the 5.0 RC2, so the patch worked for me.

 smokeTestRelease.py should retry failed downloads
 -

 Key: LUCENE-6231
 URL: https://issues.apache.org/jira/browse/LUCENE-6231
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe
Assignee: Steve Rowe
 Attachments: LUCENE-6231.patch


 In the 5.0 RC2 vote thread, [~anshumg] mentioned that 6 attempts at running 
 the smoke tester against the people.apache.org RC URL all failed because of 
 download failures.

[jira] [Commented] (LUCENE-6231) smokeTestRelease.py should retry failed downloads

2015-02-09 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313669#comment-14313669
 ] 

Ryan Ernst commented on LUCENE-6231:


+1

 smokeTestRelease.py should retry failed downloads
 -

 Key: LUCENE-6231
 URL: https://issues.apache.org/jira/browse/LUCENE-6231
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe
Assignee: Steve Rowe
 Attachments: LUCENE-6231.patch


 In the 5.0 RC2 vote thread, [~anshumg] mentioned that 6 attempts at running 
 the smoke tester against the people.apache.org RC URL all failed because of 
 download failures.

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_31) - Build # 4371 - Still Failing!

2015-02-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4371/
Java: 64bit/jdk1.8.0_31 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([14031E033F964C65]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1\conf
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process.

   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1\conf
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010\collection1
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 14031E033F964C65-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 

kicking github sync?

2015-02-09 Thread Ryan McKinley
It looks like the last github sync was 5 days ago :(
https://github.com/apache/lucene-solr/commits/trunk

I know this tends to lag the apache mirror, but 5 days is more than usual

Any idea what (if anything) we can do to kick it?

Thanks
ryan


[jira] [Created] (LUCENE-6231) smokeTestRelease.py should retry failed downloads

2015-02-09 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-6231:
--

 Summary: smokeTestRelease.py should retry failed downloads
 Key: LUCENE-6231
 URL: https://issues.apache.org/jira/browse/LUCENE-6231
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe
Assignee: Steve Rowe


In the 5.0 RC2 vote thread, [~anshumg] mentioned that 6 attempts at running the 
smoke tester against the people.apache.org RC URL all failed because of 
download failures.

I had the same problem - my first two attempts also failed because of failed 
downloads - here's the trace from one of them:

{noformat}
Traceback (most recent call last):
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
 line 1248, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
 line 1061, in request
self._send_request(method, url, body, headers)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
 line 1099, in _send_request
self.endheaders(body)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
 line 1057, in endheaders
self._send_output(message_body)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
 line 902, in _send_output
self.send(msg)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
 line 840, in send
self.connect()
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/http/client.py,
 line 818, in connect
self.timeout, self.source_address)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/socket.py,
 line 435, in create_connection
raise err
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/socket.py,
 line 426, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File dev-tools/scripts/smokeTestRelease.py, line 117, in download
fIn = urllib.request.urlopen(urlString)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
 line 156, in urlopen
return opener.open(url, data, timeout)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
 line 469, in open
response = self._open(req, data)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
 line 487, in _open
'_open', req)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
 line 447, in _call_chain
result = func(*args)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
 line 1268, in http_open
return self.do_open(http.client.HTTPConnection, req)
  File 
/Users/sarowe/homebrew/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py,
 line 1251, in do_open
raise URLError(err)
urllib.error.URLError: urlopen error [Errno 60] Operation timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File dev-tools/scripts/smokeTestRelease.py, line 1523, in module
main()
  File dev-tools/scripts/smokeTestRelease.py, line 1468, in main
smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' 
'.join(c.test_args))
  File dev-tools/scripts/smokeTestRelease.py, line 1517, in smokeTest
checkMaven(baseURL, tmpDir, svnRevision, version, isSigned)
  File dev-tools/scripts/smokeTestRelease.py, line 1012, in checkMaven
crawl(artifacts[project], artifactsURL, targetDir)
  File dev-tools/scripts/smokeTestRelease.py, line 1280, in crawl
crawl(downloadedFiles, subURL, path, exclusions)
  File dev-tools/scripts/smokeTestRelease.py, line 1280, in crawl
crawl(downloadedFiles, subURL, path, exclusions)
  File dev-tools/scripts/smokeTestRelease.py, line 1283, in crawl
download(text, subURL, targetDir, quiet=True)
  File dev-tools/scripts/smokeTestRelease.py, line 139, in download
raise RuntimeError('failed to download url %s' % urlString) from e
RuntimeError: failed to download url 

[jira] [Commented] (LUCENE-6232) Replace ValueSource context Map with a more concrete data type

2015-02-09 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313470#comment-14313470
 ] 

Mike Drob commented on LUCENE-6232:
---

What are our guarantees about backwards compatibility? This would touch a _lot_ 
of code across both Solr and Lucene.

Do we need to leave the existing methods, and add deprecation annotations? When 
can we excise them?

 Replace ValueSource context Map with a more concrete data type
 --

 Key: LUCENE-6232
 URL: https://issues.apache.org/jira/browse/LUCENE-6232
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mike Drob

 Inspired by LUCENE-3973
 The context object used by ValueSource and friends is a raw Map that provides 
 no type safety guarantees. In our current state, there are lots of warnings 
 about unchecked casts, raw types, and generally unsafe code from the 
 compiler's perspective.
 There are several common patterns and types of Objects that we store in the 
 context. It would be beneficial to instead use a class with typed methods for 
 get/set of Scorer, Weights, etc.
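 A typed context class along the lines of that description might look like the sketch below; the class name and methods are hypothetical, not a committed API, and plain Object stands in for Lucene's Scorer/Weight to keep the sketch self-contained:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of a typed replacement for the raw context Map. */
public class ValueSourceContext {
  private final Map<String, Object> extras = new HashMap<>();
  private Object scorer; // would be org.apache.lucene.search.Scorer in a real patch
  private Object weight; // would be org.apache.lucene.search.Weight in a real patch

  public void setScorer(Object scorer) { this.scorer = scorer; }
  public Object getScorer() { return scorer; }

  public void setWeight(Object weight) { this.weight = weight; }
  public Object getWeight() { return weight; }

  // Escape hatch for custom keys, preserving the old Map's flexibility.
  public void put(String key, Object value) { extras.put(key, value); }
  public Object get(String key) { return extras.get(key); }

  public static void main(String[] args) {
    ValueSourceContext ctx = new ValueSourceContext();
    ctx.setScorer("fakeScorer");
    ctx.put("custom", 42);
    System.out.println(ctx.getScorer() + " " + ctx.get("custom")); // fakeScorer 42
  }
}
```

 The typed getters remove the unchecked casts at call sites, which is what generates most of the compiler warnings the description mentions.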



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7005) facet.heatmap for spatial heatmap faceting on RPT

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313399#comment-14313399
 ] 

ASF subversion and git services commented on SOLR-7005:
---

Commit 1658614 from [~dsmiley] in branch 'dev/trunk'
[ https://svn.apache.org/r1658614 ]

SOLR-7005: New facet.heatmap on spatial RPT fields

 facet.heatmap for spatial heatmap faceting on RPT
 -

 Key: SOLR-7005
 URL: https://issues.apache.org/jira/browse/SOLR-7005
 Project: Solr
  Issue Type: New Feature
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.1

 Attachments: SOLR-7005_heatmap.patch, SOLR-7005_heatmap.patch, 
 SOLR-7005_heatmap.patch, SOLR-7005_heatmap.patch, heatmap_512x256.png, 
 heatmap_64x32.png


 This is a new feature that uses the new spatial Heatmap / 2D PrefixTree cell 
 counter in Lucene spatial LUCENE-6191.  This is a form of faceting, and 
 as-such I think it should live in the facet parameter namespace.  Here's 
 what the parameters are:
 * facet=true
 * facet.heatmap=fieldname
 * facet.heatmap.bbox=[-180 -90 TO 180 90]
 * facet.heatmap.gridLevel=6
 * facet.heatmap.distErrPct=0.10
 Like other faceting features, the fieldName can have local-params to exclude 
 filter queries or specify an output key.
 The bbox is optional; you get the whole world or you can specify a box or 
 actually any shape that WKT supports (you get the bounding box of whatever 
 you put).
 Ultimately, this feature needs to know the grid level, which together with 
 the input shape will yield a certain number of cells.  You can specify 
 gridLevel exactly, or don't and instead provide distErrPct which is computed 
 like it is for the RPT field type as seen in the schema.  0.10 yielded ~4k 
 cells but it'll vary.  There's also a facet.heatmap.maxCells safety net 
 defaulting to 100k.  Exceed this and you get an error.
 The output is (JSON):
 {noformat}
 {gridLevel=6,columns=64,rows=64,minX=-180.0,maxX=180.0,minY=-90.0,maxY=90.0,counts=[[0, 0, 2, 1, ...],[1, 1, 3, 2, ...],...]}
 {noformat}
 counts is null if all would be 0.  Perhaps individual row arrays should 
 likewise be null... I welcome feedback.
 I'm toying with an output format option in which you can specify a base-64'ed 
 grayscale PNG.
 Obviously this should support sharded / distributed environments.
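The counts structure described above is a row-major 2D grid. A minimal, hypothetical sketch (plain Java; the JSON parsing step is elided and the grid is hard-coded) of walking it to find the densest cell:

```java
// Sketch of consuming the heatmap facet counts described in this issue.
// The shape (columns, rows, counts as one int[] per row, null row == all
// zeros) follows the JSON above; parsing the response itself is elided.
public class HeatmapWalk {
    public static void main(String[] args) {
        int columns = 4, rows = 2;
        int[][] counts = {
            {0, 0, 2, 1},
            null,            // an all-zero row may be returned as null
        };
        int bestRow = -1, bestCol = -1, best = -1;
        for (int r = 0; r < rows; r++) {
            if (counts[r] == null) continue;  // treat null row as zeros
            for (int c = 0; c < columns; c++) {
                if (counts[r][c] > best) {
                    best = counts[r][c];
                    bestRow = r;
                    bestCol = c;
                }
            }
        }
        System.out.println(best + "@" + bestRow + "," + bestCol);
    }
}
```

Mapping a cell back to coordinates would use minX/maxX/minY/maxY and the column/row counts from the same response.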






Re: [VOTE] 5.0.0 RC2

2015-02-09 Thread Steve Rowe
FYI, I also had trouble with failed downloads (timeouts), so I modified 
smokeTestRelease.py to auto-retry - see the patch on 
https://issues.apache.org/jira/browse/LUCENE-6231.

Steve

 On Feb 9, 2015, at 10:20 PM, Steve Rowe sar...@gmail.com wrote:
 
 +1
 
 SUCCESS! [0:54:06.294759]
 
 Steve
 
 On Feb 9, 2015, at 6:16 PM, Anshum Gupta ans...@anshumgupta.net wrote:
 
 Please vote for the second release candidate for Lucene/Solr 5.0.0.
 
 The artifacts can be downloaded here:
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
 
 Or you can run the smoke tester directly with this command:
 python3.2 dev-tools/scripts/smokeTestRelease.py 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469
 
 
 I could not get the above command to work as downloading some file or the 
 other timed out for me (over 6 attempts) so I instead downloaded the entire 
 RC as a tgz. I still have it here:
 
 http://people.apache.org/~anshum/staging_area/lucene-solr-5.0.0-RC2-rev1658469.tgz
 
 Untar the above file at a location of your choice. Do not change the name of 
 the folder, as smokeTestRelease.py extracts information from it.
 
 and then instead of using http, used file://. Here's the command:
 
 python3.2 dev-tools/scripts/smokeTestRelease.py 
 file://path_to_the_extracted_folder
 
 and finally, here's my +1:
 
 SUCCESS! [0:30:50.246761]
 
 
 -- 
 Anshum Gupta
 http://about.me/anshumgupta
 





[jira] [Created] (LUCENE-6232) Replace ValueSource context Map with a more concrete data type

2015-02-09 Thread Mike Drob (JIRA)
Mike Drob created LUCENE-6232:
-

 Summary: Replace ValueSource context Map with a more concrete data 
type
 Key: LUCENE-6232
 URL: https://issues.apache.org/jira/browse/LUCENE-6232
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mike Drob


Inspired by LUCENE-3973

The context object used by ValueSource and friends is a raw Map that provides 
no type safety guarantees. In our current state, there are lots of warnings 
about unchecked casts, raw types, and generally unsafe code from the compiler's 
perspective.

There are several common patterns and types of Objects that we store in the 
context. It would be beneficial to instead use a class with typed methods for 
get/set of Scorer, Weights, etc.
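A hedged sketch of what such a typed context might look like — all class and method names here are hypothetical, and plain Object stands in for Scorer/Weight so the sketch stays self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical typed ValueSource context: well-known entries get typed
// accessors, while arbitrary keys remain available as an escape hatch for
// custom ValueSources. Object stands in for Scorer/Weight here.
public class FunctionContext {
    private Object scorer;   // would be org.apache.lucene.search.Scorer
    private Object weight;   // would be org.apache.lucene.search.Weight
    private final Map<Object, Object> extras = new HashMap<>();

    public void setScorer(Object s) { this.scorer = s; }
    public Object getScorer()       { return scorer; }
    public void setWeight(Object w) { this.weight = w; }
    public Object getWeight()       { return weight; }

    // escape hatch for custom entries, like the raw Map today
    public void put(Object key, Object value) { extras.put(key, value); }
    public Object get(Object key)             { return extras.get(key); }

    public static void main(String[] args) {
        FunctionContext ctx = new FunctionContext();
        ctx.setScorer("scorer");
        ctx.put("custom", 42);
        System.out.println(ctx.getScorer() + ":" + ctx.get("custom"));
    }
}
```

The typed accessors remove the unchecked casts at the common call sites while leaving existing Map-style users a migration path.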






[jira] [Commented] (SOLR-7005) facet.heatmap for spatial heatmap faceting on RPT

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313468#comment-14313468
 ] 

ASF subversion and git services commented on SOLR-7005:
---

Commit 1658617 from [~dsmiley] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1658617 ]

SOLR-7005: New facet.heatmap on spatial RPT fields

 facet.heatmap for spatial heatmap faceting on RPT
 -

 Key: SOLR-7005
 URL: https://issues.apache.org/jira/browse/SOLR-7005
 Project: Solr
  Issue Type: New Feature
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.1

 Attachments: SOLR-7005_heatmap.patch, SOLR-7005_heatmap.patch, 
 SOLR-7005_heatmap.patch, SOLR-7005_heatmap.patch, heatmap_512x256.png, 
 heatmap_64x32.png








Re: kicking github sync?

2015-02-09 Thread Mark Miller
Anyone filed an infra ticket? It should be synced close to right away with
our git mirror and usually no more than a couple hours on github. Neither
are updating properly.

Mark
On Mon, Feb 9, 2015 at 9:42 PM Ryan McKinley ryan...@gmail.com wrote:

 It looks like the last github sync was 5 days ago :(
 https://github.com/apache/lucene-solr/commits/trunk

 I know this tends to lag the apache mirror, but 5 days is more than usual

 Any idea what (if anything) we can do to kick it?

 Thanks
 ryan



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2617 - Still Failing

2015-02-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2617/

6 tests failed.
FAILED:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:50042//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:50042//collection1
at 
__randomizedtesting.SeedInfo.seed([B9BDF1EA2520E0BF:31E9CE308BDC8D47]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:568)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:309)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:538)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:586)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:568)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:547)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Updated] (SOLR-7033) RecoveryStrategy should not publish any state when closed / cancelled.

2015-02-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7033:
--
Attachment: SOLR-7033.patch

Here is patch. If we don't end up doing an rc3, I'll spin it off into a new 
issue.

 RecoveryStrategy should not publish any state when closed / cancelled.
 --

 Key: SOLR-7033
 URL: https://issues.apache.org/jira/browse/SOLR-7033
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: SOLR-7033.patch, SOLR-7033.patch, SOLR-7033.patch, 
 SOLR-7033.patch, SOLR-7033.patch


 Currently, when closed / cancelled, RecoveryStrategy can publish a recovery 
 failed state. In a bad loop (like when no one can become leader because no 
 one had a last state of active) this can cause very fast looped publishing of 
 this state to zk.
 It's an outstanding item to improve that specific scenario anyway, but 
 regardless, we should fix the close / cancel path to never publish any state 
 to zk.
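The intended behavior can be sketched as a guard in the publish path — this is a hypothetical model, not the actual RecoveryStrategy code; a list stands in for ZooKeeper:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the fix described above: once close()/cancel is
// called, the recovery loop must not publish any further state. Publishing
// is modeled as appending to a list instead of writing to ZooKeeper.
public class RecoverySketch {
    private volatile boolean closed = false;
    private final List<String> published = new ArrayList<>();

    void publish(String state) {
        if (closed) return;            // never publish after close/cancel
        published.add(state);
    }

    void close() { closed = true; }

    public static void main(String[] args) {
        RecoverySketch r = new RecoverySketch();
        r.publish("recovering");
        r.close();
        r.publish("recovery_failed");  // dropped: we are closed
        System.out.println(r.published);
    }
}
```

Dropping the post-close publish is what breaks the fast looped writes to zk described above.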






[jira] [Commented] (SOLR-6775) Creating backup snapshot null pointer exception

2015-02-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313471#comment-14313471
 ] 

Mark Miller commented on SOLR-6775:
---

The related tests have been failing on Windows runs since this went in, along 
with a couple of unrelated tests:

junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler
junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZk2Test
org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup
org.apache.solr.handler.TestReplicationHandlerBackup.testBackupOnCommit

 Creating backup snapshot null pointer exception
 ---

 Key: SOLR-6775
 URL: https://issues.apache.org/jira/browse/SOLR-6775
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 4.10
 Environment: Linux Server, Java version 1.7.0_21, Solr version 
 4.10.0
Reporter: Ryan Hesson
Assignee: Shalin Shekhar Mangar
  Labels: snapshot, solr
 Fix For: Trunk, 5.1

 Attachments: SOLR-6775.patch, SOLR-6775.patch


 I set up Solr Replication. I have one master on a server, one slave on 
 another server. The replication of data appears functioning correctly. The 
 issue is when the master SOLR tries to create a snapshot backup it gets a 
 null pointer exception. 
 org.apache.solr.handler.SnapShooter createSnapshot method calls 
 org.apache.solr.handler.SnapPuller.delTree(snapShotDir); at line 162 and the 
 exception happens within  org.apache.solr.handler.SnapPuller at line 1026 
 because snapShotDir is null. 
 Here is the actual log output:
 58319963 [qtp12610551-16] INFO  org.apache.solr.core.SolrCore  - newest 
 commit generation = 349
 58319983 [Thread-19] INFO  org.apache.solr.handler.SnapShooter  - Creating 
 backup snapshot...
 Exception in thread Thread-19 java.lang.NullPointerException
 at org.apache.solr.handler.SnapPuller.delTree(SnapPuller.java:1026)
 at 
 org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:162)
 at org.apache.solr.handler.SnapShooter$1.run(SnapShooter.java:91)
 I may have missed how to set the directory in the documentation but I've 
 looked around without much luck. I thought the process was to use the same 
 directory as the index data for the snapshots. Is this a known issue with 
 this release or am I missing how to set the value? If someone could tell me 
 how to set snapshotdir or confirm that it is an issue and a different way of 
 backing up the index is needed it would be much appreciated. 






[jira] [Updated] (SOLR-7005) facet.heatmap for spatial heatmap faceting on RPT

2015-02-09 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-7005:
---
Description: 
This is a new feature that uses the new spatial Heatmap / 2D PrefixTree cell 
counter in Lucene spatial LUCENE-6191.  This is a form of faceting, and as-such 
I think it should live in the facet parameter namespace.  Here's what the 
parameters are:
* facet=true
* facet.heatmap=fieldname
* facet.heatmap.geom=\[-180 -90 TO 180 90]
* facet.heatmap.gridLevel=6
* facet.heatmap.distErrPct=0.15
* facet.heatmap.format=ints2D | png
(Officially see FacetParams where options are documented)

Like other faceting features, the fieldName can have local-params to exclude 
filter queries or specify an output key.  This could be quite useful in doing 
difference faceting on the same spatial data to identify relative change 
against a baseline.

The {{geom}} is optional; you get the whole world or you can specify a box or 
WKT.

Ultimately, this feature needs to know the grid level, which together with the 
input shape will yield a certain number of cells.  You can specify gridLevel 
exactly, or don't and instead provide distErrPct which is computed like it is 
for the RPT field type as seen in the schema.  0.10 yielded ~4k cells but it'll 
vary.  There's also a facet.heatmap.maxCells safety net defaulting to 100k.  
Exceed this and you get an error.

The output is (JSON):
{noformat}
{gridLevel=6,columns=64,rows=64,minX=-180.0,maxX=180.0,minY=-90.0,maxY=90.0,counts_ints2D=[[0,
 0, 2, 1, ],[1, 1, 3, 2, ...],...]}
{noformat}
counts_ints2D is null if all would be 0.  individual row arrays should likewise 
be null... I welcome feedback.

If you set the output to 'png' then you get a 4-byte per pixel/cell PNG, or 
null if all counts are 0.  The high byte (alpha channel) is inverted so that 
counts need to exceed 2^24 before the image will start to fade out if you try 
and view it.

This supports sharded / distributed queries.

  was:
This is a new feature that uses the new spatial Heatmap / 2D PrefixTree cell 
counter in Lucene spatial LUCENE-6191.  This is a form of faceting, and as-such 
I think it should live in the facet parameter namespace.  Here's what the 
parameters are:
* facet=true
* facet.heatmap=fieldname
* facet.heatmap.bbox=\[-180 -90 TO 180 90]
* facet.heatmap.gridLevel=6
* facet.heatmap.distErrPct=0.10

Like other faceting features, the fieldName can have local-params to exclude 
filter queries or specify an output key.

The bbox is optional; you get the whole world or you can specify a box or 
actually any shape that WKT supports (you get the bounding box of whatever you 
put).

Ultimately, this feature needs to know the grid level, which together with the 
input shape will yield a certain number of cells.  You can specify gridLevel 
exactly, or don't and instead provide distErrPct which is computed like it is 
for the RPT field type as seen in the schema.  0.10 yielded ~4k cells but it'll 
vary.  There's also a facet.heatmap.maxCells safety net defaulting to 100k.  
Exceed this and you get an error.

The output is (JSON):
{noformat}
{gridLevel=6,columns=64,rows=64,minX=-180.0,maxX=180.0,minY=-90.0,maxY=90.0,counts=[[0,
 0, 2, 1, ],[1, 1, 3, 2, ...],...]}
{noformat}
counts is null if all would be 0.  Perhaps individual row arrays should 
likewise be null... I welcome feedback.

I'm toying with an output format option in which you can specify a base-64'ed 
grayscale PNG.

Obviously this should support sharded / distributed environments.


 facet.heatmap for spatial heatmap faceting on RPT
 -

 Key: SOLR-7005
 URL: https://issues.apache.org/jira/browse/SOLR-7005
 Project: Solr
  Issue Type: New Feature
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.1

 Attachments: SOLR-7005_heatmap.patch, SOLR-7005_heatmap.patch, 
 SOLR-7005_heatmap.patch, SOLR-7005_heatmap.patch, heatmap_512x256.png, 
 heatmap_64x32.png


 This is a new feature that uses the new spatial Heatmap / 2D PrefixTree cell 
 counter in Lucene spatial LUCENE-6191.  This is a form of faceting, and 
 as-such I think it should live in the facet parameter namespace.  Here's 
 what the parameters are:
 * facet=true
 * facet.heatmap=fieldname
 * facet.heatmap.geom=\[-180 -90 TO 180 90]
 * facet.heatmap.gridLevel=6
 * facet.heatmap.distErrPct=0.15
 * facet.heatmap.format=ints2D | png
 (Officially see FacetParams where options are documented)
 Like other faceting features, the fieldName can have local-params to exclude 
 filter queries or specify an output key.  This could be quite useful in doing 
 difference faceting on the same spatial data to identify relative change 
 against a baseline.
 The {{geom}} is optional; you get the whole world or you can specify a 

[jira] [Commented] (LUCENE-6229) Remove Scorer.getChildren?

2015-02-09 Thread Stefan Pohl (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313722#comment-14313722
 ] 

Stefan Pohl commented on LUCENE-6229:
-

Hi Adrien, Mike,
thanks for your recent efforts in cleaning up with many outstanding 
refactorings.

I'm using this functionality in analytics/debugging contexts where it's not 
necessary to have best possible performance (e.g. using BooleanScorer).
LUCENE-2590 doesn't seem to be a feature that you would assume Lucene to use 
internally, and I doubt many actual users of this functionality track JIRA and 
would speak up here.

Could this functionality be provided in a different way that doesn't have the 
problems you want to address here? E.g. could users hint the search to require 
this functionality (needClauses?), which in turn leads to not using optimized 
implementations that cannot (easily) provide this information?

 Remove Scorer.getChildren?
 --

 Key: LUCENE-6229
 URL: https://issues.apache.org/jira/browse/LUCENE-6229
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor

 This API is used in a single place in our code base: 
 ToParentBlockJoinCollector. In addition, the usage is a bit buggy given that 
 using this API from a collector only works if setScorer is called with an 
 actual Scorer (and not eg. FakeScorer or BooleanScorer like you would get in 
 disjunctions) so it needs a custom IndexSearcher that does not use the 
 BulkScorer API.






[jira] [Resolved] (SOLR-7047) solr.cmd fails if Solr installation path contains parenthesis

2015-02-09 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-7047.
--
   Resolution: Fixed
Fix Version/s: (was: 5.1)
   5.0
 Assignee: Timothy Potter

Resolved by the solution for SOLR-6693

 solr.cmd fails if Solr installation path contains parenthesis
 -

 Key: SOLR-7047
 URL: https://issues.apache.org/jira/browse/SOLR-7047
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
 Environment: Windows with 32-bit JRE
Reporter: Jan Høydahl
Assignee: Timothy Potter
 Fix For: 5.0


 Steps to reproduce
 {code}
   jar xvf solr-5.0.0.zip
   rename solr-5.0.0 solr (5)
   cd solr (5)\bin
   solr.cmd start
 {code}
 The script fails when trying to assign an environment variable using 
 {{SOLR_TIP}}, which contains parens.
 This is more or less the same root issue as SOLR-6693 where the issue is that 
 {{SOLR_HOME}} contains parens in case of 32 bit Windows, i.e. {{C:\Program 
 Files (x86)}}






[jira] [Updated] (LUCENE-6066) Collector that manages diversity in search results

2015-02-09 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-6066:
-
Attachment: LUCENE-PQRemoveV9.patch

Move DiversifiedTopDocsCollector and related unit test to misc.
Added experimental annotation.
Removed a superfluous == 0 check in PriorityQueue.

Thanks, Adrien.

 Collector that manages diversity in search results
 --

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV8.patch, LUCENE-PQRemoveV9.patch


 This issue provides a new collector for situations where a client doesn't 
 want more than N matches for any given key (e.g. no more than 5 products from 
 any one retailer in a marketplace). In these circumstances a document that 
 was previously thought of as competitive during collection has to be removed 
 from the final PQ and replaced with another doc (eg a retailer who already 
 has 5 matches in the PQ receives a 6th match which is better than his 
 previous ones). This requires a new remove method on the existing 
 PriorityQueue class.
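The eviction idea can be sketched with java.util.PriorityQueue standing in for Lucene's PriorityQueue (the real patch adds a remove method there; the key names and scores below are hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.TreeMap;

// Hypothetical sketch of diversity-limited collection: keep at most
// maxPerKey docs per key (e.g. retailer), evicting the weakest match for
// that key when a better one arrives.
public class DiversitySketch {
    public static void main(String[] args) {
        int maxPerKey = 2;
        Map<String, PriorityQueue<Integer>> perKey = new HashMap<>();
        // (key, score) pairs arriving in collection order
        String[] keys =   {"a", "a", "b", "a", "b"};
        int[]    scores = { 3,   5,   1,   7,   2 };
        for (int i = 0; i < keys.length; i++) {
            PriorityQueue<Integer> pq =
                perKey.computeIfAbsent(keys[i], k -> new PriorityQueue<>());
            pq.add(scores[i]);
            if (pq.size() > maxPerKey) pq.poll(); // evict weakest for this key
        }
        // print per-key survivors in sorted order for readability
        Map<String, List<Integer>> out = new TreeMap<>();
        perKey.forEach((k, pq) -> {
            List<Integer> l = new ArrayList<>(pq);
            Collections.sort(l);
            out.put(k, l);
        });
        System.out.println(out);
    }
}
```

The collector in the patch additionally has to remove the evicted doc from the global top-N queue, which is why PriorityQueue needs the new remove method.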






[jira] [Commented] (LUCENE-6224) move package.htmls to package-info.java for better tooling support

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312389#comment-14312389
 ] 

ASF subversion and git services commented on LUCENE-6224:
-

Commit 1658447 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1658447 ]

LUCENE-6224: cut over more package.htmls

 move package.htmls to package-info.java for better tooling support
 --

 Key: LUCENE-6224
 URL: https://issues.apache.org/jira/browse/LUCENE-6224
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Today, on java8, if you typo a link in the package documentation of 
 org.apache.lucene.search (package.html) like this:
 {code}
 {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 then javadoc will silently do the wrong thing: it will generate a 
 <code>xxx</code> block with no link at all.
 On the other hand, if instead we do it as package-info.java, then it shows up 
 in big red letters as an error in my IDE, doclint catches it at compile time, 
 etc, and we ensure our links are doing what we want.
 {code}
 [javac] 
 /home/rmuir/workspace/trunk/lucene/core/src/java/org/apache/lucene/search/package-info.java:75:
  error: reference not found
 [javac] {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 I think we should cutover? this also helps us rely less on our own linting 
 scripts long term because now doclint is checking these files too.






[jira] [Commented] (SOLR-4479) TermVectorComponent NPE when running Solr Cloud

2015-02-09 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312336#comment-14312336
 ] 

Shalin Shekhar Mangar commented on SOLR-4479:
-

{quote}
But this has me thinking about whether there's a bigger bug at play here? 
Specifically, if Solr is in distributed mode, then the shards.qt parameter 
should default to the same path as the top-level request handler (/tvrh in this 
example). I tried the same with the /spell request handler and same result, the 
underlying distributed shard requests all went to /select and since the 
SpellChecking component is not wired into /select by default, there's really no 
spell checking happening on each shard.
In other words, if you send a distributed query to /tvrh without the shards.qt 
parameter, then the underlying shard requests are sent to /select and not /tvrh 
on each replica. The work-around is simple but seems like the default behavior 
should be to work without shards.qt???
{quote}

I think that makes sense.

The reason behind having shards.qt is that in old-style distributed search, 
people would put shards=abc,xyz,pqr in the defaults section of the request 
handler and therefore they need shards.qt to send the non-distrib query to a 
different handler which does not hard code the shards parameter. So anyone who 
has this situation currently should already specify a shards.qt parameter 
different than qt. So defaulting shards.qt the same as qt makes sense.

 TermVectorComponent NPE when running Solr Cloud
 ---

 Key: SOLR-4479
 URL: https://issues.apache.org/jira/browse/SOLR-4479
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1
Reporter: Vitali Kviatkouski
Assignee: Timothy Potter

 When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
 java.lang.NullPointerException
   at 
 org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at 
 org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 . Skipped
 To reproduce, follow the guide in wiki 
 (http://wiki.apache.org/solr/SolrCloud), add some documents and then request 
 http://localhost:8983/solr/collection1/tvrh?q=*%3A*
 If I include term search vector component in search handler, I get (on second 
 shard):
 SEVERE: null:java.lang.NullPointerException
 at 
 org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:321)
 at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)






[jira] [Commented] (SOLR-6693) Start script for windows fails with 32bit JRE

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312338#comment-14312338
 ] 

ASF subversion and git services commented on SOLR-6693:
---

Commit 1658426 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1658426 ]

SOLR-6693: bin\solr.cmd doesn't support 32-bit JRE/JDK running on Windows due 
to parenthesis in JAVA_HOME

 Start script for windows fails with 32bit JRE
 -

 Key: SOLR-6693
 URL: https://issues.apache.org/jira/browse/SOLR-6693
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.2
 Environment: WINDOWS 8.1
Reporter: Jan Høydahl
Assignee: Timothy Potter
  Labels: bin\solr.cmd
 Fix For: 5.0, Trunk

 Attachments: SOLR-6693.patch, SOLR-6693.patch, SOLR-6693.patch, 
 solr.cmd, solr.cmd.patch


 *Reproduce:*
 # Install JRE8 from www.java.com (typically {{C:\Program Files 
 (x86)\Java\jre1.8.0_25}})
 # Run the command {{bin\solr start -V}}
 The result is:
 {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
 *Reason*
 This comes from bad quoting of the {{%SOLR%}} variable. I think it's the 
 parentheses that make it fail. The same would likely apply to a 32-bit JDK 
 because of the (x86) in the path, but I have not tested that.
 Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which 
 line is the offending one
 *Solution*
 Quoting the lines where %JAVA% is printed, e.g. instead of
 {noformat}
   @echo Using Java: %JAVA%
 {noformat}
 then use
 {noformat}
   @echo Using Java: "%JAVA%"
 {noformat}
 This is needed several places.






[jira] [Updated] (SOLR-7076) TikaEntityProcessor should have support for onError=skip

2015-02-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7076:
-
Attachment: SOLR-7076.patch

 TikaEntityProcessor should have support for onError=skip
 

 Key: SOLR-7076
 URL: https://issues.apache.org/jira/browse/SOLR-7076
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-7076.patch, SOLR-7076.patch


 There is no reason why we can't continue if one doc fails






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2615 - Still Failing

2015-02-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2615/

6 tests failed.
REGRESSION:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:50672/i/bl/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:50672/i/bl/collection1
at 
__randomizedtesting.SeedInfo.seed([59903ADC7C8588BC:D1C40506D279E544]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:568)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:309)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:538)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:586)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:568)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:547)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Commented] (SOLR-6640) Replication can cause index corruption.

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312607#comment-14312607
 ] 

ASF subversion and git services commented on SOLR-6640:
---

Commit 1658519 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1658519 ]

SOLR-6920, SOLR-6640: Make constant and fix logging.

 Replication can cause index corruption.
 ---

 Key: SOLR-6640
 URL: https://issues.apache.org/jira/browse/SOLR-6640
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Mark Miller
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt, 
 SOLR-6640-test.patch, SOLR-6640.patch, SOLR-6640.patch, SOLR-6640.patch, 
 SOLR-6640.patch, SOLR-6640_new_index_dir.patch, SOLR-6920.patch, 
 corruptindex.log


 Test failure found on jenkins:
 http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 Stack Trace:
 java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 at 
 __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 {code}
 The cause of the inconsistency is:
 {code}
 Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
 expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
 (resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
[junit4]   2  at 
 org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
[junit4]   2  at 
 org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
[junit4]   2  at 
 org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
[junit4]   2  at 
 org.apache.lucene.index.SegmentReader.init(SegmentReader.java:102)
 {code}






[jira] [Resolved] (SOLR-6640) Replication can cause index corruption.

2015-02-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6640.
---
Resolution: Fixed

Thanks to all involved in this. Long time, bad time issue.

 Replication can cause index corruption.
 ---

 Key: SOLR-6640
 URL: https://issues.apache.org/jira/browse/SOLR-6640
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Mark Miller
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt, 
 SOLR-6640-test.patch, SOLR-6640.patch, SOLR-6640.patch, SOLR-6640.patch, 
 SOLR-6640.patch, SOLR-6640_new_index_dir.patch, SOLR-6920.patch, 
 corruptindex.log


 Test failure found on jenkins:
 http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 Stack Trace:
 java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 at 
 __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 {code}
 The cause of the inconsistency is:
 {code}
 Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
 expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
 (resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
[junit4]   2  at 
 org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
[junit4]   2  at 
 org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
[junit4]   2  at 
 org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
[junit4]   2  at 
 org.apache.lucene.index.SegmentReader.init(SegmentReader.java:102)
 {code}






[jira] [Resolved] (SOLR-6920) During replication use checksums to verify if files are the same

2015-02-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6920.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

Thanks Varun, great job.

 During replication use checksums to verify if files are the same
 

 Key: SOLR-6920
 URL: https://issues.apache.org/jira/browse/SOLR-6920
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Varun Thacker
Assignee: Mark Miller
Priority: Critical
 Fix For: 5.0, Trunk

 Attachments: SOLR-6920-5x.patch, SOLR-6920-5x.patch, 
 SOLR-6920-5x.patch, SOLR-6920-5x.patch, SOLR-6920.patch, SOLR-6920.patch, 
 SOLR-6920.patch, SOLR-6920.patch, SOLR-6920.patch, SOLR-6920.patch


 Currently we check whether an index file on the master and slave is the same 
 by checking whether its name and file length match. 
 With LUCENE-2446 we now have a checksum for each index file in the segment. 
 We should leverage this to verify whether two files are the same.
 Places like SnapPuller.isIndexStale and SnapPuller.downloadIndexFiles should 
 also check against the checksum.
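The difference between the name-plus-length check and a checksum check can be illustrated with a small sketch. This uses plain {{java.util.zip.CRC32}} over in-memory bytes for illustration only; it is not Solr's actual SnapPuller code, and the class and byte contents are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChecksumCompare {

    // Recompute a CRC32 over a file's bytes. With LUCENE-2446 Lucene
    // stores a checksum per index file; here we just compute one in memory.
    static long checksum(byte[] bytes) {
        CRC32 crc = new CRC32();
        crc.update(bytes, 0, bytes.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        // Same name, same length, different contents: the old name+length
        // test would say "same file", the checksum test would not.
        byte[] master = "segment-data-A".getBytes(StandardCharsets.UTF_8);
        byte[] slave  = "segment-data-B".getBytes(StandardCharsets.UTF_8);

        boolean sameByLength   = master.length == slave.length;
        boolean sameByChecksum = checksum(master) == checksum(slave);

        // prints: length match: true, checksum match: false
        System.out.println("length match: " + sameByLength
                + ", checksum match: " + sameByChecksum);
    }
}
```

A stale replica file that happens to keep the same name and length is exactly the case the old check misses.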






[jira] [Commented] (SOLR-6920) During replication use checksums to verify if files are the same

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312611#comment-14312611
 ] 

ASF subversion and git services commented on SOLR-6920:
---

Commit 1658524 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1658524 ]

SOLR-6920, SOLR-6640: Make constant and fix logging.

 During replication use checksums to verify if files are the same
 

 Key: SOLR-6920
 URL: https://issues.apache.org/jira/browse/SOLR-6920
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Varun Thacker
Assignee: Mark Miller
Priority: Critical
 Attachments: SOLR-6920-5x.patch, SOLR-6920-5x.patch, 
 SOLR-6920-5x.patch, SOLR-6920-5x.patch, SOLR-6920.patch, SOLR-6920.patch, 
 SOLR-6920.patch, SOLR-6920.patch, SOLR-6920.patch, SOLR-6920.patch


 Currently we check whether an index file on the master and slave is the same 
 by checking whether its name and file length match. 
 With LUCENE-2446 we now have a checksum for each index file in the segment. 
 We should leverage this to verify whether two files are the same.
 Places like SnapPuller.isIndexStale and SnapPuller.downloadIndexFiles should 
 also check against the checksum.






[jira] [Commented] (SOLR-6944) ReplicationFactorTest and HttpPartitionTest both fail with org.apache.http.NoHttpResponseException: The target server failed to respond

2015-02-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312627#comment-14312627
 ] 

Mark Miller commented on SOLR-6944:
---

This problem has gotten out of control on jenkins runs. I'll try and look into 
it more soon if no one beats me to it.

 ReplicationFactorTest and HttpPartitionTest both fail with 
 org.apache.http.NoHttpResponseException: The target server failed to respond
 ---

 Key: SOLR-6944
 URL: https://issues.apache.org/jira/browse/SOLR-6944
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6944.patch


 Our only recourse is to do a client side retry on such errors. I have some 
 retry code for this from SOLR-4509 that I will pull over here.






[jira] [Commented] (SOLR-6920) During replication use checksums to verify if files are the same

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312629#comment-14312629
 ] 

ASF subversion and git services commented on SOLR-6920:
---

Commit 1658526 from [~markrmil...@gmail.com] in branch 
'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1658526 ]

SOLR-6920, SOLR-6640: Make constant and fix logging.

 During replication use checksums to verify if files are the same
 

 Key: SOLR-6920
 URL: https://issues.apache.org/jira/browse/SOLR-6920
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Varun Thacker
Assignee: Mark Miller
Priority: Critical
 Fix For: 5.0, Trunk

 Attachments: SOLR-6920-5x.patch, SOLR-6920-5x.patch, 
 SOLR-6920-5x.patch, SOLR-6920-5x.patch, SOLR-6920.patch, SOLR-6920.patch, 
 SOLR-6920.patch, SOLR-6920.patch, SOLR-6920.patch, SOLR-6920.patch


 Currently we check whether an index file on the master and slave is the same 
 by checking whether its name and file length match. 
 With LUCENE-2446 we now have a checksum for each index file in the segment. 
 We should leverage this to verify whether two files are the same.
 Places like SnapPuller.isIndexStale and SnapPuller.downloadIndexFiles should 
 also check against the checksum.






[jira] [Updated] (SOLR-5890) Delete silently fails if not sent to shard where document was added

2015-02-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5890:
-
Fix Version/s: 5.1

 Delete silently fails if not sent to shard where document was added
 ---

 Key: SOLR-5890
 URL: https://issues.apache.org/jira/browse/SOLR-5890
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.7
 Environment: Debian 7.4.
Reporter: Peter Inglesby
Assignee: Noble Paul
  Labels: difficulty-medium, impact-medium, workaround-exists
 Fix For: Trunk, 5.1

 Attachments: 5890_tests.patch, SOLR-5890-without-broadcast.patch, 
 SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, 
 SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, 
 SOLR-5980.patch


 We have SolrCloud set up with two shards, each with a leader and a replica.  
 We use haproxy to distribute requests between the four nodes.
 Regardless of which node we send an add request to, following a commit, the 
 newly-added document is returned in a search, as expected.
 However, we can only delete a document if the delete request is sent to a 
 node in the shard where the document was added.  If we send the delete 
 request to a node in the other shard (and then send a commit) the document is 
 not deleted.  Such a delete request will get a 200 response, with the 
 following body:
   {'responseHeader'={'status'=0,'QTime'=7}}
 Apart from the very low QTime, this is indistinguishable from a 
 successful delete.
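The failure mode described above can be sketched with a toy hash router. This is a hypothetical two-shard model for illustration only (Solr's real CompositeIdRouter is more involved); the point is that a delete executed only on the receiving shard is a silent no-op when that shard does not own the id:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RoutingSketch {
    static final int NUM_SHARDS = 2;
    static final List<Map<String, String>> shards = new ArrayList<>();
    static {
        for (int i = 0; i < NUM_SHARDS; i++) {
            shards.add(new HashMap<>());
        }
    }

    // Route an id to its owning shard by hash.
    static int route(String id) {
        return Math.floorMod(id.hashCode(), NUM_SHARDS);
    }

    // Adds are always forwarded to the owning shard...
    static void add(String id, String doc) {
        shards.get(route(id)).put(id, doc);
    }

    // ...but here a delete runs only on the shard that received it.
    // If that shard does not own the id, nothing is removed, yet the
    // caller still sees a "successful" (status 0, tiny QTime) response.
    static boolean deleteOn(int shard, String id) {
        return shards.get(shard).remove(id) != null;
    }
}
```

Forwarding deletes through the same router as adds (or broadcasting them) closes the gap, which is what the attached patches address.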






[jira] [Commented] (SOLR-5890) Delete silently fails if not sent to shard where document was added

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312668#comment-14312668
 ] 

ASF subversion and git services commented on SOLR-5890:
---

Commit 1658549 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1658549 ]

SOLR-5890: Delete silently fails if not sent to shard where document was added

 Delete silently fails if not sent to shard where document was added
 ---

 Key: SOLR-5890
 URL: https://issues.apache.org/jira/browse/SOLR-5890
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.7
 Environment: Debian 7.4.
Reporter: Peter Inglesby
Assignee: Noble Paul
  Labels: difficulty-medium, impact-medium, workaround-exists
 Fix For: Trunk, 5.1

 Attachments: 5890_tests.patch, SOLR-5890-without-broadcast.patch, 
 SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, 
 SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, 
 SOLR-5980.patch


 We have SolrCloud set up with two shards, each with a leader and a replica.  
 We use haproxy to distribute requests between the four nodes.
 Regardless of which node we send an add request to, following a commit, the 
 newly-added document is returned in a search, as expected.
 However, we can only delete a document if the delete request is sent to a 
 node in the shard where the document was added.  If we send the delete 
 request to a node in the other shard (and then send a commit) the document is 
 not deleted.  Such a delete request will get a 200 response, with the 
 following body:
   {'responseHeader'={'status'=0,'QTime'=7}}
 Apart from the very low QTime, this is indistinguishable from a 
 successful delete.






[jira] [Resolved] (LUCENE-4797) Fix remaining Lucene/Solr Javadocs issue

2015-02-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4797.
-
Resolution: Fixed

doclint passes everywhere now. Thank you Uwe for helping with 5.x here!

 Fix remaining Lucene/Solr Javadocs issue
 

 Key: LUCENE-4797
 URL: https://issues.apache.org/jira/browse/LUCENE-4797
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/javadocs
Affects Versions: 4.1
Reporter: Uwe Schindler
Assignee: Uwe Schindler
  Labels: Java8
 Fix For: Trunk, 5.1

 Attachments: LUCENE-4797-branch5x.patch, LUCENE-4797-java7.patch


 Java 8 has a new feature (enabled by default): 
 http://openjdk.java.net/jeps/172
 It fails the build on:
 - incorrect links (@see, @link,...)
 - incorrect HTML entities
 - invalid HTML in general
 Thanks to our own linter, written with HTML Tidy and Python, most of these 
 bugs are already fixed in our source code, but the Oracle linter finds some 
 problems that ours does not:
 - missing escapes 
 - invalid entities
 Unfortunately the versions of JDK8 released up to today have a bug that makes 
 optional closing tags (which are valid HTML4), like </p>, mandatory. This 
 will be fixed in b78.
 Currently there is another bug in the Oracle javadocs tool (it fails to copy 
 doc-files folders), but this is under investigation at the moment.
 We should clean up our javadocs so they pass the new JDK8 javadoc tool with 
 build 78+. Maybe we can take our own linter out of service once we rely on 
 Java 8 :-)






[jira] [Created] (SOLR-7093) Cleanup comment and move hardcoded value of file size to replicate to a final

2015-02-09 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-7093:
--

 Summary: Cleanup comment and move hardcoded value of file size to 
replicate to a final
 Key: SOLR-7093
 URL: https://issues.apache.org/jira/browse/SOLR-7093
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.0
Reporter: Anshum Gupta
Priority: Minor


Creating this issue to track the commit, as I've already cut an RC. If another 
RC happens, it will go into 5.0; otherwise it will be released with 5.0.1/5.1.






[jira] [Resolved] (LUCENE-6224) move package.htmls to package-info.java for better tooling support

2015-02-09 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6224.
--
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

Resolved, hurray!

 move package.htmls to package-info.java for better tooling support
 --

 Key: LUCENE-6224
 URL: https://issues.apache.org/jira/browse/LUCENE-6224
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Fix For: Trunk, 5.1


 Today, on java8, if you typo a link in the package documentation of 
 org.apache.lucene.search (package.html) like this:
 {code}
 {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 then javadoc will silently do the wrong thing: it will generate a 
 <code>xxx</code> block with no link at all.
 On the other hand, if instead we do it as package-info.java, then it shows up 
 in big red letters as an error in my IDE, doclint catches it at compile time, 
 etc, and we ensure our links are doing what we want.
 {code}
 [javac] 
 /home/rmuir/workspace/trunk/lucene/core/src/java/org/apache/lucene/search/package-info.java:75:
  error: reference not found
 [javac] {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 I think we should cut over? This also helps us rely less on our own linting 
 scripts in the long term, because now doclint is checking these files too.






[jira] [Commented] (SOLR-6971) TestRebalanceLeaders fails too often.

2015-02-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312731#comment-14312731
 ] 

Erick Erickson commented on SOLR-6971:
--

Well, one of the main _points_ of unit tests is to hit cases you didn't 
explicitly know to test in the first place ;)...

Anyway, I have a long, boring plane flight ahead of me; I'll see if I can hack 
up some kind of dump (for testing only) when this happens, then ask you to run 
it locally so we can gather some information about where this originates. If 
that goes well, a patch probably tomorrow.



 TestRebalanceLeaders fails too often.
 -

 Key: SOLR-6971
 URL: https://issues.apache.org/jira/browse/SOLR-6971
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Erick Erickson
Priority: Minor

 I see this fail too much - I've seen 3 different fail types so far.






[jira] [Commented] (SOLR-7091) Data-driven schema and block-join style update requests don't play well together

2015-02-09 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312758#comment-14312758
 ] 

Steve Rowe commented on SOLR-7091:
--

I see the same problem on standalone Solr - from 
{{solr/example/schemaless/logs/solr.log}} after launching the schemaless 
example ({{bin/solr -e schemaless}}) and sending the nested docs update from 
the ref guide link in the issue description:

{noformat}
INFO  - 2015-02-09 19:50:36.131; org.apache.solr.schema.ManagedIndexSchema; 
Upgraded to managed schema at 
/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_0/solr/example/schemaless/solr/gettingstarted/conf/managed-schema
INFO  - 2015-02-09 19:50:36.144; 
org.apache.solr.update.processor.LogUpdateProcessor; [gettingstarted] 
webapp=/solr path=/update params={} {} 0 84
ERROR - 2015-02-09 19:50:36.145; org.apache.solr.common.SolrException; 
org.apache.solr.common.SolrException: undefined field: comments
at org.apache.solr.schema.IndexSchema.getField(IndexSchema.java:1222)
at 
org.apache.solr.schema.IndexSchema.getCopyFieldsList(IndexSchema.java:1339)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:110)
at 
org.apache.solr.update.AddUpdateCommand$1.next(AddUpdateCommand.java:186)
at 
org.apache.solr.update.AddUpdateCommand$1.next(AddUpdateCommand.java:161)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:256)
at 
org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:411)
at 
org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1199)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:238)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:166)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:328)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:79)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:931)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1085)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:697)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at 
org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:94)
at 
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:247)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:103)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
  

[jira] [Updated] (LUCENE-6230) fix or remove ecj linter, jtidy, etc?

2015-02-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6230:

Attachment: LUCENE-6230_fix_ecj_increase_ant.patch

Here is my patch to fix the ECJ speed (requires an ant upgrade). But if this is 
not providing value over what doclint already does, maybe it's best to just 
remove it.

 fix or remove ecj linter, jtidy, etc?
 -

 Key: LUCENE-6230
 URL: https://issues.apache.org/jira/browse/LUCENE-6230
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6230_fix_ecj_increase_ant.patch


 We now have doclint running on compile/javadoc to find problems. Can we 
 remove some of the extra linters?
 JTidy: this consumes a ton of memory and doesn't have good error messages. 
 Can we just remove it, in trunk at least?
 ECJ: this is slow. If we upgrade the minimum ant version from 1.8.2 to 1.8.3, 
 we can make it _really_ not generate .class files, because the javac 
 task has a createMissingPackageInfoClass attribute we can disable. Alternatively, we 
 could also remove this checker; I am unsure whether it provides anything beyond 
 doclint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.0-Linux (64bit/jdk1.8.0_40-ea-b22) - Build # 117 - Failure!

2015-02-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.0-Linux/117/
Java: 64bit/jdk1.8.0_40-ea-b22 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
Some resources were not closed, shutdown, or released.

Stack Trace:
java.lang.AssertionError: Some resources were not closed, shutdown, or released.
at __randomizedtesting.SeedInfo.seed([6F0FB8F562FCC336]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:213)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
6 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=4804, 
name=zkCallback-648-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=4733, 
name=TEST-CollectionsAPIDistributedZkTest.testDistribSearch-seed#[6F0FB8F562FCC336]-SendThread(127.0.0.1:52384),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
3) Thread[id=4810, name=zkCallback-648-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 

[jira] [Updated] (SOLR-7073) Add an API to add a jar to a collection's classpath

2015-02-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7073:
-
Description: 
The idea of having separate classloaders for each component may be 
counterproductive. This aims to add a jar dependency to the collection itself, and 
each core belonging to that collection will have the jar on its classpath.

Because we load everything from the .system collection, we cannot delay core 
loading until .system is fully loaded and available.

There is a new set of config commands to manage the dependencies at the 
collection level. It is possible to have more than one jar as a dependency. The 
{{name}} attribute is the same as the blob name that we use in the blob store API.
example:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json'  -d '{
"add-runtimelib": {"name": "jarname", "version": 2},
"update-runtimelib": {"name": "jarname", "version": 3},
"delete-runtimelib": "jarname"
}' 
{code}

The same is added to the overlay.json.

The default SolrResourceLoader does not have visibility of these jars. There 
is a classloader that can access these jars, and it is made available only to 
those components which are specially annotated.

Every pluggable component can have an optional extra attribute, 
{{runtimeLib=true}}, which means these components are not loaded at core 
load time. They are loaded on demand, and if the dependency jars 
are not available at component load time, an error is thrown.

example of registering a valueSourceParser which depends on the runtime 
classloader:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json'  -d '{
"create-valuesourceparser": {"name": "nvl",
"runtimeLib": true,
"class": "solr.org.apache.solr.search.function.NvlValueSourceParser",
"nvlFloatValue": 0.0}
}'
{code} 

  was:
The idea of having separate classloaders for each component may be counter 
productive.  This aims to add a jar dependency to the collection itself and 
each core belonging to that collection will have the jar in the classpath

As we load everything from the .system collection , we cannot make the core 
loading delayed till .system is fully loaded and is available . 

There is a new  set of  config commands to manage the dependencies on a 
collection level. It is possible to have more than one jar as a dependency. The 
lib attribute is same as the blob name that we use in the blob store API
example:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json'  -d '{
"add-runtime-lib": {"lib": "jarname", "version": 2},
"update-runtime-lib": {"lib": "jarname", "version": 3},
"delete-runtime-lib": "jarname"
}' 
{code}

The same is added to the overlay.json .

The default SolrResourceLoader does not have visibility to  these jars . There 
is a classloader that can access these jars which is made available only to 
those components which are specially annotated

Every pluggable component can have an optional extra attribute called 
{{runtimeLib=true}}, which means, these components are not be loaded at core 
load time. They are tried to be loaded on demand and if all the dependency jars 
are not available at the component load time an error is thrown . 

example of registering a valueSourceParser which depends on the runtime 
classloader
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json'  -d '{
"create-valuesourceparser": {"name": "nvl",
"runtimeLib": true,
"class": "solr.org.apache.solr.search.function.NvlValueSourceParser",
"nvlFloatValue": 0.0}
}'
{code} 


 Add an API to add a jar to a collection's classpath
 ---

 Key: SOLR-7073
 URL: https://issues.apache.org/jira/browse/SOLR-7073
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 The idea of having separate classloaders for each component may be counter 
 productive.  This aims to add a jar dependency to the collection itself and 
 each core belonging to that collection will have the jar in the classpath
 As we load everything from the .system collection , we cannot make the core 
 loading delayed till .system is fully loaded and is available . 
 There is a new  set of  config commands to manage the dependencies on a 
 collection level. It is possible to have more than one jar as a dependency. 
 The lib attribute is same as the blob name that we use in the blob store API
 example:
 {code}
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json'  -d '{
 "add-runtimelib": {"name": "jarname", "version": 2},
 "update-runtimelib": {"name": "jarname", "version": 3},
 "delete-runtimelib": "jarname"
 }' 
 {code}
 The same is added to the overlay.json .
 The default SolrResourceLoader does not have visibility to  these jars . 
 There is a 

[jira] [Commented] (LUCENE-6226) Allow TermScorer to expose positions, offsets and payloads

2015-02-09 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14311921#comment-14311921
 ] 

Adrien Grand commented on LUCENE-6226:
--

Maybe we should not allow collectors to consume positions (yet) (i.e. 
Collector.postingsFlags() should remain Collector.needsScores())? Positions can 
only be iterated once, while it's quite typical to wrap several collectors into 
a single one using MultiCollector. So if you have several collectors that need 
positions in a MultiCollector, it would only work for the first one?

Also why does TermScorer track the number of times that nextPosition() has been 
called in order to return NO_MORE_POSITIONS? The wrapped PostingsEnum should 
already take care of it?
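To make the contract under discussion concrete, here is a self-contained toy model (illustrative names only, not the real Lucene PostingsEnum API): the caller is responsible for invoking nextPosition() at most freq() times per document, so neither the enum nor the scorer needs a NO_MORE_POSITIONS sentinel or a call counter.

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for Lucene's PostingsEnum; names are illustrative only.
class ToyPostingsEnum {
    private final int[] positions; // positions of the term in the current doc
    private int upto = 0;

    ToyPostingsEnum(int... positions) { this.positions = positions; }

    int freq() { return positions.length; }

    // Contract: callers must invoke this at most freq() times per document.
    int nextPosition() { return positions[upto++]; }
}

public class PostingsContractDemo {
    // The caller honors the freq() contract, so no sentinel value is needed.
    static List<Integer> readPositions(ToyPostingsEnum postings) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < postings.freq(); i++) {
            out.add(postings.nextPosition());
        }
        return out;
    }

    public static void main(String[] args) {
        ToyPostingsEnum postings = new ToyPostingsEnum(3, 17, 42);
        System.out.println(readPositions(postings)); // prints [3, 17, 42]
    }
}
```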

 Allow TermScorer to expose positions, offsets and payloads
 --

 Key: LUCENE-6226
 URL: https://issues.apache.org/jira/browse/LUCENE-6226
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6226.patch, LUCENE-6226.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6227) Add BooleanClause.Occur.FILTER

2015-02-09 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312524#comment-14312524
 ] 

Adrien Grand commented on LUCENE-6227:
--

I agree that the names lack symmetry and it would be nice to fix it... I like 
the idea of renaming MUST_NOT to something like FILTER_NEGATION to make clear 
that it does not score. Or maybe even shorter, e.g. FILTER_NOT?

 Add BooleanClause.Occur.FILTER
 --

 Key: LUCENE-6227
 URL: https://issues.apache.org/jira/browse/LUCENE-6227
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6227.patch, LUCENE-6227.patch, LUCENE-6227.patch


 Now that we have weight-level control of whether scoring is needed or not, we 
 could add a new clause type to BooleanQuery. It would behave like MUST exept 
 that it would not participate in scoring.
 Why do we need it given that we already have FilteredQuery? The idea is that 
 by having a single query that performs conjunctions, we could potentially 
 take better decisions. It's not ready to replace FilteredQuery yet as 
 FilteredQuery has handling of random-access filters that BooleanQuery 
 doesn't, but it's a first step towards that direction and eventually 
 FilteredQuery would just rewrite to a BooleanQuery.
 I've been calling this new clause type FILTER so far, but feel free to 
 propose a better name.
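The semantics being proposed can be sketched with a self-contained toy model (not Lucene's actual BooleanQuery/Weight machinery): a FILTER clause constrains which documents match, exactly like MUST, but contributes nothing to the score.

```java
import java.util.List;
import java.util.OptionalDouble;
import java.util.Set;

// Toy illustration of the proposed FILTER clause semantics: the clause must
// match (conjunction, like MUST) but is excluded from scoring.
public class FilterClauseDemo {
    enum Occur { MUST, FILTER }

    static class Clause {
        final Occur occur;
        final Set<Integer> matchingDocs; // docs this clause matches
        final double weight;             // per-doc score contribution (MUST only)
        Clause(Occur occur, Set<Integer> matchingDocs, double weight) {
            this.occur = occur;
            this.matchingDocs = matchingDocs;
            this.weight = weight;
        }
    }

    // Empty result means the doc does not match the conjunction at all.
    static OptionalDouble score(int doc, List<Clause> clauses) {
        double score = 0;
        for (Clause c : clauses) {
            if (!c.matchingDocs.contains(doc)) {
                return OptionalDouble.empty(); // every clause must match
            }
            if (c.occur == Occur.MUST) {
                score += c.weight; // FILTER clauses add nothing here
            }
        }
        return OptionalDouble.of(score);
    }

    public static void main(String[] args) {
        List<Clause> clauses = List.of(
            new Clause(Occur.MUST, Set.of(1, 2, 3), 1.5),
            new Clause(Occur.FILTER, Set.of(2, 3, 4), 9.0)); // weight ignored
        System.out.println("doc 2 score: " + score(2, clauses).getAsDouble());
        System.out.println("doc 1 matches: " + score(1, clauses).isPresent());
    }
}
```

Doc 2 matches both clauses and is scored only by the MUST clause; doc 1 is rejected by the FILTER clause even though the MUST clause matches it.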



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5890) Delete silently fails if not sent to shard where document was added

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312541#comment-14312541
 ] 

ASF subversion and git services commented on SOLR-5890:
---

Commit 1658486 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1658486 ]

SOLR-5890: Delete silently fails if not sent to shard where document was
  added

 Delete silently fails if not sent to shard where document was added
 ---

 Key: SOLR-5890
 URL: https://issues.apache.org/jira/browse/SOLR-5890
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.7
 Environment: Debian 7.4.
Reporter: Peter Inglesby
Assignee: Noble Paul
  Labels: difficulty-medium, impact-medium, workaround-exists
 Fix For: Trunk

 Attachments: 5890_tests.patch, SOLR-5890-without-broadcast.patch, 
 SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, 
 SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, 
 SOLR-5980.patch


 We have SolrCloud set up with two shards, each with a leader and a replica.  
 We use haproxy to distribute requests between the four nodes.
 Regardless of which node we send an add request to, following a commit, the 
 newly-added document is returned in a search, as expected.
 However, we can only delete a document if the delete request is sent to a 
 node in the shard where the document was added.  If we send the delete 
 request to a node in the other shard (and then send a commit) the document is 
 not deleted.  Such a delete request will get a 200 response, with the 
 following body:
   {'responseHeader'={'status'=0,'QTime'=7}}
 Apart from the very low QTime, this is indistinguishable from a 
 successful delete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6640) Replication can cause index corruption.

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312617#comment-14312617
 ] 

ASF subversion and git services commented on SOLR-6640:
---

Commit 1658524 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1658524 ]

SOLR-6920, SOLR-6640: Make constant and fix logging.

 Replication can cause index corruption.
 ---

 Key: SOLR-6640
 URL: https://issues.apache.org/jira/browse/SOLR-6640
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Mark Miller
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt, 
 SOLR-6640-test.patch, SOLR-6640.patch, SOLR-6640.patch, SOLR-6640.patch, 
 SOLR-6640.patch, SOLR-6640_new_index_dir.patch, SOLR-6920.patch, 
 corruptindex.log


 Test failure found on jenkins:
 http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 Stack Trace:
 java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 at 
 __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 {code}
 Cause of inconsistency is:
 {code}
 Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
 expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
 (resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
[junit4]   2  at 
 org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
[junit4]   2  at 
 org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
[junit4]   2  at 
 org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
[junit4]   2  at 
 org.apache.lucene.index.SegmentReader.init(SegmentReader.java:102)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1346: POMs out of sync

2015-02-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1346/

No tests ran.

Build Log:
[...truncated 37043 lines...]
-validate-maven-dependencies:
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-test-framework:6.0.0-SNAPSHOT: checking for updates 
from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-test-framework:6.0.0-SNAPSHOT: checking for updates 
from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from 
sonatype.releases
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-common:6.0.0-SNAPSHOT: checking for updates 
from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-common:6.0.0-SNAPSHOT: checking for updates 
from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-kuromoji:6.0.0-SNAPSHOT: checking for 
updates from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-kuromoji:6.0.0-SNAPSHOT: checking for 
updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-phonetic:6.0.0-SNAPSHOT: checking for 
updates from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-phonetic:6.0.0-SNAPSHOT: checking for 
updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-backward-codecs:6.0.0-SNAPSHOT: checking for updates 
from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-backward-codecs:6.0.0-SNAPSHOT: checking for updates 
from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-codecs:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-codecs:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-core:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-core:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-expressions:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-expressions:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-grouping:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-grouping:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-highlighter:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-highlighter:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-join:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-join:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-memory:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-memory:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-misc:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-misc:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queries:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queries:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queryparser:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queryparser:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-spatial:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-spatial:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] 

[jira] [Commented] (SOLR-5379) Query-time multi-word synonym expansion

2015-02-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312536#comment-14312536
 ] 

Rafał Kuć commented on SOLR-5379:
-

I have the code updated to Solr 4.10.3 and I'm running tests now. I see a few 
issues with the code right now (i.e. some static magic-string objects, because 
some classes were moved out of Lucene core). I'll attach the updated patch 
tomorrow, but I'm not sure there will be another release from the 4.x branch. So 
I guess the easiest way would be to get the code polished for the 5.x branch and 
try committing there. What do you think?

 Query-time multi-word synonym expansion
 ---

 Key: SOLR-5379
 URL: https://issues.apache.org/jira/browse/SOLR-5379
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Tien Nguyen Manh
  Labels: multi-word, queryparser, synonym
 Fix For: 4.9, Trunk

 Attachments: conf-test-files-4_8_1.patch, quoted-4_8_1.patch, 
 quoted.patch, synonym-expander-4_8_1.patch, synonym-expander.patch


 While dealing with synonyms at query time, Solr fails to work with multi-word 
 synonyms for two reasons:
 - First, the Lucene query parser tokenizes the user query by spaces, so it 
 splits a multi-word term into separate terms before feeding them to the synonym 
 filter, and the synonym filter can't recognize the multi-word term to do the expansion.
 - Second, if the synonym filter expands into multiple terms that contain a 
 multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
 handle synonyms, but MultiPhraseQuery doesn't work with terms that have 
 different numbers of words.
 For the first, we can quote all multi-word synonyms in the user query 
 so that the Lucene query parser doesn't split them. There is a JIRA task related 
 to this one: https://issues.apache.org/jira/browse/LUCENE-2605.
 For the second, we can replace MultiPhraseQuery with an appropriate BooleanQuery 
 of SHOULD clauses containing multiple PhraseQueries when the token stream has a 
 multi-word synonym.
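The second proposal can be sketched as follows. This is a self-contained illustration, not Solr's query parser: the synonym table, query, and helper names are invented for the example. Each synonym alternative produces a complete phrase variant, and the variants are joined as an OR (SHOULD) of quoted phrases, which sidesteps MultiPhraseQuery's inability to align terms with different word counts.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy sketch: expand a query through multi-word synonyms into an OR of
// complete phrase alternatives instead of a single MultiPhraseQuery.
public class SynonymPhraseExpander {
    // Hypothetical synonym table; "nyc" maps to a three-word alternative.
    static final Map<String, List<String>> SYNONYMS = Map.of(
        "nyc", List.of("nyc", "new york city"));

    // Produce one full phrase per synonym alternative.
    static List<String> expand(String query) {
        List<String> phrases = new ArrayList<>(List.of(query));
        for (var entry : SYNONYMS.entrySet()) {
            if (!query.contains(entry.getKey())) continue;
            List<String> next = new ArrayList<>();
            for (String phrase : phrases) {
                for (String alt : entry.getValue()) {
                    next.add(phrase.replace(entry.getKey(), alt));
                }
            }
            phrases = next;
        }
        return phrases;
    }

    // Each alternative becomes a quoted phrase query, joined with OR (SHOULD).
    static String toQuery(List<String> phrases) {
        StringBuilder sb = new StringBuilder();
        for (String phrase : phrases) {
            if (sb.length() > 0) sb.append(" OR ");
            sb.append('"').append(phrase).append('"');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toQuery(expand("hotels in nyc")));
        // "hotels in nyc" OR "hotels in new york city"
    }
}
```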



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6228) Do not expose full-fledged scorers in LeafCollector.setScorer

2015-02-09 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312551#comment-14312551
 ] 

Ryan Ernst commented on LUCENE-6228:


Having both {{Score}} and {{Scorer}} is really confusing... can we have the 
interface still be {{Scorer}} and the abstract class be something else... maybe 
{{ScoringEnum}}?

 Do not expose full-fledged scorers in LeafCollector.setScorer
 -

 Key: LUCENE-6228
 URL: https://issues.apache.org/jira/browse/LUCENE-6228
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6228.patch


 Currently LeafCollector.setScorer takes a Scorer, which I don't like because 
 several methods should never be called in the context of a Collector (like 
 nextDoc or advance).
 I think it's even more trappy for methods that might seem to work in some 
 particular cases but will not work in the general case, like getChildren 
 which will not work if you have a specialized BulkScorer or iterating over 
 positions which will not work if you are in a MultiCollector and another leaf 
 collector consumes positions too.
 So I think we should restrict what can be seen from a collector to avoid such 
 traps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6228) Do not expose full-fledged scorers in LeafCollector.setScorer

2015-02-09 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312551#comment-14312551
 ] 

Ryan Ernst edited comment on LUCENE-6228 at 2/9/15 6:10 PM:


Having both {{Score}} and {{Scorer}} is really confusing...can we have the 
interface still be {{Scorer}} and the abstract class be something else...maybe 
{{ScoringEnum}}?


was (Author: rjernst):
Having both {{Score}} and {{Scorer} is really confusing...can we have the 
interface still be {{Scorer}} and the abstract class be something else...maybe 
{{ScoringEnum}}?

 Do not expose full-fledged scorers in LeafCollector.setScorer
 -

 Key: LUCENE-6228
 URL: https://issues.apache.org/jira/browse/LUCENE-6228
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6228.patch


 Currently LeafCollector.setScorer takes a Scorer, which I don't like because 
 several methods should never be called in the context of a Collector (like 
 nextDoc or advance).
 I think it's even more trappy for methods that might seem to work in some 
 particular cases but will not work in the general case, like getChildren 
 which will not work if you have a specialized BulkScorer or iterating over 
 positions which will not work if you are in a MultiCollector and another leaf 
 collector consumes positions too.
 So I think we should restrict what can be seen from a collector to avoid such 
 traps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6920) During replication use checksums to verify if files are the same

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312601#comment-14312601
 ] 

ASF subversion and git services commented on SOLR-6920:
---

Commit 1658519 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1658519 ]

SOLR-6920, SOLR-6640: Make constant and fix logging.

 During replication use checksums to verify if files are the same
 

 Key: SOLR-6920
 URL: https://issues.apache.org/jira/browse/SOLR-6920
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Varun Thacker
Assignee: Mark Miller
Priority: Critical
 Attachments: SOLR-6920-5x.patch, SOLR-6920-5x.patch, 
 SOLR-6920-5x.patch, SOLR-6920-5x.patch, SOLR-6920.patch, SOLR-6920.patch, 
 SOLR-6920.patch, SOLR-6920.patch, SOLR-6920.patch, SOLR-6920.patch


 Currently we check if an index file on the master and slave is the same by 
 checking if its name and file length match. 
 With LUCENE-2446 we now have a checksum for each index file in the segment. 
 We should leverage this to verify if two files are the same.
 Places like SnapPuller.isIndexStale and SnapPuller.downloadIndexFiles should 
 check against the checksum also.
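The proposed change to the sameness check can be sketched as follows (Python for illustration; the dict shapes and function name are invented, not SnapPuller's actual code):

```python
def is_same_file(local, remote):
    """Decide whether a local index file matches the master's copy.

    `local`/`remote` are dicts with 'name', 'length', and optionally
    'checksum' (hypothetical shapes, not Solr's real data structures).
    """
    if local["name"] != remote["name"]:
        return False
    # Prefer checksums when both sides have them: two files of equal
    # length can still differ in content.
    if "checksum" in local and "checksum" in remote:
        return local["checksum"] == remote["checksum"]
    # Pre-patch behavior: name + length only, which is what allowed
    # stale files to be treated as up to date.
    return local["length"] == remote["length"]


local = {"name": "_0.cfs", "length": 1024, "checksum": 0xCAFE}
remote = {"name": "_0.cfs", "length": 1024, "checksum": 0xBEEF}
print(is_same_file(local, remote))  # False: same length, different content
```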






[jira] [Created] (SOLR-7092) The lease renewer we create in SOLR-6969 can end up running through other tests because we don't shut it down.

2015-02-09 Thread Mark Miller (JIRA)
Mark Miller created SOLR-7092:
-

 Summary: The lease renewer we create in SOLR-6969 can end up 
running through other tests because we don't shut it down.
 Key: SOLR-7092
 URL: https://issues.apache.org/jira/browse/SOLR-7092
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Priority: Minor









[jira] [Resolved] (SOLR-6969) When opening an HDFSTransactionLog for append we must first attempt to recover its lease to prevent data loss.

2015-02-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6969.
---
Resolution: Fixed

I filed SOLR-7092 for that issue.

 When opening an HDFSTransactionLog for append we must first attempt to 
 recover its lease to prevent data loss.
 ---

 Key: SOLR-6969
 URL: https://issues.apache.org/jira/browse/SOLR-6969
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Critical
 Fix For: 5.0, Trunk

 Attachments: SOLR-6969.patch, SOLR-6969.patch


 This can happen after a hard crash and restart. The current workaround is to 
 stop and wait it out and start again. We should retry and wait a given amount 
 of time as we do when we detect safe mode though.
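The retry-and-wait behavior being suggested might look like this sketch (Python; function and parameter names are invented, not Solr's actual code):

```python
import time


def recover_lease(try_recover, timeout_s=90.0, interval_s=5.0,
                  clock=time.monotonic, sleep=time.sleep):
    """Retry lease recovery until it succeeds or a deadline passes,
    mirroring the retry-and-wait approach used for HDFS safe mode.
    All names and defaults are illustrative, not Solr's actual code.
    """
    deadline = clock() + timeout_s
    while True:
        if try_recover():
            return True
        if clock() >= deadline:
            return False
        # Sleep the regular interval, but never past the deadline.
        sleep(min(interval_s, max(0.0, deadline - clock())))
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting.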






[jira] [Commented] (SOLR-6640) Replication can cause index corruption.

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312635#comment-14312635
 ] 

ASF subversion and git services commented on SOLR-6640:
---

Commit 1658526 from [~markrmil...@gmail.com] in branch 
'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1658526 ]

SOLR-6920, SOLR-6640: Make constant and fix logging.

 Replication can cause index corruption.
 ---

 Key: SOLR-6640
 URL: https://issues.apache.org/jira/browse/SOLR-6640
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 5.0
Reporter: Shalin Shekhar Mangar
Assignee: Mark Miller
Priority: Blocker
 Fix For: 5.0, Trunk

 Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt, 
 SOLR-6640-test.patch, SOLR-6640.patch, SOLR-6640.patch, SOLR-6640.patch, 
 SOLR-6640.patch, SOLR-6640_new_index_dir.patch, SOLR-6920.patch, 
 corruptindex.log


 Test failure found on jenkins:
 http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 Stack Trace:
 java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 at 
 __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 {code}
 Cause of inconsistency is:
 {code}
 Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
 expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
 (resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
[junit4]   2  at 
 org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
[junit4]   2  at 
 org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
[junit4]   2  at 
 org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
[junit4]   2  at 
 org.apache.lucene.index.SegmentReader.init(SegmentReader.java:102)
 {code}






[jira] [Commented] (LUCENE-6227) Add BooleanClause.Occur.FILTER

2015-02-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312504#comment-14312504
 ] 

Hoss Man commented on LUCENE-6227:
--

two tangential thoughts/questions...

1) From an API/conceptual standpoint, does it make more sense for this to be a 
new Occur instance (the Occur.FILTER here) or would it make more sense for 
this to be a property on BooleanClause that could be set to true with either 
MUST or MUST_NOT clauses?

2) Assuming it's a new Occur.FILTER, should we plan on renaming Occur.MUST_NOT 
to something like Occur.FILTER_NEGATION since (unless I'm misunderstanding 
something) the non-scoring semantics of Occur.FILTER and Occur.MUST_NOT are 
basically the inverse of each other, right?  so it seems like we should probably 
do something to make it more clear that Occur.MUST_NOT has more in common with 
FILTER than with MUST ?

 Add BooleanClause.Occur.FILTER
 --

 Key: LUCENE-6227
 URL: https://issues.apache.org/jira/browse/LUCENE-6227
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6227.patch, LUCENE-6227.patch, LUCENE-6227.patch


 Now that we have weight-level control of whether scoring is needed or not, we 
 could add a new clause type to BooleanQuery. It would behave like MUST except 
 that it would not participate in scoring.
 Why do we need it given that we already have FilteredQuery? The idea is that 
 by having a single query that performs conjunctions, we could potentially 
 take better decisions. It's not ready to replace FilteredQuery yet as 
 FilteredQuery has handling of random-access filters that BooleanQuery 
 doesn't, but it's a first step towards that direction and eventually 
 FilteredQuery would just rewrite to a BooleanQuery.
 I've been calling this new clause type FILTER so far, but feel free to 
 propose a better name.
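The intended FILTER semantics, matching like MUST while contributing nothing to the score, can be sketched with a toy model (Python; this is not Lucene's BooleanQuery implementation):

```python
def boolean_match(doc, clauses):
    """Evaluate boolean clauses over a doc modeled as a set of terms.

    Each clause is (occur, term, weight); occur is 'MUST', 'MUST_NOT',
    or the proposed 'FILTER'. Returns the score if the doc matches,
    else None. (Toy model only, not Lucene's BooleanQuery.)
    """
    score = 0.0
    for occur, term, weight in clauses:
        present = term in doc
        if occur == "MUST":
            if not present:
                return None
            score += weight          # matching AND scoring
        elif occur == "FILTER":
            if not present:
                return None          # matching, but no score contribution
        elif occur == "MUST_NOT":
            if present:
                return None
    return score


doc = {"lucene", "filter"}
print(boolean_match(doc, [("MUST", "lucene", 2.0),
                          ("FILTER", "filter", 0.0)]))  # 2.0
```

Note how FILTER and MUST take the same branch on a non-match; the only difference is the score accumulation, which is what lets a single conjunction mix scoring and non-scoring clauses.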






[jira] [Updated] (SOLR-6810) Faster searching limited but high rows across many shards all with many hits

2015-02-09 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6810:

Attachment: SOLR-6810-trunk.patch

Patch for trunk. There are a few test failures related to distributed IDF which 
need fixing, but all other tests pass. I'll try to get through them this week.

 Faster searching limited but high rows across many shards all with many hits
 

 Key: SOLR-6810
 URL: https://issues.apache.org/jira/browse/SOLR-6810
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Per Steffensen
Assignee: Shalin Shekhar Mangar
  Labels: distributed_search, performance
 Attachments: SOLR-6810-trunk.patch, branch_5x_rev1642874.patch, 
 branch_5x_rev1642874.patch, branch_5x_rev1645549.patch


 Searching "limited but high rows" across "many shards" all with "many hits" is 
 slow
 E.g.
 * Query from outside client: q=something&rows=1000
 * Resulting in sub-requests to each shard something a-la this
 ** 1) q=something&rows=1000&fl=id,score
 ** 2) Request the full documents with ids in the global-top-1000 found among 
 the top-1000 from each shard
 What does the subject mean
 * "limited but high rows" means 1000 in the example above
 * "many shards" means 200-1000 in our case
 * "all with many hits" means that each of the shards has a significant 
 number of hits on the query
 The problem grows on all three factors above
 Doing such a query on our system takes between 5 min and 1 hour, depending on 
 a lot of things. It ought to be much faster, so let's make it.
 Profiling shows that the problem is that it takes lots of time to access the 
 store to get ids for (up to) 1000 docs (value of the rows parameter) per shard. 
 Having 1000 shards, it's up to 1 million ids that have to be fetched. There is 
 really no good reason to ever read information from the store for more than the 
 overall top-1000 documents that have to be returned to the client.
 For further detail see the mail-thread "Slow searching limited but high rows 
 across many shards all with high hits" started 13/11-2014 on 
 dev@lucene.apache.org
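The core idea, fetching stored fields only for the global top-k ids rather than for up to k ids per shard, can be sketched as follows (Python; a sketch of the optimization discussed, not the attached patch's code):

```python
import heapq
from itertools import islice


def global_top_k(shard_hits, k):
    """Merge per-shard (score, doc_id) lists into the global top-k ids.

    Each shard list is already sorted by descending score. Only the ids
    returned here need the costly stored-field fetch; the remaining
    up-to-k ids per shard are never read from the store. (A sketch of
    the optimization discussed, not the attached patch.)
    """
    merged = heapq.merge(*shard_hits, key=lambda hit: -hit[0])
    return [doc_id for _score, doc_id in islice(merged, k)]


shard1 = [(0.9, "a"), (0.5, "b")]
shard2 = [(0.8, "c"), (0.1, "d")]
print(global_top_k([shard1, shard2], 2))  # ['a', 'c']
```

Because `heapq.merge` is lazy, the merge stops after k elements instead of materializing all shards' hits, which matters when there are hundreds of shards.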






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312640#comment-14312640
 ] 

Noble Paul commented on SOLR-6736:
--

DELETE can wait, maybe. That is something we can add later if required.

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'. The file could be a 
 jar, zip or a tar file which contains all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-09 Thread Varun Rajput (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312606#comment-14312606
 ] 

Varun Rajput commented on SOLR-6736:


Thanks for the feedback, I will get to this later today

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch


 Managing Solr configuration files on zookeeper becomes cumbersome while using 
 solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It will be great if there is a request handler that can provide an API to 
 manage the configurations similar to the collections handler that would allow 
 actions like uploading new configurations, linking them to a collection, 
 deleting configurations, etc.
 example : 
 {code}
 #use the following command to upload a new configset called mynewconf. This 
 will fail if there is already a conf called 'mynewconf'. The file could be a 
 jar, zip or a tar file which contains all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
 available
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf






[jira] [Commented] (SOLR-6971) TestRebalanceLeaders fails too often.

2015-02-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312633#comment-14312633
 ] 

Mark Miller commented on SOLR-6971:
---

I'm seeing it elsewhere too I think. In any case, I'm not sure it's related to 
the test or what it tests - but it happens to hit this.

 TestRebalanceLeaders fails too often.
 -

 Key: SOLR-6971
 URL: https://issues.apache.org/jira/browse/SOLR-6971
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Erick Erickson
Priority: Minor

 I see this fail too much - I've seen 3 different fail types so far.






[jira] [Commented] (LUCENE-6224) move package.htmls to package-info.java for better tooling support

2015-02-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312491#comment-14312491
 ] 

ASF subversion and git services commented on LUCENE-6224:
-

Commit 1658475 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1658475 ]

LUCENE-6224: cut over more package.htmls

 move package.htmls to package-info.java for better tooling support
 --

 Key: LUCENE-6224
 URL: https://issues.apache.org/jira/browse/LUCENE-6224
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir

 Today, on java8, if you typo a link in the package documentation of 
 org.apache.lucene.search (package.html) like this:
 {code}
 {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 then javadoc will silently do the wrong thing: it will generate a 
 <code>xxx</code> block with no link at all.
 On the other hand, if instead we do it as package-info.java, then it shows up 
 in big red letters as an error in my IDE, doclint catches it at compile time, 
 etc, and we ensure our links are doing what we want.
 {code}
 [javac] 
 /home/rmuir/workspace/trunk/lucene/core/src/java/org/apache/lucene/search/package-info.java:75:
  error: reference not found
 [javac] {@link org.apache.lucene.search.TermQueryX TermQuery}
 {code}
 I think we should cut over? This also helps us rely less on our own linting 
 scripts long term, because now doclint is checking these files too.






[jira] [Resolved] (SOLR-5890) Delete silently fails if not sent to shard where document was added

2015-02-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-5890.
--
   Resolution: Fixed
Fix Version/s: (was: 5.0)

 Delete silently fails if not sent to shard where document was added
 ---

 Key: SOLR-5890
 URL: https://issues.apache.org/jira/browse/SOLR-5890
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.7
 Environment: Debian 7.4.
Reporter: Peter Inglesby
Assignee: Noble Paul
  Labels: difficulty-medium, impact-medium, workaround-exists
 Fix For: Trunk

 Attachments: 5890_tests.patch, SOLR-5890-without-broadcast.patch, 
 SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, 
 SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, SOLR-5890.patch, 
 SOLR-5980.patch


 We have SolrCloud set up with two shards, each with a leader and a replica.  
 We use haproxy to distribute requests between the four nodes.
 Regardless of which node we send an add request to, following a commit, the 
 newly-added document is returned in a search, as expected.
 However, we can only delete a document if the delete request is sent to a 
 node in the shard where the document was added.  If we send the delete 
 request to a node in the other shard (and then send a commit) the document is 
 not deleted.  Such a delete request will get a 200 response, with the 
 following body:
   {'responseHeader'={'status'=0,'QTime'=7}}
 Apart from the very low QTime, this is indistinguishable from a 
 successful delete.





