[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_31) - Build # 4466 - Still Failing!

2015-02-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4466/
Java: 32bit/jdk1.8.0_31 -server -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZk2Test

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 AD67B4CAAB2A4EBC-001\tempDir-002: java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 AD67B4CAAB2A4EBC-001\tempDir-002
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 AD67B4CAAB2A4EBC-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 AD67B4CAAB2A4EBC-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 AD67B4CAAB2A4EBC-001\tempDir-002: java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 AD67B4CAAB2A4EBC-001\tempDir-002
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 AD67B4CAAB2A4EBC-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 AD67B4CAAB2A4EBC-001

at __randomizedtesting.SeedInfo.seed([AD67B4CAAB2A4EBC]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:286)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:170)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AD67B4CAAB2A4EBC-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AD67B4CAAB2A4EBC-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AD67B4CAAB2A4EBC-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AD67B4CAAB2A4EBC-001\tempDir-010\collection1\conf
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AD67B4CAAB2A4EBC-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AD67B4CAAB2A4EBC-001\tempDir-010\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AD67B4CAAB2A4EBC-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AD67B4CAAB2A4EBC-001\tempDir-010 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts

[jira] [Commented] (SOLR-6234) Scoring modes for query time join

2015-02-05 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308784#comment-14308784
 ] 

Mikhail Khludnev commented on SOLR-6234:


It would be great if we replicated SOLR-4905 here as well. Please vote, and let me 
know if you need it.

> Scoring modes for query time join 
> --
>
> Key: SOLR-6234
> URL: https://issues.apache.org/jira/browse/SOLR-6234
> Project: Solr
>  Issue Type: New Feature
>  Components: query parsers
>Affects Versions: 4.10.3, Trunk
>Reporter: Mikhail Khludnev
>  Labels: features, patch, test
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6234.patch, SOLR-6234.patch
>
>
> It adds a {{scorejoin}} query parser which calls Lucene's JoinUtil underneath. 
> It supports:
> - a {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil)
>  - {{score=none}} is the *default*, i.e. what you get if you *omit* this local param 
> - a {{b=100}} param that is passed to {{Query.setBoost()}}
> - a new {{multiVals=true|false}} param 
> - test coverage for the cross-core join case 
> - joining string and multi-valued string fields (Sorted, SortedSet, 
> Binary) so far, but not numeric DocValues; follow-up in LUCENE-5868  
> -there was a bug in cross core join, however there is a workaround for it- 
> it's fixed in the Dec '14 patch.
> Note: the development of this patch was sponsored by an anonymous contributor 
> and approved for release under the Apache License.
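
For illustration, a request using the parser might look like the line below; the 
{{from}}/{{to}} names are assumed to mirror the standard {{!join}} parameters, and 
the field names are made up:

{noformat}
q={!scorejoin from=parent_id to=id score=max b=100}title:lucene
{noformat}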



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5961) Solr gets crazy on /overseer/queue state change

2015-02-05 Thread Gopal Patwa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308716#comment-14308716
 ] 

Gopal Patwa commented on SOLR-5961:
---

We also had a similar problem today in our production system, as Ugo mentioned. It 
happened after the ZooKeeper machines (5 nodes) and the 8-node SolrCloud cluster 
(single shard) were rebooted to install a Unix security patch.

JDK 7, Solr 4.10.3, CentOS

But after the reboot, we saw a huge number of messages in /overseer/queue:

./zkCli.sh -server localhost:2181 ls /search/catalog/overseer/queue  | sed 
's/,/\n/g' | wc -l
178587

We have a very small cluster (8 nodes); how can /overseer/queue end up with 178k+ 
messages? Because of this, the leader node took several hours to come back from 
recovery.

Logs from zookeeper:
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /overseer/queue


> Solr gets crazy on /overseer/queue state change
> ---
>
> Key: SOLR-5961
> URL: https://issues.apache.org/jira/browse/SOLR-5961
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.7.1
> Environment: CentOS, 1 shard - 3 replicas, ZK cluster with 3 nodes 
> (separate machines)
>Reporter: Maxim Novikov
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
>
> No idea how to reproduce it, but sometimes Solr starts littering the log with 
> the following messages:
> 419158 [localhost-startStop-1-EventThread] INFO  
> org.apache.solr.cloud.DistributedQueue  ? LatchChildWatcher fired on path: 
> /overseer/queue state: SyncConnected type NodeChildrenChanged
> 419190 [Thread-3] INFO  org.apache.solr.cloud.Overseer  ? Update state 
> numShards=1 message={
>   "operation":"state",
>   "state":"recovering",
>   "base_url":"http://${IP_ADDRESS}/solr";,
>   "core":"${CORE_NAME}",
>   "roles":null,
>   "node_name":"${NODE_NAME}_solr",
>   "shard":"shard1",
>   "collection":"${COLLECTION_NAME}",
>   "numShards":"1",
>   "core_node_name":"core_node2"}
> It continues spamming these messages with no delay, and restarting all the 
> nodes does not help. I have even tried stopping all the nodes in the cluster 
> first, but when I start one again the behavior doesn't change; it goes crazy 
> with this "/overseer/queue state" again.
> PS: The only way to handle this was to stop everything, manually clean up all 
> the Solr-related data in ZooKeeper, and then rebuild everything from scratch. 
> As you can imagine, that is unbearable in a production environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-05 Thread Varun Rajput (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308656#comment-14308656
 ] 

Varun Rajput commented on SOLR-6736:


Yes, I have something similar in mind and will upload a patch soon.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> Managing Solr configuration files on ZooKeeper becomes cumbersome when using 
> Solr in cloud mode, especially while trying out configuration changes. 
> It would be great to have a request handler that provides an API to manage 
> the configurations, similar to the collections handler, allowing actions such 
> as uploading new configurations, linking them to a collection, deleting 
> configurations, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2595 - Still Failing

2015-02-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2595/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([3EFCC107E0AFECF5:B6A8FEDD4E53810D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.ut

[jira] [Commented] (SOLR-4407) SSL Certificate based authentication for SolrCloud

2015-02-05 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308429#comment-14308429
 ] 

Steve Davids commented on SOLR-4407:


Sorry for not being more specific. Yes, the instructions do allow for 
specifying your own self-signed certificate and importing that specific 
certificate into a new trust store that is loaded by the container - this 
locks things down to that specific certificate. The modification I have 
made is a custom servlet container setup that openly accepts client 
certificates within an organization, performs an LDAP lookup (via the cert DN) to 
pull groups, and then grants access if the user is part of a specific group. With 
this capability we can grant access via LDAP groups, which is the preferred 
route of client authentication for our specific use case. 

So, to answer your question:

bq. What aspect of SSL do you think isn't already configurable?

SSL is configurable via trust stores, but there is no mechanism for a customizable 
certificate-based authentication system, such as the case above 
(get the cert DN + look up the user via LDAP to authorize).
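
A rough illustration of that kind of container-level check (this is not the actual 
modification described above; the LDAP URL, search base, and group filter are 
assumptions, and a real filter would pool LDAP connections and handle failures more 
carefully):

{code}
import java.io.IOException;
import java.security.cert.X509Certificate;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CertDnLdapAuthFilter implements Filter {
  private static final String LDAP_URL = "ldap://ldap.example.com:389";          // assumption
  private static final String GROUP_BASE = "ou=groups,dc=example,dc=com";        // assumption
  private static final String GROUP_FILTER = "(&(cn=solr-admins)(member={0}))";  // assumption

  @Override public void init(FilterConfig cfg) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    // The container exposes the verified client chain under this standard servlet attribute.
    X509Certificate[] certs =
        (X509Certificate[]) req.getAttribute("javax.servlet.request.X509Certificate");
    if (certs != null && certs.length > 0 && isInGroup(certs[0])) {
      chain.doFilter(req, resp);
    } else {
      ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_FORBIDDEN);
    }
  }

  private boolean isInGroup(X509Certificate cert) {
    String dn = cert.getSubjectX500Principal().getName();
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, LDAP_URL);
    try {
      DirContext ctx = new InitialDirContext(env);
      try {
        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> hits =
            ctx.search(GROUP_BASE, GROUP_FILTER, new Object[]{dn}, sc);
        return hits.hasMore();
      } finally {
        ctx.close();
      }
    } catch (Exception e) {
      return false; // fail closed on any LDAP problem
    }
  }
}
{code}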

> SSL Certificate based authentication for SolrCloud
> --
>
> Key: SOLR-4407
> URL: https://issues.apache.org/jira/browse/SOLR-4407
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Sindre Fiskaa
>Assignee: Steve Rowe
>  Labels: Authentication, Certificate, SSL
> Fix For: 4.7, Trunk
>
>
> I need to be able to secure sensitive information in Solr nodes running in a 
> SolrCloud cluster with either SSL client/server certificates or HTTP basic auth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2594 - Still Failing

2015-02-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2594/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([B0A2FC0F7C89ED32:38F6C3D5D27580CA]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.ut

Re: Are docs updated based on comparing the id before analysis?

2015-02-05 Thread Shawn Heisey
On 2/5/2015 5:24 PM, Erick Erickson wrote:
> Hmmm, driving away from my client, I got to wondering about routing in
> SolrCloud. You'd have to apply the analysis chain _before_ you routed
> on ID, and I have no clue what would happen with things like the !
> operator in the id field.

I didn't even think about SolrCloud.  Fun.

> So to handle my "rule of thumb", which is that anything that a human
> could possibly enter should _not_ be case sensitive, the <uniqueKey>
> field needs to be
> 1> normalized as far as case is concerned at index time
> 2> have a query-time transformation done to match <1>. So something
> like this should do it, assuming that
> the indexer took care to uppercase the <uniqueKey>:
>
> [fieldType XML stripped by the mail archive; per the surrounding discussion it
> used a KeywordTokenizerFactory plus an upper-case filter so index- and
> query-time values match]

I realize with what I'm saying below that it is outside "typical user"
land, but it might work.  For an advanced user it wouldn't even be all
that messy.  Proceeding into "thinking out loud" territory:

A custom UpdateRequestProcessor could do all the normalization on the
uniqueKey field at index time.  If we used that processor in combination
with a fieldType like the one you outlined above, I think it would
work.  The simple version of that processor would just be a
case-changing filter.

Getting back to what a typical user wants to happen ... an update
processor could be included in Solr that figures out the configured
uniqueKey field and lowercases the input on that field.  We could
provide documentation showing how to insert it into the default update
chain to allow case-insensitive unique IDs.  If somebody needs more
complicated normalization (perhaps they want to use the ICU folding
class instead of Java's built-in lowercase capability, or do some really
wild stuff that's domain-specific), they can write their own processor,
and maybe even their own analysis component for the query side.
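
A minimal sketch of that idea, assuming Solr's standard
UpdateRequestProcessorFactory/UpdateRequestProcessor extension points (the class
name is hypothetical; this is not an existing Solr component):

import java.io.IOException;
import java.util.Locale;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class LowerCaseUniqueKeyProcessorFactory extends UpdateRequestProcessorFactory {
  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req, SolrQueryResponse rsp,
                                            UpdateRequestProcessor next) {
    // Figure out the configured uniqueKey field from the schema.
    final String keyField = req.getSchema().getUniqueKeyField().getName();
    return new UpdateRequestProcessor(next) {
      @Override
      public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        Object id = doc.getFieldValue(keyField);
        if (id instanceof String) {
          // Normalize case before the document goes any further in the chain.
          doc.setField(keyField, ((String) id).toLowerCase(Locale.ROOT));
        }
        super.processAdd(cmd);
      }
    };
  }
}

If something like this runs ahead of the distributed update processor in the
chain, routing and overwrite matching would both see the normalized value.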

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-02-05 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308354#comment-14308354
 ] 

Gregory Chanan commented on SOLR-6915:
--

[~elecharny] sorry for the late reply.  I'm just starting up a Hadoop MiniKDC.  
See the code here for more details: 
https://github.com/apache/hadoop/blob/4641196fe02af5cab3d56a9f3c78875c495dbe03/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java#L322-L389

> SaslZkACLProvider and Kerberos Test Using MiniKdc
> -
>
> Key: SOLR-6915
> URL: https://issues.apache.org/jira/browse/SOLR-6915
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6915.patch, SOLR-6915.patch, fail.log, fail.log, 
> tests-failures.txt
>
>
> We should provide a ZkACLProvider that requires SASL authentication.  This 
> provider will be useful for administration in a kerberos environment.   In 
> such an environment, the administrator wants Solr to authenticate to 
> ZooKeeper using SASL, since this is the only way to authenticate with 
> ZooKeeper via Kerberos.
> The authorization model in such a setup can vary, e.g. you can imagine a 
> scenario where solr owns (is the only writer of) the non-config znodes, but 
> some set of trusted users are allowed to modify the configs.  It's hard to 
> predict all the possibilities here, but one model that seems generally useful 
> is to have a model where solr itself owns all the znodes and all actions that 
> require changing the znodes are routed to Solr APIs.  That seems simple and 
> reasonable as a first version.
> As for testing, I noticed while working on SOLR-6625 that we don't really 
> have any infrastructure for testing kerberos integration in unit tests.  
> Internally, I've been testing using kerberos-enabled VM clusters, but this 
> isn't great since we won't notice any breakages until someone actually spins 
> up a VM.  So part of this JIRA is to provide some infrastructure for testing 
> kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).
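
For flavor, a minimal sketch of a SASL-based ACL provider (this assumes Solr's
ZkACLProvider interface exposes a single getACLsToAdd(String) method and that the
Solr principal is simply "solr" - both are assumptions for illustration, not the
attached patch):

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.common.cloud.ZkACLProvider;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

// Grants full control to the SASL-authenticated "solr" principal and
// read-only access to everyone else.
public class SketchSaslZkACLProvider implements ZkACLProvider {
  @Override
  public List<ACL> getACLsToAdd(String zNodePath) {
    List<ACL> acls = new ArrayList<>();
    acls.add(new ACL(ZooDefs.Perms.ALL, new Id("sasl", "solr")));
    acls.add(new ACL(ZooDefs.Perms.READ, ZooDefs.Ids.ANYONE_ID_UNSAFE));
    return acls;
  }
}
{code}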



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.0-Linux (32bit/jdk1.8.0_40-ea-b22) - Build # 96 - Failure!

2015-02-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.0-Linux/96/
Java: 32bit/jdk1.8.0_40-ea-b22 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DistribCursorPagingTest

Error Message:
Some resources were not closed, shutdown, or released.

Stack Trace:
java.lang.AssertionError: Some resources were not closed, shutdown, or released.
at __randomizedtesting.SeedInfo.seed([50EBE3422336EF6C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:213)
at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9343 lines...]
   [junit4] Suite: org.apache.solr.cloud.DistribCursorPagingTest
   [junit4]   2> Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.0-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.DistribCursorPagingTest
 50EBE3422336EF6C-001/init-core-data-001
   [junit4]   2> 421012 T2068 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2> 421012 T2068 oas.BaseDistributedSearchTestCase.initHostContext 
Setting hostContext system property: /
   [junit4]   2> 421014 T2068 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 421015 T2068 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 421015 T2069 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 421115 T2068 oasc.ZkTestServer.run start zk server on 
port:55514
   [junit4]   2> 421116 T2068 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 421116 T2068 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 421119 T2076 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@14d1a5a name:ZooKeeperConnection 
Watcher:127.0.0.1:55514 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 421120 T2068 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 421120 T2068 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 421121 T2068 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 421123 T2068 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 421138 T2068 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 421140 T2079 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@14caa84 name:

Re: Are docs updated based on comparing the id before analysis?

2015-02-05 Thread Erick Erickson
Hmmm, driving away from my client, I got to wondering about routing in
SolrCloud. You'd have to apply the analysis chain _before_ you routed
on ID, and I have no clue what would happen with things like the !
operator in the id field.

So I think this is a documentation issue. I wrote a small program (see
below) that produces fantastic results. It creates a <uniqueKey> from
the letters "abcd" and randomly uppercases each letter. I tried this
on a 4-shard setup (trunk). The "id" field uses a KeywordTokenizer and
an UpperCaseFilter. (I assume LowerCase would have the same problem.)

At the end of indexing 1,000 documents as above, the numDocs/maxDoc were:
shard1 - 316/316
shard2 - 5/320
shard3 - 297/297
shard4 - 67/67

Which indicates that the routing is sensitive to case, which is not at
all surprising when I finally stopped and _thought_.

So to handle my "rule of thumb", which is that anything that a human
could possibly enter should _not_ be case sensitive, the <uniqueKey>
field needs to be
1> normalized as far as case is concerned at index time
2> have a query-time transformation done to match <1>. So something
like this should do it, assuming that
the indexer took care to uppercase the <uniqueKey>:

[fieldType XML stripped by the mail archive; per the surrounding discussion it
used a KeywordTokenizerFactory plus an upper-case filter so index- and
query-time values match]


FWIW..

*

package problem;


import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;


public class Test {
  private CloudSolrClient _server;
  private long _start = System.currentTimeMillis();
  private int _total = 0;


  public static void main(String[] args) {
try {
  Test idxer = new Test("localhost:2181");
  idxer.doIt();
  idxer.finish();
} catch (Exception e) {
  e.printStackTrace();
}
  }

  public Test(String zkUrl) throws IOException, SolrServerException {
_server = new CloudSolrClient(zkUrl);
_server.setDefaultCollection("eoe");
  }

  private void finish() throws IOException, SolrServerException {
_server.commit();
  }
  Random rand = new Random();

  private void doIt() throws IOException, SolrServerException {
    List<SolrInputDocument> list = new ArrayList<>(1000);

for (int idx = 0; idx < 1000; ++idx) {
  SolrInputDocument doc = new SolrInputDocument();

  StringBuilder sb = new StringBuilder();
  addOne("a", sb);
  addOne("b", sb);
  addOne("c", sb);
  addOne("e", sb);

  doc.addField("id", sb.toString());
  list.add(doc);

}
_server.add(list);

  }

  void addOne(String str, StringBuilder sb) {
if (rand.nextBoolean()) {
  sb.append(str);
  return;
}
sb.append(str.toUpperCase());
  }
}

On Thu, Feb 5, 2015 at 1:21 PM, Shawn Heisey  wrote:
> On 2/5/2015 10:57 AM, Erick Erickson wrote:
>> Thanks for confirming I'm not completely crazy.
>>
>> I don't think it's A Good Thing to _require_ that all ID normalization
>> be done on the client, it'd have to be done both at index and query
>> time, too much chance for things to get out of sync. Although I guess
>> this is _actually_ what happens with the string type. H.  So I'm
>> -1 on <2> above as it would require this.
>>
>> And having <uniqueKey>s that are text fields _is_ fraught with danger
>> if you tokenize them, but KeywordTokenizer doesn't.
>
> 
>
>> Personally I feel like this is a JIRA, but I can see arguments the
>> other way as I'm not entirely sure what you'd do if multiple tokens
>> came out of the analysis chain. Maybe fail the document at index time?
>>
>> What _is_ unreasonable IMO is that we allow this surprising behavior,
>> so regardless of the above I'm +1 on keeping users from being
>> surprised by this behavior
>
> My earlier statements were written with the assumption that the current
> behavior exists because it is difficult to allow the desired behavior.
> I believe that if it were easy to do, it would have already been done.
>
> If it's possible to allow what we both think is rational user
> expectation (case-insensitive uniqueKey values), I agree that we need to
> allow it.  Whether or not it's readily achievable is the question.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_31) - Build # 4361 - Still Failing!

2015-02-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4361/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseSerialGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZk2Test

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 1C2989F9B74E8EE9-001\tempDir-002: java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 1C2989F9B74E8EE9-001\tempDir-002
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 1C2989F9B74E8EE9-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 1C2989F9B74E8EE9-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 1C2989F9B74E8EE9-001\tempDir-002: java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 1C2989F9B74E8EE9-001\tempDir-002
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 1C2989F9B74E8EE9-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 1C2989F9B74E8EE9-001

at __randomizedtesting.SeedInfo.seed([1C2989F9B74E8EE9]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:294)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:170)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 1C2989F9B74E8EE9-001\solr-instance-001\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 1C2989F9B74E8EE9-001\solr-instance-001\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 1C2989F9B74E8EE9-001\solr-instance-001: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 1C2989F9B74E8EE9-001\solr-instance-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 1C2989F9B74E8EE9-001\solr-instance-001\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 1C2989F9B74E8EE9-001\solr-instance-001\collection1
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 1C2989F9B74E8EE9-001\solr-instance-001: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandlerBackup
 1C2989F9B74E8EE9-001\solr-instance-001

at 
__randomizedtesting.SeedInfo.seed([1C2989F9

[jira] [Updated] (LUCENE-6221) escape whole word operators (OR, AND, NOT)

2015-02-05 Thread Shaun A Elliott (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaun A Elliott updated LUCENE-6221:

Description: 
The current QueryParser escape method does not escape whole word operators (OR, 
AND, NOT):

{code}
  public static String escape(String s) {
StringBuilder sb = new StringBuilder();
for (int i = 0; i < s.length(); i++) {
  char c = s.charAt(i);
  // These characters are part of the query syntax and must be escaped
  if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')' 
|| c == ':'
|| c == '^' || c == '[' || c == ']' || c == '\"' || c == '{' || c == 
'}' || c == '~'
|| c == '*' || c == '?' || c == '|' || c == '&' || c == '/') {
sb.append('\\');
  }
  sb.append(c);
}
return sb.toString();
  }
{code}

It would be better if these words were escaped too.


  was:
The current QueryParser escape method does not escape whole word operators (OR, 
AND, NOT):

{code}
  public static String escape(String s) {
StringBuilder sb = new StringBuilder();
for (int i = 0; i < s.length(); i++) {
  char c = s.charAt(i);
  // These characters are part of the query syntax and must be escaped
  if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')' 
|| c == ':'
|| c == '^' || c == '[' || c == ']' || c == '\"' || c == '{' || c == 
'}' || c == '~'
|| c == '*' || c == '?' || c == '|' || c == '&' || c == '/') {
sb.append('\\');
  }
  sb.append(c);
}
return sb.toString();
  }
{code}

It would be better if these words were escaped too.

Summary: escape whole word operators (OR, AND, NOT)  (was: escape 
operators)

> escape whole word operators (OR, AND, NOT)
> --
>
> Key: LUCENE-6221
> URL: https://issues.apache.org/jira/browse/LUCENE-6221
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Shaun A Elliott
>
> The current QueryParser escape method does not escape whole word operators 
> (OR, AND, NOT):
> {code}
>   public static String escape(String s) {
> StringBuilder sb = new StringBuilder();
> for (int i = 0; i < s.length(); i++) {
>   char c = s.charAt(i);
>   // These characters are part of the query syntax and must be escaped
>   if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == 
> ')' || c == ':'
> || c == '^' || c == '[' || c == ']' || c == '\"' || c == '{' || c == 
> '}' || c == '~'
> || c == '*' || c == '?' || c == '|' || c == '&' || c == '/') {
> sb.append('\\');
>   }
>   sb.append(c);
> }
> return sb.toString();
>   }
> {code}
> It would be better if these words were escaped too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6221) escape operators

2015-02-05 Thread Shaun A Elliott (JIRA)
Shaun A Elliott created LUCENE-6221:
---

 Summary: escape operators
 Key: LUCENE-6221
 URL: https://issues.apache.org/jira/browse/LUCENE-6221
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Shaun A Elliott


The current QueryParser escape method does not escape whole word operators (OR, 
AND, NOT):

{code}
  public static String escape(String s) {
StringBuilder sb = new StringBuilder();
for (int i = 0; i < s.length(); i++) {
  char c = s.charAt(i);
  // These characters are part of the query syntax and must be escaped
  if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')' 
|| c == ':'
|| c == '^' || c == '[' || c == ']' || c == '\"' || c == '{' || c == 
'}' || c == '~'
|| c == '*' || c == '?' || c == '|' || c == '&' || c == '/') {
sb.append('\\');
  }
  sb.append(c);
}
return sb.toString();
  }
{code}

It would be better if these words were escaped too.
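
One possible direction, sketched below purely for illustration (the helper name is
hypothetical and this is not a proposed patch): escape the syntax characters with
the existing method and additionally quote bare word operators, since a quoted
token is never parsed as a boolean operator by the classic QueryParser.

{code}
import org.apache.lucene.queryparser.classic.QueryParserBase;

public final class QueryEscapeUtil {
  private QueryEscapeUtil() {}

  public static String escapeIncludingWordOperators(String s) {
    StringBuilder out = new StringBuilder();
    boolean first = true;
    for (String token : s.split(" ", -1)) {
      if (!first) {
        out.append(' ');
      }
      first = false;
      if (token.equals("OR") || token.equals("AND") || token.equals("NOT")) {
        // A quoted token is parsed as a term/phrase, never as an operator.
        out.append('"').append(token).append('"');
      } else {
        // Escape the regular query-syntax characters as before.
        out.append(QueryParserBase.escape(token));
      }
    }
    return out.toString();
  }
}
{code}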



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6865) Upgrade HttpClient to 4.4

2015-02-05 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308106#comment-14308106
 ] 

Shawn Heisey commented on SOLR-6865:


The javadoc problem was noticed and corrected by its author, so my patch is 
ready.


> Upgrade HttpClient to 4.4
> -
>
> Key: SOLR-6865
> URL: https://issues.apache.org/jira/browse/SOLR-6865
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6865.patch
>
>
> HttpClient 4.4 has been released.  5.0 seems like a good time to upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4905) Allow fromIndex parameter to JoinQParserPlugin to refer to a single-sharded collection that has a replica on all nodes

2015-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308088#comment-14308088
 ] 

ASF subversion and git services commented on SOLR-4905:
---

Commit 1657701 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1657701 ]

SOLR-4905: Allow fromIndex parameter to JoinQParserPlugin to refer to a 
single-sharded collection that has a replica on all nodes

> Allow fromIndex parameter to JoinQParserPlugin to refer to a single-sharded 
> collection that has a replica on all nodes
> --
>
> Key: SOLR-4905
> URL: https://issues.apache.org/jira/browse/SOLR-4905
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Philip K. Warren
>Assignee: Timothy Potter
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-4905.patch, SOLR-4905.patch, patch.txt
>
>
> Using a non-SolrCloud setup, it is possible to perform cross-core joins 
> (http://wiki.apache.org/solr/Join). When testing with SolrCloud, however, 
> neither the collection name, the alias name (we have created aliases to SolrCloud 
> collections), nor the automatically generated core name (i.e. 
> <collection>_shard1_replica1) works as the fromIndex parameter for a 
> cross-core join.
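
For reference, the kind of request this concerns uses the documented join
parameters; the collection and field names below are made up for illustration:

{noformat}
q={!join fromIndex=products from=manu_id to=id}name:ipod
{noformat}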



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4905) Allow fromIndex parameter to JoinQParserPlugin to refer to a single-sharded collection that has a replica on all nodes

2015-02-05 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-4905.
--
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

> Allow fromIndex parameter to JoinQParserPlugin to refer to a single-sharded 
> collection that has a replica on all nodes
> --
>
> Key: SOLR-4905
> URL: https://issues.apache.org/jira/browse/SOLR-4905
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Philip K. Warren
>Assignee: Timothy Potter
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-4905.patch, SOLR-4905.patch, patch.txt
>
>
> Using a non-SolrCloud setup, it is possible to perform cross-core joins 
> (http://wiki.apache.org/solr/Join). When testing with SolrCloud, however, 
> neither the collection name, the alias name (we have created aliases to SolrCloud 
> collections), nor the automatically generated core name (i.e. 
> <collection>_shard1_replica1) works as the fromIndex parameter for a 
> cross-core join.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6693) Start script for windows fails with 32bit JRE

2015-02-05 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308082#comment-14308082
 ] 

Timothy Potter commented on SOLR-6693:
--

Thanks for the heads-up on the issues with {{resolve_java_version}} [~janhoy] 
... From what I can tell, the best approach is to use my java -version string 
parsing as you suggested, but still use some of your -d64 and -server checking. 
Cooking up a new patch now ...

> Start script for windows fails with 32bit JRE
> -
>
> Key: SOLR-6693
> URL: https://issues.apache.org/jira/browse/SOLR-6693
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: WINDOWS 8.1
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>  Labels: bin\solr.cmd
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6693.patch, SOLR-6693.patch, solr.cmd, 
> solr.cmd.patch
>
>
> *Reproduce:*
> # Install JRE8 from www.java.com (typically {{C:\Program Files 
> (x86)\Java\jre1.8.0_25}})
> # Run the command {{bin\solr start -V}}
> The result is:
> {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
> *Reason*
> This comes from bad quoting of the {{%SOLR%}} variable. I think it's the 
> parentheses in the path that make it fail. The same would likely apply to a 
> 32-bit JDK because of the (x86) in the path, but I have not tested that.
> Tip: You can remove the {{@ECHO OFF}} line at the top to see exactly which 
> line is the offending one.
> *Solution*
> Quoting the lines where %JAVA% is printed, e.g. instead of
> {noformat}
>   @echo Using Java: %JAVA%
> {noformat}
> then use
> {noformat}
>   @echo "Using Java: %JAVA%"
> {noformat}
> This is needed several places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release 5.0.0 RC1

2015-02-05 Thread Anshum Gupta
I plan on cutting the next RC sometime tomorrow if nothing new pops up. I'm
just giving the extra day to see how the fix for SOLR-6640 behaves.

On Thu, Jan 29, 2015 at 9:56 AM, Anshum Gupta 
wrote:

> Sure Mark and Adrien.
>
> In general, +1 to anything that fixes a *reasonably critical* bug without
> affecting everything else and I'll let people who're committing define the
> *reasonable* bounds.
> In case something jumps out, I'll intervene (and others can too).
>
> P.S: No part above translates to let's get everything in. Also, no new
> features/feel-good improvements please. :)
>
> On Thu, Jan 29, 2015 at 8:21 AM, Mark Miller 
> wrote:
>
>> We should probably roll SOLR-6969 into a respin as well. I finally have a
>> good fix and it's a pretty critical data loss issue on hdfs as transaction
>> logs can easily be ignored instead of replayed.
>>
>> - Mark
>>
>> On Thu, Jan 29, 2015 at 9:01 AM, Adrien Grand  wrote:
>>
>>> Hi Anshum,
>>>
>>> I'd like to get https://issues.apache.org/jira/browse/LUCENE-6207 in
>>> 5.0.0.
>>>
>>> On Wed, Jan 28, 2015 at 6:54 PM, Anshum Gupta 
>>> wrote:
>>> > +1 on that. Thanks Mike.
>>> >
>>> > On Wed, Jan 28, 2015 at 9:45 AM, Michael McCandless
>>> >  wrote:
>>> >>
>>> >> I'd like to fix https://issues.apache.org/jira/browse/LUCENE-6205 for
>>> >> 5.0.0 ... the fix is low risk and the bug looks like index corruption
>>> >> when it strikes.
>>> >>
>>> >> Mike McCandless
>>> >>
>>> >> http://blog.mikemccandless.com
>>> >>
>>> >>
>>> >> On Tue, Jan 27, 2015 at 12:01 PM, Anshum Gupta <
>>> ans...@anshumgupta.net>
>>> >> wrote:
>>> >> > I would say that this is not the time to push stuff that we forgot
>>> to
>>> >> > put in
>>> >> > but to get critical/blocker bug fixes to make sure that Lucene and
>>> Solr
>>> >> > do
>>> >> > not break and work as documented, when released. Let's not try to
>>> shoot
>>> >> > ourselves in the foot by changing more things than that as it'll be
>>> >> > tough to
>>> >> > track and manage if that starts to happen.
>>> >> >
>>> >> > About the respin, I'm just waiting for SOLR-6640. Everything else
>>> that
>>> >> > was
>>> >> > reported in the earlier RC stands fixed. If anyone finds bugs that
>>> >> > impact
>>> >> > the release of 5.0 as is, please fix and commit but not any other
>>> >> > change.
>>> >> >
>>> >> > Thanks for being patient.
>>> >> >
>>> >> >
>>> >> > On Tue, Jan 27, 2015 at 8:45 AM, Ryan Ernst 
>>> wrote:
>>> >> >>>
>>> >> >>> I just filed https://issues.apache.org/jira/browse/SOLR-7041
>>> "Nuke
>>> >> >>> defaultSearchField and solrQueryParser from schema”. Has it been
>>> >> >>> discussed
>>> >> >>> already?
>>> >> >>
>>> >> >>
>>> >> >>>
>>> >> >>>  What about https://issues.apache.org/jira/browse/SOLR-4586
>>> >> >>>
>>> >> >>> I have hit this trappy magic1024 limit myself and it would be
>>> great if
>>> >> >>>
>>> >> >>> it could be removed for 5.0.
>>> >> >>
>>> >> >>
>>> >> >> A respin is not the time to cram in more changes (especially
>>> >> >> controversial
>>> >> >> ones).
>>> >> >>
>>> >> >> On Tue, Jan 27, 2015 at 5:28 AM, Mike Murphy <
>>> mmurphy3...@gmail.com>
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> What about https://issues.apache.org/jira/browse/SOLR-4586
>>> >> >>> I have hit this trappy magic1024 limit myself and it would be
>>> great if
>>> >> >>> it could be removed for 5.0.
>>> >> >>>
>>> >> >>> On Tue, Jan 27, 2015 at 5:28 AM, Jan Høydahl <
>>> jan@cominvent.com>
>>> >> >>> wrote:
>>> >> >>> > I just filed https://issues.apache.org/jira/browse/SOLR-7041
>>> "Nuke
>>> >> >>> > defaultSearchField and solrQueryParser from schema”. Has it been
>>> >> >>> > discussed
>>> >> >>> > already?
>>> >> >>> >
>>> >> >>> > --
>>> >> >>> > Jan Høydahl, search solution architect
>>> >> >>> > Cominvent AS - www.cominvent.com
>>> >> >>> >
>>> >> >>> > 25. jan. 2015 kl. 20.07 skrev Uwe Schindler :
>>> >> >>> >
>>> >> >>> > In addition,
>>> >> >>> >
>>> >> >>> > on most computers you extract to your windows “Desktop” and for
>>> most
>>> >> >>> > users
>>> >> >>> > this is also using a white space (the user name has in most
>>> cases
>>> >> >>> > white
>>> >> >>> > space in the name), this is also bad user experience.
>>> >> >>> >
>>> >> >>> > Uwe
>>> >> >>> >
>>> >> >>> > -
>>> >> >>> > Uwe Schindler
>>> >> >>> > H.-H.-Meier-Allee 63, D-28213 Bremen
>>> >> >>> > http://www.thetaphi.de
>>> >> >>> > eMail: u...@thetaphi.de
>>> >> >>> >
>>> >> >>> > From: Anshum Gupta [mailto:ans...@anshumgupta.net]
>>> >> >>> > Sent: Sunday, January 25, 2015 7:58 PM
>>> >> >>> > To: dev@lucene.apache.org
>>> >> >>> > Subject: Re: [VOTE] Release 5.0.0 RC1
>>> >> >>> >
>>> >> >>> > I'm not really a windows user so I don't really know what's a
>>> fix
>>> >> >>> > for
>>> >> >>> > the
>>> >> >>> > paths with a space. May be we can either fix it or document the
>>> way
>>> >> >>> > to
>>> >> >>> > use
>>> >> >>> > it so that this doesn't happen (put the path in quotes or
>>> >> >>> > something?).
>>> >> >>> > The
>>> >> >>> > worst ca

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2593 - Still Failing

2015-02-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2593/

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:36903/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:36903/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([9D826D3E9F73EAAF:15D652E4318F8757]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.HttpPartitionTest.doSendDoc(HttpPartitionTest.java:484)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:501)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:193)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
co

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-05 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308036#comment-14308036
 ] 

Anshum Gupta commented on SOLR-6736:


Something like this:

curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
@testconf.gz http://localhost:8983/solr/admin/configs?action=ADD

The conf could be zip/gz/jar.
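For illustration only: a minimal Java sketch of the kind of client-side call the 
curl example above describes. The endpoint, the ADD action, and the local 
"testconf" directory are assumptions taken from that example, not a finished API.

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class UploadConfig {

  public static void main(String[] args) throws IOException {
    // Hypothetical local config directory, mirroring the curl example.
    byte[] payload = zipDirectory(Paths.get("testconf"));

    // Hypothetical endpoint, copied from the curl example above.
    URL url = new URL("http://localhost:8983/solr/admin/configs?action=ADD");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/octet-stream");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(payload);
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }

  // Packs every regular file under the directory into one in-memory zip archive.
  private static byte[] zipDirectory(Path dir) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ZipOutputStream zip = new ZipOutputStream(bytes);
         Stream<Path> files = Files.walk(dir)) {
      for (Path file : (Iterable<Path>) files.filter(Files::isRegularFile)::iterator) {
        zip.putNextEntry(new ZipEntry(dir.relativize(file).toString()));
        zip.write(Files.readAllBytes(file));
        zip.closeEntry();
      }
    }
    return bytes.toByteArray();
  }
}
{code}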

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> Managing Solr configuration files on ZooKeeper becomes cumbersome while using 
> Solr in cloud mode, especially while trying out changes to the 
> configurations. 
> It would be great if there were a request handler that provided an API to 
> manage the configurations, similar to the collections handler, allowing 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-3973) Incorporate PMD / FindBugs

2015-02-05 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated LUCENE-3973:
--
Attachment: LUCENE-3973.patch

I took the latest patch available and brought it up to current trunk. It still 
needs a more complete ruleset, but it is a reasonable starting point after two 
years of inactivity.

> Incorporate PMD / FindBugs
> --
>
> Key: LUCENE-3973
> URL: https://issues.apache.org/jira/browse/LUCENE-3973
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Chris Male
>  Labels: newdev
> Attachments: LUCENE-3973.patch, LUCENE-3973.patch, LUCENE-3973.patch, 
> LUCENE-3973.patch, LUCENE-3973.patch, LUCENE-3973.patch, core.html, 
> solr-core.html
>
>
> This has been touched on a few times over the years.  Having static analysis 
> as part of our build seems like a big win.  For example, we could use PMD to 
> look at {{System.out.println}} statements like discussed in LUCENE-3877 and 
> we could possibly incorporate the nocommit / @author checks as well.
> There are a few things to work out as part of this:
> - Should we use both PMD and FindBugs or just one of them? They look at code 
> from different perspectives (bytecode vs source code) and target different 
> issues.  At the moment I'm in favour of trying both but that might be too 
> heavy handed for our needs.
> - What checks should we use? There's no point having the analysis if it's 
> going to raise too many false-positives or problems we don't deem 
> problematic.  
> - How should the analysis be integrated in our build? Need to work out when 
> the analysis should run, how it should be incorporated in Ant and/or Maven, 
> what impact errors should have.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80-ea-b05) - Build # 11581 - Failure!

2015-02-05 Thread Robert Muir
It is the common javadocs bug when superclasses implement interfaces
(the same reason all concrete codec consumers/producers redundantly say
'implements Closeable').
The javadoc for the method is completely empty, and javadoc does not know
that it is specified by the interface (no specified-by or other links)
or even by the parent superclass! It's like it just gives up.

I will add the useless 'implements' workaround...
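A minimal, hypothetical sketch of the redundant-'implements' workaround being 
described (the class names here are made up; only the restated interface matters):

{code}
import java.io.Closeable;
import java.io.IOException;

abstract class Producer implements Closeable {
  /** Releases any resources held by this producer. */
  @Override
  public abstract void close() throws IOException;
}

// Without restating "implements Closeable" here, the generated javadoc for
// close() can come out empty: no description, no "Specified by" link to the
// interface, and no link to the superclass doc. Redundantly declaring the
// interface is the workaround referred to above.
final class FileProducer extends Producer implements Closeable {
  @Override
  public void close() throws IOException {
    // close underlying resources (illustrative only)
  }
}
{code}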

Why does 'documentation-lint' currently only fail here on java7, but
pass on java8? I generated javadocs with java8, and it still has the
problem, yet the builds don't fail.

On Thu, Feb 5, 2015 at 3:10 PM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11581/
> Java: 64bit/jdk1.7.0_80-ea-b05 -XX:+UseCompressedOops -XX:+UseParallelGC
>
> All tests passed
>
> Build Log:
> [...truncated 45178 lines...]
> -documentation-lint:
>  [echo] checking for broken html...
> [jtidy] Checking for broken html (such as invalid tags)...
>[delete] Deleting directory 
> /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/jtidy_tmp
>  [echo] Checking for broken links...
>  [exec]
>  [exec] Crawl/parse...
>  [exec]
>  [exec] Verify...
>  [echo] Checking for missing docs...
>  [exec]
>  [exec] build/docs/facet/org/apache/lucene/facet/FacetsCollector.html
>  [exec]   missing Methods: needsScores()
>  [exec]
>  [exec] Missing javadocs were found!
>
> BUILD FAILED
> /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:529: The following 
> error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:83: The following 
> error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build.xml:134: The 
> following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build.xml:169: The 
> following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:2481:
>  exec returned: 1
>
> Total time: 77 minutes 52 seconds
> Build step 'Invoke Ant' marked build as failure
> [description-setter] Description set: Java: 64bit/jdk1.7.0_80-ea-b05 
> -XX:+UseCompressedOops -XX:+UseParallelGC
> Archiving artifacts
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6912) config API for managing search components

2015-02-05 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6912:
-
Attachment: SOLR-6912.patch

This is an omnibus patch that takes care of all components with a "name".

> config API for managing search components
> -
>
> Key: SOLR-6912
> URL: https://issues.apache.org/jira/browse/SOLR-6912
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6912.patch
>
>
> example
> {code}
> curl http://localhost:8983/solr/collection1/config -H 
> 'Content-type:application/json'  -d '{
> "create-searchcomponent" : {"name": "spell" ,
>   "class":"solr.SpellCheckComponent" 
> , "queryAnalyzerFieldType":"text_general" 
>   
>  },
> "update-searchcomponent" :{"name": "spell" ,
>   "class":"solr.SpellCheckComponent" ,
>"queryAnalyzerFieldType":"text_es" 
>  },
> "delete-searchcomponent" :"spell" 
> }'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6693) Start script for windows fails with 32bit JRE

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308010#comment-14308010
 ] 

Jan Høydahl commented on SOLR-6693:
---

Thanks for bringing this forward, [~thelabdude]

I have not tested the patch, but please do not use the {{resolve_java_version}} 
function from my earlier patch. As noted in [this 
comment|https://issues.apache.org/jira/browse/SOLR-6693?focusedCommentId=14206246&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14206246]
 it does not test whether your specific Java is a certain version; instead it 
uses the registry to check whether ANY Java on the system satisfies the requirements.

So I propose you continue using the {{java -version}} string parsing you 
started on, perhaps extending it a bit.
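The real check lives in the Windows bin\solr.cmd batch script; the sketch below is 
only an illustration of the version-string parsing idea, written in Java with 
made-up class and variable names, not a drop-in replacement for the script:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JavaVersionCheck {
  public static void main(String[] args) throws Exception {
    // "java -version" writes its banner to stderr, e.g.:  java version "1.8.0_25"
    Process p = new ProcessBuilder("java", "-version").redirectErrorStream(true).start();
    StringBuilder out = new StringBuilder();
    try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
      String line;
      while ((line = r.readLine()) != null) {
        out.append(line).append('\n');
      }
    }
    p.waitFor();

    // Pull the major.minor pair out of the banner instead of consulting the registry.
    Matcher m = Pattern.compile("version \"(\\d+)\\.(\\d+)").matcher(out);
    if (m.find()) {
      int major = Integer.parseInt(m.group(1));
      int minor = Integer.parseInt(m.group(2));
      boolean atLeast17 = major > 1 || (major == 1 && minor >= 7);
      System.out.println("Detected " + major + "." + minor + ", meets 1.7 requirement: " + atLeast17);
    } else {
      System.out.println("Could not parse 'java -version' output");
    }
  }
}
{code}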

> Start script for windows fails with 32bit JRE
> -
>
> Key: SOLR-6693
> URL: https://issues.apache.org/jira/browse/SOLR-6693
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: WINDOWS 8.1
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>  Labels: bin\solr.cmd
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6693.patch, SOLR-6693.patch, solr.cmd, 
> solr.cmd.patch
>
>
> *Reproduce:*
> # Install JRE8 from www.java.com (typically {{C:\Program Files 
> (x86)\Java\jre1.8.0_25}})
> # Run the command {{bin\solr start -V}}
> The result is:
> {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
> *Reason*
> This comes from bad quoting of the {{%SOLR%}} variable. I think it's because 
> of the parenthesis that it freaks out. I think the same would apply for a 
> 32-bit JDK because of the (x86) in the path, but I have not tested.
> Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which is 
> the offending line
> *Solution*
> Quoting the lines where %JAVA% is printed, e.g. instead of
> {noformat}
>   @echo Using Java: %JAVA%
> {noformat}
> then use
> {noformat}
>   @echo "Using Java: %JAVA%"
> {noformat}
> This is needed several places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_31) - Build # 4465 - Still Failing!

2015-02-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4465/
Java: 32bit/jdk1.8.0_31 -server -XX:+UseParallelGC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZk2Test

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 5C8D3B28F24071AC-001\tempDir-002: java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 5C8D3B28F24071AC-001\tempDir-002
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 5C8D3B28F24071AC-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 5C8D3B28F24071AC-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 5C8D3B28F24071AC-001\tempDir-002: java.nio.file.AccessDeniedException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 5C8D3B28F24071AC-001\tempDir-002
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 5C8D3B28F24071AC-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.BasicDistributedZk2Test
 5C8D3B28F24071AC-001

at __randomizedtesting.SeedInfo.seed([5C8D3B28F24071AC]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:286)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:170)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 5C8D3B28F24071AC-001\tempDir-010\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 5C8D3B28F24071AC-001\tempDir-010\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 5C8D3B28F24071AC-001\tempDir-010\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 5C8D3B28F24071AC-001\tempDir-010\collection1\conf
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 5C8D3B28F24071AC-001\tempDir-010\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 5C8D3B28F24071AC-001\tempDir-010\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 5C8D3B28F24071AC-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 5C8D3B28F24071AC-001\tempDir-010
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cor

[jira] [Updated] (SOLR-6865) Upgrade HttpClient to 4.4

2015-02-05 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-6865:
---
Attachment: SOLR-6865.patch

This patch passes all tests in branch_5x.  It *almost* passes precommit, but 
the failure is unrelated to this issue, and can be fixed by adding javadoc to 
one method.

> Upgrade HttpClient to 4.4
> -
>
> Key: SOLR-6865
> URL: https://issues.apache.org/jira/browse/SOLR-6865
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6865.patch
>
>
> HttpClient 4.4 has been released.  5.0 seems like a good time to upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-02-05 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reassigned SOLR-6736:
--

Assignee: Anshum Gupta

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, Trunk
>
>
> Managing Solr configuration files on ZooKeeper becomes cumbersome while using 
> Solr in cloud mode, especially while trying out changes to the 
> configurations. 
> It would be great if there were a request handler that provided an API to 
> manage the configurations, similar to the collections handler, allowing 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4407) SSL Certificate based authentication for SolrCloud

2015-02-05 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307968#comment-14307968
 ] 

Hoss Man commented on SOLR-4407:


bq. So, for the time being it works, but if we move Solr away from users being 
able to customize their servlet containers (standalone app mode) then Solr will 
need to make this capability configurable somehow.

I don't understand this comment at all.

Have you looked at the "Enabling SSL" ref guide page that Steve mentioned?  

For Solr 5.0 it has been brought up to date with all the necessary details on 
running Solr with SSL (notably SOLR_SSL_OPTS), without the user needing any 
special knowledge of if/when a servlet container is being used under the covers 
by Solr.  

What aspect of SSL do you think isn't already configurable?

> SSL Certificate based authentication for SolrCloud
> --
>
> Key: SOLR-4407
> URL: https://issues.apache.org/jira/browse/SOLR-4407
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.1
>Reporter: Sindre Fiskaa
>Assignee: Steve Rowe
>  Labels: Authentication, Certificate, SSL
> Fix For: 4.7, Trunk
>
>
> I need to be able to secure sensitive information in solrnodes running in a 
> SolrCloud with either SSL client/server certificates or http basic auth..



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6693) Start script for windows fails with 32bit JRE

2015-02-05 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6693:
-
Attachment: SOLR-6693.patch

Here's a patch for trunk that incorporates [~janhoy]'s patch from Nov-06 (sorry 
I overlooked that previously and there was some very good stuff in it) and 
[~chrish619]'s patch from earlier this week. It also fixes SOLR-7047.

We would like to get this into the 5.0 release, but I'm not comfortable doing 
that without someone else trying this patch out in their environment. I tested 
with a JRE installed in {{c:\Program Files (x86)\Java\jre7}} and solr installed 
in: {{c:\solr (5.0)\}}.

Please review / try this out ASAP and let me know if there are any other issues.

> Start script for windows fails with 32bit JRE
> -
>
> Key: SOLR-6693
> URL: https://issues.apache.org/jira/browse/SOLR-6693
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: WINDOWS 8.1
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>  Labels: bin\solr.cmd
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6693.patch, SOLR-6693.patch, solr.cmd, 
> solr.cmd.patch
>
>
> *Reproduce:*
> # Install JRE8 from www.java.com (typically {{C:\Program Files 
> (x86)\Java\jre1.8.0_25}})
> # Run the command {{bin\solr start -V}}
> The result is:
> {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
> *Reason*
> This comes from bad quoting of the {{%SOLR%}} variable. I think it's because 
> of the parenthesis that it freaks out. I think the same would apply for a 
> 32-bit JDK because of the (x86) in the path, but I have not tested.
> Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which is 
> the offending line
> *Solution*
> Quoting the lines where %JAVA% is printed, e.g. instead of
> {noformat}
>   @echo Using Java: %JAVA%
> {noformat}
> then use
> {noformat}
>   @echo "Using Java: %JAVA%"
> {noformat}
> This is needed several places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6693) Start script for windows fails with 32bit JRE

2015-02-05 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6693:


Assignee: Timothy Potter

> Start script for windows fails with 32bit JRE
> -
>
> Key: SOLR-6693
> URL: https://issues.apache.org/jira/browse/SOLR-6693
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10.2
> Environment: WINDOWS 8.1
>Reporter: Jan Høydahl
>Assignee: Timothy Potter
>  Labels: bin\solr.cmd
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6693.patch, solr.cmd, solr.cmd.patch
>
>
> *Reproduce:*
> # Install JRE8 from www.java.com (typically {{C:\Program Files 
> (x86)\Java\jre1.8.0_25}})
> # Run the command {{bin\solr start -V}}
> The result is:
> {{\Java\jre1.8.0_25\bin\java was unexpected at this time.}}
> *Reason*
> This comes from bad quoting of the {{%SOLR%}} variable. I think it's because 
> of the parenthesis that it freaks out. I think the same would apply for a 
> 32-bit JDK because of the (x86) in the path, but I have not tested.
> Tip: You can remove the line {{@ECHO OFF}} at the top to see exactly which is 
> the offending line
> *Solution*
> Quoting the lines where %JAVA% is printed, e.g. instead of
> {noformat}
>   @echo Using Java: %JAVA%
> {noformat}
> then use
> {noformat}
>   @echo "Using Java: %JAVA%"
> {noformat}
> This is needed several places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6220) Move needsScores from Weight.scorer to Query.createWeight

2015-02-05 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307936#comment-14307936
 ] 

Robert Muir commented on LUCENE-6220:
-

+1 if we can do this somehow. When i looked into it, it seemed it might require 
major refactoring of IndexSearcher. But I think it would end up better if we 
can do it!

> Move needsScores from Weight.scorer to Query.createWeight
> -
>
> Key: LUCENE-6220
> URL: https://issues.apache.org/jira/browse/LUCENE-6220
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: Trunk, 5.1
>
> Attachments: LUCENE-6220.patch
>
>
> Whether scores are needed is currently a Scorer-level property while it 
> should actually be a Weight thing I think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80-ea-b05) - Build # 11581 - Failure!

2015-02-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11581/
Java: 64bit/jdk1.7.0_80-ea-b05 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 45178 lines...]
-documentation-lint:
 [echo] checking for broken html...
[jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/jtidy_tmp
 [echo] Checking for broken links...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [echo] Checking for missing docs...
 [exec] 
 [exec] build/docs/facet/org/apache/lucene/facet/FacetsCollector.html
 [exec]   missing Methods: needsScores()
 [exec] 
 [exec] Missing javadocs were found!

BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:529: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:83: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build.xml:134: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build.xml:169: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:2481: 
exec returned: 1

Total time: 77 minutes 52 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0_80-ea-b05 
-XX:+UseCompressedOops -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of sync

2015-02-05 Thread Shalin Shekhar Mangar
Fixed. Thanks Steve!

On Fri, Feb 6, 2015 at 1:33 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> Good catch! Yes, google commons is what we need. I'll fix.
>
> On Fri, Feb 6, 2015 at 12:01 AM, Steve Rowe  wrote:
>
>> Shalin,
>>
>> The offending line is this import statement:
>>
>> 26: import
>> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.Lists;
>>
>> I’m not even sure what is happening there, but I assume it’s some form of
>> code duplication within the junit4 lib?
>>
>> I suspect the junit4 jar is not on the Maven test classpath because the
>> Maven build uses the surefire plugin as its test runner rather than the
>> junit4 runner.
>>
>> The direct package should suffice, no?:
>>
>> import com.google.common.collect.Lists;
>>
>> Steve
>>
>> > On Feb 5, 2015, at 12:54 PM, Shalin Shekhar Mangar <
>> shalinman...@gmail.com> wrote:
>> >
>> > That's strange. This is code that I committed today but all tests and
>> precommit passed. I'll dig.
>> >
>> > On Thu, Feb 5, 2015 at 11:09 PM, Uwe Schindler  wrote:
>> > Very strange error:
>> >
>> >   [mvn] [WARNING]
>> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/search/TestAnalyticsQParserPlugin.java:
>> Recompile with -Xlint:unchecked for details.
>> >   [mvn] [INFO] 4 warnings
>> >   [mvn] [INFO]
>> -
>> >   [mvn] [INFO]
>> -
>> >   [mvn] [ERROR] COMPILATION ERROR :
>> >   [mvn] [INFO]
>> -
>> >   [mvn] [ERROR]
>> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[26,80]
>> package
>> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect
>> does not exist
>> >   [mvn] [ERROR]
>> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[420,29]
>> cannot find symbol
>> >   [mvn]   symbol:   variable Lists
>> >   [mvn]   location: class
>> org.apache.solr.cloud.BasicDistributedZk2Test
>> >   [mvn] [INFO] 2 errors
>> >   [mvn] [INFO]
>> -
>> >
>> > Does anybody have an idea how this happened? I cannot reproduce it with Ant.
>> >
>> > Uwe
>> >
>> > -
>> > Uwe Schindler
>> > H.-H.-Meier-Allee 63, D-28213 Bremen
>> > http://www.thetaphi.de
>> > eMail: u...@thetaphi.de
>> >
>> > > -Original Message-
>> > > From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
>> > > Sent: Thursday, February 05, 2015 6:03 PM
>> > > To: dev@lucene.apache.org
>> > > Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of
>> > > sync
>> > >
>> > > Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1341/
>> > >
>> > > No tests ran.
>> > >
>> > > Build Log:
>> > > [...truncated 39352 lines...]
>> > >   [mvn] [INFO]
>> -
>> > >   [mvn] [INFO]
>> -
>> > >   [mvn] [ERROR] COMPILATION ERROR :
>> > >   [mvn] [INFO]
>> -
>> > >
>> > > [...truncated 798 lines...]
>> > > BUILD FAILED
>> > > /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
>> > > trunk/build.xml:542: The following error occurred while executing
>> this line:
>> > > /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
>> > > trunk/build.xml:204: The following error occurred while executing
>> this line:
>> > > : Java returned: 1
>> > >
>> > > Total time: 22 minutes 6 seconds
>> > > Build step 'Invoke Ant' marked build as failure Email was triggered
>> for: Failure
>> > > Sending email for trigger: Failure
>> > >
>> >
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>> >
>> >
>> >
>> > --
>> > Regards,
>> > Shalin Shekhar Mangar.
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>



-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Commented] (SOLR-6775) Creating backup snapshot null pointer exception

2015-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307897#comment-14307897
 ] 

ASF subversion and git services commented on SOLR-6775:
---

Commit 1657681 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1657681 ]

SOLR-6775: Import the right Lists class

> Creating backup snapshot null pointer exception
> ---
>
> Key: SOLR-6775
> URL: https://issues.apache.org/jira/browse/SOLR-6775
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 4.10
> Environment: Linux Server, Java version "1.7.0_21", Solr version 
> 4.10.0
>Reporter: Ryan Hesson
>Assignee: Shalin Shekhar Mangar
>  Labels: snapshot, solr
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6775.patch, SOLR-6775.patch
>
>
> I set up Solr Replication. I have one master on a server, one slave on 
> another server. The replication of data appears to be functioning correctly. 
> The issue is that when the master Solr tries to create a snapshot backup, it 
> gets a null pointer exception. 
> org.apache.solr.handler.SnapShooter createSnapshot method calls 
> org.apache.solr.handler.SnapPuller.delTree(snapShotDir); at line 162 and the 
> exception happens within  org.apache.solr.handler.SnapPuller at line 1026 
> because snapShotDir is null. 
> Here is the actual log output:
> 58319963 [qtp12610551-16] INFO  org.apache.solr.core.SolrCore  - newest 
> commit generation = 349
> 58319983 [Thread-19] INFO  org.apache.solr.handler.SnapShooter  - Creating 
> backup snapshot...
> Exception in thread "Thread-19" java.lang.NullPointerException
> at org.apache.solr.handler.SnapPuller.delTree(SnapPuller.java:1026)
> at 
> org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:162)
> at org.apache.solr.handler.SnapShooter$1.run(SnapShooter.java:91)
> I may have missed how to set the directory in the documentation but I've 
> looked around without much luck. I thought the process was to use the same 
> directory as the index data for the snapshots. Is this a known issue with 
> this release or am I missing how to set the value? If someone could tell me 
> how to set snapshotdir or confirm that it is an issue and a different way of 
> backing up the index is needed it would be much appreciated. 
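As context for the NPE described above, here is a minimal sketch of a null-safe 
recursive delete; it is not the actual SnapPuller code or the committed fix, only 
an illustration of the kind of guard the stack trace suggests is missing.

{code}
import java.io.File;

final class SnapshotCleanup {
  // Guard against a null or missing directory before recursing, instead of
  // dereferencing it and throwing a NullPointerException.
  static boolean delTree(File dir) {
    if (dir == null || !dir.exists()) {
      return false;
    }
    File[] children = dir.listFiles();
    if (children != null) {
      for (File child : children) {
        if (child.isDirectory()) {
          delTree(child);
        } else {
          child.delete();
        }
      }
    }
    return dir.delete();
  }
}
{code}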



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6775) Creating backup snapshot null pointer exception

2015-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307893#comment-14307893
 ] 

ASF subversion and git services commented on SOLR-6775:
---

Commit 1657680 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1657680 ]

SOLR-6775: Import the right Lists class

> Creating backup snapshot null pointer exception
> ---
>
> Key: SOLR-6775
> URL: https://issues.apache.org/jira/browse/SOLR-6775
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 4.10
> Environment: Linux Server, Java version "1.7.0_21", Solr version 
> 4.10.0
>Reporter: Ryan Hesson
>Assignee: Shalin Shekhar Mangar
>  Labels: snapshot, solr
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6775.patch, SOLR-6775.patch
>
>
> I set up Solr Replication. I have one master on a server, one slave on 
> another server. The replication of data appears to be functioning correctly. 
> The issue is that when the master Solr tries to create a snapshot backup, it 
> gets a null pointer exception. 
> org.apache.solr.handler.SnapShooter createSnapshot method calls 
> org.apache.solr.handler.SnapPuller.delTree(snapShotDir); at line 162 and the 
> exception happens within  org.apache.solr.handler.SnapPuller at line 1026 
> because snapShotDir is null. 
> Here is the actual log output:
> 58319963 [qtp12610551-16] INFO  org.apache.solr.core.SolrCore  - newest 
> commit generation = 349
> 58319983 [Thread-19] INFO  org.apache.solr.handler.SnapShooter  - Creating 
> backup snapshot...
> Exception in thread "Thread-19" java.lang.NullPointerException
> at org.apache.solr.handler.SnapPuller.delTree(SnapPuller.java:1026)
> at 
> org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:162)
> at org.apache.solr.handler.SnapShooter$1.run(SnapShooter.java:91)
> I may have missed how to set the directory in the documentation but I've 
> looked around without much luck. I thought the process was to use the same 
> directory as the index data for the snapshots. Is this a known issue with 
> this release or am I missing how to set the value? If someone could tell me 
> how to set snapshotdir or confirm that it is an issue and a different way of 
> backing up the index is needed it would be much appreciated. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of sync

2015-02-05 Thread Shalin Shekhar Mangar
Good catch! Yes, google commons is what we need. I'll fix.

On Fri, Feb 6, 2015 at 12:01 AM, Steve Rowe  wrote:

> Shalin,
>
> The offending line is this import statement:
>
> 26: import
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.Lists;
>
> I’m not even sure what is happening there, but I assume it’s some form of
> code duplication within the junit4 lib?
>
> I suspect the junit4 jar is not on the Maven test classpath because the
> Maven build uses the surefire plugin as its test runner rather than the
> junit4 runner.
>
> The direct package should suffice, no?:
>
> import com.google.common.collect.Lists;
>
> Steve
>
> > On Feb 5, 2015, at 12:54 PM, Shalin Shekhar Mangar <
> shalinman...@gmail.com> wrote:
> >
> > That's strange. This is code that I committed today but all tests and
> precommit passed. I'll dig.
> >
> > On Thu, Feb 5, 2015 at 11:09 PM, Uwe Schindler  wrote:
> > Very strange error:
> >
> >   [mvn] [WARNING]
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/search/TestAnalyticsQParserPlugin.java:
> Recompile with -Xlint:unchecked for details.
> >   [mvn] [INFO] 4 warnings
> >   [mvn] [INFO]
> -
> >   [mvn] [INFO]
> -
> >   [mvn] [ERROR] COMPILATION ERROR :
> >   [mvn] [INFO]
> -
> >   [mvn] [ERROR]
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[26,80]
> package
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect
> does not exist
> >   [mvn] [ERROR]
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[420,29]
> cannot find symbol
> >   [mvn]   symbol:   variable Lists
> >   [mvn]   location: class
> org.apache.solr.cloud.BasicDistributedZk2Test
> >   [mvn] [INFO] 2 errors
> >   [mvn] [INFO]
> -
> >
> > Does anybody have an idea how this happened? I cannot reproduce it with Ant.
> >
> > Uwe
> >
> > -
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> > > -Original Message-
> > > From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
> > > Sent: Thursday, February 05, 2015 6:03 PM
> > > To: dev@lucene.apache.org
> > > Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of
> > > sync
> > >
> > > Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1341/
> > >
> > > No tests ran.
> > >
> > > Build Log:
> > > [...truncated 39352 lines...]
> > >   [mvn] [INFO]
> -
> > >   [mvn] [INFO]
> -
> > >   [mvn] [ERROR] COMPILATION ERROR :
> > >   [mvn] [INFO]
> -
> > >
> > > [...truncated 798 lines...]
> > > BUILD FAILED
> > > /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> > > trunk/build.xml:542: The following error occurred while executing this
> line:
> > > /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> > > trunk/build.xml:204: The following error occurred while executing this
> line:
> > > : Java returned: 1
> > >
> > > Total time: 22 minutes 6 seconds
> > > Build step 'Invoke Ant' marked build as failure Email was triggered
> for: Failure
> > > Sending email for trigger: Failure
> > >
> >
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> >
> >
> >
> > --
> > Regards,
> > Shalin Shekhar Mangar.
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Updated] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-02-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6648:

Fix Version/s: (was: 5.0)
   5.1

> AnalyzingInfixLookupFactory always highlights suggestions
> -
>
> Key: SOLR-6648
> URL: https://issues.apache.org/jira/browse/SOLR-6648
> Project: Solr
>  Issue Type: Sub-task
>Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
>Reporter: Varun Thacker
>Assignee: Tomás Fernández Löbbe
>  Labels: suggester
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch
>
>
> When using AnalyzingInfixLookupFactory suggestions always return with the 
> match term as highlighted and 'allTermsRequired' is always set to true.
> We should be able to configure those.
> Steps to reproduce - 
> schema additions
> {code}
> 
> 
>   mySuggester
>   AnalyzingInfixLookupFactory
>   DocumentDictionaryFactory 
>   suggestField
>   weight
>   textSuggest
> 
>   
>   
> 
>   true
>   10
> 
> 
>   suggest
> 
>   
> {code}
> solrconfig changes -
> {code}
>  positionIncrementGap="100">
>
>   
>   
>   
>
>   
> stored="true"/>
> {code}
> Add 3 documents - 
> {code}
> curl http://localhost:8983/solr/update/json?commit=true -H 
> 'Content-type:application/json' -d '
> [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" 
> : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
> '
> {code}
> Query -
> {code}
> http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
> {code}
> Response 
> {code}
> {
>   "responseHeader":{
> "status":0,
> "QTime":25},
>   "command":"build",
>   "suggest":{"mySuggester":{
>   "bass":{
> "numFound":3,
> "suggestions":[{
> "term":"bass fishing",
> "weight":0,
> "payload":""},
>   {
> "term":"sea bass",
> "weight":0,
> "payload":""},
>   {
> "term":"sea bass fishing",
> "weight":0,
> "payload":""}]
> {code}
> The problem is in SolrSuggester line 200, where we call lookup.lookup().
> This call does not take allTermsRequired and doHighlight, since those are 
> only tunable on AnalyzingInfixSuggester and not the other lookup 
> implementations.
> If different Lookup implementations take different params in their 
> constructors, these sorts of issues will always keep happening. Maybe we 
> should not keep it generic and instead do instanceof checks and set params 
> accordingly?
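A minimal sketch (not from any attached patch) of the instanceof-based dispatch 
suggested at the end of the description above, using the existing Lookup and 
AnalyzingInfixSuggester lookup() signatures:

{code}
import java.io.IOException;
import java.util.List;
import org.apache.lucene.search.suggest.Lookup;
import org.apache.lucene.search.suggest.Lookup.LookupResult;
import org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester;

final class SuggesterDispatch {
  // Illustrative only: use the infix-specific overload when the lookup supports
  // the extra options, otherwise fall back to the generic Lookup API.
  static List<LookupResult> lookup(Lookup lookup, CharSequence token, int count,
                                   boolean allTermsRequired, boolean highlight) throws IOException {
    if (lookup instanceof AnalyzingInfixSuggester) {
      return ((AnalyzingInfixSuggester) lookup).lookup(token, count, allTermsRequired, highlight);
    }
    return lookup.lookup(token, false, count);
  }
}
{code}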



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7082) Streaming Aggregation for SolrCloud

2015-02-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7082:
-
Affects Version/s: (was: 5.1)

> Streaming Aggregation for SolrCloud
> ---
>
> Key: SOLR-7082
> URL: https://issues.apache.org/jira/browse/SOLR-7082
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Joel Bernstein
> Fix For: Trunk
>
> Attachments: SOLR-7082.patch
>
>
> This issue provides a general purpose streaming aggregation framework for 
> SolrCloud. An overview of how it works can be found at this link:
> http://heliosearch.org/streaming-aggregation-for-solrcloud/
> This functionality allows SolrCloud users to perform operations that were 
> typically done using map/reduce or a parallel computing platform. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7082) Streaming Aggregation for SolrCloud

2015-02-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7082:
-
Fix Version/s: Trunk

> Streaming Aggregation for SolrCloud
> ---
>
> Key: SOLR-7082
> URL: https://issues.apache.org/jira/browse/SOLR-7082
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Joel Bernstein
> Fix For: Trunk
>
> Attachments: SOLR-7082.patch
>
>
> This issue provides a general purpose streaming aggregation framework for 
> SolrCloud. An overview of how it works can be found at this link:
> http://heliosearch.org/streaming-aggregation-for-solrcloud/
> This functionality allows SolrCloud users to perform operations that were 
> typically done using map/reduce or a parallel computing platform. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6526) Solr Streaming API

2015-02-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-6526.
--
Resolution: Duplicate

This ticket has been superseded by SOLR-7082. 

> Solr Streaming API
> --
>
> Key: SOLR-6526
> URL: https://issues.apache.org/jira/browse/SOLR-6526
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Reporter: Joel Bernstein
> Fix For: Trunk
>
> Attachments: SOLR-6526.patch
>
>
> It would be great if there was a SolrJ library that could connect to Solr's 
> /export handler (SOLR-5244) and perform streaming operations on the sorted 
> result sets.
> This ticket defines the base interfaces and implementations for the Streaming 
> API. The base API contains three classes:
> *SolrStream*: This represents a stream from a single Solr instance. It speaks 
> directly to the /export handler and provides methods to read() Tuples and 
> close() the stream
> *CloudSolrStream*: This represents a stream from a SolrCloud collection. It 
> speaks with Zk to discover the Solr instances in the collection and then 
> creates SolrStreams to make the requests. The results from the underlying 
> streams are merged inline to produce a single sorted stream of tuples.
> *Tuple*: The data structure returned by the read() method of the SolrStream 
> API. It is nested to support grouping and Cartesian product set operations.
> Once these base classes are implemented it paves the way for building 
> *Decorator* streams that perform operations on the sorted Tuple sets. For 
> example:
> {code}
> //Create three CloudSolrStreams to different solr cloud clusters. They could 
> be anywhere in the world.
> SolrStream stream1 = new CloudSolrStream(zkUrl1, queryRequest1, "a"); // 
> Alias this stream as "a"
> SolrStream stream2 = new CloudSolrStream(zkUrl2, queryRequest2, "b"); // 
> Alias this stream as "b"
> SolrStream stream3 = new CloudSolrStream(zkUrl3, queryRequest3, "c"); // 
> Alias this stream as "c"
> // Merge Join stream1 and stream2 using a comparator to compare tuples.
> MergeJoinStream joinStream1 = new MergeJoinStream(stream1, stream2, new 
> MyComp());
> //Hash join the tuples from the joinStream1 with stream3 the HashKey()'s 
> define the hashKeys for tuples 
> HashJoinStream joinStream2 = new HashJoinStream(joinStream1,stream3, new 
> HashKey(), new HashKey());
> //Sum the aliased fields from the joined tuples.
> SumStream sumStream1 = new SumStream(joinStream2, "a.field1");
> SumStream sumStream2 = new SumStream(sumStream1, "b.field2");
> Tuple t = null;
> //Read from the stream until it's finished.
> while((t = sumStream2.read()) != null);
> //Get the sums from the joined data.
> long sum1 = sumStream1.getSum();
> long sum2 = sumStream2.getSum();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7082) Streaming Aggregation for SolrCloud

2015-02-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307869#comment-14307869
 ] 

Joel Bernstein commented on SOLR-7082:
--

The initial patch includes a fully operational parallel streaming framework 
with tests.

It's a fairly large patch so I'll be updating this ticket with details about 
the design and code.

> Streaming Aggregation for SolrCloud
> ---
>
> Key: SOLR-7082
> URL: https://issues.apache.org/jira/browse/SOLR-7082
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 5.1
>Reporter: Joel Bernstein
> Attachments: SOLR-7082.patch
>
>
> This issue provides a general purpose streaming aggregation framework for 
> SolrCloud. An overview of how it works can be found at this link:
> http://heliosearch.org/streaming-aggregation-for-solrcloud/
> This functionality allows SolrCloud users to perform operations that were 
> typically done using map/reduce or a parallel computing platform. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7082) Streaming Aggregation for SolrCloud

2015-02-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7082:
-
Attachment: SOLR-7082.patch

> Streaming Aggregation for SolrCloud
> ---
>
> Key: SOLR-7082
> URL: https://issues.apache.org/jira/browse/SOLR-7082
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 5.1
>Reporter: Joel Bernstein
> Attachments: SOLR-7082.patch
>
>
> This issue provides a general purpose streaming aggregation framework for 
> SolrCloud. An overview of how it works can be found at this link:
> http://heliosearch.org/streaming-aggregation-for-solrcloud/
> This functionality allows SolrCloud users to perform operations that were 
> typically done using map/reduce or a parallel computing platform. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7082) Streaming Aggregation for SolrCloud

2015-02-05 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-7082:


 Summary: Streaming Aggregation for SolrCloud
 Key: SOLR-7082
 URL: https://issues.apache.org/jira/browse/SOLR-7082
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 5.1
Reporter: Joel Bernstein


This issue provides a general purpose streaming aggregation framework for 
SolrCloud. An overview of how it works can be found at this link:

http://heliosearch.org/streaming-aggregation-for-solrcloud/

This functionality allows SolrCloud users to perform operations that were 
typically done using map/reduce or a parallel computing platform. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307847#comment-14307847
 ] 

ASF subversion and git services commented on SOLR-6648:
---

Commit 1657671 from [~tomasflobbe] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1657671 ]

SOLR-6648: Add support for highlight and allTermsRequired configuration in 
AnalyzingInfix and BlendedInfix Solr suggesters

> AnalyzingInfixLookupFactory always highlights suggestions
> -
>
> Key: SOLR-6648
> URL: https://issues.apache.org/jira/browse/SOLR-6648
> Project: Solr
>  Issue Type: Sub-task
>Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
>Reporter: Varun Thacker
>Assignee: Tomás Fernández Löbbe
>  Labels: suggester
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch
>
>
> When using AnalyzingInfixLookupFactory suggestions always return with the 
> match term as highlighted and 'allTermsRequired' is always set to true.
> We should be able to configure those.
> Steps to reproduce - 
> schema additions
> {code}
> 
> 
>   mySuggester
>   AnalyzingInfixLookupFactory
>   DocumentDictionaryFactory 
>   suggestField
>   weight
>   textSuggest
> 
>   
>   
> 
>   true
>   10
> 
> 
>   suggest
> 
>   
> {code}
> solrconfig changes -
> {code}
>  positionIncrementGap="100">
>
>   
>   
>   
>
>   
> stored="true"/>
> {code}
> Add 3 documents - 
> {code}
> curl http://localhost:8983/solr/update/json?commit=true -H 
> 'Content-type:application/json' -d '
> [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" 
> : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
> '
> {code}
> Query -
> {code}
> http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
> {code}
> Response 
> {code}
> {
>   "responseHeader":{
> "status":0,
> "QTime":25},
>   "command":"build",
>   "suggest":{"mySuggester":{
>   "bass":{
> "numFound":3,
> "suggestions":[{
> "term":"bass fishing",
> "weight":0,
> "payload":""},
>   {
> "term":"sea bass",
> "weight":0,
> "payload":""},
>   {
> "term":"sea bass fishing",
> "weight":0,
> "payload":""}]
> {code}
> The problem is in SolrSuggester Line 200 where we say lookup.lookup()
> This constructor does not take allTermsRequired and doHighlight since it's 
> only tuneable to AnalyzingInfixSuggester and not the other lookup 
> implementations.
> If different Lookup implementations have different params as their 
> constructors, these sort of issues will always keep happening. Maybe we 
> should not keep it generic and do instanceof checks and set params 
> accordingly?
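
For illustration, a minimal, simplified sketch of the instanceof-style dispatch 
suggested in the description above (the types here are stand-ins, not the actual 
SolrSuggester or Lucene Lookup classes):

{code}
// Simplified sketch of the instanceof dispatch idea; these types are stand-ins, not Lucene/Solr classes.
import java.util.*;

interface Lookup {
  List<String> lookup(String key, int num);
}

class AnalyzingInfixLookup implements Lookup {
  @Override
  public List<String> lookup(String key, int num) {
    return lookup(key, num, true, true); // the old behavior: both options hard-coded to true
  }

  // Implementation-specific options that the generic interface does not expose.
  public List<String> lookup(String key, int num, boolean allTermsRequired, boolean doHighlight) {
    String term = doHighlight ? "<b>" + key + "</b> fishing" : key + " fishing";
    return Collections.singletonList(term);
  }
}

public class SuggesterDispatchSketch {
  static List<String> suggest(Lookup lookup, String key, int num,
                              boolean allTermsRequired, boolean doHighlight) {
    if (lookup instanceof AnalyzingInfixLookup) {
      // Only this implementation understands the extra options, so pass them through explicitly.
      return ((AnalyzingInfixLookup) lookup).lookup(key, num, allTermsRequired, doHighlight);
    }
    return lookup.lookup(key, num);
  }

  public static void main(String[] args) {
    // With doHighlight=false the suggestion comes back without <b> markup.
    System.out.println(suggest(new AnalyzingInfixLookup(), "bass", 10, true, false));
  }
}
{code}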



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7081) create/delete/create collection (new test case)

2015-02-05 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307838#comment-14307838
 ] 

Ramkumar Aiyengar edited comment on SOLR-7081 at 2/5/15 7:36 PM:
-

bq. We should almost just release note not to count on auto core creation in 
Solr 5 so that we can fix this stuff by default without an option before 6.

+1


was (Author: andyetitmoves):
+1

> create/delete/create collection (new test case)
> ---
>
> Key: SOLR-7081
> URL: https://issues.apache.org/jira/browse/SOLR-7081
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
>
> Unexpectedly the second collection create fails (saying that the collection 
> already exists) despite the collection delete having apparently succeeded.
> Collection create/delete/create is probably an uncommon operational sequence 
> but perhaps the test failure indicates that something unexpected is happening 
> elsewhere.
> github pull request and test log extracts to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6220) Move needsScores from Weight.scorer to Query.createWeight

2015-02-05 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6220:
-
Attachment: LUCENE-6220.patch

Here is a work-in-progress patch: IndexSearcher currently needs to guess in 
advance whether the collector will need scores. I'll have a second look at it 
tomorrow to see if I can somehow refactor it so that we always have a collector 
ready before creating the weight.
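
To make the shape of the change concrete, a toy sketch follows (stand-in 
interfaces, not the real Lucene Query/Weight/Scorer signatures): the needsScores 
flag is supplied once, when the weight is created, instead of being passed to 
every scorer() call.

{code}
// Toy sketch: stand-in types, not the actual Lucene APIs touched by this patch.
interface Scorer {
  float score();
}

interface Weight {
  Scorer scorer();
}

interface Query {
  // After the move, the searcher decides once, at weight creation time,
  // whether scores will be needed.
  Weight createWeight(boolean needsScores);
}

class ConstantQuerySketch implements Query {
  @Override
  public Weight createWeight(boolean needsScores) {
    return () -> () -> needsScores ? 1.0f : 0.0f; // constant scorer, purely for illustration
  }
}

public class NeedsScoresSketch {
  public static void main(String[] args) {
    Query q = new ConstantQuerySketch();
    Weight w = q.createWeight(false); // e.g. the collector only counts or filters
    System.out.println(w.scorer().score()); // prints 0.0
  }
}
{code}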

> Move needsScores from Weight.scorer to Query.createWeight
> -
>
> Key: LUCENE-6220
> URL: https://issues.apache.org/jira/browse/LUCENE-6220
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: Trunk, 5.1
>
> Attachments: LUCENE-6220.patch
>
>
> Whether scores are needed is currently a Scorer-level property while it 
> should actually be a Weight thing I think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7081) create/delete/create collection (new test case)

2015-02-05 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307838#comment-14307838
 ] 

Ramkumar Aiyengar commented on SOLR-7081:
-

+1

> create/delete/create collection (new test case)
> ---
>
> Key: SOLR-7081
> URL: https://issues.apache.org/jira/browse/SOLR-7081
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
>
> Unexpectedly the second collection create fails (saying that the collection 
> already exists) despite the collection delete having apparently succeeded.
> Collection create/delete/create is probably an uncommon operational sequence 
> but perhaps the test failure indicates that something unexpected is happening 
> elsewhere.
> github pull request and test log extracts to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6220) Move needsScores from Weight.scorer to Query.createWeight

2015-02-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307830#comment-14307830
 ] 

Michael McCandless commented on LUCENE-6220:


+1

> Move needsScores from Weight.scorer to Query.createWeight
> -
>
> Key: LUCENE-6220
> URL: https://issues.apache.org/jira/browse/LUCENE-6220
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: Trunk, 5.1
>
>
> Whether scores are needed is currently a Scorer-level property while it 
> should actually be a Weight thing I think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6220) Move needsScores from Weight.scorer to Query.createWeight

2015-02-05 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6220:


 Summary: Move needsScores from Weight.scorer to Query.createWeight
 Key: LUCENE-6220
 URL: https://issues.apache.org/jira/browse/LUCENE-6220
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1


Whether scores are needed is currently a Scorer-level property while it should 
actually be a Weight thing I think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1342: POMs out of sync

2015-02-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1342/

No tests ran.

Build Log:
[...truncated 36916 lines...]
-validate-maven-dependencies:
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-test-framework:6.0.0-SNAPSHOT: checking for updates 
from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-test-framework:6.0.0-SNAPSHOT: checking for updates 
from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from 
sonatype.releases
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-common:6.0.0-SNAPSHOT: checking for updates 
from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-common:6.0.0-SNAPSHOT: checking for updates 
from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-kuromoji:6.0.0-SNAPSHOT: checking for 
updates from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-kuromoji:6.0.0-SNAPSHOT: checking for 
updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-phonetic:6.0.0-SNAPSHOT: checking for 
updates from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-analyzers-phonetic:6.0.0-SNAPSHOT: checking for 
updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-backward-codecs:6.0.0-SNAPSHOT: checking for updates 
from maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-backward-codecs:6.0.0-SNAPSHOT: checking for updates 
from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-codecs:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-codecs:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-core:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-core:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-expressions:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-expressions:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-grouping:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-grouping:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-highlighter:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-highlighter:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-join:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-join:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-memory:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-memory:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-misc:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-misc:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queries:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queries:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queryparser:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-queryparser:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-spatial:6.0.0-SNAPSHOT: checking for updates from 
maven-restlet
[artifact:dependencies] [INFO] snapshot 
org.apache.lucene:lucene-spatial:6.0.0-SNAPSHOT: checking for updates from 
releases.cloudera.com
[artifact:dependencies] [IN

[jira] [Commented] (SOLR-7081) create/delete/create collection (new test case)

2015-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307742#comment-14307742
 ] 

Mark Miller commented on SOLR-7081:
---

Hmm...on first glance this looks like 'zk should be the truth' issue stuff. I really 
wanted to get a better start on that in for 5.0. Alas.

We should almost just release note not to count on auto core creation in Solr 5 
so that we can fix this stuff by default without an option before 6.

> create/delete/create collection (new test case)
> ---
>
> Key: SOLR-7081
> URL: https://issues.apache.org/jira/browse/SOLR-7081
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
>
> Unexpectedly the second collection create fails (saying that the collection 
> already exists) despite the collection delete having apparently succeeded.
> Collection create/delete/create is probably an uncommon operational sequence 
> but perhaps the test failure indicates that something unexpected is happening 
> elsewhere.
> github pull request and test log extracts to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7081) create/delete/create collection (new test case)

2015-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307742#comment-14307742
 ] 

Mark Miller edited comment on SOLR-7081 at 2/5/15 6:45 PM:
---

Hmm...on first glance this looks like 'zk should be the truth' issue stuff. I 
really wanted to get a better start on that in for 5.0. Alas.

We should almost just release note not to count on auto core creation in Solr 5 
so that we can fix this stuff by default without an option before 6.


was (Author: markrmil...@gmail.com):
Hmm...on first glance this looks 'zk should be the truth' issue stuff. I really 
wanted to get a better start on that in for 5.0. Alas.

We should almost just release note not to count on auto core creation in Solr 5 
so that we can fix this stuff by default without an option before 6.

> create/delete/create collection (new test case)
> ---
>
> Key: SOLR-7081
> URL: https://issues.apache.org/jira/browse/SOLR-7081
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
>
> Unexpectedly the second collection create fails (saying that the collection 
> already exists) despite the collection delete having apparently succeeded.
> Collection create/delete/create is probably an uncommon operational sequence 
> but perhaps the test failure indicates that something unexpected is happening 
> elsewhere.
> github pull request and test log extracts to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7081) create/delete/create collection (new test case)

2015-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307735#comment-14307735
 ] 

Mark Miller commented on SOLR-7081:
---

I thought some test (like the collections api test) actually did this type of 
thing. Perhaps it's different somehow or I am remembering wrong. In either 
case, new testing is always appreciated. Perhaps this leads to the root cause of 
some random fails I've seen where you surprisingly get this error.

> create/delete/create collection (new test case)
> ---
>
> Key: SOLR-7081
> URL: https://issues.apache.org/jira/browse/SOLR-7081
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
>
> Unexpectedly the second collection create fails (saying that the collection 
> already exists) despite the collection delete having apparently succeeded.
> Collection create/delete/create is probably an uncommon operational sequence 
> but perhaps the test failure indicates that something unexpected is happening 
> elsewhere.
> github pull request and test log extracts to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7081) create/delete/create collection (new test case)

2015-02-05 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307734#comment-14307734
 ] 

Christine Poerschke commented on SOLR-7081:
---

Here's an extract of interesting things from the test (`ant test 
-Dtestcase=DoubleTestMiniSolrCloudCluster`) output:

{code}
   [junit4]   2> 37011 T207 oasc.SolrException.log ERROR Failed to delete 
instance dir for core:testSolrCloudCollection_shard1_replica2 
dir:/mydirectory/solr/build/solr-core/test/J0/temp/solr.cloud.DoubleTestMiniSolrCloudCluster
 A33CCC8883EFD522-001/tempDir-001/./testSolrCloudCollection_shard1_replica2
   [junit4]   2> 37012 T207 oasc.ElectionContext.cancelElection canceling 
election 
/collections/testSolrCloudCollection/leader_elect/shard1/election/93266847936610328-core_node2-n_03
   ...
   [junit4]   2> 37024 T206 oasc.SolrException.log ERROR Failed to delete 
instance dir for core:testSolrCloudCollection_shard1_replica1 
dir:/mydirectory/solr/build/solr-core/test/J0/temp/solr.cloud.DoubleTestMiniSolrCloudCluster
 A33CCC8883EFD522-001/tempDir-001/./testSolrCloudCollection_shard1_replica1
   [junit4]   2> 37024 T206 oasc.ElectionContext.cancelElection canceling 
election 
/collections/testSolrCloudCollection/leader_elect/shard1/election/93266847936610328-core_node3-n_02
{code}
Some errors deleting the instance directory (on T206 and T207).

{code}
   [junit4]   2> 37677 T13 
oasc.TestMiniSolrCloudCluster.waitForCollectionToDisappear Wait for collection 
to disappear - collection: testSolrCloudCollection failOnTimeout:true timeout 
(sec):330
   ...
   [junit4]   2> 37679 T13 
oasc.TestMiniSolrCloudCluster.waitForCollectionToDisappear Collection has 
disappeared - collection: testSolrCloudCollection
{code}
But the collection is being reported as having disappeared (on T13).

{code}
   [junit4]   2> 37710 T13 oasu.DefaultSolrCoreState.closeIndexWriter closing 
IndexWriter with IndexWriterCloser
   [junit4]   2> 37709 T212 oasco.ClusterStateMutator.createCollection building 
a new cName: testSolrCloudCollection
   [junit4]   2> 37716 T13 oasc.SolrCore.closeSearcher 
[testSolrCloudCollection_shard2_replica2] Closing main searcher on request.
{code}
Though on T13 there are also traces of a shard2 replica still being around 
(after the reported disappearance of the collection). Note that this is shard2 
and the deleting errors earlier were for shard1. At this point T212 is 
beginning the second create operation.

Now on T264 some replaying of operations (delete sub-operations?).
{code}
   [junit4]   2> 37793 T264 oasc.Overseer$ClusterStateUpdater.run Replaying 
operations from work queue.
   [junit4]   2> 37794 T264 oasc.Overseer$ClusterStateUpdater.run 
processMessage: queueSize: 0, message = {
   [junit4]   2>  "core":"testSolrCloudCollection_shard2_replica2",
   [junit4]   2>  "core_node_name":"core_node4",
   [junit4]   2>  "roles":null,
   [junit4]   2>  "base_url":"http://127.0.0.1:4/solr";,
   [junit4]   2>  "node_name":"127.0.0.1:4_solr",
   [junit4]   2>  "numShards":"2",
   [junit4]   2>  "state":"down",
   [junit4]   2>  "shard":"shard2",
   [junit4]   2>  "collection":"testSolrCloudCollection",
   [junit4]   2>  "operation":"state"}
   [junit4]   2> 37795 T264 oasco.ReplicaMutator.updateState Update state 
numShards=2 message={
   [junit4]   2>  "core":"testSolrCloudCollection_shard2_replica2",
   [junit4]   2>  "core_node_name":"core_node4",
   [junit4]   2>  "roles":null,
   [junit4]   2>  "base_url":"http://127.0.0.1:4/solr";,
   [junit4]   2>  "node_name":"127.0.0.1:4_solr",
   [junit4]   2>  "numShards":"2",
   [junit4]   2>  "state":"down",
   [junit4]   2>  "shard":"shard2",
   [junit4]   2>  "collection":"testSolrCloudCollection",
   [junit4]   2>  "operation":"state"}
   [junit4]   2> 37796 T264 oasco.ClusterStateMutator.createCollection building 
a new cName: testSolrCloudCollection
{code}
Following the replay the second collection create progresses on T264.

{code}
   [junit4]   2> 41121 T280 oasc.OverseerCollectionProcessor.processMessage 
WARN OverseerCollectionProcessor.processMessage : create , {
   [junit4]   2>  "operation":"create",
   [junit4]   2>  "fromApi":"true",
   [junit4]   2>  "name":"testSolrCloudCollection",
   [junit4]   2>  "replicationFactor":"2",
   [junit4]   2>  "collection.configName":"solrCloudCollectionConfig",
   [junit4]   2>  "numShards":"2",
   [junit4]   2>  "stateFormat":"2",
   [junit4]   2>  "property.solr.tests.ramBufferSizeMB":"100",
   [junit4]   2>  "property.solr.tests.maxIndexingThreads":"-1",
   [junit4]   2>  
"property.solr.tests.mergeScheduler":"org.apache.lucene.index.ConcurrentMergeSchedul

Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of sync

2015-02-05 Thread Steve Rowe
Shalin,

The offending line is this import statement:

26: import 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.Lists;

I’m not even sure what is happening there, but I assume it’s some form of code 
duplication within the junit4 lib?

I suspect the junit4 jar is not on the Maven test classpath because the Maven 
build uses the surefire plugin as its test runner rather than the junit4 runner.

The direct package should suffice, no?:

import com.google.common.collect.Lists;

Steve

> On Feb 5, 2015, at 12:54 PM, Shalin Shekhar Mangar  
> wrote:
> 
> That's strange. This is code that I committed today but all tests and 
> precommit passed. I'll dig.
> 
> On Thu, Feb 5, 2015 at 11:09 PM, Uwe Schindler  wrote:
> Very strange error:
> 
>   [mvn] [WARNING] 
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/search/TestAnalyticsQParserPlugin.java:
>  Recompile with -Xlint:unchecked for details.
>   [mvn] [INFO] 4 warnings
>   [mvn] [INFO] 
> -
>   [mvn] [INFO] 
> -
>   [mvn] [ERROR] COMPILATION ERROR :
>   [mvn] [INFO] 
> -
>   [mvn] [ERROR] 
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[26,80]
>  package 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect does 
> not exist
>   [mvn] [ERROR] 
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[420,29]
>  cannot find symbol
>   [mvn]   symbol:   variable Lists
>   [mvn]   location: class org.apache.solr.cloud.BasicDistributedZk2Test
>   [mvn] [INFO] 2 errors
>   [mvn] [INFO] 
> -
> 
> Does anybody have an idea how this comes about? I cannot reproduce it with ANT.
> 
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> > -Original Message-
> > From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
> > Sent: Thursday, February 05, 2015 6:03 PM
> > To: dev@lucene.apache.org
> > Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of
> > sync
> >
> > Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1341/
> >
> > No tests ran.
> >
> > Build Log:
> > [...truncated 39352 lines...]
> >   [mvn] [INFO] 
> > -
> >   [mvn] [INFO] 
> > -
> >   [mvn] [ERROR] COMPILATION ERROR :
> >   [mvn] [INFO] 
> > -
> >
> > [...truncated 798 lines...]
> > BUILD FAILED
> > /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> > trunk/build.xml:542: The following error occurred while executing this line:
> > /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> > trunk/build.xml:204: The following error occurred while executing this line:
> > : Java returned: 1
> >
> > Total time: 22 minutes 6 seconds
> > Build step 'Invoke Ant' marked build as failure Email was triggered for: 
> > Failure
> > Sending email for trigger: Failure
> >
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> 
> 
> 
> -- 
> Regards,
> Shalin Shekhar Mangar.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307698#comment-14307698
 ] 

ASF subversion and git services commented on SOLR-6648:
---

Commit 1657655 from [~tomasflobbe] in branch 'dev/trunk'
[ https://svn.apache.org/r1657655 ]

SOLR-6648: Add support for highlight and allTermsRequired configuration in 
AnalyzingInfix and BlendedInfix Solr suggesters

> AnalyzingInfixLookupFactory always highlights suggestions
> -
>
> Key: SOLR-6648
> URL: https://issues.apache.org/jira/browse/SOLR-6648
> Project: Solr
>  Issue Type: Sub-task
>Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
>Reporter: Varun Thacker
>Assignee: Tomás Fernández Löbbe
>  Labels: suggester
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch
>
>
> When using AnalyzingInfixLookupFactory suggestions always return with the 
> match term as highlighted and 'allTermsRequired' is always set to true.
> We should be able to configure those.
> Steps to reproduce - 
> schema additions
> {code}
> 
> 
>   mySuggester
>   AnalyzingInfixLookupFactory
>   DocumentDictionaryFactory 
>   suggestField
>   weight
>   textSuggest
> 
>   
>   
> 
>   true
>   10
> 
> 
>   suggest
> 
>   
> {code}
> solrconfig changes -
> {code}
>  positionIncrementGap="100">
>
>   
>   
>   
>
>   
> stored="true"/>
> {code}
> Add 3 documents - 
> {code}
> curl http://localhost:8983/solr/update/json?commit=true -H 
> 'Content-type:application/json' -d '
> [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" 
> : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
> '
> {code}
> Query -
> {code}
> http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
> {code}
> Response 
> {code}
> {
>   "responseHeader":{
> "status":0,
> "QTime":25},
>   "command":"build",
>   "suggest":{"mySuggester":{
>   "bass":{
> "numFound":3,
> "suggestions":[{
> "term":"bass fishing",
> "weight":0,
> "payload":""},
>   {
> "term":"sea bass",
> "weight":0,
> "payload":""},
>   {
> "term":"sea bass fishing",
> "weight":0,
> "payload":""}]
> {code}
> The problem is in SolrSuggester Line 200 where we say lookup.lookup()
> This constructor does not take allTermsRequired and doHighlight since it's 
> only tuneable to AnalyzingInfixSuggester and not the other lookup 
> implementations.
> If different Lookup implementations have different params as their 
> constructors, these sort of issues will always keep happening. Maybe we 
> should not keep it generic and do instanceof checks and set params 
> accordingly?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Are docs updated based on comparing the id before analysis?

2015-02-05 Thread Shawn Heisey
On 2/5/2015 10:57 AM, Erick Erickson wrote:
> Thanks for confirming I'm not completely crazy.
>
> I don't think it's A Good Thing to _require_ that all ID normalization
> be done on the client, it'd have to be done both at index and query
> time, too much chance for things to get out of sync. Although I guess
> this is _actually_ what happens with the string type. H.  So I'm
> -1 on <2> above as it would require this.
>
> And having uniqueKeys that are text fields _is_ fraught with danger
> if you tokenize it, but KeywordTokenizer doesn't.



> Personally I feel like this is a JIRA, but I can see arguments the
> other way as I'm not entirely sure what you'd do if multiple tokens
> came out of the analysis chain. Maybe fail the document at index time?
>
> What _is_ unreasonable IMO is that we allow this surprising behavior,
> so regardless of the above I'm +1 on keeping users from being
> surprised by this behavior

My earlier statements were written with the assumption that the current
behavior exists because it is difficult to allow the desired behavior. 
I believe that if it were easy to do, it would have already been done.

If it's possible to allow what we both think is rational user
expectation (case-insensitive uniqueKey values), I agree that we need to
allow it.  Whether or not it's readily achievable is the question.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7081) create/delete/create collection (new test case)

2015-02-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307681#comment-14307681
 ] 

ASF GitHub Bot commented on SOLR-7081:
--

GitHub user cpoerschke opened a pull request:

https://github.com/apache/lucene-solr/pull/127

SOLR-7081: create/delete/create collection (new test case)

https://issues.apache.org/jira/i#browse/SOLR-7081

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr 
trunk-create-delete-create-collection

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/127.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #127


commit 24e87d6b3e180ce644acfd1896e43cdcb512a4be
Author: Christine Poerschke 
Date:   2015-01-21T10:14:38Z

SOLR-: TestMiniSolrCloudCluster.testBasics tidies up after itself, adds 
DoubleTestMiniSolrCloudCluster test case.

TestMiniSolrCloudCluster.testBasics now re-creates the server it removed 
for test purposes, thus restoring the original NUM_SERVERS count. 
TestMiniSolrCloudCluster.testBasics now also deletes the collection it created 
for test purposes (this revision adds a MiniSolrCloudCluster.deleteCollection 
method).

DoubleTestMiniSolrCloudCluster is a new test case. 
DoubleTestMiniSolrCloudCluster.testBasics calls 
TestMiniSolrCloudCluster.testBasics twice in a row.
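
In outline, the sequence the new test exercises looks like the following toy 
sketch (the Cluster class is a stand-in, not the real MiniSolrCloudCluster API 
this change extends):

{code}
// Toy sketch of the create/delete/create sequence; Cluster is a stand-in class.
import java.util.*;

class Cluster {
  private final Set<String> collections = new HashSet<>();

  void createCollection(String name) {
    if (!collections.add(name)) {
      throw new IllegalStateException("collection already exists: " + name);
    }
  }

  void deleteCollection(String name) {
    collections.remove(name);
  }
}

public class DoubleCreateSketch {
  static void testBasics(Cluster cluster) {
    cluster.createCollection("testSolrCloudCollection");
    // ... index, query, assert ...
    cluster.deleteCollection("testSolrCloudCollection"); // tidy up so the scenario can run again
  }

  public static void main(String[] args) {
    Cluster cluster = new Cluster();
    testBasics(cluster);
    testBasics(cluster); // the second run is where SOLR-7081 reports an unexpected "already exists"
  }
}
{code}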




> create/delete/create collection (new test case)
> ---
>
> Key: SOLR-7081
> URL: https://issues.apache.org/jira/browse/SOLR-7081
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Minor
>
> Unexpectedly the second collection create fails (saying that the collection 
> already exists) despite the collection delete having apparently succeeded.
> Collection create/delete/create is probably an uncommon operational sequence 
> but perhaps the test failure indicates that something unexpected is happening 
> elsewhere.
> github pull request and test log extracts to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-7081: create/delete/create collecti...

2015-02-05 Thread cpoerschke
GitHub user cpoerschke opened a pull request:

https://github.com/apache/lucene-solr/pull/127

SOLR-7081: create/delete/create collection (new test case)

https://issues.apache.org/jira/i#browse/SOLR-7081

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr 
trunk-create-delete-create-collection

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/127.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #127


commit 24e87d6b3e180ce644acfd1896e43cdcb512a4be
Author: Christine Poerschke 
Date:   2015-01-21T10:14:38Z

SOLR-: TestMiniSolrCloudCluster.testBasics tidies up after itself, adds 
DoubleTestMiniSolrCloudCluster test case.

TestMiniSolrCloudCluster.testBasics now re-creates the server it removed 
for test purposes, thus restoring the original NUM_SERVERS count. 
TestMiniSolrCloudCluster.testBasics now also deletes the collection it created 
for test purposes (this revision adds a MiniSolrCloudCluster.deleteCollection 
method).

DoubleTestMiniSolrCloudCluster is a new test case. 
DoubleTestMiniSolrCloudCluster.testBasics calls 
TestMiniSolrCloudCluster.testBasics twice in a row.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7081) create/delete/create collection (new test case)

2015-02-05 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-7081:
-

 Summary: create/delete/create collection (new test case)
 Key: SOLR-7081
 URL: https://issues.apache.org/jira/browse/SOLR-7081
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Priority: Minor


Unexpectedly the second collection create fails (saying that the collection 
already exists) despite the collection delete having apparently succeeded.

Collection create/delete/create is probably an uncommon operational sequence 
but perhaps the test failure indicates that something unexpected is happening 
elsewhere.

github pull request and test log extracts to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Static Analysis Tooling

2015-02-05 Thread Mike Drob
Ah, didn't see that one because it's even older than the other issue I
found. Looks like this has been waiting for a long time. I'll take further
discussion to the JIRA.

On Thu, Feb 5, 2015 at 11:52 AM, david.w.smi...@gmail.com <
david.w.smi...@gmail.com> wrote:

> +1 to this idea.  Note this is tracked as
> https://issues.apache.org/jira/browse/LUCENE-3973
>
> ~ David Smiley
> Freelance Apache Lucene/Solr Search Consultant/Developer
> http://www.linkedin.com/in/davidwsmiley
>
> On Thu, Feb 5, 2015 at 12:43 PM, Mike Drob  wrote:
>
>> Devs,
>>
>> I'd like to bring up static analysis for Solr and Lucene again. It's been
>> about a year since the last conversation[1] and it might be time to
>> revisit. There is a JIRA issue too[2], but it's also in need of some love.
>>
>> ASF already provides a Sonar instance that we might be able to use[3],
>> alternatively we can just hook up whatever static analysis tool works well
>> with ant (this is most of them) and rely on Jenkins to provide reports. The
>> Eclipse FindBugs plug-in works pretty well for me personally.
>>
>> I will plan on submitting first some patches to fix issues found as
>> "critical" in my local instance. Then I will work on adding analysis to the
>> build, and figuring out how to fail the build if we exceed a certain
>> threshold. And then we can incrementally lower the threshold while fixing
>> additional issues.
>>
>> Does this sound like a reasonable plan? I want to give folks a heads up
>> before creating a bunch of issues - FindBugs currently reports just over
>> 500 hits on trunk.
>>
>> Mike
>>
>> [1]: http://markmail.org/thread/pxf7lg7kzflnknmm
>> [2]: https://issues.apache.org/jira/browse/LUCENE-5130
>> [3]: https://analysis.apache.org/
>>
>
>


RE: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of sync

2015-02-05 Thread Uwe Schindler
Hi,

 

For now I deleted the Maven Cache. Maybe it’s some dependency problem. I also 
triggered a new build.

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com] 
Sent: Thursday, February 05, 2015 6:54 PM
To: dev@lucene.apache.org
Subject: Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of sync

 

That's strange. This is code that I committed today but all tests and precommit 
passed. I'll dig.

 

On Thu, Feb 5, 2015 at 11:09 PM, Uwe Schindler  wrote:

Very strange error:

  [mvn] [WARNING] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/search/TestAnalyticsQParserPlugin.java:
 Recompile with -Xlint:unchecked for details.
  [mvn] [INFO] 4 warnings
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR :
  [mvn] [INFO] -
  [mvn] [ERROR] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[26,80]
 package 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect does 
not exist
  [mvn] [ERROR] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[420,29]
 cannot find symbol
  [mvn]   symbol:   variable Lists
  [mvn]   location: class org.apache.solr.cloud.BasicDistributedZk2Test
  [mvn] [INFO] 2 errors
  [mvn] [INFO] -

Does anybody have an idea how this comes about? I cannot reproduce it with ANT.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
> Sent: Thursday, February 05, 2015 6:03 PM
> To: dev@lucene.apache.org
> Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of
> sync
>
> Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1341/
>
> No tests ran.
>
> Build Log:
> [...truncated 39352 lines...]
>   [mvn] [INFO] 
> -
>   [mvn] [INFO] 
> -
>   [mvn] [ERROR] COMPILATION ERROR :
>   [mvn] [INFO] 
> -
>
> [...truncated 798 lines...]
> BUILD FAILED
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> trunk/build.xml:542: The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> trunk/build.xml:204: The following error occurred while executing this line:
> : Java returned: 1
>
> Total time: 22 minutes 6 seconds
> Build step 'Invoke Ant' marked build as failure Email was triggered for: 
> Failure
> Sending email for trigger: Failure
>




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org





 

-- 

Regards,
Shalin Shekhar Mangar.



Re: Are docs updated based on comparing the id before analysis?

2015-02-05 Thread Erick Erickson
Shawn:

Thanks for confirming I'm not completely crazy.

I don't think it's A Good Thing to _require_ that all ID normalization be
done on the client, it'd have to be done both at index and query time, too
much chance for things to get out of sync. Although I guess this is
_actually_ what happens with the string type. H.  So I'm -1 on <2>
above as it would require this.
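
For concreteness, pushing it to the client would mean something like the
following, applied identically at index and query time (a plain Java sketch,
nothing Solr-specific assumed):

import java.util.Locale;

public class IdNormalizerSketch {
  // The same canonical form has to be applied when indexing and when querying,
  // otherwise lookups by id silently miss.
  static String normalizeId(String rawId) {
    return rawId.trim().toLowerCase(Locale.ROOT);
  }

  public static void main(String[] args) {
    String indexedId = normalizeId(" DOC-42 ");
    String queriedId = normalizeId("doc-42");
    System.out.println(indexedId.equals(queriedId)); // true
  }
}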

And having uniqueKeys that are text fields _is_ fraught with danger if
you tokenize it, but KeywordTokenizer doesn't. In this particular case, the
following works, but only because this data happens to have all the alpha
characters uppercase at index time:


  

  
  


  


or even

  


  


Personally I feel like this is a JIRA, but I can see arguments the other
way as I'm not entirely sure what you'd do if multiple tokens came out of
the analysis chain. Maybe fail the document at index time?

What _is_ unreasonable IMO is that we allow this surprising behavior, so
regardless of the above I'm +1 on keeping users from being surprised by
this behavior

Thanks!
Erick


On Thu, Feb 5, 2015 at 11:42 AM, Shawn Heisey  wrote:

> On 2/5/2015 6:40 AM, Erick Erickson wrote:
> > And is this intended behavior?
> >
> > Either this is something we need to document better (or I've just
> > missed it) or I'll file a JIRA.
> >
> > I have a  defined as "lowercase", which is just a
> > KeywordTokenizer followed by a LowercaseFilter. This definition does
> > not detect duplicate IDs.
>
> I was using this exact fieldType as my uniqueKey for quite a while.  I
> never had a problem with it, but I read something saying that using a
> TextField type for a uniqueKey was a potential recipe for disaster, even
> if it would reliably produce a single token from the input, which that
> analysis chain does.  I changed it to StrField and reindexed based on that.
>
> For many reasons other than potential problems with Solr, it's a good
> idea to ensure the unique identifier field is completely normalized
> before it makes it into your source repository.
>
> It looks like you are correct about what happens with analysis on the
> uniqueKey field:
>
> https://wiki.apache.org/solr/UniqueKey#Text_field_in_the_document
>
> IMHO a couple of things need to happen:
>
> 1) The documentation needs to be a lot clearer ... this needs mention in
> more places.  A note in various schema.xml examples would be excellent.
> The reference guide may not have this information ... I haven't been
> able to check thoroughly.
> 2) We should consider throwing a fatal error during core startup if the
> uniqueKey is potentially ambiguous.  For instance if it is a TextField,
> it might have analysis that will be ignored, so refusing to start the
> core will bring the administrator's attention to a configuration mistake
> that can lead to unexpected behavior.  Is a Trie type with a nonzero
> precisionStep OK?  Internally that will produce multiple tokens, so I'm
> not sure.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Static Analysis Tooling

2015-02-05 Thread david.w.smi...@gmail.com
+1 to this idea.  Note this is tracked as
https://issues.apache.org/jira/browse/LUCENE-3973

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Thu, Feb 5, 2015 at 12:43 PM, Mike Drob  wrote:

> Devs,
>
> I'd like to bring up static analysis for Solr and Lucene again. It's been
> about a year since the last conversation[1] and it might be time to
> revisit. There is a JIRA issue too[2], but it's also in need of some love.
>
> ASF already provides a Sonar instance that we might be able to use[3],
> alternatively we can just hook up whatever static analysis tool works well
> with ant (this is most of them) and rely on Jenkins to provide reports. The
> Eclipse FindBugs plug-in works pretty well for me personally.
>
> I will plan on submitting first some patches to fix issues found as
> "critical" in my local instance. Then I will work on adding analysis to the
> build, and figuring out how to fail the build if we exceed a certain
> threshold. And then we can incrementally lower the threshold while fixing
> additional issues.
>
> Does this sound like a reasonable plan? I want to give folks a heads up
> before creating a bunch of issues - FindBugs currently reports just over
> 500 hits on trunk.
>
> Mike
>
> [1]: http://markmail.org/thread/pxf7lg7kzflnknmm
> [2]: https://issues.apache.org/jira/browse/LUCENE-5130
> [3]: https://analysis.apache.org/
>


Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of sync

2015-02-05 Thread Shalin Shekhar Mangar
That's strange. This is code that I committed today but all tests and
precommit passed. I'll dig.

On Thu, Feb 5, 2015 at 11:09 PM, Uwe Schindler  wrote:

> Very strange error:
>
>   [mvn] [WARNING]
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/search/TestAnalyticsQParserPlugin.java:
> Recompile with -Xlint:unchecked for details.
>   [mvn] [INFO] 4 warnings
>   [mvn] [INFO]
> -
>   [mvn] [INFO]
> -
>   [mvn] [ERROR] COMPILATION ERROR :
>   [mvn] [INFO]
> -
>   [mvn] [ERROR]
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[26,80]
> package
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect
> does not exist
>   [mvn] [ERROR]
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[420,29]
> cannot find symbol
>   [mvn]   symbol:   variable Lists
>   [mvn]   location: class org.apache.solr.cloud.BasicDistributedZk2Test
>   [mvn] [INFO] 2 errors
>   [mvn] [INFO]
> -
>
> Does anybody have an idea how this comes about? I cannot reproduce it with ANT.
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -Original Message-
> > From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
> > Sent: Thursday, February 05, 2015 6:03 PM
> > To: dev@lucene.apache.org
> > Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of
> > sync
> >
> > Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1341/
> >
> > No tests ran.
> >
> > Build Log:
> > [...truncated 39352 lines...]
> >   [mvn] [INFO]
> -
> >   [mvn] [INFO]
> -
> >   [mvn] [ERROR] COMPILATION ERROR :
> >   [mvn] [INFO]
> -
> >
> > [...truncated 798 lines...]
> > BUILD FAILED
> > /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> > trunk/build.xml:542: The following error occurred while executing this
> line:
> > /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> > trunk/build.xml:204: The following error occurred while executing this
> line:
> > : Java returned: 1
> >
> > Total time: 22 minutes 6 seconds
> > Build step 'Invoke Ant' marked build as failure Email was triggered for:
> Failure
> > Sending email for trigger: Failure
> >
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Regards,
Shalin Shekhar Mangar.


[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2592 - Still Failing

2015-02-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2592/

6 tests failed.
REGRESSION:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
some core start times did not change on reload

Stack Trace:
java.lang.AssertionError: some core start times did not change on reload
at 
__randomizedtesting.SeedInfo.seed([A96A216EF4FF9BC1:213E1EB45A03F639]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:741)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
  

[jira] [Closed] (SOLR-7080) Can't bootstrap custom router.field from core.properties into zookeeper

2015-02-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-7080.
--
Resolution: Won't Fix

> Can't bootstrap custom router.field from core.properties into zookeeper
> ---
>
> Key: SOLR-7080
> URL: https://issues.apache.org/jira/browse/SOLR-7080
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Peter Ciuffetti
>
> When the collections API is used to create a collection with a custom 
> router.field, this configuration detail is stored in zookeeper and is visible 
> with action=CLUSTERSTATUS.   But there is no apparent way to bootstrap this 
> value from (say) core.properties or solrconfig.xml.
> In general this is an issue when trying to migrate cores to new servers or 
> when trying to recover a completely failed zookeeper environment.  But I 
> think it should be possible to establish this configuration detail from some 
> one of the configuration settings in either core.properties or solrconfig.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Static Analysis Tooling

2015-02-05 Thread Mike Drob
Devs,

I'd like to bring up static analysis for Solr and Lucene again. It's been
about a year since the last conversation[1] and it might be time to
revisit. There is a JIRA issue too[2], but it's also in need of some love.

ASF already provides a Sonar instance that we might be able to use[3],
alternatively we can just hook up whatever static analysis tool works well
with ant (this is most of them) and rely on Jenkins to provide reports. The
Eclipse FindBugs plug-in works pretty well for me personally.

I will plan on submitting first some patches to fix issues found as
"critical" in my local instance. Then I will work on adding analysis to the
build, and figuring out how to fail the build if we exceed a certain
threshold. And then we can incrementally lower the threshold while fixing
additional issues.

Does this sound like a reasonable plan? I want to give folks a heads up
before creating a bunch of issues - FindBugs currently reports just over
500 hits on trunk.

Mike

[1]: http://markmail.org/thread/pxf7lg7kzflnknmm
[2]: https://issues.apache.org/jira/browse/LUCENE-5130
[3]: https://analysis.apache.org/


RE: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of sync

2015-02-05 Thread Uwe Schindler
Very strange error:

  [mvn] [WARNING] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/search/TestAnalyticsQParserPlugin.java:
 Recompile with -Xlint:unchecked for details.
  [mvn] [INFO] 4 warnings 
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -
  [mvn] [ERROR] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[26,80]
 package 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect does 
not exist
  [mvn] [ERROR] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZk2Test.java:[420,29]
 cannot find symbol
  [mvn]   symbol:   variable Lists
  [mvn]   location: class org.apache.solr.cloud.BasicDistributedZk2Test
  [mvn] [INFO] 2 errors 
  [mvn] [INFO] -

Does anybody have an idea how this comes about? I cannot reproduce it with ANT.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
> Sent: Thursday, February 05, 2015 6:03 PM
> To: dev@lucene.apache.org
> Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of
> sync
> 
> Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1341/
> 
> No tests ran.
> 
> Build Log:
> [...truncated 39352 lines...]
>   [mvn] [INFO] 
> -
>   [mvn] [INFO] 
> -
>   [mvn] [ERROR] COMPILATION ERROR :
>   [mvn] [INFO] 
> -
> 
> [...truncated 798 lines...]
> BUILD FAILED
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> trunk/build.xml:542: The following error occurred while executing this line:
> /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-
> trunk/build.xml:204: The following error occurred while executing this line:
> : Java returned: 1
> 
> Total time: 22 minutes 6 seconds
> Build step 'Invoke Ant' marked build as failure Email was triggered for: 
> Failure
> Sending email for trigger: Failure
> 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6865) Upgrade HttpClient to 4.4

2015-02-05 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307618#comment-14307618
 ] 

Shawn Heisey commented on SOLR-6865:


The full release is official now; early indications are that all tests will 
pass on 5x.


> Upgrade HttpClient to 4.4
> -
>
> Key: SOLR-6865
> URL: https://issues.apache.org/jira/browse/SOLR-6865
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: Trunk, 5.1
>
>
> HttpClient 4.4 has been released.  5.0 seems like a good time to upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7020) Stop requiring jetty.xml edits to enable bin/solr to start in SSL mode

2015-02-05 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307603#comment-14307603
 ] 

Steve Rowe commented on SOLR-7020:
--

bq. Still, the change log for 5.0 should be identical in both places.

Completely agree, my mistake in not thinking of it.

> Stop requiring jetty.xml edits to enable bin/solr to start in SSL mode
> --
>
> Key: SOLR-7020
> URL: https://issues.apache.org/jira/browse/SOLR-7020
> Project: Solr
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 5.0, 5.1
>
> Attachments: SOLR-7020.patch
>
>
> Right now we tell people to edit {{server/etc/jetty.xml}} to enable SSL: 
> comment out the non-SSL connector(s), uncomment the SSL connector.
> Jetty can be started using alternate configuration files - see 
> https://wiki.eclipse.org/Jetty/Reference/jetty.xml_usage - we should make use 
> of this capability and provide an SSL-enabled alternative to {{jetty.xml}} 
> that {{bin/solr start}} can use when SSL is enabled.  That way no manual 
> edits to {{jetty.xml}} will be required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6865) Upgrade HttpClient to 4.4

2015-02-05 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-6865:
---
Fix Version/s: (was: 5.0)
   5.1

> Upgrade HttpClient to 4.4
> -
>
> Key: SOLR-6865
> URL: https://issues.apache.org/jira/browse/SOLR-6865
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: Trunk, 5.1
>
>
> HttpClient 4.4 has been released.  5.0 seems like a good time to upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2015-02-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6367.
---
Resolution: Duplicate

I think SOLR-6969 has resolved this.

> empty tlog on HDFS when hard crash - no docs to replay on recovery
> --
>
> Key: SOLR-6367
> URL: https://issues.apache.org/jira/browse/SOLR-6367
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
>
> Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
> Jul 2014)...
> {panel}
> Reproduce steps:
> 1) Setup Solr to run on HDFS like this:
> {noformat}
> java -Dsolr.directoryFactory=HdfsDirectoryFactory
>  -Dsolr.lock.type=hdfs
>  -Dsolr.hdfs.home=hdfs://host:port/path
> {noformat}
> For the purpose of this testing, turn off the default auto commit in 
> solrconfig.xml, i.e. comment out autoCommit like this:
> {code}
> <!-- <autoCommit> ... </autoCommit> -->
> {code}
> 2) Add a document without commit:
> {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H
> "Content-type:text/xml; charset=utf-8" --data-binary "@solr.xml"}}
> 3) Solr generates empty tlog files (0 file size; the last one ends with 6):
> {noformat}
> [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
> /path/collection1/core_node1/data/tlog
> Found 5 items
> -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.001
> -rw-r--r--   1 hadoop hadoop 67 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.003
> -rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.004
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.005
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.006
> {noformat}
> 4) Simulate Solr crash by killing the process with -9 option.
> 5) Restart the Solr process. The observation is that uncommitted documents are
> not replayed and files in the tlog directory are cleaned up. Hence the uncommitted
> document(s) are lost.
> Am I missing anything or this is a bug?
> BTW, additional observations:
> a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a
> non-empty tlog file is generated and, after restarting Solr, the uncommitted
> document is replayed as expected.
> b) If Solr doesn't run on HDFS (i.e. on local file system), this issue is
> not observed either.
> {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6648) AnalyzingInfixLookupFactory always highlights suggestions

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307581#comment-14307581
 ] 

Tomás Fernández Löbbe commented on SOLR-6648:
-

Sorry for the delay [~boonious], the patch looks good. I'll commit shortly

> AnalyzingInfixLookupFactory always highlights suggestions
> -
>
> Key: SOLR-6648
> URL: https://issues.apache.org/jira/browse/SOLR-6648
> Project: Solr
>  Issue Type: Sub-task
>Affects Versions: 4.9, 4.9.1, 4.10, 4.10.1
>Reporter: Varun Thacker
>Assignee: Tomás Fernández Löbbe
>  Labels: suggester
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6648-v4.10.3.patch, SOLR-6648.patch
>
>
> When using AnalyzingInfixLookupFactory suggestions always return with the 
> match term as highlighted and 'allTermsRequired' is always set to true.
> We should be able to configure those.
> Steps to reproduce - 
> solrconfig additions -
> {code}
> <searchComponent name="suggest" class="solr.SuggestComponent">
>   <lst name="suggester">
>     <str name="name">mySuggester</str>
>     <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
>     <str name="dictionaryImpl">DocumentDictionaryFactory</str>
>     <str name="field">suggestField</str>
>     <str name="weightField">weight</str>
>     <str name="suggestAnalyzerFieldType">textSuggest</str>
>   </lst>
> </searchComponent>
>
> <requestHandler name="/suggest" class="solr.SearchHandler">
>   <lst name="defaults">
>     <str name="suggest">true</str>
>     <str name="suggest.count">10</str>
>   </lst>
>   <arr name="components">
>     <str>suggest</str>
>   </arr>
> </requestHandler>
> {code}
> schema changes -
> {code}
> <fieldType name="textSuggest" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <!-- tokenizer and filter definitions not preserved here -->
>   </analyzer>
> </fieldType>
> <field name="suggestField" type="textSuggest" indexed="true" stored="true"/>
> {code}
> Add 3 documents - 
> {code}
> curl http://localhost:8983/solr/update/json?commit=true -H 
> 'Content-type:application/json' -d '
> [ {"id" : "1", "suggestField" : "bass fishing"}, {"id" : "2", "suggestField" 
> : "sea bass"}, {"id" : "3", "suggestField" : "sea bass fishing"} ]
> '
> {code}
> Query -
> {code}
> http://localhost:8983/solr/collection1/suggest?suggest.build=true&suggest.dictionary=mySuggester&q=bass&wt=json&indent=on
> {code}
> Response 
> {code}
> {
>   "responseHeader":{
> "status":0,
> "QTime":25},
>   "command":"build",
>   "suggest":{"mySuggester":{
>   "bass":{
> "numFound":3,
> "suggestions":[{
> "term":"bass fishing",
> "weight":0,
> "payload":""},
>   {
> "term":"sea bass",
> "weight":0,
> "payload":""},
>   {
> "term":"sea bass fishing",
> "weight":0,
> "payload":""}]
> {code}
> The problem is in SolrSuggester, line 200, where we call lookup.lookup().
> That call does not take allTermsRequired and doHighlight, since those options 
> are tunable only on AnalyzingInfixSuggester and not on the other lookup 
> implementations.
> If different Lookup implementations take different params in their 
> constructors, these sorts of issues will keep happening. Maybe we 
> should not keep it generic, and instead do instanceof checks and set params 
> accordingly?
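A minimal sketch of that instanceof approach; the method shape and variable names are assumptions here, only the AnalyzingInfixSuggester-specific lookup signature is taken from Lucene:

{code}
import java.io.IOException;
import java.util.List;
import org.apache.lucene.search.suggest.Lookup;
import org.apache.lucene.search.suggest.Lookup.LookupResult;
import org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester;

class SuggesterLookupSketch {
  // Route to the infix-specific lookup when the impl actually supports the flags.
  List<LookupResult> doLookup(Lookup lookup, CharSequence token, int count,
                              boolean allTermsRequired, boolean doHighlight) throws IOException {
    if (lookup instanceof AnalyzingInfixSuggester) {
      // allTermsRequired / doHighlight become configurable instead of hard-coded
      return ((AnalyzingInfixSuggester) lookup)
          .lookup(token, count, allTermsRequired, doHighlight);
    }
    return lookup.lookup(token, false, count);   // other impls keep the generic call
  }
}
{code}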



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4839) Jetty 9

2015-02-05 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307578#comment-14307578
 ] 

Steve Davids commented on SOLR-4839:


I was creating a new MiniSolrCluster test because I needed the ability to 
define multiple cores, and I was never able to get the test to work via 
Eclipse; I traced it down to this issue.

Sent from my iPhone



> Jetty 9
> ---
>
> Key: SOLR-4839
> URL: https://issues.apache.org/jira/browse/SOLR-4839
> Project: Solr
>  Issue Type: Improvement
>Reporter: Bill Bell
>Assignee: Shalin Shekhar Mangar
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-4839-fix-eclipse.patch, 
> SOLR-4839-mod-JettySolrRunner.patch, SOLR-4839.patch, SOLR-4839.patch, 
> SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch, 
> SOLR-4839.patch, SOLR-4839.patch, SOLR-4839.patch
>
>
> Implement Jetty 9



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7080) Can't bootstrap custom router.field from core.properties into zookeeper

2015-02-05 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307577#comment-14307577
 ] 

Noble Paul commented on SOLR-7080:
--

Agree with [~dsmiley]. We are planning to eliminate all ways to create a 
collection through core.properties or solr.xml.

> Can't bootstrap custom router.field from core.properties into zookeeper
> ---
>
> Key: SOLR-7080
> URL: https://issues.apache.org/jira/browse/SOLR-7080
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Peter Ciuffetti
>
> When the collections API is used to create a collection with a custom 
> router.field, this configuration detail is stored in zookeeper and is visible 
> with action=CLUSTERSTATUS.   But there is no apparent way to bootstrap this 
> value from (say) core.properties or solrconfig.xml.
> In general this is an issue when trying to migrate cores to new servers or 
> when trying to recover a completely failed zookeeper environment.  But I 
> think it should be possible to establish this configuration detail from some 
> one of the configuration settings in either core.properties or solrconfig.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7005) facet.heatmap for spatial heatmap faceting on RPT

2015-02-05 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307571#comment-14307571
 ] 

David Smiley commented on SOLR-7005:


bq. i suspect it's safe/efficient to remove all the facet params up front, and 
let the various types of faceting re-add the params they need if/when they need 
refined? ... but i'm not certain about that.

I suspect that as well.  I'll look into finding an existing test that exercises 
distributed refinement, and then I'll see what happens when I throw in extra 
types of faceting. I don't want to commit that change to whatever test that is, but 
I'll use it as a tool to observe.  If we're right, then I'll file an issue as 
an optimization w/ fix; though arguably it could be considered a performance 
bug.

> facet.heatmap for spatial heatmap faceting on RPT
> -
>
> Key: SOLR-7005
> URL: https://issues.apache.org/jira/browse/SOLR-7005
> Project: Solr
>  Issue Type: New Feature
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.1
>
> Attachments: SOLR-7005_heatmap.patch, SOLR-7005_heatmap.patch, 
> SOLR-7005_heatmap.patch, heatmap_512x256.png, heatmap_64x32.png
>
>
> This is a new feature that uses the new spatial Heatmap / 2D PrefixTree cell 
> counter in Lucene spatial LUCENE-6191.  This is a form of faceting, and 
> as-such I think it should live in the "facet" parameter namespace.  Here's 
> what the parameters are:
> * facet=true
> * facet.heatmap=fieldname
> * facet.heatmap.bbox=\["-180 -90" TO "180 90"]
> * facet.heatmap.gridLevel=6
> * facet.heatmap.distErrPct=0.10
> Like other faceting features, the fieldName can have local-params to exclude 
> filter queries or specify an output key.
> The bbox is optional; you get the whole world or you can specify a box or 
> actually any shape that WKT supports (you get the bounding box of whatever 
> you put).
> Ultimately, this feature needs to know the grid level, which together with 
> the input shape will yield a certain number of cells.  You can specify 
> gridLevel exactly, or don't and instead provide distErrPct which is computed 
> like it is for the RPT field type as seen in the schema.  0.10 yielded ~4k 
> cells but it'll vary.  There's also a facet.heatmap.maxCells safety net 
> defaulting to 100k.  Exceed this and you get an error.
> The output is (JSON):
> {noformat}
> {gridLevel=6,columns=64,rows=64,minX=-180.0,maxX=180.0,minY=-90.0,maxY=90.0,counts=[[0,
>  0, 2, 1, ],[1, 1, 3, 2, ...],...]}
> {noformat}
> counts is null if all would be 0.  Perhaps individual row arrays should 
> likewise be null... I welcome feedback.
> I'm toying with an output format option in which you can specify a base-64'ed 
> grayscale PNG.
> Obviously this should support sharded / distributed environments.
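A small SolrJ sketch of issuing such a request with the parameters listed above; the field name (geo_rpt) and the client wiring are assumptions, only the facet.heatmap.* parameter names come from this issue:

{code}
import java.io.IOException;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.response.QueryResponse;

class HeatmapQuerySketch {
  // Assumes 'client' already points at the target collection.
  QueryResponse queryHeatmap(SolrClient client) throws SolrServerException, IOException {
    SolrQuery q = new SolrQuery("*:*");
    q.set("facet", "true");
    q.set("facet.heatmap", "geo_rpt");                           // example RPT field name
    q.set("facet.heatmap.bbox", "[\"-180 -90\" TO \"180 90\"]"); // optional; defaults to the whole world
    q.set("facet.heatmap.distErrPct", "0.10");                   // or set facet.heatmap.gridLevel explicitly
    return client.query(q);
  }
}
{code}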



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6845) Add buildOnStartup option for suggesters

2015-02-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307559#comment-14307559
 ] 

Tomás Fernández Löbbe commented on SOLR-6845:
-

Thanks [~varunthacker], fixed

> Add buildOnStartup option for suggesters
> 
>
> Key: SOLR-6845
> URL: https://issues.apache.org/jira/browse/SOLR-6845
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Tomás Fernández Löbbe
> Fix For: Trunk, 5.1
>
> Attachments: SOLR-6845.patch, SOLR-6845.patch, SOLR-6845.patch, 
> tests-failures.txt
>
>
> SOLR-6679 was filed to track the investigation into the following problem...
> {panel}
> The stock solrconfig provides a bad experience with a large index... start up 
> Solr and it will spin at 100% CPU for minutes, unresponsive, while it 
> apparently builds a suggester index.
> ...
> This is what I did:
> 1) indexed 10M very small docs (only takes a few minutes).
> 2) shut down Solr
> 3) start up Solr and watch it be unresponsive for over 4 minutes!
> I didn't even use any of the fields specified in the suggester config and I 
> never called the suggest request handler.
> {panel}
> ..but ultimately focused on removing/disabling the suggester from the sample 
> configs.
> Opening this new issue to focus on actually trying to identify the root 
> problem & fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4479) TermVectorComponent NPE when running Solr Cloud

2015-02-05 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307558#comment-14307558
 ] 

Shalin Shekhar Mangar commented on SOLR-4479:
-

Thanks Tim!

> TermVectorComponent NPE when running Solr Cloud
> ---
>
> Key: SOLR-4479
> URL: https://issues.apache.org/jira/browse/SOLR-4479
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Vitali Kviatkouski
>Assignee: Timothy Potter
>
> When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> . Skipped
> To reproduce, follow the guide in wiki 
> (http://wiki.apache.org/solr/SolrCloud), add some documents and then request 
> http://localhost:8983/solr/collection1/tvrh?q=*%3A*
> If I include term search vector component in search handler, I get (on second 
> shard):
> SEVERE: null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:321)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7020) Stop requiring jetty.xml edits to enable bin/solr to start in SSL mode

2015-02-05 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307557#comment-14307557
 ] 

Shalin Shekhar Mangar commented on SOLR-7020:
-

bq. Unusual situation where trunk didn't get any changes because of the Jetty 9 
stuff.

Yes, Steve. The Jetty 9 stuff is overdue on branch_5x and I'll get to it soon. 
Still, the change log for 5.0 should be identical in both places. Thanks for 
updating!

> Stop requiring jetty.xml edits to enable bin/solr to start in SSL mode
> --
>
> Key: SOLR-7020
> URL: https://issues.apache.org/jira/browse/SOLR-7020
> Project: Solr
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 5.0, 5.1
>
> Attachments: SOLR-7020.patch
>
>
> Right now we tell people to edit {{server/etc/jetty.xml}} to enable SSL: 
> comment out the non-SSL connector(s), uncomment the SSL connector.
> Jetty can be started using alternate configuration files - see 
> https://wiki.eclipse.org/Jetty/Reference/jetty.xml_usage - we should make use 
> of this capability and provide an SSL-enabled alternative to {{jetty.xml}} 
> that {{bin/solr start}} can use when SSL is enabled.  That way no manual 
> edits to {{jetty.xml}} will be required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1341: POMs out of sync

2015-02-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1341/

No tests ran.

Build Log:
[...truncated 39352 lines...]
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -

[...truncated 798 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:542:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:204:
 The following error occurred while executing this line:
: Java returned: 1

Total time: 22 minutes 6 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (LUCENE-6191) Spatial 2D faceting (heatmaps)

2015-02-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-6191:
-
Attachment: LUCENE-6191__Spatial_heatmap.patch

This is an updated patch.
* Check for null input range; swap in world-bounds.
* Add some static tests that don't use randomization (in addition to the extensive 
randomized one).  They aren't much... but it's something.

And that's about it.  I plan to commit this this weekend to trunk & 5x (which 
means copying PrefixTreeFacetCounter from trunk along with this patch).

One thing I'm 50/50 on is the ordering of the heatmap counts (int[] counts).  
The layout is column 1, column 2, etc.  Alternatively, perhaps the layout 
should be row 1, row 2, row 3, etc.  It's arbitrary of course, and so I'm 
inclined to let it be since it really doesn't matter.  The by-row layout 
would feel more closely aligned with viewing one's screen and match the 
orientation of some image/screen APIs that draw top->down.
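For what it's worth, a tiny sketch of what the column-major choice means for a consumer of the flat counts array (the names here are illustrative, not the patch's API):

{code}
class HeatmapIndexSketch {
  // Column-major layout as described: all of column 0 first, then column 1, ...
  static int cellCount(int[] counts, int rows, int column, int row) {
    return counts[column * rows + row];
  }
  // Under the alternative row-major layout the index would be row * columns + column.
}
{code}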

> Spatial 2D faceting (heatmaps)
> --
>
> Key: LUCENE-6191
> URL: https://issues.apache.org/jira/browse/LUCENE-6191
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.1
>
> Attachments: LUCENE-6191__Spatial_heatmap.patch, 
> LUCENE-6191__Spatial_heatmap.patch
>
>
> Lucene spatial's PrefixTree (grid) based strategies index data in a way 
> highly amenable to faceting on grids cells to compute a so-called _heatmap_. 
> The underlying code in this patch uses the PrefixTreeFacetCounter utility 
> class which was recently refactored out of faceting for NumberRangePrefixTree 
> LUCENE-5735.  At a low level, the terms (== grid cells) are navigated 
> per-segment, forward only with TermsEnum.seek, so it's pretty quick and 
> furthermore requires no extra caches & no docvalues.  Ideally you should use 
> QuadPrefixTree (or Flex once it comes out) to maximize the number of grid levels, 
> which in turn maximizes the fidelity of choices when you ask for a grid 
> covering a region.  Conveniently, the provided capability returns the data in 
> a 2-D grid of counts, so the caller needn't know a thing about how the data 
> is encoded in the prefix tree.  Well almost... at this point they need to 
> provide a grid level, but I'll soon provide a means of deriving the grid 
> level based on a min/max cell count.
> I recommend QuadPrefixTree with geo=false so that you can provide a square 
> world-bounds (360x360 degrees), which means square grid cells which are more 
> desirable to display than rectangular cells.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Are docs updated based on comparing the id before analysis?

2015-02-05 Thread Shawn Heisey
On 2/5/2015 6:40 AM, Erick Erickson wrote:
> And is this intended behavior?
>
> Either this is something we need to document better (or I've just
> missed it) or I'll file a JIRA.
>
> I have a  defined as "lowercase", which is just a
> KeywordTokenizer followed by a LowercaseFilter. This definition does
> not detect duplicate IDs.

I was using this exact fieldType as my uniqueKey for quite a while.  I
never had a problem with it, but I read something saying that using a
TextField type for a uniqueKey was a potential recipe for disaster, even
if it would reliably produce a single token from the input, which that
analysis chain does.  I changed it to StrField and reindexed based on that.

For many reasons other than potential problems with Solr, it's a good
idea to ensure the unique identifier field is completely normalized
before it makes it into your source repository.

It looks like you are correct about what happens with analysis on the
uniqueKey field:

https://wiki.apache.org/solr/UniqueKey#Text_field_in_the_document

IMHO a couple of things need to happen:

1) The documentation needs to be a lot clearer ... this needs mention in
more places.  A note in various schema.xml examples would be excellent. 
The reference guide may not have this information ... I haven't been
able to check thoroughly.
2) We should consider throwing a fatal error during core startup if the
uniqueKey is potentially ambiguous.  For instance if it is a TextField,
it might have analysis that will be ignored, so refusing to start the
core will bring the administrator's attention to a configuration mistake
that can lead to unexpected behavior.  Is a Trie type with a nonzero
precisionStep OK?  Internally that will produce multiple tokens, so I'm
not sure.
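A rough sketch of the startup check proposed in (2); where exactly to hook it in, and whether an instanceof TextField test is the right criterion, are assumptions:

{code}
import org.apache.solr.common.SolrException;
import org.apache.solr.schema.IndexSchema;
import org.apache.solr.schema.SchemaField;
import org.apache.solr.schema.TextField;

class UniqueKeySanityCheck {
  static void check(IndexSchema schema) {
    SchemaField uniqueKey = schema.getUniqueKeyField();
    if (uniqueKey != null && uniqueKey.getType() instanceof TextField) {
      // Refuse to start the core: analysis on the uniqueKey is silently ignored.
      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
          "uniqueKey field '" + uniqueKey.getName()
          + "' uses an analyzed TextField; use StrField (or similar) instead");
    }
  }
}
{code}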

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6832) Queries be served locally rather than being forwarded to another replica

2015-02-05 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307520#comment-14307520
 ] 

Timothy Potter commented on SOLR-6832:
--

Thanks for the updated patch. Only thing we need now is a good unit test. I can 
take a stab at that over the next few days.

> Queries be served locally rather than being forwarded to another replica
> 
>
> Key: SOLR-6832
> URL: https://issues.apache.org/jira/browse/SOLR-6832
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.2
>Reporter: Sachin Goyal
>Assignee: Timothy Potter
> Attachments: SOLR-6832.patch, SOLR-6832.patch
>
>
> Currently, I see that code flow for a query in SolrCloud is as follows:
> For distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> HttpShardHandler.submit()
> For non-distributed query:
> SolrCore -> SearchHandler.handleRequestBody() -> QueryComponent.process()
> \\
> \\
> \\
> For a distributed query, the request is always sent to all the shards even if 
> the originating SolrCore (handling the original distributed query) is a 
> replica of one of the shards.
> If the original Solr-Core can check itself before sending http requests for 
> any shard, we can probably save some network hopping and gain some 
> performance.
> \\
> \\
> We can change SearchHandler.handleRequestBody() or HttpShardHandler.submit() 
> to fix this behavior (most likely the former and not the latter).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 754 - Still Failing

2015-02-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/754/

6 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([7BA0F00E400B55A8:F3F4CFD4EEF73850]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
c

[jira] [Commented] (SOLR-5507) Admin UI - Refactoring using AngularJS

2015-02-05 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307462#comment-14307462
 ] 

Upayavira commented on SOLR-5507:
-

I've managed to get RAT to be happy locally.

To run rat on the webapp alone, enter solr/webapp and run 'ant rat-sources'.

My source files are currently mid-development, so I will submit a new 
RAT-friendly patch when I get to my next milestone.

Note, rather than adding headers to all of the AngularJS files, I'm proposing 
to add this patch to lucene/common-build.xml, under the RAT, MIT license 
section:

  



> Admin UI - Refactoring using AngularJS
> --
>
> Key: SOLR-5507
> URL: https://issues.apache.org/jira/browse/SOLR-5507
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Stefan Matheis (steffkes)
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-5507.patch, SOLR5507.patch, SOLR5507.patch, 
> SOLR5507.patch, SOLR5507.patch, SOLR5507.patch, SOLR5507.patch
>
>
> On the LSR in Dublin, i've talked again to [~upayavira] and this time we 
> talked about Refactoring the existing UI - using AngularJS: providing (more, 
> internal) structure and what not ;>
> He already started working on the Refactoring, so this is more a 'tracking' 
> issue about the progress he/we do there.
> Will extend this issue with a bit more context & additional information, w/ 
> thoughts about the possible integration in the existing UI and more (:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7020) Stop requiring jetty.xml edits to enable bin/solr to start in SSL mode

2015-02-05 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307461#comment-14307461
 ] 

Steve Rowe commented on SOLR-7020:
--

bq. Hi Steve, this isn't mentioned in the CHANGES.txt on trunk in the 5.0 
section.

Thanks Shalin, I've added the entry on trunk.  Unusual situation where trunk 
didn't get any changes because of the Jetty 9 stuff.

> Stop requiring jetty.xml edits to enable bin/solr to start in SSL mode
> --
>
> Key: SOLR-7020
> URL: https://issues.apache.org/jira/browse/SOLR-7020
> Project: Solr
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 5.0, 5.1
>
> Attachments: SOLR-7020.patch
>
>
> Right now we tell people to edit {{server/etc/jetty.xml}} to enable SSL: 
> comment out the non-SSL connector(s), uncomment the SSL connector.
> Jetty can be started using alternate configuration files - see 
> https://wiki.eclipse.org/Jetty/Reference/jetty.xml_usage - we should make use 
> of this capability and provide an SSL-enabled alternative to {{jetty.xml}} 
> that {{bin/solr start}} can use when SSL is enabled.  That way no manual 
> edits to {{jetty.xml}} will be required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7020) Stop requiring jetty.xml edits to enable bin/solr to start in SSL mode

2015-02-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307459#comment-14307459
 ] 

ASF subversion and git services commented on SOLR-7020:
---

Commit 1657615 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1657615 ]

SOLR-7020: add CHANGES entry on trunk

> Stop requiring jetty.xml edits to enable bin/solr to start in SSL mode
> --
>
> Key: SOLR-7020
> URL: https://issues.apache.org/jira/browse/SOLR-7020
> Project: Solr
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 5.0, 5.1
>
> Attachments: SOLR-7020.patch
>
>
> Right now we tell people to edit {{server/etc/jetty.xml}} to enable SSL: 
> comment out the non-SSL connector(s), uncomment the SSL connector.
> Jetty can be started using alternate configuration files - see 
> https://wiki.eclipse.org/Jetty/Reference/jetty.xml_usage - we should make use 
> of this capability and provide an SSL-enabled alternative to {{jetty.xml}} 
> that {{bin/solr start}} can use when SSL is enabled.  That way no manual 
> edits to {{jetty.xml}} will be required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4479) TermVectorComponent NPE when running Solr Cloud

2015-02-05 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307460#comment-14307460
 ] 

Timothy Potter commented on SOLR-4479:
--

I'm bumping into this with some of the Spark integration work I'm doing and 
know that Shalin is super busy with other stuff, so I'll take it up.

> TermVectorComponent NPE when running Solr Cloud
> ---
>
> Key: SOLR-4479
> URL: https://issues.apache.org/jira/browse/SOLR-4479
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Vitali Kviatkouski
>Assignee: Timothy Potter
>
> When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> . Skipped
> To reproduce, follow the guide in wiki 
> (http://wiki.apache.org/solr/SolrCloud), add some documents and then request 
> http://localhost:8983/solr/collection1/tvrh?q=*%3A*
> If I include term search vector component in search handler, I get (on second 
> shard):
> SEVERE: null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:321)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4479) TermVectorComponent NPE when running Solr Cloud

2015-02-05 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-4479:


Assignee: Timothy Potter  (was: Shalin Shekhar Mangar)

> TermVectorComponent NPE when running Solr Cloud
> ---
>
> Key: SOLR-4479
> URL: https://issues.apache.org/jira/browse/SOLR-4479
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Vitali Kviatkouski
>Assignee: Timothy Potter
>
> When running Solr Cloud (just simply 2 shards - as described in wiki), got NPE
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:437)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:317)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:448)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:269)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> . Skipped
> To reproduce, follow the guide in wiki 
> (http://wiki.apache.org/solr/SolrCloud), add some documents and then request 
> http://localhost:8983/solr/collection1/tvrh?q=*%3A*
> If I include term search vector component in search handler, I get (on second 
> shard):
> SEVERE: null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:321)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6227) ChaosMonkeySafeLeaderTest failures on jenkins

2015-02-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307441#comment-14307441
 ] 

Mark Miller commented on SOLR-6227:
---

Cool, thanks Shalin.

> ChaosMonkeySafeLeaderTest failures on jenkins
> -
>
> Key: SOLR-6227
> URL: https://issues.apache.org/jira/browse/SOLR-6227
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, Tests
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, Trunk
>
>
> This is happening very frequently.
> {code}
> 1 tests failed.
> REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
> Error Message:
> shard1 is not consistent.  Got 143 from 
> https://127.0.0.1:36610/xvv/collection1lastClient and got 142 from 
> https://127.0.0.1:33168/xvv/collection1
> Stack Trace:
> java.lang.AssertionError: shard1 is not consistent.  Got 143 from 
> https://127.0.0.1:36610/xvv/collection1lastClient and got 142 from 
> https://127.0.0.1:33168/xvv/collection1
> at 
> __randomizedtesting.SeedInfo.seed([3C1FB6EAFE71:BDF938F2AA829E4D]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1139)
> at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
> at 
> org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:150)
> at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6066) Collector that manages diversity in search results

2015-02-05 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-6066:
-
Attachment: (was: LUCENE-PQRemoveV7.patch)

> Collector that manages diversity in search results
> --
>
> Key: LUCENE-6066
> URL: https://issues.apache.org/jira/browse/LUCENE-6066
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Minor
> Fix For: 5.0
>
> Attachments: LUCENE-PQRemoveV8.patch
>
>
> This issue provides a new collector for situations where a client doesn't 
> want more than N matches for any given key (e.g. no more than 5 products from 
> any one retailer in a marketplace). In these circumstances a document that 
> was previously thought of as competitive during collection has to be removed 
> from the final PQ and replaced with another doc (eg a retailer who already 
> has 5 matches in the PQ receives a 6th match which is better than his 
> previous ones). This requires a new remove method on the existing 
> PriorityQueue class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6066) Collector that manages diversity in search results

2015-02-05 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-6066:
-
Attachment: (was: LUCENE-PQRemoveV6.patch)

> Collector that manages diversity in search results
> --
>
> Key: LUCENE-6066
> URL: https://issues.apache.org/jira/browse/LUCENE-6066
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Minor
> Fix For: 5.0
>
> Attachments: LUCENE-PQRemoveV8.patch
>
>
> This issue provides a new collector for situations where a client doesn't 
> want more than N matches for any given key (e.g. no more than 5 products from 
> any one retailer in a marketplace). In these circumstances a document that 
> was previously thought of as competitive during collection has to be removed 
> from the final PQ and replaced with another doc (eg a retailer who already 
> has 5 matches in the PQ receives a 6th match which is better than his 
> previous ones). This requires a new remove method on the existing 
> PriorityQueue class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6066) Collector that manages diversity in search results

2015-02-05 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-6066:
-
Attachment: LUCENE-PQRemoveV8.patch

Tabs removed. Ant precommit now passes. Still no Bee Gees (sorry, Mike).
Will commit to trunk and 5.1 in a day or 2 if no objections. 

> Collector that manages diversity in search results
> --
>
> Key: LUCENE-6066
> URL: https://issues.apache.org/jira/browse/LUCENE-6066
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Minor
> Fix For: 5.0
>
> Attachments: LUCENE-PQRemoveV8.patch
>
>
> This issue provides a new collector for situations where a client doesn't 
> want more than N matches for any given key (e.g. no more than 5 products from 
> any one retailer in a marketplace). In these circumstances a document that 
> was previously thought of as competitive during collection has to be removed 
> from the final PQ and replaced with another doc (eg a retailer who already 
> has 5 matches in the PQ receives a 6th match which is better than his 
> previous ones). This requires a new remove method on the existing 
> PriorityQueue class.
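To make the mechanism concrete, a self-contained toy sketch (not the attached patch): keep the best topN hits overall while allowing at most maxPerKey hits per key. java.util.PriorityQueue is used purely for illustration, since it already has remove(Object); the point of this issue is adding an equivalent remove to Lucene's own PriorityQueue.

{code}
import java.util.*;

public class DiversitySketch {
  static class Hit {
    final String key; final float score;
    Hit(String key, float score) { this.key = key; this.score = score; }
  }

  public static void main(String[] args) {
    int topN = 3, maxPerKey = 1;                       // e.g. one product per retailer
    PriorityQueue<Hit> pq =
        new PriorityQueue<>(Comparator.comparingDouble((Hit h) -> h.score));
    Map<String, List<Hit>> byKey = new HashMap<>();

    List<Hit> hits = Arrays.asList(new Hit("acme", 1.0f), new Hit("acme", 2.0f),
        new Hit("globex", 0.5f), new Hit("initech", 0.4f));
    for (Hit hit : hits) {
      List<Hit> queued = byKey.computeIfAbsent(hit.key, k -> new ArrayList<>());
      if (queued.size() == maxPerKey) {
        // Key is at its cap: only accept the hit if it beats the key's weakest queued hit.
        Hit worst = Collections.min(queued, Comparator.comparingDouble((Hit h) -> h.score));
        if (hit.score <= worst.score) continue;
        pq.remove(worst);                              // the remove() this issue asks for
        queued.remove(worst);
      }
      pq.add(hit);
      queued.add(hit);
      if (pq.size() > topN) {                          // overall overflow: drop the weakest hit
        Hit evicted = pq.poll();
        byKey.get(evicted.key).remove(evicted);
      }
    }
    for (Hit h : pq) {
      System.out.println(h.key + " " + h.score);       // only one "acme" hit survives
    }
  }
}
{code}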



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Interesting resource for Unix shell script cleanup

2015-02-05 Thread david.w.smi...@gmail.com
Cool!

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Thu, Feb 5, 2015 at 10:25 AM, Steve Rowe  wrote:

> > On Feb 5, 2015, at 9:51 AM, Alexandre Rafalovitch 
> wrote:
> >
> > Hi,
> >
> > Just saw a link to http://www.shellcheck.net/ .
> >
> > I ran the Solr start script through it and it picked up a couple of interesting
> > issues around variable escaping and deprecated shell commands.
> >
> > Is that something that's worth making JIRA about?
> >
>
> +1
>
> Steve
> http://www.lucidworks.com
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-7080) Can't bootstrap custom router.field from core.properties into zookeeper

2015-02-05 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307411#comment-14307411
 ] 

David Smiley commented on SOLR-7080:


-1; sorry.
Collections are supposed to be created with the collections API (REST).  
core.properties has info specific to the core/shard/replica but the router 
field is collection-wide.
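For reference, a small SolrJ sketch of setting router.field the supported way, at collection-creation time through the Collections API; the collection, config set and field names are just examples, and going through raw params rather than CollectionAdminRequest is only to keep the sketch short:

{code}
import java.io.IOException;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

class CreateWithRouterField {
  static void create(SolrClient client) throws SolrServerException, IOException {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "CREATE");
    params.set("name", "mycollection");            // example collection name
    params.set("collection.configName", "myconf"); // example config set in ZooKeeper
    params.set("numShards", 2);
    params.set("router.field", "myRouteField");    // stored in cluster state, visible via CLUSTERSTATUS
    QueryRequest request = new QueryRequest(params);
    request.setPath("/admin/collections");
    client.request(request);
  }
}
{code}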

> Can't bootstrap custom router.field from core.properties into zookeeper
> ---
>
> Key: SOLR-7080
> URL: https://issues.apache.org/jira/browse/SOLR-7080
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Peter Ciuffetti
>
> When the collections API is used to create a collection with a custom 
> router.field, this configuration detail is stored in zookeeper and is visible 
> with action=CLUSTERSTATUS.   But there is no apparent way to bootstrap this 
> value from (say) core.properties or solrconfig.xml.
> In general this is an issue when trying to migrate cores to new servers or 
> when trying to recover a completely failed zookeeper environment.  But I 
> think it should be possible to establish this configuration detail from some 
> one of the configuration settings in either core.properties or solrconfig.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Interesting resource for Unix shell script cleanup

2015-02-05 Thread Steve Rowe
> On Feb 5, 2015, at 9:51 AM, Alexandre Rafalovitch  wrote:
> 
> Hi,
> 
> Just saw a link to http://www.shellcheck.net/ .
> 
> I ran the Solr start script through it and it picked up a couple of interesting
> issues around variable escaping and deprecated shell commands.
> 
> Is that something that's worth making JIRA about?
> 

+1

Steve
http://www.lucidworks.com
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7080) Can't bootstrap custom router.field from core.properties into zookeeper

2015-02-05 Thread Peter Ciuffetti (JIRA)
Peter Ciuffetti created SOLR-7080:
-

 Summary: Can't bootstrap custom router.field from core.properties 
into zookeeper
 Key: SOLR-7080
 URL: https://issues.apache.org/jira/browse/SOLR-7080
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10
Reporter: Peter Ciuffetti


When the collections API is used to create a collection with a custom 
router.field, this configuration detail is stored in zookeeper and is visible 
with action=CLUSTERSTATUS.   But there is no apparent way to bootstrap this 
value from (say) core.properties or solrconfig.xml.

In general this is an issue when trying to migrate cores to new servers or when 
trying to recover a completely failed zookeeper environment.  But I think it 
should be possible to establish this configuration detail from some one of the 
configuration settings in either core.properties or solrconfig.xml.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Interesting resource for Unix shell script cleanup

2015-02-05 Thread Erik Hatcher
Neat tool.  I just pasted bin/post in there and it pointed out some things to 
update (though nothing glaringly wrong, thankfully).

By all means, please make JIRAs for any findings/patches.  Maybe this is a tool 
we could even incorporate into our tests in some way?

Erik


> On Feb 5, 2015, at 9:51 AM, Alexandre Rafalovitch  wrote:
> 
> Hi,
> 
> Just saw a link to http://www.shellcheck.net/ .
> 
> I ran the Solr start script through it and it picked up a couple of interesting
> issues around variable escaping and deprecated shell commands.
> 
> Is that something that's worth making JIRA about?
> 
> Regards,
>   Alex.
> 
> Sign up for my Solr resources newsletter at http://www.solr-start.com/
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


