[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 768 - Failure

2016-01-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/768/

3 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([830A2CA54617026C:B5E137FE8EB6F94]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at org.apache.solr.client.solrj.impl.CloudSolrClientTest.allTests(CloudSolrClientTest.java:232)
at org.apache.solr.client.solrj.impl.CloudSolrClientTest.test(CloudSolrClientTest.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 902 - Still Failing

2016-01-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/902/

1 test failed.
FAILED:  org.apache.lucene.index.TestDuelingCodecsAtNight.testBigEquals

Error Message:
this IndexWriter is closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:714)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:728)
at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1300)
at org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1283)
at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:141)
at org.apache.lucene.index.TestDuelingCodecs.createRandomIndex(TestDuelingCodecs.java:139)
at org.apache.lucene.index.TestDuelingCodecsAtNight.testBigEquals(TestDuelingCodecsAtNight.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.fst.BytesStore.skipBytes(BytesStore.java:307)
at org.apache.lucene.util.fst.FST.pack(FST.java:1762)
at org.apache.lucene.util.fst.Builder.finish(Builder.java:503)
at 

[jira] [Comment Edited] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-04 Thread Yonik Seeley (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081576#comment-15081576 ]

Yonik Seeley edited comment on SOLR-8453 at 1/4/16 7:00 PM:


bq. That's unfortunate if one can't provide an error response before the 
request has finished.

Hmmm, OK... it doesn't look like that's happening:

{code}
~$ nc 127.0.0.1 8983
POST /solr/techproducts/update HTTP/1.1
Host: localhost:8983
User-Agent: curl/7.43.0
Accept: */*
Content-type:application/json
Content-Length: 10


[
 {"id_not_exist" : "TestDoc1", "title" : "test1"},
{code}
{code}
HTTP/1.1 400 Bad Request
Content-Type: text/plain;charset=utf-8
Transfer-Encoding: chunked

7E
{"responseHeader":{"status":400,"QTime":6312},"error":{"msg":"Document is missing mandatory uniqueKey field: id","code":400}}

0
{code}
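An aside on reading the transcript above (not Solr code; a minimal sketch): with `Transfer-Encoding: chunked`, the `7E` line is a chunk-size header in hexadecimal, the JSON that follows is the chunk payload, and the final `0` is the terminating empty chunk. 0x7E is 126 bytes, which matches the 125-byte JSON error body plus, presumably, a trailing newline.

```java
// Sketch only: decode a chunked-transfer-encoding size line like the "7E" above.
public class ChunkSizeDemo {
    /** Parse an HTTP chunk-size line (hexadecimal, per RFC 7230 section 4.1). */
    static int parseChunkSize(String sizeLine) {
        return Integer.parseInt(sizeLine.trim(), 16);
    }

    public static void main(String[] args) {
        System.out.println(parseChunkSize("7E")); // 126 -> length of the JSON chunk
        System.out.println(parseChunkSize("0"));  // 0 -> terminating chunk, response complete
    }
}
```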


was (Author: ysee...@gmail.com):
bq. That's unfortunate if one can't provide an error response before the 
request has finished.

Hmmm, OK... it doesn't look like that's happening:

~$ nc 127.0.0.1 8983
POST /solr/techproducts/update HTTP/1.1
Host: localhost:8983
User-Agent: curl/7.43.0
Accept: */*
Content-type:application/json
Content-Length: 10


[
 {"id_not_exist" : "TestDoc1", "title" : "test1"},
HTTP/1.1 400 Bad Request
Content-Type: text/plain;charset=utf-8
Transfer-Encoding: chunked

7E
{"responseHeader":{"status":400,"QTime":6312},"error":{"msg":"Document is missing mandatory uniqueKey field: id","code":400}}

0


> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.
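The failure mode in the description can be reproduced outside Solr with a toy sketch (all class and endpoint names here are illustrative, not Solr or Jetty code): a server that answers with an error before the client finishes streaming its body, and a client that peeks at the input side of the connection before writing more, instead of writing blindly until the stream is reset.

```java
import java.io.*;
import java.net.*;

// Illustrative sketch of "check for an early error response while streaming".
public class EarlyErrorDemo {
    /** Returns the first response line the client saw before finishing its request. */
    static String runDemo() throws Exception {
        ServerSocket server = new ServerSocket(0);
        // Toy "server": reads only part of the request, then answers with an error.
        Thread t = new Thread(() -> {
            try (Socket s = server.accept()) {
                s.getInputStream().read(new byte[64]);   // partial read of the request
                s.getOutputStream().write("HTTP/1.1 400 Bad Request\r\n".getBytes());
                s.getOutputStream().flush();
            } catch (IOException ignored) {}
        });
        t.start();

        String early = "";
        try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
            OutputStream out = client.getOutputStream();
            out.write("POST /solr/techproducts/update HTTP/1.1\r\n\r\n[".getBytes());
            out.flush();
            Thread.sleep(300);                            // give the server time to answer
            // Key idea: before sending more of a possibly huge body, look for
            // bytes already waiting on the input side of the connection.
            if (client.getInputStream().available() > 0) {
                early = new BufferedReader(
                    new InputStreamReader(client.getInputStream())).readLine();
            }
        }
        t.join();
        server.close();
        return early;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("early response: " + runDemo());
    }
}
```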



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_66) - Build # 5392 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5392/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseParallelGC

8 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
.response.numFound:1!=0

Stack Trace:
junit.framework.AssertionFailedError: .response.numFound:1!=0
at __randomizedtesting.SeedInfo.seed([927C10977C94CFFD:1A282F4DD268A205]:0)
at junit.framework.Assert.fail(Assert.java:50)
at org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:893)
at org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:912)
at org.apache.solr.BaseDistributedSearchTestCase.queryAndCompare(BaseDistributedSearchTestCase.java:655)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryAndCompareReplicas(AbstractFullDistribZkTestBase.java:1042)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryAndCompareShards(AbstractFullDistribZkTestBase.java:1059)
at org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:165)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 

[jira] [Commented] (SOLR-8475) Some refactoring to SolrIndexSearcher

2016-01-04 Thread Shawn Heisey (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081655#comment-15081655 ]

Shawn Heisey commented on SOLR-8475:


If it's possible to leave deprecated inner classes extending the extracted 
classes, then existing user code should work just fine.  I haven't attempted to 
do this, but I think that should work.
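The shim Shawn describes can be sketched roughly like this (all names below are invented for illustration; this is not the SOLR-8475 patch): the logic moves to a top-level class, and a deprecated nested class keeps the old name compiling for existing user code.

```java
// Top-level class extracted from the searcher (illustrative stand-in).
class QueryCommand {
    int rows = 10;
    QueryCommand setRows(int rows) { this.rows = rows; return this; }
}

// Stand-in for SolrIndexSearcher, keeping the old nested name as a shim.
class SolrIndexSearcherLike {
    /** @deprecated use the top-level {@code QueryCommand} instead. */
    @Deprecated
    public static class LegacyQueryCommand extends QueryCommand {}
}

public class ShimDemo {
    public static void main(String[] args) {
        // Old-style user code keeps compiling and running via the deprecated nested name.
        QueryCommand cmd = new SolrIndexSearcherLike.LegacyQueryCommand().setRows(20);
        System.out.println(cmd.rows); // 20
    }
}
```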

The discussion really becomes moot if we expect to create branch_6x in the near 
future (perhaps after 5.5 is released).  If that's the case, then we should 
concentrate all major efforts on 6.0 and not make big changes like this to 5.x.


> Some refactoring to SolrIndexSearcher
> -
>
> Key: SOLR-8475
> URL: https://issues.apache.org/jira/browse/SOLR-8475
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8475.patch, SOLR-8475.patch, SOLR-8475.patch, 
> SOLR-8475.patch, SOLR-8475.patch
>
>
> While reviewing {{SolrIndexSearcher}}, I started to correct a thing here and 
> there, and eventually it led to these changes:
> * Moving {{QueryCommand}} and {{QueryResult}} to their own classes.
> * Moving FilterImpl into a private static class (was package-private and 
> defined in the same .java file, but separate class).
> * Some code formatting, imports organizing and minor log changes.
> * Removed fieldNames (handled the TODO in the code)
> * Got rid of usage of deprecated classes such as {{LegacyNumericUtils}} and 
> {{Legacy-*-Field}}.
> I wish we'd cut down the size of this file much more (it's 2500 lines now), 
> but I've decided to stop here so that the patch is manageable. I would like 
> to explore further refactorings afterwards, e.g. extracting cache management 
> code to an outer class (but keep {{SolrIndexSearcher}}'s API the same, if 
> possible).
> If you have additional ideas of more cleanups / simplifications, I'd be glad 
> to do them.






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+95) - Build # 15436 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15436/
Java: 32bit/jdk-9-ea+95 -client -XX:+UseG1GC -XX:-CompactStrings

1 test failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory]
at __randomizedtesting.SeedInfo.seed([F1F832FF62AC2D5B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:229)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:747)




Build Log:
[...truncated 10360 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F1F832FF62AC2D5B-001/init-core-data-001
   [junit4]   2> 937663 INFO  (TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[F1F832FF62AC2D5B]) [] o.a.s.SolrTestCaseJ4 ###Starting doTestIndexFetchWithMasterUrl
   [junit4]   2> 937663 INFO  (TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[F1F832FF62AC2D5B]) [] o.a.s.SolrTestCaseJ4 Writing core.properties file to /home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F1F832FF62AC2D5B-001/solr-instance-001/collection1
   [junit4]   2> 937668 INFO  (TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[F1F832FF62AC2D5B]) [] o.e.j.s.Server jetty-9.3.6.v20151106
   [junit4]   2> 937669 INFO  (TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[F1F832FF62AC2D5B]) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@6ad7f6{/solr,null,AVAILABLE}
   [junit4]   2> 937674 INFO  (TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[F1F832FF62AC2D5B]) [] o.e.j.s.ServerConnector Started ServerConnector@ee293d{HTTP/1.1,[http/1.1]}{127.0.0.1:52661}
   [junit4]   2> 937675 INFO  (TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[F1F832FF62AC2D5B]) [] o.e.j.s.Server Started @939162ms
   [junit4]   2> 937675 INFO  (TEST-TestReplicationHandler.doTestIndexFetchWithMasterUrl-seed#[F1F832FF62AC2D5B]) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_F1F832FF62AC2D5B-001/solr-instance-001/collection1/data, hostContext=/solr, 

[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-04 Thread Yonik Seeley (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081576#comment-15081576 ]

Yonik Seeley commented on SOLR-8453:


bq. That's unfortunate if one can't provide an error response before the 
request has finished.

Hmmm, OK... it doesn't look like that's happening:

~$ nc 127.0.0.1 8983
POST /solr/techproducts/update HTTP/1.1
Host: localhost:8983
User-Agent: curl/7.43.0
Accept: */*
Content-type:application/json
Content-Length: 10


[
 {"id_not_exist" : "TestDoc1", "title" : "test1"},
HTTP/1.1 400 Bad Request
Content-Type: text/plain;charset=utf-8
Transfer-Encoding: chunked

7E
{"responseHeader":{"status":400,"QTime":6312},"error":{"msg":"Document is missing mandatory uniqueKey field: id","code":400}}

0


> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Comment Edited] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-04 Thread Yonik Seeley (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081576#comment-15081576 ]

Yonik Seeley edited comment on SOLR-8453 at 1/4/16 7:18 PM:


bq. That's unfortunate if one can't provide an error response before the 
request has finished.

Hmmm, OK... it doesn't look like that's happening:

{code}
~$ nc 127.0.0.1 8983
POST /solr/techproducts/update HTTP/1.1
Host: localhost:8983
User-Agent: curl/7.43.0
Accept: */*
Content-type:application/json
Content-Length: 10


[
 {"id_not_exist" : "TestDoc1", "title" : "test1"},
{code}
{code}
HTTP/1.1 400 Bad Request
Content-Type: text/plain;charset=utf-8
Transfer-Encoding: chunked

7E
{"responseHeader":{"status":400,"QTime":6312},"error":{"msg":"Document is missing mandatory uniqueKey field: id","code":400}}

0
{code}

I guess this suggests that we should be able to handle things better on the 
client side?


was (Author: ysee...@gmail.com):
bq. That's unfortunate if one can't provide an error response before the 
request has finished.

Hmmm, OK... it doesn't look like that's happening:

{code}
~$ nc 127.0.0.1 8983
POST /solr/techproducts/update HTTP/1.1
Host: localhost:8983
User-Agent: curl/7.43.0
Accept: */*
Content-type:application/json
Content-Length: 10


[
 {"id_not_exist" : "TestDoc1", "title" : "test1"},
{code}
{code}
HTTP/1.1 400 Bad Request
Content-Type: text/plain;charset=utf-8
Transfer-Encoding: chunked

7E
{"responseHeader":{"status":400,"QTime":6312},"error":{"msg":"Document is missing mandatory uniqueKey field: id","code":400}}

0
{code}

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Commented] (LUCENE-6957) NRTCachingDirectory is missing createTempOutput

2016-01-04 Thread Michael McCandless (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-6957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081791#comment-15081791 ]

Michael McCandless commented on LUCENE-6957:


Thanks [~jpountz], good catch, I'll fix!

> NRTCachingDirectory is missing createTempOutput
> ---
>
> Key: LUCENE-6957
> URL: https://issues.apache.org/jira/browse/LUCENE-6957
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk
>
> Attachments: LUCENE-6957.patch
>
>
> It's broken now because it simply delegates to the wrapped dir now,
> which can create an output that already exists in the ram dir cache.
> This bug only affects trunk (it's never been released).
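The collision the description mentions can be sketched abstractly (this is not Lucene source; names and types are invented for illustration): if the wrapped directory alone picks the temp-file name, it can propose one that already exists in the RAM cache, so the name choice must also consult the cache.

```java
import java.util.*;

// Illustrative sketch of "delegated temp names must also be absent from the cache".
public class TempNameDemo {
    /** Keep asking the delegate for names until one is not shadowed by the cache. */
    static String createTempName(Set<String> cached, Iterator<String> delegateNames) {
        while (true) {
            String name = delegateNames.next();       // name proposed by the wrapped dir
            if (!cached.contains(name)) {
                return name;                          // safe: not in the RAM cache either
            }
        }
    }

    public static void main(String[] args) {
        Set<String> ramCache = new HashSet<>(Arrays.asList("_0_tmp_0.tmp"));
        Iterator<String> proposals = Arrays.asList("_0_tmp_0.tmp", "_0_tmp_1.tmp").iterator();
        // The first proposal collides with the cache, so the second is chosen.
        System.out.println(createTempName(ramCache, proposals));
    }
}
```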






Re: Breaking Java back-compat in Solr

2016-01-04 Thread Jack Krupansky
I suspect that half the issue here is that 6.0 is viewed as too far away so
that any trunk-only enhancements are then seen as not having any near-term
relevance. If 6.0 were targeted for sometime within the next six months,
would that not take a lot out of the urgency for major/breaking changes in
dot releases?

Anybody object to a Solr 6.0 in June or thereabouts? Would the folks in
Elasticsearch land object to a Lucene 6.0 release in that timeframe (if not
sooner!)?

I'm +1 for saying that dot releases be limited to "no surprises", easy
upgrades, with no app/custom code changes for the external and general
internal APIs, but under the condition that a major release is never more
than a year away. In any case, make a commitment to users that they can
always safely and painlessly upgrade from x.y to x.z without code changes.

Sure, minor and even major enhancements can occur in dot releases - to the
extent that they "drop in" without introducing compatibility issues, with
compatibility defined as back-compat with the Lucene index, the HTTP API,
the Solr plugin API and any general core interfaces that reasonable plugins
might use.

And if this policy puts greater pressure on getting an earlier 6.0 release,
so be it. +1 for that.

Whether the Lucene guys have the same concerns as the Solr guys is an
interesting question.


-- Jack Krupansky

On Mon, Jan 4, 2016 at 12:30 PM, Yonik Seeley  wrote:

> On Mon, Jan 4, 2016 at 12:07 PM, Alexandre Rafalovitch
>  wrote:
> > Solr plugin story is muddy enough as it is. Plugins are hard to find,
> > share. So, in my eyes, breaking them is not a big effect as if we had
> > a big active registry.
>
> I think private plugins / components are more the issue here (a custom
> qparser, search component, update processor).
> The basic question is: should people using these be able to upgrade
> from 5.4 to 5.5 without having to change and recompile their code?
>
> -Yonik
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2992 - Failure!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2992/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
some core start times did not change on reload

Stack Trace:
java.lang.AssertionError: some core start times did not change on reload
at 
__randomizedtesting.SeedInfo.seed([632BEACE139E7CEF:EB7FD514BD621117]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:743)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Commented] (SOLR-8470) Make PKIAuthPlugin's token's TTL configurable

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080892#comment-15080892
 ] 

ASF subversion and git services commented on SOLR-8470:
---

Commit 1722813 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722813 ]

SOLR-8470: Make TTL of PKIAuthenticationPlugin's tokens configurable through a 
system property (pkiauth.ttl)

> Make PKIAuthPlugin's token's TTL configurable
> --
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin hardcodes the TTL to 5000ms. Some 
> users have experienced timeouts, so make this configurable.
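The commit message above ties the TTL to the {{pkiauth.ttl}} system property. A minimal sketch of that property-with-default pattern (the class and method names here are illustrative, not the actual plugin code):

```java
// Sketch: reading a millisecond TTL from the "pkiauth.ttl" system property,
// falling back to the previously hardcoded 5000ms default.
public class PkiTtlSketch {
    static final int DEFAULT_TTL_MS = 5000;

    // Integer.getInteger returns the parsed property value, or the default
    // when the property is unset or not a valid integer.
    static int resolveTtlMs() {
        return Integer.getInteger("pkiauth.ttl", DEFAULT_TTL_MS);
    }

    public static void main(String[] args) {
        System.setProperty("pkiauth.ttl", "15000");
        System.out.println(resolveTtlMs()); // prints 15000
    }
}
```

At startup the same override comes from passing -Dpkiauth.ttl=15000 to the JVM.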






[jira] [Commented] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080930#comment-15080930
 ] 

ASF subversion and git services commented on SOLR-7865:
---

Commit 1722823 from [~anshumg] in branch 'dev/branches/lucene_solr_5_3'
[ https://svn.apache.org/r1722823 ]

SOLR-7865: BlendedInfixSuggester was returning too many results (merge from 
branch_5x for 5.3.2 release)

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java.
> This is mainly because {code}num * numFactor{code} gets called multiple 
> times from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test expects count=1 but all 3 docs come back.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}
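The quoted description says the multiplication happens on every internal call. A toy illustration of that bug pattern (this is not the actual Solr code, just the arithmetic):

```java
// Toy model of the bug: if the factor is re-applied once per internal call
// instead of once total, the effective fetch size explodes and more
// suggestions than suggest.count are returned.
public class BlendedCountSketch {
    static int effectiveNum(int num, int numFactor, int calls) {
        int n = num;
        for (int i = 0; i < calls; i++) {
            n *= numFactor; // repeated application is the bug
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(effectiveNum(1, 10, 1)); // intended size: 10
        System.out.println(effectiveNum(1, 10, 3)); // after 3 calls: 1000
    }
}
```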






[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_66) - Build # 5391 - Failure!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5391/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 55747 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:794: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:674: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:657: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* ./solr/core/src/java/org/apache/solr/handler/admin/CoreAdminOperation.java

Total time: 93 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1065 - Still Failing

2016-01-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1065/

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=18754, name=Thread-13368, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=18754, name=Thread-13368, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:55827/collection1
at __randomizedtesting.SeedInfo.seed([52BE79E325438F54]:0)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:645)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:55827/collection1
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:585)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:643)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:480)
... 5 more




Build Log:
[...truncated 10682 lines...]
   [junit4] Suite: org.apache.solr.cloud.FullSolrCloudDistribCmdsTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J2/temp/solr.cloud.FullSolrCloudDistribCmdsTest_52BE79E325438F54-001/init-core-data-001
   [junit4]   2> 1607281 INFO  
(SUITE-FullSolrCloudDistribCmdsTest-seed#[52BE79E325438F54]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1607287 INFO  
(TEST-FullSolrCloudDistribCmdsTest.test-seed#[52BE79E325438F54]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1607288 INFO  (Thread-13198) [] o.a.s.c.ZkTestServer 
client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1607288 INFO  (Thread-13198) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1607388 INFO  
(TEST-FullSolrCloudDistribCmdsTest.test-seed#[52BE79E325438F54]) [] 
o.a.s.c.ZkTestServer start zk server on port:40033
   [junit4]   2> 1607388 INFO  
(TEST-FullSolrCloudDistribCmdsTest.test-seed#[52BE79E325438F54]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1607388 INFO  

[jira] [Created] (SOLR-8480) Progress info for TupleStream

2016-01-04 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-8480:
--

 Summary: Progress info for TupleStream
 Key: SOLR-8480
 URL: https://issues.apache.org/jira/browse/SOLR-8480
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Cao Manh Dat


I suggest adding progress info for TupleStream. It could be very helpful for 
tracking how much of a stream has been consumed:
{code}
public abstract class TupleStream {
   public abstract long getSize();
   public abstract long getConsumed();
}
{code}
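Given the two proposed accessors, a consumer could derive a progress fraction like this (a sketch only; progress() is a hypothetical helper, not part of the proposal):

```java
// Sketch: deriving a progress fraction from the proposed accessors.
public abstract class ProgressTupleStream {
    public abstract long getSize();
    public abstract long getConsumed();

    // Hypothetical helper: fraction of tuples consumed, guarding against
    // streams whose total size is unknown or zero.
    public double progress() {
        long size = getSize();
        return size <= 0 ? 0.0 : (double) getConsumed() / size;
    }

    public static void main(String[] args) {
        ProgressTupleStream s = new ProgressTupleStream() {
            public long getSize() { return 200; }
            public long getConsumed() { return 50; }
        };
        System.out.println(s.progress()); // prints 0.25
    }
}
```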






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+95) - Build # 15432 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15432/
Java: 32bit/jdk-9-ea+95 -client -XX:+UseConcMarkSweepGC -XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=4107, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)2) Thread[id=4111, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=4108, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=4110, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=4109, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=4107, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=4111, name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at 

[jira] [Created] (SOLR-8481) TestSearchPerf no longer needs to duplicate SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)

2016-01-04 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8481:
-

 Summary: TestSearchPerf no longer needs to duplicate 
SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)
 Key: SOLR-8481
 URL: https://issues.apache.org/jira/browse/SOLR-8481
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


{{TestSearchPerf.doListGen}} no longer needs to duplicate 
{{SolrIndexSearcher.(NO_CHECK_QCACHE|GET_DOCSET|NO_CHECK_FILTERCACHE|GET_SCORES)}}
 since they are now visible to it (at package or public level).
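The cleanup pattern, in sketch form (the flag values below are placeholders, not Solr's actual constants):

```java
// Before: the test kept private copies of the searcher's flag constants,
// which can silently drift from the originals.
class SearcherFlags { // stand-in for SolrIndexSearcher
    static final int NO_CHECK_QCACHE = 1 << 0;
    static final int NO_CHECK_FILTERCACHE = 1 << 1;
}

public class FlagDedupSketch {
    // After: reference the now-visible originals directly, so there is a
    // single source of truth for each flag value.
    static int listGenFlags() {
        return SearcherFlags.NO_CHECK_QCACHE | SearcherFlags.NO_CHECK_FILTERCACHE;
    }

    public static void main(String[] args) {
        System.out.println(listGenFlags()); // prints 3 with these placeholders
    }
}
```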






[jira] [Created] (SOLR-8482) add & use QueryCommand.[gs]etTerminateEarly accessors

2016-01-04 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8482:
-

 Summary: add & use QueryCommand.[gs]etTerminateEarly accessors
 Key: SOLR-8482
 URL: https://issues.apache.org/jira/browse/SOLR-8482
 Project: Solr
  Issue Type: Task
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


* the {{getTerminateEarly}} accessor would be an alternative to callers 
directly using the command flags i.e. {{(getFlags() & TERMINATE_EARLY) == 
TERMINATE_EARLY}}
* similar accessors {{isNeedDocSet}} and {{setNeedDocSet}} already exist with 
respect to the {{GET_DOCSET}} portion of the command flags
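A sketch of what such accessors could look like (the class shape and flag value are illustrative; only the bitmask pattern comes from the issue text):

```java
public class QueryCommandSketch {
    public static final int TERMINATE_EARLY = 0x04; // placeholder flag value
    private int flags;

    public int getFlags() { return flags; }

    // Proposed-style accessors wrapping the bitmask test, mirroring the
    // existing isNeedDocSet/setNeedDocSet pair for GET_DOCSET.
    public QueryCommandSketch setTerminateEarly(boolean on) {
        flags = on ? (flags | TERMINATE_EARLY) : (flags & ~TERMINATE_EARLY);
        return this;
    }

    public boolean getTerminateEarly() {
        return (getFlags() & TERMINATE_EARLY) == TERMINATE_EARLY;
    }

    public static void main(String[] args) {
        QueryCommandSketch cmd = new QueryCommandSketch().setTerminateEarly(true);
        System.out.println(cmd.getTerminateEarly()); // prints true
    }
}
```

Callers then write cmd.getTerminateEarly() instead of repeating the (getFlags() & TERMINATE_EARLY) == TERMINATE_EARLY check.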






[jira] [Updated] (SOLR-8482) add & use QueryCommand.[gs]etTerminateEarly accessors

2016-01-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8482:
--
Attachment: SOLR-8482.patch

> add & use QueryCommand.[gs]etTerminateEarly accessors
> -
>
> Key: SOLR-8482
> URL: https://issues.apache.org/jira/browse/SOLR-8482
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8482.patch
>
>
> * the {{getTerminateEarly}} accessor would be an alternative to callers 
> directly using the command flags i.e. {{(getFlags() & TERMINATE_EARLY) == 
> TERMINATE_EARLY}}
> * similar accessors {{isNeedDocSet}} and {{setNeedDocSet}} already exist with 
> respect to the {{GET_DOCSET}} portion of the command flags






[jira] [Updated] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8470:
-
Fix Version/s: Trunk
   5.5
   5.3.2

> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin hardcodes the TTL to 5000ms. Some 
> users have experienced timeouts, so make this configurable.






[jira] [Updated] (SOLR-8481) TestSearchPerf no longer needs to duplicate SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)

2016-01-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8481:
--
Attachment: SOLR-8481.patch

> TestSearchPerf no longer needs to duplicate 
> SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)
> 
>
> Key: SOLR-8481
> URL: https://issues.apache.org/jira/browse/SOLR-8481
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8481.patch
>
>
> {{TestSearchPerf.doListGen}} no longer needs to duplicate 
> {{SolrIndexSearcher.(NO_CHECK_QCACHE|GET_DOCSET|NO_CHECK_FILTERCACHE|GET_SCORES)}}
>  since they are now visible to it (at package or public level).






[jira] [Commented] (LUCENE-6956) TestBKDTree.testRandomMedium() failure: some hits were wrong

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081010#comment-15081010
 ] 

ASF subversion and git services commented on LUCENE-6956:
-

Commit 1722841 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722841 ]

LUCENE-6956: make sure specific test method fails, instead of relying on 
'unhandled exc in thread' from test framework

> TestBKDTree.testRandomMedium() failure: some hits were wrong
> 
>
> Key: LUCENE-6956
> URL: https://issues.apache.org/jira/browse/LUCENE-6956
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
>
> My Jenkins found a reproducible seed for a failure of 
> {{TestBKDTree.testRandomMedium()}} on branch_5x with Java8:
> {noformat}
>   [junit4] Suite: org.apache.lucene.bkdtree.TestBKDTree
>[junit4]   1> T1: id=29784 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29528
>[junit4]   1>   lat=86.88086835667491 lon=-8.821268286556005
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29801 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29545
>[junit4]   1>   lat=86.88149104826152 lon=-9.34366637840867
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29961 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29705
>[junit4]   1>   lat=86.8706679996103 lon=-9.38328042626381
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30015 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29759
>[junit4]   1>   lat=86.84762765653431 lon=-9.44802425801754
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30017 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29761
>[junit4]   1>   lat=86.8753323610872 lon=-9.091365560889244
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30042 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29786
>[junit4]   1>   lat=86.85837233439088 lon=-9.127480667084455
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30061 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29805
>[junit4]   1>   lat=86.85876209288836 lon=-9.408821929246187
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30077 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29821
>[junit4]   1>   lat=86.84681385755539 lon=-8.837449550628662
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30185 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] 

[jira] [Commented] (LUCENE-6956) TestBKDTree.testRandomMedium() failure: some hits were wrong

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081012#comment-15081012
 ] 

ASF subversion and git services commented on LUCENE-6956:
-

Commit 1722843 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1722843 ]

LUCENE-6956: make sure specific test method fails, instead of relying on 
'unhandled exc in thread' from test framework

> TestBKDTree.testRandomMedium() failure: some hits were wrong
> 
>
> Key: LUCENE-6956
> URL: https://issues.apache.org/jira/browse/LUCENE-6956
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
>
> My Jenkins found a reproducible seed for a failure of 
> {{TestBKDTree.testRandomMedium()}} on branch_5x with Java8:
> {noformat}
>   [junit4] Suite: org.apache.lucene.bkdtree.TestBKDTree
>[junit4]   1> T1: id=29784 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29528
>[junit4]   1>   lat=86.88086835667491 lon=-8.821268286556005
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29801 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29545
>[junit4]   1>   lat=86.88149104826152 lon=-9.34366637840867
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29961 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29705
>[junit4]   1>   lat=86.8706679996103 lon=-9.38328042626381
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30015 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29759
>[junit4]   1>   lat=86.84762765653431 lon=-9.44802425801754
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30017 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29761
>[junit4]   1>   lat=86.8753323610872 lon=-9.091365560889244
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30042 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29786
>[junit4]   1>   lat=86.85837233439088 lon=-9.127480667084455
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30061 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29805
>[junit4]   1>   lat=86.85876209288836 lon=-9.408821929246187
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30077 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29821
>[junit4]   1>   lat=86.84681385755539 lon=-8.837449550628662
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30185 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 

[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2016-01-04 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080997#comment-15080997
 ] 

Arcadius Ahouansou commented on SOLR-8146:
--

Hello [~noble.paul]
Thank you very much for your suggestions.

Regarding: {{preferredNodes=hostPattern:}},

If I understand correctly (and please correct me if I am wrong), in order to use 
the preferredNodes snitch, one would have to add that snitch to the collection. Is 
that right?

The way the current implementation works, there is no change at all on 
the SolrCloud server or collection. 

All the configuration lives on the SolrJ client. This is deliberate, because it is 
the SolrJ client that needs to choose its preferred servers.
Ideally, using a snitch, we would like to let the client make this 
choice without having to add anything to the server or collection.

How can this be achieved? Any hint will be appreciated.

Thank you very much, [~noble.paul]
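For illustration, the client-side reordering the patch describes (move URLs matching a preferred pattern to the front of the list, with no server-side change) can be sketched roughly like this. This is a hypothetical simplification, not code from the patch; `preferMatching` is an invented name:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class PreferredNodeSorter {

    // Stable partition: URLs matching the preferred pattern move to the
    // front; everything else keeps its relative order behind them.
    public static List<String> preferMatching(List<String> urls, String regex) {
        Pattern p = Pattern.compile(regex);
        List<String> preferred = new ArrayList<>();
        List<String> others = new ArrayList<>();
        for (String url : urls) {
            (p.matcher(url).find() ? preferred : others).add(url);
        }
        preferred.addAll(others);
        return preferred;
    }

    public static void main(String[] args) {
        List<String> urls = Arrays.asList(
                "http://rack2-solr1:8983/solr", "http://rack1-solr1:8983/solr");
        // The rack1 node is moved to the front of the candidate list.
        System.out.println(preferMatching(urls, "rack1"));
    }
}
```

A non-matching pattern leaves the list order unchanged, so clients degrade gracefully to the existing behaviour when no preferred node is available.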



> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query then, picking the 
> first item from the list.
> This ticket is to allow more flexibility and control, to some extent, over which 
> URLs are picked for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses a regex pattern and moves to the top of the list of URLs 
> only those matching the given regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections, with:
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - 1 small node (S) in the same cluster, with less RAM/CPU, used only for 
> manual user queries, data export and other production-issue investigation.
> This ticket would allow the applications using SolrJ to be configured to query 
> only the (L) nodes.
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying solr using SolrJ.
> All solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack read will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross DC deployment. So, replace 
> rack1/rack2 by DC1/DC2
> Any comment would be very appreciated.
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 15135 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15135/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZk2Test.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:50698/collection1, http://127.0.0.1:57370/collection1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:50698/collection1, 
http://127.0.0.1:57370/collection1]
at 
__randomizedtesting.SeedInfo.seed([40885DE05505D283:C8DC623AFBF9BF7B]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1378)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:610)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:592)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:571)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.brindDownShardIndexSomeDocsAndRecover(BasicDistributedZk2Test.java:280)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.test(BasicDistributedZk2Test.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080932#comment-15080932
 ] 

ASF subversion and git services commented on SOLR-7865:
---

Commit 1722825 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1722825 ]

SOLR-7865: Adding change log entry for 5.3.2 release

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java:
> This is mainly because {code}num * numFactor{code} gets called multiple times 
> from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test is expecting count=1 but we get all 3 docs out.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}
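The growth described above (the requested count re-multiplied by numFactor on each call instead of once overall) can be illustrated with a small sketch. `effectiveLimit` is a hypothetical name for illustration, not code from BlendedInfixLookupFactory:

```java
public class NumFactorGrowth {

    // If the requested count is multiplied by numFactor once per call
    // rather than once overall, the effective fetch limit grows
    // geometrically, so more suggestions than suggest.count come back.
    public static int effectiveLimit(int num, int numFactor, int calls) {
        for (int i = 0; i < calls; i++) {
            num = num * numFactor;
        }
        return num;
    }

    public static void main(String[] args) {
        // One call: 1 * 10 = 10. Three calls: 1 * 10 * 10 * 10 = 1000.
        System.out.println(effectiveLimit(1, 10, 1));
        System.out.println(effectiveLimit(1, 10, 3));
    }
}
```

This is why a request for count=1 can still return all 3 documents: the internal limit has already grown past the document count.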






[jira] [Commented] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080934#comment-15080934
 ] 

ASF subversion and git services commented on SOLR-7865:
---

Commit 1722826 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722826 ]

SOLR-7865: Adding change log entry for 5.3.2 release(merge from trunk)

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java:
> This is mainly because {code}num * numFactor{code} gets called multiple times 
> from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test is expecting count=1 but we get all 3 docs out.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}






CfP about Geospatial Track at ApacheCon, Vancouver

2016-01-04 Thread Uwe Schindler
Hi Committers, hi Lucene users,

On the next ApacheCon in Vancouver, Canada (May 9 - 13 2016), there will be a 
track about geospatial data. The track is organized by Chris Mattmann together 
with George Percivall of the OGC (Open Geospatial Consortium). As I am also a 
member of OGC, they invited me to ask the Lucene Community to propose talks. 
Apache Lucene, Solr, and Elasticsearch have great geospatial features, so this 
would be a good opportunity to present them. This is especially important because the 
current OGC standards are very RDBMS-focused (like filter definitions, 
services,...), so we can use the track to talk with OGC representatives to 
better match OGC standards with full text search.

I am not sure if I can manage to get to Vancouver, but others are kindly 
invited to submit talks. It is not yet certain whether the track will be part of 
ApacheCon Core or ApacheCon BigData. I will keep you informed. If you have talk 
suggestions, please send them to me or Chris Mattmann. Alternatively, submit 
them to the Big Data track @ 
http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp
 (and mention geospatial track).

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-6956) TestBKDTree.testRandomMedium() failure: some hits were wrong

2016-01-04 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-6956:
--

Assignee: Michael McCandless

> TestBKDTree.testRandomMedium() failure: some hits were wrong
> 
>
> Key: LUCENE-6956
> URL: https://issues.apache.org/jira/browse/LUCENE-6956
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
>
> My Jenkins found a reproducible seed for a failure of 
> {{TestBKDTree.testRandomMedium()}} on branch_5x with Java8:
> {noformat}
>   [junit4] Suite: org.apache.lucene.bkdtree.TestBKDTree
>[junit4]   1> T1: id=29784 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29528
>[junit4]   1>   lat=86.88086835667491 lon=-8.821268286556005
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29801 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29545
>[junit4]   1>   lat=86.88149104826152 lon=-9.34366637840867
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29961 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29705
>[junit4]   1>   lat=86.8706679996103 lon=-9.38328042626381
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30015 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29759
>[junit4]   1>   lat=86.84762765653431 lon=-9.44802425801754
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30017 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29761
>[junit4]   1>   lat=86.8753323610872 lon=-9.091365560889244
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30042 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29786
>[junit4]   1>   lat=86.85837233439088 lon=-9.127480667084455
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30061 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29805
>[junit4]   1>   lat=86.85876209288836 lon=-9.408821929246187
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30077 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29821
>[junit4]   1>   lat=86.84681385755539 lon=-8.837449550628662
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30185 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29929
>[junit4]   1>   lat=86.84285902418196 lon=-9.196635894477367
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: 

[jira] [Commented] (SOLR-8470) Make PKIAuthPlugin's token's TTL configurable

2016-01-04 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080789#comment-15080789
 ] 

Anshum Gupta commented on SOLR-8470:


LGTM!

> Make PKIAuthPlugin's token's TTL configurable
> --
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin hardcodes the TTL to 5000ms. Some 
> users have experienced timeouts. Make this configurable.
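A minimal sketch of making such a hardcoded value configurable via a system property, keeping the old value as the default. The property name `pkiauth.ttl` is an assumption for illustration, not necessarily what the attached patch uses:

```java
public class ConfigurableTtl {

    private static final long DEFAULT_TTL_MS = 5000L;

    // Read the TTL from a system property (name is hypothetical),
    // falling back to the previously hardcoded 5000ms default.
    public static long ttlMs() {
        return Long.getLong("pkiauth.ttl", DEFAULT_TTL_MS);
    }

    public static void main(String[] args) {
        // Prints 5000 unless -Dpkiauth.ttl=... is set on the command line.
        System.out.println(ttlMs());
    }
}
```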






[jira] [Updated] (SOLR-8458) Add Streaming Expressions tests for parameter substitution

2016-01-04 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8458:
---
Attachment: SOLR-8458.patch

[~dpgove] Sorry for having misunderstood some aspects of streaming. The patch is quite 
compact and clean now.

> Add Streaming Expressions tests for parameter substitution
> --
>
> Key: SOLR-8458
> URL: https://issues.apache.org/jira/browse/SOLR-8458
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8458.patch, SOLR-8458.patch, SOLR-8458.patch, 
> SOLR-8458.patch
>
>
> This ticket is to add Streaming Expression tests that exercise the existing 
> macro expansion feature described here:  
> http://yonik.com/solr-query-parameter-substitution/
> Sample syntax below:
> {code}
> http://localhost:8983/col/stream?expr=merge(${left}, ${right}, 
> ...)=search(...)=search(...)
> {code}
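The {{$\{name\}}} substitution being exercised can be sketched as a simple string expansion. `expand` is a hypothetical helper for illustration only; Solr's actual macro expansion happens in the request-parameter layer:

```java
import java.util.Map;

public class MacroExpansion {

    // Replace every ${key} occurrence in the expression with the
    // corresponding parameter value, mirroring the parameter-substitution
    // idea the tests exercise.
    public static String expand(String expr, Map<String, String> params) {
        String out = expr;
        for (Map.Entry<String, String> e : params.entrySet()) {
            out = out.replace("${" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> params = Map.of(
                "left", "search(q=a)", "right", "search(q=b)");
        // Prints: merge(search(q=a), search(q=b))
        System.out.println(expand("merge(${left}, ${right})", params));
    }
}
```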






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15431 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15431/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC

43 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.TestTolerantSearch

Error Message:
IOException occured when talking to server at: http://127.0.0.1:41610/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:41610/solr
at __randomizedtesting.SeedInfo.seed([CDF3692F1ECCA878]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.TestTolerantSearch.createThings(TestTolerantSearch.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5521 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5521/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.search.AnalyticsMergeStrategyTest.test

Error Message:
Error from server at http://127.0.0.1:59640/u/collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:59640/u/collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.
at 
__randomizedtesting.SeedInfo.seed([A0DB93FEAEA695EC:288FAC24005AF814]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:562)
at 
org.apache.solr.search.AnalyticsMergeStrategyTest.test(AnalyticsMergeStrategyTest.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Updated] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8470:
-
Summary: Make TTL of PKIAuthenticationPlugin's tokens configurable through 
a system property  (was: Make PKIAuthPlugin''s token's TTL configurable)

> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8470) Make PKIAuthPlugin''s token's TTL configurable

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080889#comment-15080889
 ] 

ASF subversion and git services commented on SOLR-8470:
---

Commit 1722811 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1722811 ]

SOLR-8470: Make TTL of PKIAuthenticationPlugin's tokens configurable through a 
system property (pkiauth.ttl)
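As a rough sketch of what such a system-property override typically looks like (the property name {{pkiauth.ttl}} and the 5000 ms default come from the issue; the class and method names below are illustrative only, not the actual plugin code):

```java
// Illustrative sketch only: read the token TTL from the pkiauth.ttl
// system property, falling back to the previously hardcoded 5000 ms.
public class PkiTtlSketch {
    static int readTtlMillis() {
        return Integer.parseInt(System.getProperty("pkiauth.ttl", "5000"));
    }

    public static void main(String[] args) {
        // e.g. start the JVM with -Dpkiauth.ttl=15000 to raise the TTL
        System.out.println("token ttl ms: " + readTtlMillis());
    }
}
```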

> Make PKIAuthPlugin''s token's TTL configurable
> --
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable






[jira] [Commented] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080899#comment-15080899
 ] 

ASF subversion and git services commented on SOLR-8470:
---

Commit 1722815 from [~noble.paul] in branch 'dev/branches/lucene_solr_5_3'
[ https://svn.apache.org/r1722815 ]

SOLR-8470: Make TTL of PKIAuthenticationPlugin's tokens configurable through a 
system property (pkiauth.ttl)

> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable






[jira] [Created] (SOLR-8485) SelectStream only works with all lowercase field names and doesn't handle quoted selected fields

2016-01-04 Thread Dennis Gove (JIRA)
Dennis Gove created SOLR-8485:
-

 Summary: SelectStream only works with all lowercase field names 
and doesn't handle quoted selected fields
 Key: SOLR-8485
 URL: https://issues.apache.org/jira/browse/SOLR-8485
 Project: Solr
  Issue Type: Bug
Reporter: Dennis Gove
Priority: Minor


Three issues exist if one creates a SelectStream with an expression.

{code}
select(
  search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
sort="personId_i asc"),
  personId_i as personId,
  rating_f as rating
)
{code}

"personId_i as personId" will be parsed as "personid_i as personid"

1. The incoming tuple will contain a field "personId_i" but the selection will 
be looking for a field "personid_i". This field won't be found in the incoming 
tuple (notice the case difference) and as such no field personId will exist in 
the outgoing tuple.

2. If (1) wasn't an issue, the outgoing tuple would have a field "personid" 
and not the expected "personId" (notice the case difference). This can lead to 
other down-the-road issues.

Also, if one were to quote the selected fields such as in
{code}
select(
  search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
sort="personId_i asc"),
  "personId_i as personId",
  "rating_f as rating"
)
{code}
then the quotes would be included in the field name. Wrapping quotes should be 
handled properly such that they are removed from the parameters before they are 
parsed.
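A minimal sketch of the requested quote handling (a hypothetical helper, not the actual SelectStream parser): strip one pair of wrapping quotes before the parameter is parsed, and keep the original field-name case.

```java
// Hypothetical helper: removes one pair of wrapping double quotes from a
// selection parameter and leaves field-name case untouched. Illustrative only.
public class SelectParamSketch {
    static String stripWrappingQuotes(String param) {
        String p = param.trim();
        if (p.length() >= 2 && p.charAt(0) == '"' && p.charAt(p.length() - 1) == '"') {
            return p.substring(1, p.length() - 1);
        }
        return p;
    }

    public static void main(String[] args) {
        // prints: personId_i as personId
        System.out.println(stripWrappingQuotes("\"personId_i as personId\""));
    }
}
```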






[jira] [Updated] (SOLR-8485) SelectStream only works with all lowercase field names and doesn't handle quoted selected fields

2016-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-8485:
--
Description: 
Three issues exist if one creates a SelectStream with an expression.

{code}
select(
  search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
sort="personId_i asc"),
  personId_i as personId,
  rating_f as rating
)
{code}

"personId_i as personId" will be parsed as "personid_i as personid"

1. The incoming tuple will contain a field "personId_i" but the selection will 
be looking for a field "personid_i". This field won't be found in the incoming 
tuple (notice the case difference) and as such no field personId will exist in 
the outgoing tuple.

2. If (1) wasn't an issue, the outgoing tuple would have a field "personid" 
and not the expected "personId" (notice the case difference). This can lead to 
other down-the-road issues.

3. Also, if one were to quote the selected fields such as in
{code}
select(
  search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
sort="personId_i asc"),
  "personId_i as personId",
  "rating_f as rating"
)
{code}
then the quotes would be included in the field name. Wrapping quotes should be 
handled properly such that they are removed from the parameters before they are 
parsed.

  was:
Three issues exist if one creates a SelectStream with an expression.

{code}
select(
  search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
sort="personId_i asc"),
  personId_i as personId,
  rating_f as rating
)
{code}

"personId_i as personId" will be parsed as "personid_i as personid"

1. The incoming tuple will contain a field "personId_i" but the selection will 
be looking for a field "personid_i". This field won't be found in the incoming 
tuple (notice the case difference) and as such no field personId will exist in 
the outgoing tuple.

2. If (1) wasn't an issue, the outgoing tuple would have in a field "personid" 
and not the expected "personId" (notice the case difference). This can lead to 
other down-the-road issues.

Also, if one were to quote the selected fields such as in
{code}
select(
  search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
sort="personId_i asc"),
  "personId_i as personId",
  "rating_f as rating"
)
{code}
then the quotes would be included in the field name. Wrapping quotes should be 
handled properly such that they are removed from the parameters before they are 
parsed.


> SelectStream only works with all lowercase field names and doesn't handle 
> quoted selected fields
> 
>
> Key: SOLR-8485
> URL: https://issues.apache.org/jira/browse/SOLR-8485
> Project: Solr
>  Issue Type: Bug
>Reporter: Dennis Gove
>Priority: Minor
>  Labels: streaming
>
> Three issues exist if one creates a SelectStream with an expression.
> {code}
> select(
>   search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
> sort="personId_i asc"),
>   personId_i as personId,
>   rating_f as rating
> )
> {code}
> "personId_i as personId" will be parsed as "personid_i as personid"
> 1. The incoming tuple will contain a field "personId_i" but the selection 
> will be looking for a field "personid_i". This field won't be found in the 
> incoming tuple (notice the case difference) and as such no field personId 
> will exist in the outgoing tuple.
> 2. If (1) wasn't an issue, the outgoing tuple would have a field 
> "personid" and not the expected "personId" (notice the case difference). This 
> can lead to other down-the-road issues.
> 3. Also, if one were to quote the selected fields such as in
> {code}
> select(
>   search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
> sort="personId_i asc"),
>   "personId_i as personId",
>   "rating_f as rating"
> )
> {code}
> then the quotes would be included in the field name. Wrapping quotes should 
> be handled properly such that they are removed from the parameters before 
> they are parsed.






[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-04 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081892#comment-15081892
 ] 

Joel Bernstein commented on SOLR-7535:
--

Just thinking about how useful it will be to use the UpdateStream to wrap a 
RollupStream:

{code}

parallel(update(rollup(search(

{code}
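Spelled out a bit further, the nesting might take a shape like the following (hypothetical only: the destination/worker arguments and parameter names here are illustrative, not taken from the patch):

{code}
parallel(workerCollection,
  update(destinationCollection,
    rollup(
      search(collection1, q="*:*", fl="a_s,a_i", sort="a_s asc"),
      over="a_s",
      count(*))))
{code}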

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Commented] (LUCENE-6960) TestUninvertingReader.testFieldInfos() failure

2016-01-04 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081927#comment-15081927
 ] 

Steve Rowe commented on LUCENE-6960:


Another reproducing seed:

{noformat}
  [junit4] Suite: org.apache.lucene.uninverting.TestUninvertingReader
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestUninvertingReader -Dtests.method=testFieldInfos 
-Dtests.seed=5B62AE4F881EC66 -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=zh -Dtests.timezone=Hongkong -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.35s | TestUninvertingReader.testFieldInfos <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<0> but 
was:
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5B62AE4F881EC66:1582152375A367CF]:0)
   [junit4]>at 
org.apache.lucene.uninverting.TestUninvertingReader.testFieldInfos(TestUninvertingReader.java:385)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> NOTE: test params are: codec=SimpleText, 
sim=RandomSimilarityProvider(queryNorm=true,coord=crazy): {}, locale=zh, 
timezone=Hongkong
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_45 (64-bit)/cpus=16,threads=1,free=417980096,total=514850816
   [junit4]   2> NOTE: All tests run in this JVM: [TestUninvertingReader]
   [junit4] Completed [1/1 (1!)] in 0.84s, 1 test, 1 failure <<< FAILURES!
{noformat}

> TestUninvertingReader.testFieldInfos() failure
> --
>
> Key: LUCENE-6960
> URL: https://issues.apache.org/jira/browse/LUCENE-6960
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.5, Trunk
>Reporter: Steve Rowe
>
> My Jenkins found a reproducible seed for 
> {{TestUninvertingReader.testFieldInfos()}} - fails on both branch_5x and 
> trunk:
> {noformat}
>[junit4] Suite: org.apache.lucene.uninverting.TestUninvertingReader
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestUninvertingReader -Dtests.method=testFieldInfos 
> -Dtests.seed=349A6776161E26B5 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr_ME -Dtests.timezone=US/Indiana-Starke -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.31s | TestUninvertingReader.testFieldInfos <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<0> but 
> was:
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([349A6776161E26B5:24AE58B19B3CAD1C]:0)
>[junit4]>at 
> org.apache.lucene.uninverting.TestUninvertingReader.testFieldInfos(TestUninvertingReader.java:385)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=SimpleText, 
> sim=ClassicSimilarity, locale=sr_ME, timezone=US/Indiana-Starke
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_45 (64-bit)/cpus=16,threads=1,free=412590336,total=514850816
>[junit4]   2> NOTE: All tests run in this JVM: [TestUninvertingReader]
>[junit4] Completed [1/1 (1!)] in 0.47s, 1 test, 1 failure <<< FAILURES!
> {noformat}






[GitHub] lucene-solr pull request:

2016-01-04 Thread seh
Github user seh commented on the pull request:


https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022
  
In solr/bin/solr on line 53:
Now that `UNPACK_WAR_CMD` is no longer used in this script, why continue to 
require _jar_ or _unzip_? Can we remove this requirement?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




Re: Breaking Java back-compat in Solr

2016-01-04 Thread Gregory Chanan
Has there been any discussion about annotating back compat expectations
universally, similar to hadoop's use of InterfaceStability?  That of course
only solves the first issue: "gets really tricky and confusing in terms of
what level of back-compat needs to be maintained", because it's defined by
the annotation.  It doesn't solve the policy issue of which annotation to
use for a given class, of course.

On Mon, Jan 4, 2016 at 12:55 PM, Jack Krupansky 
wrote:

> I suspect that half the issue here is that 6.0 is viewed as too far away
> so that any trunk-only enhancements are then seen as not having any
> near-term relevance. If 6.0 were targeted for sometime within the next six
> months, would that not take a lot out of the urgency for major/breaking
> changes in dot releases?
>
> Anybody object to a Solr 6.0 in June or thereabouts? Would the folks in
> Elasticsearch land object to a Lucene 6.0 release in that timeframe (if not
> sooner!)?
>
> I'm +1 for saying that dot releases be limited to "no surprises", easy
> upgrades, with no app/custom code changes for the external and general
> internal APIs, but under the condition that a major release is never more
> than a year away. In any case, make a commitment to users that they can
> always safely and painlessly upgrade from x.y to x.z without code changes.
>
> Sure, minor and even major enhancements can occur in dot releases - to the
> extent that they "drop in" without introducing compatibility issues, with
> compatibility defined as back-compat with the Lucene index, the HTTP API,
> the Solr plugin API and any general core interfaces that reasonable plugins
> might use.
>
> And if this policy puts greater pressure on getting an earlier 6.0
> release, so be it. +1 for that.
>
> Whether the Lucene guys have the same concerns as the Solr guys is an
> interesting question.
>
>
> -- Jack Krupansky
>
> On Mon, Jan 4, 2016 at 12:30 PM, Yonik Seeley  wrote:
>
>> On Mon, Jan 4, 2016 at 12:07 PM, Alexandre Rafalovitch
>>  wrote:
>> > Solr plugin story is muddy enough as it is. Plugins are hard to find,
>> > share. So, in my eyes, breaking them is not a big effect as if we had
>> > a big active registry.
>>
>> I think private plugins / components are more the issue here (a custom
>> qparser, search component, update processor).
>> The basic question is: should people using these be able to upgrade
>> from 5.4 to 5.5 without having to change and recompile their code?
>>
>> -Yonik
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[jira] [Updated] (SOLR-8485) SelectStream only works with all lowercase field names and doesn't handle quoted selected fields

2016-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-8485:
--
Attachment: SOLR-8485.patch

This patch corrects issues (1) and (2). 

> SelectStream only works with all lowercase field names and doesn't handle 
> quoted selected fields
> 
>
> Key: SOLR-8485
> URL: https://issues.apache.org/jira/browse/SOLR-8485
> Project: Solr
>  Issue Type: Bug
>Reporter: Dennis Gove
>Priority: Minor
>  Labels: streaming
> Attachments: SOLR-8485.patch
>
>
> Three issues exist if one creates a SelectStream with an expression.
> {code}
> select(
>   search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
> sort="personId_i asc"),
>   personId_i as personId,
>   rating_f as rating
> )
> {code}
> "personId_i as personId" will be parsed as "personid_i as personid"
> 1. The incoming tuple will contain a field "personId_i" but the selection 
> will be looking for a field "personid_i". This field won't be found in the 
> incoming tuple (notice the case difference) and as such no field personId 
> will exist in the outgoing tuple.
> 2. If (1) wasn't an issue, the outgoing tuple would have a field 
> "personid" and not the expected "personId" (notice the case difference). This 
> can lead to other down-the-road issues.
> 3. Also, if one were to quote the selected fields such as in
> {code}
> select(
>   search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
> sort="personId_i asc"),
>   "personId_i as personId",
>   "rating_f as rating"
> )
> {code}
> then the quotes would be included in the field name. Wrapping quotes should 
> be handled properly such that they are removed from the parameters before 
> they are parsed.






Re: Breaking Java back-compat in Solr

2016-01-04 Thread Shawn Heisey
On 1/4/2016 1:55 PM, Jack Krupansky wrote:
> I suspect that half the issue here is that 6.0 is viewed as too far
> away so that any trunk-only enhancements are then seen as not having
> any near-term relevance. If 6.0 were targeted for sometime within the
> next six months, would that not take a lot out of the urgency for
> major/breaking changes in dot releases?
>
> Anybody object to a Solr 6.0 in June or thereabouts? Would the folks
> in Elasticsearch land object to a Lucene 6.0 release in that timeframe
> (if not sooner!)?
>

I said much the same thing on SOLR-8475 a short time ago.  I'm all for
creating branch_6x in the very near future and looking forward to the
actual release a few months after that.  The CHANGES.txt for 5.5 looks
very extensive for both solr and lucene, so I believe that we should
probably get 5.5 out the door first.

> I'm +1 for saying that dot releases be limited to "no surprises", easy
> upgrades, with no app/custom code changes for the external and general
> internal APIs, but under the condition that a major release is never
> more than a year away. In any case, make a commitment to users that
> they can always safely and painlessly upgrade from x.y to x.z without
> code changes.

That is what we aim for.  I personally am not opposed to making very
minor changes to my custom code when a new minor version comes out, but
if that's avoidable, everybody wins.

Any time I write code that uses the Solr API (separate from the SolrJ
API), I presume ahead of time that my plugin jar may not work when I
upgrade Solr.  This is probably paranoia, but because I recompile my
plugin anytime I upgrade, I know that any problems I encounter are
likely due to my own mistakes.

> Whether the Lucene guys have the same concerns as the Solr guys is an
> interesting question.

A "user" of Lucene is typically a developer, someone who presumably
knows how to fix problems created by API changes.  If their code stops
working when they upgrade a dependency like Lucene, they can adapt. 
Also, because of the very nature of the project, I think that Lucene
devs are very good about indicating which Lucene APIs are
expert/internal and subject to change.  Lucene internals are very
complex, but we have some incredibly smart people here who know them
very well, and they know which APIs are unlikely to be found in typical
user programs.

A typical Solr user is not a developer, and just wants everything to
work, potentially with custom code that they cannot change or
recompile.  I don't think the Solr devs are less intelligent than the
Lucene devs ... but because Solr is primarily an application rather than
an API, I don't think that there is as much effort in the Solr code to
indicate which APIs should be considered expert/internal.

I know from experience that third-party Solr plugins are sometimes
extremely version-specific, so the goal of custom code working with a
new minor version is not always achieved.

Thanks,
Shawn





[jira] [Commented] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread Steven E. Harris (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082043#comment-15082043
 ] 

Steven E. Harris commented on SOLR-8486:


And thank you for the prompt fix. I look forward to Solr 5.5.

> No longer require jar/unzip for bin/solr
> 
>
> Key: SOLR-8486
> URL: https://issues.apache.org/jira/browse/SOLR-8486
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8486.patch
>
>
> Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
> some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
> request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Resolved] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-8486.
---
Resolution: Fixed

Thanks for reporting, [~seh]

> No longer require jar/unzip for bin/solr
> 
>
> Key: SOLR-8486
> URL: https://issues.apache.org/jira/browse/SOLR-8486
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8486.patch
>
>
> Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
> some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
> request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082093#comment-15082093
 ] 

ASF subversion and git services commented on SOLR-7535:
---

Commit 1722990 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1722990 ]

SOLR-7535: Add UpdateStream to Streaming API and Streaming Expression

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Updated] (SOLR-8479) Add JDBCStream for integration with external data sources

2016-01-04 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-8479:
--
Attachment: SOLR-8479.patch

New patch with a few changes.

1. Added some new tests.
2. Made driverClassName an optional property. If provided, we call 
Class.forName(driverClassName) during open(). Also added a call to 
DriverManager.getDriver(connectionUrl) during open() to validate that the 
driver can be found; if not, an exception is thrown. This prevents us 
from continuing if the JDBC driver is not loaded.
3. Changed the default handling types so that Double is handled as a direct 
class while Float is converted to a Double. This keeps it in line with the rest 
of the Streaming API. 
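The open()-time validation described above, sketched standalone (the class and method names here are illustrative, not the actual JDBCStream code; Class.forName and DriverManager.getDriver are the standard JDBC calls):

```java
import java.sql.DriverManager;
import java.sql.SQLException;

// Illustrative sketch of the open()-time driver checks described above.
public class JdbcOpenSketch {
    static void validateDriver(String driverClassName, String connectionUrl)
            throws ClassNotFoundException, SQLException {
        if (driverClassName != null) {
            // Optional: loading the class registers the driver with DriverManager.
            Class.forName(driverClassName);
        }
        // Throws SQLException if no registered driver accepts the URL,
        // so we fail fast instead of continuing with an unusable stream.
        DriverManager.getDriver(connectionUrl);
    }

    public static void main(String[] args) {
        try {
            validateDriver(null, "jdbc:nosuchdb://localhost/test");
            System.out.println("driver found");
        } catch (SQLException e) {
            System.out.println("no suitable driver; aborting open()");
        } catch (ClassNotFoundException e) {
            System.out.println("driver class not on classpath");
        }
    }
}
```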

> Add JDBCStream for integration with external data sources
> -
>
> Key: SOLR-8479
> URL: https://issues.apache.org/jira/browse/SOLR-8479
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8479.patch, SOLR-8479.patch, SOLR-8479.patch
>
>
> Given that the Streaming API can merge and join multiple incoming SolrStreams 
> to perform complex operations on the resulting combined datasets I think it 
> would be beneficial to also support incoming streams from other data sources. 
> The JDBCStream will provide a Streaming API interface to any data source 
> which provides a JDBC driver.






[jira] [Created] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread JIRA
Jan Høydahl created SOLR-8486:
-

 Summary: No longer require jar/unzip for bin/solr
 Key: SOLR-8486
 URL: https://issues.apache.org/jira/browse/SOLR-8486
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Reporter: Jan Høydahl
 Fix For: Trunk


Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Commented] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082010#comment-15082010
 ] 

Jan Høydahl commented on SOLR-8486:
---

No need for a separate PR; I'll attach a patch here.

> No longer require jar/unzip for bin/solr
> 
>
> Key: SOLR-8486
> URL: https://issues.apache.org/jira/browse/SOLR-8486
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Jan Høydahl
> Fix For: Trunk
>
>
> Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
> some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
> request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-04 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15081855#comment-15081855
 ] 

Joel Bernstein commented on SOLR-7535:
--

The UpdateStream worked well during manual testing. The test involved streaming 
5 million documents from a source collection into a separate destination 
collection. I used very small documents for the test which loaded at a rate of 
about 20,000 documents per second. The stream from the source collection was 
moving at a rate of over 1 million documents per second so there was 
significant blocking on the export. This did not cause any problems. I tested 
loading from a single node and in parallel with two nodes. No performance 
increase could be seen in parallel mode, I believe because my laptop was already 
maxed out. In theory, when indexing to a large cluster, we would see performance 
improvements when indexing in parallel.

I believe this ticket is now ready to commit.

I ran into a few "ease of use" issues that made it tricky to get the update 
expression running. I fixed a couple of these issues as part of this ticket and 
I'll open another ticket to address the others.





> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Updated] (LUCENE-6955) The release smoke tester inappropriately requires back compat index testing for versions greater than the one being smoke tested

2016-01-04 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-6955:
---
Fix Version/s: 5.3.2

> The release smoke tester inappropriately requires back compat index testing 
> for versions greater than the one being smoke tested
> 
>
> Key: LUCENE-6955
> URL: https://issues.apache.org/jira/browse/LUCENE-6955
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test
>Affects Versions: 5.3.2
>Reporter: Steve Rowe
>Priority: Blocker
> Fix For: 5.3.2
>
>
> I ran {{ant nightly-smoke}} on my laptop against the lucene_solr_5_3 branch 
> and got the following error:
> {noformat}
>[smoker] Verify...
>[smoker]   confirm all releases have coverage in TestBackwardsCompatibility
>[smoker] find all past Lucene releases...
>[smoker] run TestBackwardsCompatibility..
>[smoker] Releases that don't seem to be tested:
>[smoker]   5.4.0
>[smoker] Traceback (most recent call last):
>[smoker]   File 
> "/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_3/dev-tools/scripts/smokeTestRelease.py",
>  line 1449, in <module>
>[smoker] main()
>[smoker]   File 
> "/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_3/dev-tools/scripts/smokeTestRelease.py",
>  line 1394, in main
>[smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
> c.is_signed, ' '.join(c.test_args))
>[smoker]   File 
> "/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_3/dev-tools/scripts/smokeTestRelease.py",
>  line 1432, in smokeTest
>[smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
> version, svnRevision, version, testArgs, baseURL)
>[smoker]   File 
> "/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_3/dev-tools/scripts/smokeTestRelease.py",
>  line 583, in unpackAndVerify
>[smoker] verifyUnpacked(java, project, artifact, unpackPath, 
> svnRevision, version, testArgs, tmpDir, baseURL)
>[smoker]   File 
> "/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_3/dev-tools/scripts/smokeTestRelease.py",
>  line 762, in verifyUnpacked
>[smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
>[smoker]   File 
> "/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_3/dev-tools/scripts/smokeTestRelease.py",
>  line 1387, in confirmAllReleasesAreTestedForBackCompat
>[smoker] raise RuntimeError('some releases are not tested by 
> TestBackwardsCompatibility?')
>[smoker] RuntimeError: some releases are not tested by 
> TestBackwardsCompatibility?
> {noformat}
> Here's the relevant section of {{smokeTestRelease.py}} - 
> {{getAllLuceneReleases()}} fetches all dotted-version entries in the file 
> listing page returned by the web server at 
> https://archive.apache.org/dist/lucene/java/:
> {code}
> def confirmAllReleasesAreTestedForBackCompat(unpackPath):
>   print('find all past Lucene releases...')
>   allReleases = getAllLuceneReleases()
>   [...]
>   notTested = []
>   for x in allReleases:
>     if x not in testedIndices:
>       if '.'.join(str(y) for y in x) in ('1.4.3', '1.9.1', '2.3.1', '2.3.2'):
>         # Exempt the dark ages indices
>         continue
>       notTested.append(x)
>   if len(notTested) > 0:
>     notTested.sort()
>     print('Releases that don\'t seem to be tested:')
>     failed = True
>     for x in notTested:
>       print('  %s' % '.'.join(str(y) for y in x))
>     raise RuntimeError('some releases are not tested by TestBackwardsCompatibility?')
> {code}
> I think the code above should allow/ignore versions greater than the version 
> being smoke tested.
> AFAIK, version 5.3.2 will be the first release where a greater version has 
> been released in the past since full back compat testing started being 
> checked for by the smoke tester.  (The last time this happened was when 4.9.1 
> was released after 4.10.0.)
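The suggested fix amounts to a version comparison: a past release only needs back-compat coverage when it is not newer than the version under test. A hypothetical sketch (written in Java for illustration; the real check would live in the Python smoke tester):

```java
// Hypothetical sketch of the comparison the smoke tester would need: ignore
// releases newer than the version currently being smoke tested.
public class BackCompatFilter {

    // Lexicographic compare of dotted version tuples, e.g. {5,3,2} vs {5,4,0}.
    public static int compareVersions(int[] a, int[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            if (a[i] != b[i]) {
                return Integer.compare(a[i], b[i]);
            }
        }
        return Integer.compare(a.length, b.length);
    }

    // A release needs back-compat coverage only if it is not newer than the
    // version under test; 5.4.0 would be skipped when smoke testing 5.3.2.
    public static boolean needsBackCompatCoverage(int[] release, int[] underTest) {
        return compareVersions(release, underTest) <= 0;
    }
}
```

With this filter, the 5.4.0 entry that tripped up the 5.3.2 smoke test would simply be skipped instead of failing the run.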






[jira] [Updated] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-04 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7535:
-
Attachment: SOLR-7535.patch

Patch with the latest work. Ready to commit, but I'm having a hard time getting 
the full test suite to run through. I had a stall earlier on 
StreamingExpressionTests, which I had never seen before, so I'm being extra 
careful with this. I'd like to run the tests successfully several more times to 
see if it was a one-time problem.


> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Commented] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread Steven E. Harris (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15081998#comment-15081998
 ] 

Steven E. Harris commented on SOLR-8486:


Thank you. Shall I create a pull request to remove it?

> No longer require jar/unzip for bin/solr
> 
>
> Key: SOLR-8486
> URL: https://issues.apache.org/jira/browse/SOLR-8486
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Jan Høydahl
> Fix For: Trunk
>
>
> Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
> some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
> request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Assigned] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-8486:
-

Assignee: Jan Høydahl

> No longer require jar/unzip for bin/solr
> 
>
> Key: SOLR-8486
> URL: https://issues.apache.org/jira/browse/SOLR-8486
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: Trunk
>
> Attachments: SOLR-8486.patch
>
>
> Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
> some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
> request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Updated] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8486:
--
Attachment: SOLR-8486.patch

Attaching patch for trunk

> No longer require jar/unzip for bin/solr
> 
>
> Key: SOLR-8486
> URL: https://issues.apache.org/jira/browse/SOLR-8486
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Jan Høydahl
> Fix For: Trunk
>
> Attachments: SOLR-8486.patch
>
>
> Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
> some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
> request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Updated] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8486:
--
Fix Version/s: 5.5

> No longer require jar/unzip for bin/solr
> 
>
> Key: SOLR-8486
> URL: https://issues.apache.org/jira/browse/SOLR-8486
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8486.patch
>
>
> Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
> some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
> request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Commented] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082040#comment-15082040
 ] 

ASF subversion and git services commented on SOLR-8486:
---

Commit 1722989 from jan...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722989 ]

SOLR-8486: No longer require jar/unzip for bin/solr (backport)

> No longer require jar/unzip for bin/solr
> 
>
> Key: SOLR-8486
> URL: https://issues.apache.org/jira/browse/SOLR-8486
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8486.patch
>
>
> Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
> some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
> request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Commented] (SOLR-8479) Add JDBCStream for integration with external data sources

2016-01-04 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15081873#comment-15081873
 ] 

Dennis Gove commented on SOLR-8479:
---

I intend to add a few more tests for failure scenarios and for setting 
connection properties. Barring any issues found with that, I think this will be 
ready to go.

> Add JDBCStream for integration with external data sources
> -
>
> Key: SOLR-8479
> URL: https://issues.apache.org/jira/browse/SOLR-8479
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8479.patch, SOLR-8479.patch, SOLR-8479.patch, 
> SOLR-8479.patch
>
>
> Given that the Streaming API can merge and join multiple incoming SolrStreams 
> to perform complex operations on the resulting combined datasets I think it 
> would be beneficial to also support incoming streams from other data sources. 
> The JDBCStream will provide a Streaming API interface to any data source 
> which provides a JDBC driver.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2938 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2938/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [NRTCachingDirectory]
at __randomizedtesting.SeedInfo.seed([640B8C405AD7BB63]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10711 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_640B8C405AD7BB63-001/init-core-data-001
   [junit4]   2> 1958616 INFO  
(TEST-TestReplicationHandler.testEmptyCommits-seed#[640B8C405AD7BB63]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testEmptyCommits
   [junit4]   2> 1958617 INFO  
(TEST-TestReplicationHandler.testEmptyCommits-seed#[640B8C405AD7BB63]) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_640B8C405AD7BB63-001/solr-instance-001/collection1
   [junit4]   2> 1958626 INFO  
(TEST-TestReplicationHandler.testEmptyCommits-seed#[640B8C405AD7BB63]) [] 
o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2> 1958629 INFO  
(TEST-TestReplicationHandler.testEmptyCommits-seed#[640B8C405AD7BB63]) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@2bd23e00{/solr,null,AVAILABLE}
   [junit4]   2> 1958630 INFO  
(TEST-TestReplicationHandler.testEmptyCommits-seed#[640B8C405AD7BB63]) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@77e48879{HTTP/1.1}{127.0.0.1:54375}
   [junit4]   2> 1958630 INFO  
(TEST-TestReplicationHandler.testEmptyCommits-seed#[640B8C405AD7BB63]) [] 
o.e.j.s.Server Started @1962732ms
   [junit4]   2> 1958630 INFO  
(TEST-TestReplicationHandler.testEmptyCommits-seed#[640B8C405AD7BB63]) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_640B8C405AD7BB63-001/solr-instance-001/collection1/data,
 hostPort=54375, hostContext=/solr}
   [junit4]   2> 1958630 INFO  

[jira] [Commented] (SOLR-8486) No longer require jar/unzip for bin/solr

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082023#comment-15082023
 ] 

ASF subversion and git services commented on SOLR-8486:
---

Commit 1722988 from jan...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1722988 ]

SOLR-8486: No longer require jar/unzip for bin/solr

> No longer require jar/unzip for bin/solr
> 
>
> Key: SOLR-8486
> URL: https://issues.apache.org/jira/browse/SOLR-8486
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: Trunk
>
> Attachments: SOLR-8486.patch
>
>
> Now that we do not ship with a {{.war}} anymore, the {{bin/solr}} script has 
> some dead code related to {{UNPACK_WAR_CMD}}, see comment in [this pull 
> request|https://github.com/apache/lucene-solr/commit/c4bbf9cc5e8b1869d40cd7f619e40c8a4864d531#commitcomment-15253022].






[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-04 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082154#comment-15082154
 ] 

Jason Gerlowski commented on SOLR-7535:
---

Happy to help.  Joel did the real work in getting this where it needed to be.

Is it worth creating JIRAs for any of the things that got pushed out of this 
issue ("CommitStream" and "tying this into SqlHandler" were the main takeaways, 
I think)?

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Commented] (SOLR-8480) Progress info for TupleStream

2016-01-04 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082151#comment-15082151
 ] 

Cao Manh Dat commented on SOLR-8480:


I'm new to this too. But as the Streaming API is getting more and more complicated, 
users may have very long-running streaming jobs (e.g. parallel updates from many 
sources ...). So it will be necessary to have this info.

{quote}
All things are possible I suppose, but right now there's nothing that knows the 
size of the result-set.
{quote}
I use this snippet to get the size of a SolrStream (in JsonTupleStream#advanceToDocs()):
{code}
expect(JSONParser.OBJECT_START);
if (advanceToMapKey("numFound", true)){
  numFound = parser.getLong();
}
{code}

{quote}
For example, consider: unique(search(...)). How would a UniqueStream define its 
size?
{quote}
You are absolutely right. We can change the method to {{getEstimatedSize()}}; 
that would be good enough.
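One way the proposed methods could behave on a concrete stream — a counter bumped on every read() against a size captured up front — is sketched below (a hypothetical minimal model; the real TupleStream API with open()/close() and Tuple is elided):

```java
// Hypothetical minimal model of the proposed progress methods; the real
// TupleStream reads Tuples and has open()/close(), elided here for brevity.
public abstract class ProgressStream {
    protected long size = -1;   // -1 until known (e.g. numFound from the response)
    protected long consumed = 0;

    public long getSize() { return size; }
    public long getConsumed() { return consumed; }

    // Subclasses produce the next item (null at end of stream);
    // the base class counts every item handed out.
    protected abstract Object readInternal();

    public Object read() {
        Object next = readInternal();
        if (next != null) {
            consumed++;
        }
        return next;
    }
}
```

A monitoring thread could then poll getConsumed()/getSize() (or getEstimatedSize()) to report progress on a long-running job.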

> Progress info for TupleStream
> -
>
> Key: SOLR-8480
> URL: https://issues.apache.org/jira/browse/SOLR-8480
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Cao Manh Dat
>
> I suggest adding progress info for TupleStream. It can be very helpful for 
> tracking consumption progress
> {code}
> public abstract class TupleStream {
>public abstract long getSize();
>public abstract long getConsumed();
> }
> {code}






[jira] [Created] (LUCENE-6961) Improve Exception handling in AnalysisFactory/SPI loader

2016-01-04 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-6961:
-

 Summary: Improve Exception handling in AnalysisFactory/SPI loader
 Key: LUCENE-6961
 URL: https://issues.apache.org/jira/browse/LUCENE-6961
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 5.4
 Environment: Currently the AnalysisSPILoader used by 
AbstractAnalysisFactory uses a {{catch Exception}} block when invoking the 
constructor. If the constructor throws stuff like IllegalArgumentExceptions or 
similar, this is hidden inside InvocationTargetException, which gets wrapped in 
IllegalArgumentException. This is not useful.

This patch will:
- Only catch ReflectiveOperationException
- If it is an InvocationTargetException, rethrow the cause if it is unchecked; 
otherwise wrap it in a RuntimeException
- If the constructor cannot be called at all (reflective access denied, method 
not found, ...), a UOE is thrown with an explaining message.

This patch will be required by next version of LUCENE-6958.
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 5.5, Trunk
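The pattern described in the bullets might look roughly like this (an illustrative sketch with made-up names, not the actual AnalysisSPILoader code):

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import java.util.Map;

// Illustrative sketch of the described exception handling; not the real loader.
public class SpiFactorySketch {

    public static <T> T newFactory(Class<T> clazz, Map<String, String> args) {
        try {
            Constructor<T> ctor = clazz.getConstructor(Map.class);
            return ctor.newInstance(args);
        } catch (InvocationTargetException ite) {
            // The constructor itself threw: rethrow its own exception if it
            // is unchecked, so IllegalArgumentException etc. surface directly.
            Throwable cause = ite.getCause();
            if (cause instanceof RuntimeException) {
                throw (RuntimeException) cause;
            }
            if (cause instanceof Error) {
                throw (Error) cause;
            }
            // Checked cause: wrap in RuntimeException.
            throw new RuntimeException(cause);
        } catch (ReflectiveOperationException roe) {
            // Constructor missing or inaccessible: UOE with an explaining message.
            throw new UnsupportedOperationException(
                clazz.getName() + " has no accessible Map-taking constructor", roe);
        }
    }
}
```

The point is that only ReflectiveOperationException is caught, so an unchecked exception from the factory constructor is no longer buried inside an InvocationTargetException.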









[jira] [Updated] (LUCENE-5905) Different behaviour of JapaneseAnalyzer at indexing time vs. at search time

2016-01-04 Thread Trejkaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trejkaz updated LUCENE-5905:

Affects Version/s: 5.2.1
  Description: 
A document with the word 秋葉原 in the body, when analysed by the JapaneseAnalyzer 
(AKA Kuromoji), cannot be found when searching for the same text as a phrase 
query.

Two programs are provided to reproduce the issue. Both programs print out the 
term docs and positions and then the result of parsing the phrase query.

As shown by the output, at analysis time, there is a lone Japanese term "秋葉原". 
At query parsing time, there are *three* such terms - "秋葉" and "秋葉原" at 
position 0 and "原" at position 1. Because all terms must be present for a 
phrase query to be a match, the query never matches, which is quite a serious 
issue for us.

*Any workarounds, no matter how hacky, would be extremely helpful at this 
point.*

My guess is that this is a quirk with the analyser. If it happened with 
StandardAnalyzer, surely someone would have discovered it before I did.

Lucene 5.2.1 reproduction:

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.ja.JapaneseAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
import org.apache.lucene.queryparser.flexible.standard.config.StandardQueryConfigHandler;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.BytesRef;

public class LuceneMissingTerms {
    public static void main(String[] args) throws Exception {
        try (Directory directory = new RAMDirectory()) {
            Analyzer analyser = new JapaneseAnalyzer();

            try (IndexWriter writer = new IndexWriter(directory, new IndexWriterConfig(analyser))) {
                Document document = new Document();
                document.add(new TextField("content", "blah blah commercial blah blah \u79CB\u8449\u539F blah blah", Field.Store.NO));
                writer.addDocument(document);
            }

            try (IndexReader multiReader = DirectoryReader.open(directory)) {
                for (LeafReaderContext leaf : multiReader.leaves()) {
                    LeafReader reader = leaf.reader();

                    Terms terms = MultiFields.getFields(reader).terms("content");
                    TermsEnum termsEnum = terms.iterator();
                    BytesRef text;
                    //noinspection NestedAssignment
                    while ((text = termsEnum.next()) != null) {
                        System.out.println("term: " + text.utf8ToString());

                        Bits liveDocs = reader.getLiveDocs();
                        PostingsEnum postingsEnum = termsEnum.postings(liveDocs, null, PostingsEnum.POSITIONS);
                        int doc;
                        //noinspection NestedAssignment
                        while ((doc = postingsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
                            System.out.println("  doc: " + doc);

                            int freq = postingsEnum.freq();
                            for (int i = 0; i < freq; i++) {
                                int pos = postingsEnum.nextPosition();
                                System.out.println("pos: " + pos);
                            }
                        }
                    }
                }

                StandardQueryParser queryParser = new StandardQueryParser(analyser);
                queryParser.setDefaultOperator(StandardQueryConfigHandler.Operator.AND);
                // quoted to work around strange behaviour of StandardQueryParser treating this as a boolean query.
                Query query = queryParser.parse("\"\u79CB\u8449\u539F\"", "content");
                System.out.println(query);

                TopDocs topDocs = new IndexSearcher(multiReader).search(query, 10);
                System.out.println(topDocs.totalHits);
            }
        }
    }
}
{code}

Lucene 4.9 reproduction:

{code:java}
import org.apache.lucene.analysis.Analyzer;
import 

[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-04 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082109#comment-15082109
 ] 

Dennis Gove commented on SOLR-7535:
---

+1 on that. I'm really excited about this!

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-04 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082194#comment-15082194
 ] 

Joel Bernstein commented on SOLR-7535:
--

If we don't want to repeat the *collection* in the commit function, we can call 
children() on the substream and iterate until we find the UpdateStream, then 
get the destination collection from it. This would couple the CommitStream to 
the UpdateStream, but I think they're tied together anyway. 

Then it would look like this:

{code}
commit(parallel(update(search)))
{code}
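Sketching that children() traversal with stand-in types (TupleStream, UpdateStream, and getCollection() below are simplified stand-ins, not the real Streaming API signatures), the recursion could look roughly like:

```java
import java.util.Collections;
import java.util.List;

/**
 * Minimal sketch of walking children() to find the UpdateStream's collection.
 * All type and method names here are illustrative stand-ins for the real
 * Solr Streaming API classes, which have different signatures.
 */
public class StreamTraversalSketch {
    interface TupleStream {
        default List<TupleStream> children() { return Collections.emptyList(); }
    }

    static class UpdateStream implements TupleStream {
        private final String collection;
        private final TupleStream child;
        UpdateStream(String collection, TupleStream child) {
            this.collection = collection;
            this.child = child;
        }
        String getCollection() { return collection; }
        public List<TupleStream> children() { return Collections.singletonList(child); }
    }

    static class WrapperStream implements TupleStream {
        private final TupleStream child;
        WrapperStream(TupleStream child) { this.child = child; }
        public List<TupleStream> children() { return Collections.singletonList(child); }
    }

    /** Depth-first search for the first UpdateStream under the given stream. */
    static String findUpdateCollection(TupleStream stream) {
        if (stream instanceof UpdateStream) {
            return ((UpdateStream) stream).getCollection();
        }
        for (TupleStream child : stream.children()) {
            String found = findUpdateCollection(child);
            if (found != null) {
                return found;
            }
        }
        return null;
    }
}
```

A CommitStream built this way would not need a collection parameter of its own, at the cost of hard-coding knowledge of UpdateStream into the traversal.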







[JENKINS] Lucene-Solr-NightlyTests-5.3 - Build # 4 - Still Failing

2016-01-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.3/4/

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
KeeperErrorCode = Session expired for /live_nodes

Stack Trace:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /live_nodes
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
at 
org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:328)
at 
org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:325)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at 
org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:325)
at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:562)
at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:239)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testNoCollectionSpecified(CollectionsAPIDistributedZkTest.java:464)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6961) Improve Exception handling in AnalysisFactory/SPI loader

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082179#comment-15082179
 ] 

ASF subversion and git services commented on LUCENE-6961:
-

Commit 1722993 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1722993 ]

LUCENE-6961: Improve Exception handling in AnalysisFactories / 
AnalysisSPILoader: Don't wrap exceptions occurring in factory's ctor inside 
InvocationTargetException

> Improve Exception handling in AnalysisFactory/SPI loader
> 
>
> Key: LUCENE-6961
> URL: https://issues.apache.org/jira/browse/LUCENE-6961
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6961.patch
>
>
> Currently the AnalysisSPILoader used by AbstractAnalysisFactory uses a 
> {{catch Exception}} block when invoking the constructor. If the constructor 
> throws stuff like IllegalArgumentExceptions or similar, this is hidden inside 
> InvocationTargetException, which gets wrapped in IllegalArgumentException. 
> This is not useful.
> This patch will:
> - Only catch ReflectiveOperationException
> - If it is InvocationTargetException it will rethrow the cause, if it is 
> unchecked. Otherwise it will wrap in RuntimeException
> - If the constructor cannot be called at all (reflective access denied, 
> method not found, ...) UOE is thrown with an explanatory message.
> This patch will be required by next version of LUCENE-6958.
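The rethrow/unwrap behavior described in the patch can be illustrated with a small self-contained sketch (this is not the actual AnalysisSPILoader code; the helper and class names are illustrative):

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

/**
 * Rough illustration of the pattern above: catch only
 * ReflectiveOperationException, and surface the constructor's own
 * exception instead of hiding it in InvocationTargetException.
 */
public class CtorUnwrapSketch {
    static <T> T newInstance(Class<T> clazz) {
        try {
            Constructor<T> ctor = clazz.getConstructor();
            return ctor.newInstance();
        } catch (InvocationTargetException e) {
            // The ctor itself threw: rethrow its cause if unchecked, else wrap.
            Throwable cause = e.getCause();
            if (cause instanceof RuntimeException) throw (RuntimeException) cause;
            if (cause instanceof Error) throw (Error) cause;
            throw new RuntimeException(cause);
        } catch (ReflectiveOperationException e) {
            // Constructor missing or inaccessible: fail with a clear message.
            throw new UnsupportedOperationException(
                "Factory " + clazz.getName() + " cannot be instantiated: " + e, e);
        }
    }

    /** Example factory whose ctor rejects its (implicit) arguments. */
    public static class ThrowingFactory {
        public ThrowingFactory() { throw new IllegalArgumentException("bad arg"); }
    }
}
```

With this shape, a caller sees the original IllegalArgumentException from the factory constructor directly, rather than a wrapped InvocationTargetException.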






[jira] [Updated] (LUCENE-6957) NRTCachingDirectory is missing createTempOutput

2016-01-04 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6957:
---
Attachment: LUCENE-6957.patch

New patch!

> NRTCachingDirectory is missing createTempOutput
> ---
>
> Key: LUCENE-6957
> URL: https://issues.apache.org/jira/browse/LUCENE-6957
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk
>
> Attachments: LUCENE-6957.patch, LUCENE-6957.patch
>
>
> It's broken now because it simply delegates to the wrapped dir now,
> which can create an output that already exists in the ram dir cache.
> This bug only affects trunk (it's never been released).






[jira] [Updated] (LUCENE-5905) Different behaviour of JapaneseAnalyzer at indexing time vs. at search time results in no matches for some words.

2016-01-04 Thread Trejkaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trejkaz updated LUCENE-5905:

Summary: Different behaviour of JapaneseAnalyzer at indexing time vs. at 
search time results in no matches for some words.  (was: Different behaviour of 
JapaneseAnalyzer at indexing time vs. at search time)

> Different behaviour of JapaneseAnalyzer at indexing time vs. at search time 
> results in no matches for some words.
> -
>
> Key: LUCENE-5905
> URL: https://issues.apache.org/jira/browse/LUCENE-5905
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 3.6.2, 4.9, 5.2.1
> Environment: Java 8u5
>Reporter: Trejkaz
>
> A document with the word 秋葉原 in the body, when analysed by the 
> JapaneseAnalyzer (AKA Kuromoji), cannot be found when searching for the same 
> text as a phrase query.
> Two programs are provided to reproduce the issue. Both programs print out the 
> term docs and positions and then the result of parsing the phrase query.
> As shown by the output, at analysis time, there is a lone Japanese term 
> "秋葉原". At query parsing time, there are *three* such terms - "秋葉" and "秋葉原" 
> at position 0 and "原" at position 1. Because all terms must be present for a 
> phrase query to be a match, the query never matches, which is quite a serious 
> issue for us.
> *Any workarounds, no matter how hacky, would be extremely helpful at this 
> point.*
> My guess is that this is a quirk with the analyser. If it happened with 
> StandardAnalyzer, surely someone would have discovered it before I did.
> Lucene 5.2.1 reproduction:
> {code:java}
> import org.apache.lucene.analysis.Analyzer;
> import org.apache.lucene.analysis.ja.JapaneseAnalyzer;
> import org.apache.lucene.document.Document;
> import org.apache.lucene.document.Field;
> import org.apache.lucene.document.TextField;
> import org.apache.lucene.index.DirectoryReader;
> import org.apache.lucene.index.IndexReader;
> import org.apache.lucene.index.IndexWriter;
> import org.apache.lucene.index.IndexWriterConfig;
> import org.apache.lucene.index.LeafReader;
> import org.apache.lucene.index.LeafReaderContext;
> import org.apache.lucene.index.MultiFields;
> import org.apache.lucene.index.PostingsEnum;
> import org.apache.lucene.index.Terms;
> import org.apache.lucene.index.TermsEnum;
> import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
> import org.apache.lucene.queryparser.flexible.standard.config.StandardQueryConfigHandler;
> import org.apache.lucene.search.DocIdSetIterator;
> import org.apache.lucene.search.IndexSearcher;
> import org.apache.lucene.search.Query;
> import org.apache.lucene.search.TopDocs;
> import org.apache.lucene.store.Directory;
> import org.apache.lucene.store.RAMDirectory;
> import org.apache.lucene.util.Bits;
> import org.apache.lucene.util.BytesRef;
> public class LuceneMissingTerms {
> public static void main(String[] args) throws Exception {
> try (Directory directory = new RAMDirectory()) {
> Analyzer analyser = new JapaneseAnalyzer();
> try (IndexWriter writer = new IndexWriter(directory, new IndexWriterConfig(analyser))) {
> Document document = new Document();
> document.add(new TextField("content", "blah blah commercial blah blah \u79CB\u8449\u539F blah blah", Field.Store.NO));
> writer.addDocument(document);
> }
> try (IndexReader multiReader = DirectoryReader.open(directory)) {
> for (LeafReaderContext leaf : multiReader.leaves()) {
> LeafReader reader = leaf.reader();
> Terms terms = MultiFields.getFields(reader).terms("content");
> TermsEnum termsEnum = terms.iterator();
> BytesRef text;
> //noinspection NestedAssignment
> while ((text = termsEnum.next()) != null) {
> System.out.println("term: " + text.utf8ToString());
> Bits liveDocs = reader.getLiveDocs();
> PostingsEnum postingsEnum = termsEnum.postings(liveDocs, null, PostingsEnum.POSITIONS);
> int doc;
> //noinspection NestedAssignment
> while ((doc = postingsEnum.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
> System.out.println("  doc: " + doc);
> int freq = postingsEnum.freq();
> for (int i = 0; i < freq; i++) {
> int pos = postingsEnum.nextPosition();
> 

[jira] [Created] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-01-04 Thread Jason Gerlowski (JIRA)
Jason Gerlowski created SOLR-8487:
-

 Summary: Add CommitStream to Streaming API and Streaming 
Expressions
 Key: SOLR-8487
 URL: https://issues.apache.org/jira/browse/SOLR-8487
 Project: Solr
  Issue Type: New Feature
Affects Versions: Trunk
Reporter: Jason Gerlowski
Priority: Minor
 Fix For: Trunk


(Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).

With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
However, there's currently no way, using the Streaming API, to force a commit.

The purpose of this ticket is to add a CommitStream, which can be used to 
trigger commit(s) on a given collection.

The proposed usage/behavior would look a little bit like:
{{commit(collection, parallel(update(search())))}}

Note that...
1.) CommitStream has a positional collection parameter, to indicate which 
collection to commit on. (Alternatively, it could recurse through 
{{children()}} nodes until it finds the UpdateStream, and then retrieve the 
collection from the UpdateStream).
2.) CommitStream forwards all tuples received by an underlying, wrapped stream.
3.) CommitStream commits when the underlying stream emits its EOF tuple. 
(Alternatively, it could commit every X tuples, based on a parameter).
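A toy model of behaviors 2 and 3 (forward every tuple unchanged, commit exactly once when the underlying stream reaches EOF), with a plain Java Iterator standing in for the real TupleStream/Tuple types and a callback standing in for the SolrCloud commit call:

```java
import java.util.Iterator;

/**
 * Toy CommitStream: wraps a stream of "tuples" (modeled as Strings),
 * forwards them unchanged, and fires a commit on the target collection
 * once the wrapped stream is exhausted. Names are illustrative only.
 */
public class CommitStreamSketch implements Iterator<String> {
    interface CommitHook { void commit(String collection); }

    private final Iterator<String> wrapped;
    private final String collection;
    private final CommitHook hook;
    private boolean committed = false;

    CommitStreamSketch(String collection, Iterator<String> wrapped, CommitHook hook) {
        this.collection = collection;
        this.wrapped = wrapped;
        this.hook = hook;
    }

    public boolean hasNext() {
        // Commit exactly once, at EOF of the underlying stream.
        if (!wrapped.hasNext() && !committed) {
            hook.commit(collection);
            committed = true;
        }
        return wrapped.hasNext();
    }

    // Tuples pass through unchanged.
    public String next() { return wrapped.next(); }
}
```

The commit-every-X-tuples alternative would just add a counter check alongside the EOF check.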










[jira] [Updated] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-01-04 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-8487:
--
Description: 
(Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).

With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
However, there's currently no way, using the Streaming API, to force a commit on 
the collection that received these updates.

The purpose of this ticket is to add a CommitStream, which can be used to 
trigger commit(s) on a given collection.

The proposed usage/behavior would look a little bit like:
{{commit(collection, parallel(update(search())))}}

Note that...
1.) CommitStream has a positional collection parameter, to indicate which 
collection to commit on. (Alternatively, it could recurse through 
{{children()}} nodes until it finds the UpdateStream, and then retrieve the 
collection from the UpdateStream).
2.) CommitStream forwards all tuples received by an underlying, wrapped stream.
3.) CommitStream commits when the underlying stream emits its EOF tuple. 
(Alternatively, it could commit every X tuples, based on a parameter).





  was:
(Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).

With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
However, there's no way currently using the Streaming API to force a commit.

The purpose of this ticket is to add a CommitStream, which can be used to 
trigger commit(s) on a given collection.

The proposed usage/behavior would look a little bit like:
{{commit(collection, parallel(update(search()))}}

Note that...
1.) CommitStream has a positional collection parameter, to indicate which 
collection to commit on. (Alternatively, it could recurse through 
{{children()}} nodes until it finds the UpdateStream, and then retrieve the 
collection from the UpdateStream).
2.) CommitStream forwards all tuples received by an underlying, wrapped stream.
3.) CommitStream commits when the underlying stream emits its EOF tuple. 
(Alternatively, it could commit every X tuples, based on a parameter).












[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_80) - Build # 5393 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5393/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
ObjectTracker found 2 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([C0065E1B326737BF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11175 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerCloud
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestSolrConfigHandlerCloud_C0065E1B326737BF-001\init-core-data-001
   [junit4]   2> 2947476 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[C0065E1B326737BF]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 2947476 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[C0065E1B326737BF]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 2947481 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[C0065E1B326737BF]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2947481 INFO  (Thread-6343) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2947481 INFO  (Thread-6343) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2947580 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[C0065E1B326737BF]) [] 
o.a.s.c.ZkTestServer start zk server on port:59445
   [junit4]   2> 2947580 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[C0065E1B326737BF]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2947581 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[C0065E1B326737BF]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2947584 INFO  (zkCallback-2168-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@2e359c name:ZooKeeperConnection 
Watcher:127.0.0.1:59445 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 2947584 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[C0065E1B326737BF]) [] 
o.a.s.c.c.ConnectionManager 

Re: Breaking Java back-compat in Solr

2016-01-04 Thread david.w.smi...@gmail.com
Great topic Anshum.

I’ve been frustrated with the back-compat situation since Lucene/Solr 5 as
a maintainer of my “SolrTextTagger”.  One version of the plugin supports
4.3 thru the end of the 4x line, whereas on the 5x side I’ve needed 3
versions already (5.0-5.1, 5.2, 5.3+)!  Sometimes on the API consumer side
there is simply a new way compatible with the old way and those don’t
bother me much; what’s annoying is just flat-out differences that can’t
easily be coded around without reflection.  But in my experience through
the SolrTextTagger (YMMV), Solr hasn’t been the main offender of such
changes — it’s Lucene changes (be it removing liveDocs bits to get
postings, changing the BytesRefAttribute signature, or something I don’t
remember now). At least twice it was avoidable IMO.  Nevertheless, we Solr
devs should come up with a back-compat policy, a simple document/paragraph
perhaps, and save it somewhere so we can refer to it.  Lets not have to dig
through the mailing list to know our policy some day in the future when we
want to explain it!

I suggest that Solr's *default* policy for any source file (Java API) that
doesn’t otherwise annotate a back-compat statement is to be permissive to
changes — developer judgement on how much back-compat makes sense to them.
I say this because the Solr code base is large and I think a relatively
small portion of it should aim for stability.  Lets take SearchComponent as
an example.  *That* needs to be stable.  But does HighlightComponent?  I
really don’t think so; besides, it only has one overridable method defined
by this class that isn’t inherited.   Oddly (IMO) there is a separate
abstraction SolrHighlighter and I can intuit that it’s this guy that was
intended to be the abstraction of the Highlighter implementation, not the
some-what generic HighlightComponent.  So arguably SolrHighlighter should
be stable.  DefaultSolrHighlighter is debatable as being stable — it’s a
specific highlighter but it has a bunch of methods designed to be
overridden (and I have done so).  So I think that’s a judgement call
(developer prerogative).

Should we apply a back-compat policy statement (either through a simple
comment or better through a new annotation), I don’t think we should feel
helpless to strictly abide by it for the entire major version range.  We
might decide that such changes are possible provided it gets at least
one +1 and no -1 veto from another developer.

Summary:
* Publish a back-compat policy/approach where we can refer to it easily.
* The default policy of source files without annotations is the developer’s
prerogative — no back-compat.
* Annotate the back-compat Java source files as-such and allow us to break
back-compat only if voted.
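As a strawman, the annotation route from the summary might look something like this (the annotation name and fields are purely hypothetical, not an agreed Solr API):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Hypothetical back-compat marker: classes carrying it opt in to the
 * stability guarantee; unannotated classes default to developer prerogative.
 */
public class BackCompatSketch {
    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface StableApi {
        /** First version in which this API was declared stable. */
        String since() default "";
    }

    /** Example of an API that opts in to the guarantee. */
    @StableApi(since = "5.5")
    public static class SearchComponentExample { }

    /** True if the class opted in to the back-compat guarantee. */
    static boolean isStable(Class<?> clazz) {
        return clazz.isAnnotationPresent(StableApi.class);
    }
}
```

A build-time check (or release-audit script) could then flag signature changes only on @StableApi types, leaving everything else free to change.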

~ David

On Mon, Jan 4, 2016 at 11:28 AM Anshum Gupta  wrote:

> Hi,
>
> I was looking at refactoring code in Solr and it gets really tricky and
> confusing in terms of what level of back-compat needs to be maintained.
> Ideally, we should only maintain back-compat at the REST API level. We may
> annotate a few really important Java APIs where we guarantee back-compat
> across minor versions, but we certainly shouldn't be doing that across the
> board.
>
> Thoughts?
>
> P.S: I hope this doesn't spin-off into something I fear :)
>
> --
> Anshum Gupta
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-04 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082102#comment-15082102
 ] 

Joel Bernstein commented on SOLR-7535:
--

Nice work on this ticket [~gerlowskija]!








[jira] [Commented] (LUCENE-6961) Improve Exception handling in AnalysisFactory/SPI loader

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082184#comment-15082184
 ] 

ASF subversion and git services commented on LUCENE-6961:
-

Commit 1722994 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722994 ]

Merged revision(s) 1722993 from lucene/dev/trunk:
LUCENE-6961: Improve Exception handling in AnalysisFactories / 
AnalysisSPILoader: Don't wrap exceptions occurring in factory's ctor inside 
InvocationTargetException







Re: CfP about Geospatial Track at ApacheCon, Vancouver

2016-01-04 Thread david.w.smi...@gmail.com
Thanks for the notice/invite, Uwe.  I may send a proposal suggestion your
way (& to Chris).  It’ll be tough to choose between submitting to FOSS4G NA
(May 2-5th in Raleigh NC) and ApacheCon.
~ David

On Mon, Jan 4, 2016 at 5:28 AM Uwe Schindler  wrote:

> Hi Committers, hi Lucene users,
>
> On the next ApacheCon in Vancouver, Canada (May 9 - 13 2016), there will
> be a track about geospatial data. The track is organized by Chris Mattmann
> together with George Percivall of the OGC (Open Geospatial Consortium). As
> I am also a member of OGC, they invited me to ask the Lucene Community to
> propose talks. Apache Lucene, Solr, and Elasticsearch have great geospatial
> features, this would be a good idea to present them. This is especially
> important because the current OGC standards are very RDBMS-focused (like
> filter definitions, services,...), so we can use the track to talk with OGC
> representatives to better match OGC standards with full text search.
>
> I am not sure if I can manage to get to Vancouver, but others are
> kindly invited to submit talks. It is not yet certain whether the track will be
> part of ApacheCon Core or ApacheCon BigData. I will keep you informed. If
> you have talk suggestions, please send them to me or Chris Mattmann.
> Alternatively, submit them to the Big Data track @
> http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp
> (and mention geospatial track).
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Updated] (LUCENE-6961) Improve Exception handling in AnalysisFactory/SPI loader

2016-01-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6961:
--
Description: 
Currently the AnalysisSPILoader used by AbstractAnalysisFactory uses a {{catch 
Exception}} block when invoking the constructor. If the constructor throws 
something like an IllegalArgumentException, it is hidden inside an 
InvocationTargetException, which in turn gets wrapped in an 
IllegalArgumentException. This is not useful.

This patch will:
- Only catch ReflectiveOperationException
- If it is an InvocationTargetException, rethrow the cause if it is unchecked; 
otherwise wrap it in a RuntimeException
- If the constructor cannot be called at all (reflective access denied, method 
not found, ...), throw an UnsupportedOperationException (UOE) with an 
explanatory message

This patch will be required by the next version of LUCENE-6958.

> Improve Exception handling in AnalysisFactory/SPI loader
> 
>
> Key: LUCENE-6961
> URL: https://issues.apache.org/jira/browse/LUCENE-6961
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6961.patch
>
>
> Currently the AnalysisSPILoader used by AbstractAnalysisFactory uses a 
> {{catch Exception}} block when invoking the constructor. If the constructor 
> throws something like an IllegalArgumentException, it is hidden inside an 
> InvocationTargetException, which in turn gets wrapped in an 
> IllegalArgumentException. This is not useful.
> This patch will:
> - Only catch ReflectiveOperationException
> - If it is an InvocationTargetException, rethrow the cause if it is 
> unchecked; otherwise wrap it in a RuntimeException
> - If the constructor cannot be called at all (reflective access denied, 
> method not found, ...), throw an UnsupportedOperationException (UOE) with an 
> explanatory message
> This patch will be required by the next version of LUCENE-6958.






[jira] [Updated] (LUCENE-6961) Improve Exception handling in AnalysisFactory/SPI loader

2016-01-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6961:
--
Attachment: LUCENE-6961.patch

Patch.

> Improve Exception handling in AnalysisFactory/SPI loader
> 
>
> Key: LUCENE-6961
> URL: https://issues.apache.org/jira/browse/LUCENE-6961
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
> Environment: Currently the AnalysisSPILoader used by 
> AbstractAnalysisFactory uses a {{catch Exception}} block when invoking the 
> constructor. If the constructor throws stuff like IllegalArgumentExceptions 
> or similar, this is hidden inside InvocationTargetException, which gets 
> wrapped in IllegalArgumentException. This is not useful.
> This patch will:
> - Only catch ReflectiveOperationException
> - If it is InvocationTargetException it will rethrow the cause, if it is 
> unchecked. Otherwise it will wrap in RuntimeException
> - If the constructor cannot be called at all (reflective access denied, 
> method not found,...) UOE is thrown with explaining message.
> This patch will be required by next version of LUCENE-6958.
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6961.patch
>
>







[jira] [Updated] (LUCENE-6961) Improve Exception handling in AnalysisFactory/SPI loader

2016-01-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6961:
--
Environment: (was: Currently the AnalysisSPILoader used by 
AbstractAnalysisFactory uses a {{catch Exception}} block when invoking the 
constructor. If the constructor throws stuff like IllegalArgumentExceptions or 
similar, this is hidden inside InvocationTargetException, which gets wrapped in 
IllegalArgumentException. This is not useful.

This patch will:
- Only catch ReflectiveOperationException
- If it is InvocationTargetException it will rethrow the cause, if it is 
unchecked. Otherwise it will wrap in RuntimeException
- If the constructor cannot be called at all (reflective access denied, method 
not found,...) UOE is thrown with explaining message.

This patch will be required by next version of LUCENE-6958.)

> Improve Exception handling in AnalysisFactory/SPI loader
> 
>
> Key: LUCENE-6961
> URL: https://issues.apache.org/jira/browse/LUCENE-6961
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6961.patch
>
>







[jira] [Resolved] (LUCENE-6961) Improve Exception handling in AnalysisFactory/SPI loader

2016-01-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-6961.
---
Resolution: Fixed

> Improve Exception handling in AnalysisFactory/SPI loader
> 
>
> Key: LUCENE-6961
> URL: https://issues.apache.org/jira/browse/LUCENE-6961
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6961.patch
>
>
> Currently the AnalysisSPILoader used by AbstractAnalysisFactory uses a 
> {{catch Exception}} block when invoking the constructor. If the constructor 
> throws something like an IllegalArgumentException, it is hidden inside an 
> InvocationTargetException, which in turn gets wrapped in an 
> IllegalArgumentException. This is not useful.
> This patch will:
> - Only catch ReflectiveOperationException
> - If it is an InvocationTargetException, rethrow the cause if it is 
> unchecked; otherwise wrap it in a RuntimeException
> - If the constructor cannot be called at all (reflective access denied, 
> method not found, ...), throw an UnsupportedOperationException (UOE) with an 
> explanatory message
> This patch will be required by the next version of LUCENE-6958.






[jira] [Commented] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-04 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082176#comment-15082176
 ] 

Joel Bernstein commented on SOLR-7535:
--

I think the CommitStream would be very useful. The main usage would be:

{code}
commit(collection, parallel(update(search())))

or

commit(collection, update(search()))
{code}

We could have it commit on EOF as the simplest use case. I think read() should 
just return all Tuples until it reaches the EOF and then commit the collection.

We can add the CommitStream to the existing UpdateStream tests.

Later we can always add more features.
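The commit-on-EOF behavior described above could be sketched roughly as follows, using minimal stand-in types rather than the real Solr TupleStream/Tuple classes (the names and the Runnable-based commit hook are illustrative assumptions):

```java
// Minimal stand-in for the real Solr TupleStream: read() returns null at EOF.
interface TupleStream<T> {
  T read();
}

// Decorator that passes tuples through unchanged and commits exactly once
// when the wrapped stream reaches end-of-stream.
final class CommitStream<T> implements TupleStream<T> {
  private final TupleStream<T> inner;
  private final Runnable commit;   // e.g. () -> client.commit(collection)
  private boolean committed;

  CommitStream(TupleStream<T> inner, Runnable commit) {
    this.inner = inner;
    this.commit = commit;
  }

  @Override
  public T read() {
    T t = inner.read();
    if (t == null && !committed) { // EOF reached: commit the collection once
      committed = true;
      commit.run();
    }
    return t;
  }
}
```

Because the decorator only acts at EOF, it composes cleanly with any upstream such as `update(search())`, matching the expression forms shown in the comment.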






> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch
>
>
> This ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different SolrCloud collections, 
> merge and transform the streams, and send the transformed data to another 
> SolrCloud collection.






[jira] [Commented] (SOLR-8480) Progress info for TupleStream

2016-01-04 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082315#comment-15082315
 ] 

Jason Gerlowski commented on SOLR-8480:
---

bq. I use this snippet to get size...

Fair enough.  I take back my complaint/hesitation then.

I'll let others chime in and see what (more knowledgeable) people think.

> Progress info for TupleStream
> -
>
> Key: SOLR-8480
> URL: https://issues.apache.org/jira/browse/SOLR-8480
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Cao Manh Dat
>
> I suggest adding progress info for TupleStream. It can be very helpful for 
> tracking consumption progress.
> {code}
> public abstract class TupleStream {
>public abstract long getSize();
>public abstract long getConsumed();
> }
> {code}






[jira] [Commented] (LUCENE-6949) fix (potential) resource leak in SynonymFilterFactory (coverity CID 120656)

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081082#comment-15081082
 ] 

ASF subversion and git services commented on LUCENE-6949:
-

Commit 1722856 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1722856 ]

LUCENE-6949: fix (potential) resource leak in SynonymFilterFactory 
(https://scan.coverity.com/projects/5620 CID 120656)

> fix (potential) resource leak in SynonymFilterFactory (coverity CID 120656)
> ---
>
> Key: LUCENE-6949
> URL: https://issues.apache.org/jira/browse/LUCENE-6949
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6949.patch
>
>
> https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015.
> * coverity CID 120656






[jira] [Resolved] (LUCENE-6949) fix (potential) resource leak in SynonymFilterFactory (coverity CID 120656)

2016-01-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved LUCENE-6949.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> fix (potential) resource leak in SynonymFilterFactory (coverity CID 120656)
> ---
>
> Key: LUCENE-6949
> URL: https://issues.apache.org/jira/browse/LUCENE-6949
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6949.patch
>
>
> https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015.
> * coverity CID 120656






[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2016-01-04 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081175#comment-15081175
 ] 

Arcadius Ahouansou commented on SOLR-8146:
--

Thank you very much [~noble.paul] for the clarification.

Looking at SOLR-6289, maybe there is an overlap between {{ip_2}} and {{dc}}, 
and between {{ip_3}} and {{rack}}?

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query, then picking the 
> first item from the list.
> This ticket is to allow more flexibility and control, to some extent, over 
> which URLs will be picked for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses a regex pattern and moves to the top of the list of 
> URLs only those matching the regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections, with:
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - 1 small node (S) in the same cluster with less RAM/CPU used only for 
> manual user queries, data export and other production issue investigation.
> This ticket would allow the applications using SolrJ to be configured to 
> query only the (L) nodes.
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying Solr using SolrJ.
> All Solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack reads will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross-DC deployment, so replace 
> rack1/rack2 by DC1/DC2.
> Any comment would be very appreciated.
> Thanks.
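The reordering the patch describes, moving URLs that match a preferred regex to the front of the shuffled list, could be sketched like this. The class and method names are hypothetical illustrations, not the actual patch code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

final class PreferredNodeSorter {
  // Move URLs matching the preferred pattern (e.g. the value of the
  // solr.preferredQueryNodePattern system property) to the front of the
  // list, preserving the relative order within each group.
  static List<String> preferMatching(List<String> urls, String regex) {
    Pattern p = Pattern.compile(regex);
    List<String> preferred = new ArrayList<>();
    List<String> rest = new ArrayList<>();
    for (String url : urls) {
      (p.matcher(url).find() ? preferred : rest).add(url);
    }
    preferred.addAll(rest);
    return preferred;
  }
}
```

Since the client already shuffles the live URLs before selection, applying this reordering afterwards keeps load balancing within the preferred group while still falling back to the remaining nodes.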






[jira] [Commented] (SOLR-8481) TestSearchPerf no longer needs to duplicate SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081192#comment-15081192
 ] 

ASF subversion and git services commented on SOLR-8481:
---

Commit 1722877 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722877 ]

SOLR-8481: TestSearchPerf no longer needs to duplicate 
SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE) (merge in revision 
1722869 from trunk)

> TestSearchPerf no longer needs to duplicate 
> SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)
> 
>
> Key: SOLR-8481
> URL: https://issues.apache.org/jira/browse/SOLR-8481
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8481.patch
>
>
> {{TestSearchPerf.doListGen}} no longer needs to duplicate 
> {{SolrIndexSearcher.(NO_CHECK_QCACHE|GET_DOCSET|NO_CHECK_FILTERCACHE|GET_SCORES)}}
>  since they are now visible to it (at package or public level).






apply document filter to solr index

2016-01-04 Thread liviuchristian
Hi everyone, I'm working on a search engine based on Solr which indexes 
documents from a large variety of websites. 
The engine is focused on cooking recipes. However, one problem is that these 
websites provide not only content related to cooking recipes but also content 
related to fashion, travel, politics, liberty rights, etc., which is not what 
the user expects to find on a search engine dedicated to cooking recipes. 
Is there any way to filter out content which is not related to the core 
business of the search engine?
Something like parental control software, maybe?
Kind regards,
Christian

Christian Fotache Tel: 0728.297.207 Fax: 0351.411.570

[jira] [Commented] (LUCENE-6949) fix (potential) resource leak in SynonymFilterFactory (coverity CID 120656)

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081153#comment-15081153
 ] 

ASF subversion and git services commented on LUCENE-6949:
-

Commit 1722864 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722864 ]

LUCENE-6949: fix (potential) resource leak in SynonymFilterFactory 
(https://scan.coverity.com/projects/5620 CID 120656) (merge in revision 1722856 
from trunk)

> fix (potential) resource leak in SynonymFilterFactory (coverity CID 120656)
> ---
>
> Key: LUCENE-6949
> URL: https://issues.apache.org/jira/browse/LUCENE-6949
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6949.patch
>
>
> https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015.
> * coverity CID 120656






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2937 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2937/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 62459 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:794: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:674: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:657: The following 
files are missing svn:eol-style (or binary svn:mime-type):
* ./solr/core/src/java/org/apache/solr/handler/admin/CoreAdminOperation.java

Total time: 102 minutes 28 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-8481) TestSearchPerf no longer needs to duplicate SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081165#comment-15081165
 ] 

ASF subversion and git services commented on SOLR-8481:
---

Commit 1722869 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1722869 ]

SOLR-8481: TestSearchPerf no longer needs to duplicate 
SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)

> TestSearchPerf no longer needs to duplicate 
> SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)
> 
>
> Key: SOLR-8481
> URL: https://issues.apache.org/jira/browse/SOLR-8481
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8481.patch
>
>
> {{TestSearchPerf.doListGen}} no longer needs to duplicate 
> {{SolrIndexSearcher.(NO_CHECK_QCACHE|GET_DOCSET|NO_CHECK_FILTERCACHE|GET_SCORES)}}
>  since they are now visible to it (at package or public level).






[jira] [Resolved] (SOLR-8481) TestSearchPerf no longer needs to duplicate SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)

2016-01-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8481.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> TestSearchPerf no longer needs to duplicate 
> SolrIndexSearcher.(NO_CHECK_QCACHE|NO_CHECK_FILTERCACHE)
> 
>
> Key: SOLR-8481
> URL: https://issues.apache.org/jira/browse/SOLR-8481
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8481.patch
>
>
> {{TestSearchPerf.doListGen}} no longer needs to duplicate 
> {{SolrIndexSearcher.(NO_CHECK_QCACHE|GET_DOCSET|NO_CHECK_FILTERCACHE|GET_SCORES)}}
>  since they are now visible to it (at package or public level).






[jira] [Commented] (SOLR-8476) Refactor and cleanup CoreAdminHandler

2016-01-04 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081103#comment-15081103
 ] 

Anshum Gupta commented on SOLR-8476:


Seems like you missed setting svn eol-style on the new file.
I'll do that.

> Refactor and cleanup CoreAdminHandler
> -
>
> Key: SOLR-8476
> URL: https://issues.apache.org/jira/browse/SOLR-8476
> Project: Solr
>  Issue Type: Task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-8476.patch
>
>
> {{CoreAdminHandler}} is too large and unmanageable. Split it and make it 
> simpler






[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2016-01-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081120#comment-15081120
 ] 

Noble Paul commented on SOLR-8146:
--

bq. in order to use the preferredNodes snitch, one will have to add that snitch 
to the collection. Is this correct?

well, no. The implicit snitches are available to all collections. A snitch just 
has to say that it can provide values for a particular tag.


Using regex is not really possible in the current design. It is only possible 
to provide discrete values or ranges.

Let's assume an IP address 192.93.255.255. It is possible for a Snitch to 
provide values such as
ip_1 = 192
ip_2 = 93
ip_3 = 255
ip_4 = 255

In this case you can provide a rule which says 
{{preferredNodes=ip_1:192,ip_2:93}}.
This means it will choose only nodes {{192.93.\*.\*}}.
This can be a part of the {{ImplicitSnitch}} itself. The ImplicitSnitch can 
provide values for tags {{ip_1}}, {{ip_2}}, {{ip_3}}, {{ip_4}}, and for IPv6 it 
can provide values for {{ip_5}} and {{ip_6}} as well.
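The tagging and matching scheme sketched above (splitting an IPv4 address into ip_1..ip_4 tags and checking a preferredNodes-style rule) could look roughly like this. The helper class is a hypothetical illustration following the comment's tag names, not Solr's actual snitch API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

final class IpTags {
  // Expose ip_1..ip_4 tags for an IPv4 address, as the ImplicitSnitch
  // is described to do (e.g. 192.93.255.255 -> ip_1=192, ip_2=93, ...).
  static Map<String, String> tags(String ip) {
    String[] parts = ip.split("\\.");
    Map<String, String> tags = new LinkedHashMap<>();
    for (int i = 0; i < parts.length; i++) {
      tags.put("ip_" + (i + 1), parts[i]);
    }
    return tags;
  }

  // True if the node's tags satisfy a rule like "ip_1:192,ip_2:93",
  // i.e. every tag:value clause matches the node's tag value.
  static boolean matches(Map<String, String> tags, String rule) {
    for (String clause : rule.split(",")) {
      String[] kv = clause.split(":");
      if (!kv[1].equals(tags.get(kv[0]))) {
        return false;
      }
    }
    return true;
  }
}
```

Under this scheme a rule of {{ip_1:192,ip_2:93}} selects exactly the nodes whose addresses fall in 192.93.*.*, as described in the comment.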

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query, then picking the 
> first item from the list.
> This ticket is to allow more flexibility and control, to some extent, over 
> which URLs will be picked for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses a regex pattern and moves to the top of the list of 
> URLs only those matching the regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections, with:
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - 1 small node (S) in the same cluster with less RAM/CPU used only for 
> manual user queries, data export and other production issue investigation.
> This ticket would allow the applications using SolrJ to be configured to 
> query only the (L) nodes.
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying Solr using SolrJ.
> All Solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack reads will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross-DC deployment, so replace 
> rack1/rack2 by DC1/DC2.
> Any comment would be very appreciated.
> Thanks.






[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_80) - Build # 15136 - Still Failing!

2016-01-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15136/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 55585 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:794: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:674: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:657: The following 
files are missing svn:eol-style (or binary svn:mime-type):
* ./solr/core/src/java/org/apache/solr/handler/admin/CoreAdminOperation.java

Total time: 65 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-8476) Refactor and cleanup CoreAdminHandler

2016-01-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081112#comment-15081112
 ] 

ASF subversion and git services commented on SOLR-8476:
---

Commit 1722862 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1722862 ]

SOLR-8476: adding svn:eol-style property for CoreAdminOperation.java

> Refactor and cleanup CoreAdminHandler
> -
>
> Key: SOLR-8476
> URL: https://issues.apache.org/jira/browse/SOLR-8476
> Project: Solr
>  Issue Type: Task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-8476.patch
>
>
> {{CoreAdminHandler}} is too large and unmanageable. Split it and make it 
> simpler



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2016-01-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081176#comment-15081176
 ] 

Noble Paul commented on SOLR-8146:
--

It's OK; the tag names should make sense, that is all. Using DC or rack does 
not necessarily make sense in all cases.

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query and then picking the 
> first item from the list.
> This ticket is to allow more flexibility and to control, to some extent, which 
> URLs will be picked for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses a regex pattern and moves to the top of the list of URLs 
> only those matching the regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections with:
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - 1 small node (S) in the same cluster with less RAM/CPU, used only for 
> manual user queries, data export, and other production issue investigation.
> This ticket would allow configuring the applications using SolrJ to query 
> only the (L) nodes.
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying solr using SolrJ.
> All solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack read will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross DC deployment. So, replace 
> rack1/rack2 by DC1/DC2
> Any comment would be very appreciated.
> Thanks.
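The reordering described under "Implementation" can be sketched roughly as follows. This is a hypothetical standalone sketch, not the actual patch: the class name, method name, and use of `Pattern.matcher(...).find()` are assumptions; the patch works inside CloudSolrClient on the already-shuffled list of live URLs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class PreferredNodeSort {

    // Stable partition of the shuffled live-URL list: URLs matching the
    // preferred-node regex (system property "solr.preferredQueryNodePattern")
    // move to the front; relative order within each group is preserved.
    static List<String> preferMatching(List<String> shuffledUrls, String regex) {
        if (regex == null) {
            return shuffledUrls; // no preference configured, keep shuffle as-is
        }
        Pattern p = Pattern.compile(regex);
        List<String> preferred = new ArrayList<>();
        List<String> others = new ArrayList<>();
        for (String url : shuffledUrls) {
            (p.matcher(url).find() ? preferred : others).add(url);
        }
        preferred.addAll(others); // preferred nodes first, the rest after
        return preferred;
    }
}
```

Because the non-matching URLs stay at the tail rather than being dropped, cross-group queries still happen when no preferred node is available, which matches the rack-affinity use case below.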






Re: apply document filter to solr index

2016-01-04 Thread Binoy Dalal
There is no way that you can do that in Solr.

You'll have to write something at the app level, where you're crawling
your docs, or write a custom update handler that will preprocess the crawled
docs and throw out the irrelevant ones.

One way you can do that is to look at the doc title and the URL for certain
keywords that might tell you that a particular article belongs to the
fashion domain, etc.
If the content is well structured, then you might also have certain fields
in the raw crawled doc that tell you the doc's category.
To look at the raw crawled doc you can use the
DocumentAnalysisRequestHandler.
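
As a very rough illustration of the keyword idea above (a standalone sketch, not Solr API: the keyword list, field choices, and class/method names are all invented for the example; in practice this check would live in the crawler or in a custom update processor):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class RecipeFilter {

    // Placeholder keyword list; a real deployment would tune this.
    static final Set<String> KEYWORDS =
        new HashSet<>(Arrays.asList("recipe", "cook", "ingredient", "bake"));

    // Keep a crawled doc only if its title or URL mentions a cooking keyword.
    static boolean isRelevant(String title, String url) {
        String haystack = (title + " " + url).toLowerCase(Locale.ROOT);
        return KEYWORDS.stream().anyMatch(haystack::contains);
    }
}
```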

On Mon, 4 Jan 2016, 18:07   wrote:

> Hi everyone, I'm working on a search engine based on Solr which indexes
> documents from a large variety of websites.
> The engine is focused on cooking recipes. However, one problem is that these
> websites provide not only content related to cooking recipes but also
> content related to fashion, travel, politics, liberty rights, etc., which
> is not what the user expects to find on a search engine dedicated to cooking
> recipes.
> Is there any way to filter out content which is not related to the core
> business of the search engine?
> Something like parental control software, maybe?
> Kind regards,
> Christian
>
> Christian Fotache Tel: 0728.297.207 Fax: 0351.411.570

-- 
Regards,
Binoy Dalal

