[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 48 - Failure

2016-04-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/48/

No tests ran.

Build Log:
[...truncated 8078 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/build.xml:520: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build.xml:480: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/common-build.xml:2520: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 2 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-5.5-Java7 - Build # 18 - Failure

2016-04-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java7/18/

1 tests failed.
FAILED:  org.apache.solr.index.hdfs.CheckHdfsIndexTest.doTest

Error Message:
Could not find a healthy node to handle the request.

Stack Trace:
org.apache.solr.common.SolrException: Could not find a healthy node to handle the request.
at __randomizedtesting.SeedInfo.seed([C3DC102AED8D0F39:6498A88E80361C80]:0)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1094)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:482)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1506)
at org.apache.solr.index.hdfs.CheckHdfsIndexTest.doTest(CheckHdfsIndexTest.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+115) - Build # 535 - Failure!

2016-04-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/535/
Java: 32bit/jdk-9-ea+115 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.HttpClientUtilTest.testSSLSystemProperties

Error Message:
HTTPS scheme could not be created using the javax.net.ssl.* system properties.

Stack Trace:
java.lang.AssertionError: HTTPS scheme could not be created using the javax.net.ssl.* system properties.
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.apache.solr.client.solrj.impl.HttpClientUtilTest.testSSLSystemProperties(HttpClientUtilTest.java:127)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:243)
at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:354)
at com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:10)




Build Log:
[...truncated 12997 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.impl.HttpClientUtilTest
   [junit4] FAILURE 0.02s J1 | HttpClientUtilTest.testSSLSystemProperties <<<
   [junit4]> Throwable #1: java.lang.AssertionError: HTTPS scheme could not be created using the javax.net.ssl.* system properties.
   [junit4]>at org.apache.solr.client.solrj.impl.HttpClientUtilTest.testSSLSystemProperties(HttpClientUtilTest.java:127)
   [junit4]>at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
   [junit4]>at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
   [junit4]>at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
   [junit4]   2> 115350 INFO  (main) [] o.a.s.c.s.i.Krb5HttpClientConfigurer Setting up SPNego auth with config: test
   [junit4] Completed [45/83 (1!)] on J1 in 0.04s, 5 tests, 1 failure <<< FAILURES!

[...truncated 125 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:740: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:684: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:59: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build.xml:246: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/common-build.xml:529: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/common-build.xml:1457: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/common-build.xml:1014: There were test failures: 83 suites, 545 tests, 1 failure [seed: 75D69A5586D39BE9]

Total time: 72 minutes 18 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+115) - Build # 16620 - Failure!

2016-04-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16620/
Java: 32bit/jdk-9-ea+115 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   "responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   "context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   "class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' full output: {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "params":{"wt":"json"},
  "context":{
    "webapp":"",
    "path":"/test1",
    "httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at __randomizedtesting.SeedInfo.seed([71D343A7E5919E35:A99E6EF0124C3B95]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:233)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Created] (SOLR-9052) Provide a syntax for Adding Multiple Documents on REST that Uses Proper JSON Format

2016-04-29 Thread Mary Jo Sminkey (JIRA)
Mary Jo Sminkey created SOLR-9052:
-

 Summary: Provide a syntax for Adding Multiple Documents on REST 
that Uses Proper JSON Format
 Key: SOLR-9052
 URL: https://issues.apache.org/jira/browse/SOLR-9052
 Project: Solr
  Issue Type: Improvement
  Components: update
Reporter: Mary Jo Sminkey


Currently if you want to post a batch of documents to the update handler and 
need to include any options like a boost for each, you have to use a format 
with multiple "add" keys, which makes it virtually impossible to build an 
object in another language and serialize it, since most languages do not allow 
multiple keys of the same name. Many JSON formatters and validators will not 
allow this either. While the JSON spec does not disallow it outright, it does 
say that keys "SHOULD" be unique. Please add a way to send multiple documents 
to the update handler via the REST API that does not require using multiple 
"add" keys.
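
For illustration, this is the duplicate-key shape of the current JSON update 
format being described (the boost values here are made up):

{code}
{
  "add": { "doc": { "id": "1", "title": "doc one" }, "boost": 1.0 },
  "add": { "doc": { "id": "2", "title": "doc two" }, "boost": 2.5 }
}
{code}

Most languages' JSON libraries serialize a map, so the repeated "add" key above 
cannot be produced without hand-building the string; a syntax based on a JSON 
array of document objects would avoid that.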



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 5.5.1

2016-04-29 Thread Anshum Gupta
I tried to add the back compat index for 5.5.0 by running the script on
branch_5x, but it errors out when running the test with: "Extra
back-compat test files: 5.5.0-cfs". I'm confused here in terms of what the
instructions say and what's supposed to be done.


On Fri, Apr 29, 2016 at 6:52 PM, Anshum Gupta wrote:

> Seems like 5.5.0 back compat index was never added. Can someone confirm
> that?
> I have the RC but the smoke test failed when I ran it locally. Here's the
> error:
>
> Verify...
>   confirm all releases have coverage in TestBackwardsCompatibility
> find all past Lucene releases...
> run TestBackwardsCompatibility..
>   Backcompat testing not required for release 6.0.0 because it's not
> less than 5.5.1
> Releases that don't seem to be tested:
>   5.5.0
> Traceback (most recent call last):
>   File "dev-tools/scripts/smokeTestRelease.py", line 1443, in <module>
>     main()
>   File "dev-tools/scripts/smokeTestRelease.py", line 1387, in main
>     smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' '.join(c.test_args))
>   File "dev-tools/scripts/smokeTestRelease.py", line 1425, in smokeTest
>     unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % version, gitRevision, version, testArgs, baseURL)
>   File "dev-tools/scripts/smokeTestRelease.py", line 589, in unpackAndVerify
>     verifyUnpacked(java, project, artifact, unpackPath, gitRevision, version, testArgs, tmpDir, baseURL)
>   File "dev-tools/scripts/smokeTestRelease.py", line 769, in verifyUnpacked
>     confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
>   File "dev-tools/scripts/smokeTestRelease.py", line 1380, in confirmAllReleasesAreTestedForBackCompat
>     raise RuntimeError('some releases are not tested by TestBackwardsCompatibility?')
> RuntimeError: some releases are not tested by TestBackwardsCompatibility?
>
>
>
> On Fri, Apr 29, 2016 at 11:05 AM, Anshum Gupta wrote:
>
>> Something seems to be going on with TestManagedSchemaAPI as it's been
>> consistently failing.
>> I woke up with a fever today so I'll try and debug it some time later if
>> I'm unable to get an RC built, but if I do get the RC, I'll get it out to
>> vote and in parallel see if it's something that needs fixing unless someone
>> else beats me to it.
>>
>> On Fri, Apr 29, 2016 at 9:26 AM, Anshum Gupta wrote:
>>
>>> That makes sense considering there are those checks for ignoring 1
>>> missing version.
>>>
>>> On Fri, Apr 29, 2016 at 6:53 AM, Steve Rowe  wrote:
>>>
 Anshum,

 TL;DR: When there is only one release in flight, I think it’s okay to
 run addVersion.py on all branches at the start of the release process for
 all types of releases.

 When we chatted last night I said backcompat index testing was a
 problem on non-release branches in the interval between adding a
 not-yet-released version to o.a.l.util.Version and when a backcompat index
 is committed on the branch.  I was wrong.

 Here are the places where there are back-compat coverage tests:

 1. smokeTestRelease.py's confirmAllReleasesAreTestedForBackCompat()
 will succeed until release artifacts have been published - see
 getAllLuceneReleases() for where they are scraped off the lucene release
 list page on archive.apache.org.  So back-compat indexes should be
 generated and committed as soon as possible after publishing artifacts.

 2. backward-codec’s TestBackwardsCompatibility.testAllVersionsTested()
 will still succeed if a single version is not tested.  Here’s the code:

   // we could be missing up to 1 file, which may be due to a release
 that is in progress
   if (missingFiles.size() <= 1 && extraFiles.isEmpty()) {

 The above test could be improved by checking for the presence of
 published release artifacts for each release like smokeTestRelease.py does,
 and then not requiring the backcompat index be present for those that are
 not yet published; this would allow for multiple in-flight releases.
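
A rough sketch of that improvement (hypothetical helper names; a real lookup
would scrape the archive.apache.org release list the way getAllLuceneReleases()
in smokeTestRelease.py does):

  // Sketch only: releasePublished() and versionOf() are hypothetical helpers;
  // releasePublished() would check whether the release's artifacts exist on
  // archive.apache.org.
  List<String> reallyMissing = new ArrayList<>();
  for (String file : missingFiles) {
    if (releasePublished(versionOf(file))) {
      reallyMissing.add(file); // published but untested: a genuine gap
    }
    // otherwise: release still in flight, backcompat index not expected yet
  }
  assertTrue("missing backcompat indexes: " + reallyMissing, reallyMissing.isEmpty());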

 Steve

> On Apr 28, 2016, at 10:44 PM, Anshum Gupta wrote:
 >
 > I've updated the "Update Version Numbers in the Source Code" section
on the ReleaseToDo page. It'd be good to have someone else also take a
 look at it.
 >
 > Here is what I've changed (only bug fix release):
 > * Only bump up the version on the release branch using addVersion.py
 > * Don't bump it up on the non-release versions in case of bug fix
 release.
 > * As part of the post-release process, use the commit hash from the
 release branch version bump up, to increment the version on the non-release
 branches.
 >
 > I thought we could do this for non bug-fix releases too, but I was
 wrong. Minor versions need to be bumped up on stable branches 

Re: Lucene/Solr 5.5.1

2016-04-29 Thread Anshum Gupta
Seems like 5.5.0 back compat index was never added. Can someone confirm
that?
I have the RC but the smoke test failed when I ran it locally. Here's the
error:

Verify...
  confirm all releases have coverage in TestBackwardsCompatibility
find all past Lucene releases...
run TestBackwardsCompatibility..
  Backcompat testing not required for release 6.0.0 because it's not
less than 5.5.1
Releases that don't seem to be tested:
  5.5.0
Traceback (most recent call last):
  File "dev-tools/scripts/smokeTestRelease.py", line 1443, in <module>
    main()
  File "dev-tools/scripts/smokeTestRelease.py", line 1387, in main
    smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' '.join(c.test_args))
  File "dev-tools/scripts/smokeTestRelease.py", line 1425, in smokeTest
    unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % version, gitRevision, version, testArgs, baseURL)
  File "dev-tools/scripts/smokeTestRelease.py", line 589, in unpackAndVerify
    verifyUnpacked(java, project, artifact, unpackPath, gitRevision, version, testArgs, tmpDir, baseURL)
  File "dev-tools/scripts/smokeTestRelease.py", line 769, in verifyUnpacked
    confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
  File "dev-tools/scripts/smokeTestRelease.py", line 1380, in confirmAllReleasesAreTestedForBackCompat
    raise RuntimeError('some releases are not tested by TestBackwardsCompatibility?')
RuntimeError: some releases are not tested by TestBackwardsCompatibility?



On Fri, Apr 29, 2016 at 11:05 AM, Anshum Gupta wrote:

> Something seems to be going on with TestManagedSchemaAPI as it's been
> consistently failing.
> I woke up with a fever today so I'll try and debug it some time later if
> I'm unable to get an RC built, but if I do get the RC, I'll get it out to
> vote and in parallel see if it's something that needs fixing unless someone
> else beats me to it.
>
> On Fri, Apr 29, 2016 at 9:26 AM, Anshum Gupta wrote:
>
>> That makes sense considering there are those checks for ignoring 1
>> missing version.
>>
>> On Fri, Apr 29, 2016 at 6:53 AM, Steve Rowe  wrote:
>>
>>> Anshum,
>>>
>>> TL;DR: When there is only one release in flight, I think it’s okay to
>>> run addVersion.py on all branches at the start of the release process for
>>> all types of releases.
>>>
>>> When we chatted last night I said backcompat index testing was a problem
>>> on non-release branches in the interval between adding a not-yet-released
>>> version to o.a.l.util.Version and when a backcompat index is committed on
>>> the branch.  I was wrong.
>>>
>>> Here are the places where there are back-compat coverage tests:
>>>
>>> 1. smokeTestRelease.py's confirmAllReleasesAreTestedForBackCompat() will
>>> succeed until release artifacts have been published - see
>>> getAllLuceneReleases() for where they are scraped off the lucene release
>>> list page on archive.apache.org.  So back-compat indexes should be
>>> generated and committed as soon as possible after publishing artifacts.
>>>
>>> 2. backward-codec’s TestBackwardsCompatibility.testAllVersionsTested()
>>> will still succeed if a single version is not tested.  Here’s the code:
>>>
>>>   // we could be missing up to 1 file, which may be due to a release
>>> that is in progress
>>>   if (missingFiles.size() <= 1 && extraFiles.isEmpty()) {
>>>
>>> The above test could be improved by checking for the presence of
>>> published release artifacts for each release like smokeTestRelease.py does,
>>> and then not requiring the backcompat index be present for those that are
>>> not yet published; this would allow for multiple in-flight releases.
>>>
>>> Steve
>>>
>>> > On Apr 28, 2016, at 10:44 PM, Anshum Gupta wrote:
>>> >
>>> > I've updated the "Update Version Numbers in the Source Code" section
>>> on the ReleaseToDo page. It'd be good to have someone else also take a
>>> look at it.
>>> >
>>> > Here is what I've changed (only bug fix release):
>>> > * Only bump up the version on the release branch using addVersion.py
>>> > * Don't bump it up on the non-release versions in case of bug fix
>>> release.
>>> > * As part of the post-release process, use the commit hash from the
>>> release branch version bump up, to increment the version on the non-release
>>> branches.
>>> >
>>> > I thought we could do this for non bug-fix releases too, but I was
>>> wrong. Minor versions need to be bumped up on stable branches (and trunk)
>>> because during the release process for say version 6.1, there might be
>>> commits for 6.2 and we'd need stable branches and master, both to support
>>> those commits.
>>> > We could debate about not needing something like this for major
>>> versions but then I don't think it's worth the pain of different release
>>> processes for each branch but I'm not stuck up with this.
>>> >
>>> >
>>> > On Thu, Apr 28, 2016 at 5:31 PM, Anshum Gupta 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 52 - Still Failing

2016-04-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/52/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=1094, name=collection4, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=1094, name=collection4, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]
at __randomizedtesting.SeedInfo.seed([D87C2A1E874D0DFE:502815C429B16006]:0)
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:55263: collection already exists: awholynewstresscollection_collection4_5
at __randomizedtesting.SeedInfo.seed([D87C2A1E874D0DFE]:0)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:404)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:357)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1192)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:962)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:898)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1595)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1616)
at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:987)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestTolerantUpdateProcessorRandomCloud

Error Message:
Could not find collection:test_col

Stack Trace:
java.lang.AssertionError: Could not find collection:test_col
at __randomizedtesting.SeedInfo.seed([D87C2A1E874D0DFE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:150)
at org.apache.solr.cloud.TestTolerantUpdateProcessorRandomCloud.createMiniSolrCloudCluster(TestTolerantUpdateProcessorRandomCloud.java:135)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Commented] (SOLR-8970) SSLTestConfig behaves really stupid if keystore can't be found

2016-04-29 Thread Joseph Lawson (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265044#comment-15265044 ]

Joseph Lawson commented on SOLR-8970:
-

Thanks for sticking with this.

> SSLTestConfig behaves really stupid if keystore can't be found
> --
>
> Key: SOLR-8970
> URL: https://issues.apache.org/jira/browse/SOLR-8970
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8970.patch, SOLR-8970.patch
>
>
> The SSLTestConfig constructor lets the caller (notably SolrTestCaseJ4) tell it 
> whether clientAuth should be used (note SolrTestCaseJ4 calls this boolean 
> "trySslClientAuth") but it has a hardcoded assumption that the keystore file 
> it can use (for both the keystore and the truststore) will exist at a fixed 
> path in the solr install.
> when this works, it works fine - but if end users subclass/reuse 
> SolrTestCaseJ4 in their own projects, they may do so in a way that prevents 
> the SSLTestConfig keystore assumptions from being true, and yet they won't 
> get any sort of clear error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8970) SSLTestConfig behaves really stupid if keystore can't be found

2016-04-29 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8970:
---
Attachment: SOLR-8970.patch

updated patch to compile against master -- but after working on other SSL test 
related issues and getting more familiar with the code I realize what Joseph 
suggested isn't hard at all -- it would just require a bit of refactoring in 
how SSLConfig.createContextFactory works so that SSLTestConfig can override the 
method completely and produce its own KeyStore object (instead of a path).  All 
the plumbing to load KeyStore objects from resource files in the classpath 
already exists.

I'll look into this soon.
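
Roughly what that could look like (a sketch only; the resource name and 
password are placeholders, not the final patch):

{code}
// Load the test keystore as a classpath resource instead of a filesystem path,
// then hand the KeyStore object (not a path) to the SslContextFactory.
try (InputStream in = SSLTestConfig.class.getResourceAsStream("SSLTestConfig.keystore")) {
  if (in == null) {
    throw new IllegalStateException("test keystore not found on classpath");
  }
  KeyStore keystore = KeyStore.getInstance(KeyStore.getDefaultType());
  keystore.load(in, "secret".toCharArray());
  // ... configure the SslContextFactory with this KeyStore object
}
{code}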

> SSLTestConfig behaves really stupid if keystore can't be found
> --
>
> Key: SOLR-8970
> URL: https://issues.apache.org/jira/browse/SOLR-8970
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8970.patch, SOLR-8970.patch
>
>
> The SSLTestConfig constructor lets the caller (notably SolrTestCaseJ4) tell it 
> whether clientAuth should be used (note SolrTestCaseJ4 calls this boolean 
> "trySslClientAuth") but it has a hardcoded assumption that the keystore file 
> it can use (for both the keystore and the truststore) will exist at a fixed 
> path in the solr install.
> when this works, it works fine - but if end users subclass/reuse 
> SolrTestCaseJ4 in their own projects, they may do so in a way that prevents 
> the SSLTestConfig keystore assumptions from being true, and yet they won't 
> get any sort of clear error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-04-29 Thread Dennis Gove (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265007#comment-15265007 ]

Dennis Gove commented on SOLR-8996:
---

Sure thing. Below is the history of my commits on master that are waiting on 
this one to be backported to 6x.

{code}

commit e6e495c79588c60db1ac45bcba1a1dcaa970bcea
Author: Dennis Gove 
Date:   Tue Apr 19 13:49:26 2016 -0400

SOLR-8918: Corrects usage of a global variable in admin page's stream.js 
which was overriding the same variable in cloud.js

commit af7dad6825d47e76c39842e97be8b37ab4c2cffd
Author: Dennis Gove 
Date:   Tue Apr 19 11:40:20 2016 -0400

SOLR-8918: Adds Streaming to the admin page under the collections section

Includes ability to see graphically the expression explanation

commit 2e95a54a52878c1d6305a282a324705a79d56e65
Author: Dennis Gove 
Date:   Mon Apr 18 21:34:36 2016 -0400

SOLR-9009: Adds ability to get an Explanation of a Streaming Expression
{code}

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-04-29 Thread Joel Bernstein (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264975#comment-15264975 ]

Joel Bernstein commented on SOLR-8492:
--

Yep, I see it. If the client cache wasn't set, it was creating a client and not 
closing it. Looks like you changed it to always use the cache. Looks good!
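
For reference, the cache-based pattern looks roughly like this (a sketch using 
the streaming API's SolrClientCache, not the exact patch):

{code}
// Obtain clients from the shared cache so the cache owns their lifecycle;
// no per-call close() is needed -- the cache closes them when it is closed.
SolrClientCache clientCache = streamContext.getSolrClientCache();
CloudSolrClient cloudClient = clientCache.getCloudSolrClient(zkHost);
// ... use cloudClient; do not close it here
{code}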

> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Fix For: 6.1
>
> Attachments: SOLR-8492.diff, SOLR-8492.diff, SOLR-8492.patch, 
> SOLR-8492.patch, SOLR-8492.patch, SOLR-8492.patch, SOLR-8492.patch, 
> SOLR-8492.patch, SOLR-8492.patch, SOLR-8492.patch, logit.csv
>
>
> This ticket is to add a new query called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data 
> is collected from the shards and the weights are averaged and sent back to 
> the shards with the next iteration. Each call to read() returns a Tuple with 
> the averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream will return the EOF Tuple when it reaches the defined 
> maxIterations. When sent as a Streaming Expression to the Stream handler this 
> provides parallel iterative behavior. This same approach can be used to 
> implement other parallel iterative algorithms.
> The initial patch has a test which simply tests the mechanics of the 
> iteration. More work will need to be done to ensure the SGD is properly 
> implemented. The distributed approach of the SGD will also need to be 
> reviewed.  
> This implementation is designed for use cases with a small number of features 
> because each feature is its own discrete field.
> An implementation which supports a higher number of features would be 
> possible by packing features into a byte array and storing as binary 
> DocValues.
> This implementation is designed to support a large sample set. With a large 
> number of shards, a sample set into the billions may be possible.
> sample Streaming Expression Syntax:
> {code}
> logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8574) Implement ConnectionImpl.isValid() and DatabaseMetaDataImpl.getConnection()

2016-04-29 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8574:
---
Fix Version/s: (was: master)
   6.0

> Implement ConnectionImpl.isValid() and DatabaseMetaDataImpl.getConnection()
> ---
>
> Key: SOLR-8574
> URL: https://issues.apache.org/jira/browse/SOLR-8574
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-8574.patch
>
>
> 2016-01-20 10:20:29.631 INFO   685 [ExecutorRunner-pool-2-thread-2 - 
> AbstractFacade.isValid] isValid() throws an exception. Physical connection: 
> 'RootConnection' for: 'abc'. Will consider connection as valid
> java.lang.UnsupportedOperationException
>   at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.isValid(ConnectionImpl.java:284)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.onseven.dbvis.h.B.C.ᅣチ(Z:2117)
>   at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9020) Implement StatementImpl/ResultSetImpl get/set fetch* methods and proper errors for traversal methods

2016-04-29 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9020:
---
Fix Version/s: (was: master)

> Implement StatementImpl/ResultSetImpl get/set fetch* methods and proper 
> errors for traversal methods
> 
>
> Key: SOLR-9020
> URL: https://issues.apache.org/jira/browse/SOLR-9020
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master, 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
> Fix For: 6.1
>
> Attachments: SOLR-9020.patch
>
>
> There are 4 methods related to fetch in StatementImpl and 4 methods related 
> to fetch in ResultSetImpl. ResultSetImpl has some traversal methods that 
> don't make sense with the fetch direction. It would make sense to implement 
> them to support more SQL clients.
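> A sketch of the shape this could take (illustrative only, not the attached 
> patch):
> {code}
> @Override
> public void setFetchDirection(int direction) throws SQLException {
>   // only forward traversal makes sense for a streaming result set
>   if (direction != ResultSet.FETCH_FORWARD) {
>     throw new SQLException("Only FETCH_FORWARD is supported");
>   }
> }
>
> @Override
> public int getFetchDirection() throws SQLException {
>   return ResultSet.FETCH_FORWARD;
> }
> {code}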



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8602) Implement ResultSetImpl.wasNull()

2016-04-29 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8602:
---
Fix Version/s: (was: master)
   6.0

> Implement ResultSetImpl.wasNull()
> -
>
> Key: SOLR-8602
> URL: https://issues.apache.org/jira/browse/SOLR-8602
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-8602.patch, SOLR-8602.patch, SOLR-8602.patch, 
> SOLR-8602.patch, SOLR-8602.patch, SOLR-8602.patch, SOLR-8602.patch
>
>
> ResultSetImpl.wasNull is necessary for SQL clients to display a SQL NULL 
> instead of 0 or false for certain get* commands.
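> For context, this is the standard JDBC pattern that wasNull() enables 
> (illustrative):
> {code}
> long price = rs.getLong("price");  // getLong() must return 0 for SQL NULL
> if (rs.wasNull()) {
>   // the column was really NULL, not 0 -- the client can display NULL
> }
> {code}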



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8809) Implement Connection.prepareStatement

2016-04-29 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8809:
---
Fix Version/s: (was: master)

> Implement Connection.prepareStatement
> -
>
> Key: SOLR-8809
> URL: https://issues.apache.org/jira/browse/SOLR-8809
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master, 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
> Fix For: 6.1
>
> Attachments: SOLR-8809.patch, SOLR-8809.patch, SOLR-8809.patch
>
>
> There are multiple JDBC clients that require a PreparedStatement to work.
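> Typical client usage this would unblock (a sketch; the connection URL and 
> query are illustrative):
> {code}
> Connection con = DriverManager.getConnection("jdbc:solr://zkhost:9983?collection=test");
> PreparedStatement stmt = con.prepareStatement("select fielda, fieldb from test limit 10");
> ResultSet rs = stmt.executeQuery();
> {code}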



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8603) Implement StatementImpl.getMoreResults()

2016-04-29 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8603:
---
Fix Version/s: (was: master)
   6.0

> Implement StatementImpl.getMoreResults()
> 
>
> Key: SOLR-8603
> URL: https://issues.apache.org/jira/browse/SOLR-8603
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Fix For: 6.0
>
> Attachments: SOLR-8603.patch
>
>
> JiSQL requires getMoreResults to be implemented. Here is the stacktrace:
> java.lang.UnsupportedOperationException
>   at 
> org.apache.solr.client.solrj.io.sql.StatementImpl.getMoreResults(StatementImpl.java:232)
>   at com.xigole.util.sql.Jisql.doIsql(Jisql.java:443)
>   at com.xigole.util.sql.Jisql.run(Jisql.java:296)
>   at com.xigole.util.sql.Jisql.main(Jisql.java:271)
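> A minimal sketch of what Jisql needs (an assumption, not the attached patch):
> {code}
> @Override
> public boolean getMoreResults() throws SQLException {
>   // a Solr SQL statement yields a single ResultSet, so per the JDBC
>   // contract simply report that no further results exist
>   return false;
> }
> {code}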



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9019) SolrJ JDBC - Ensure that R RJDBC works with SolrJ JDBC

2016-04-29 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9019:
---
Fix Version/s: (was: master)

> SolrJ JDBC - Ensure that R RJDBC works with SolrJ JDBC
> --
>
> Key: SOLR-9019
> URL: https://issues.apache.org/jira/browse/SOLR-9019
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Kevin Risden
>Assignee: Kevin Risden
> Fix For: 6.1
>
>
> R has RJDBC (https://cran.r-project.org/web/packages/RJDBC/index.html) which 
> can connect to JDBC. Check that it works with SolrJ JDBC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8631) Throw UnsupportedOperationException for DatabaseMetaDataImpl.getTypeInfo()

2016-04-29 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8631:
---
Fix Version/s: (was: master)
   6.0

> Throw UnsupportedOperationException for DatabaseMetaDataImpl.getTypeInfo()
> --
>
> Key: SOLR-8631
> URL: https://issues.apache.org/jira/browse/SOLR-8631
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Fix For: 6.0
>
> Attachments: SOLR-8631.patch
>
>
> Once getSchemas (SOLR-8510) is implemented, DBVisualizer tries to get type 
> information with getDataTypes and fails with a NPE. A short term workaround 
> is to throw an UnsupportedOperationException instead of returning null.
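> A sketch of that workaround (illustrative):
> {code}
> @Override
> public ResultSet getTypeInfo() throws SQLException {
>   // failing fast gives the client a clear error instead of an NPE on null
>   throw new UnsupportedOperationException();
> }
> {code}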
> {code}
> 2016-02-01 21:27:33.868 FINE   647 [pool-3-thread-4 - E.ᅣチ] RootConnection: 
> DatabaseMetaDataImpl.getTypeInfo()
> 2016-02-01 21:27:33.870 FINE   647 [AWT-EventQueue-0 - B.executionFinished] 
> Exception while Connecting
> com.onseven.dbvis.K.B.P: java.util.concurrent.ExecutionException: 
> java.lang.NullPointerException
>   at com.onseven.dbvis.K.B.L.ᅣチ(Z:2680)
>   at com.onseven.dbvis.K.B.L.ᅣチ(Z:1521)
>   at com.onseven.dbvis.K.B.L$3.run(Z:3032)
>   at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:311)
>   at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:756)
>   at java.awt.EventQueue.access$500(EventQueue.java:97)
>   at java.awt.EventQueue$3.run(EventQueue.java:709)
>   at java.awt.EventQueue$3.run(EventQueue.java:703)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:76)
>   at java.awt.EventQueue.dispatchEvent(EventQueue.java:726)
>   at 
> java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:201)
>   at 
> java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
>   at 
> java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
>   at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
>   at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
>   at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at javax.swing.SwingWorker.get(SwingWorker.java:602)
>   at com.onseven.dbvis.K.B.L.ᅣチ(Z:990)
>   ... 16 more
> Caused by: java.lang.NullPointerException
>   at com.onseven.dbvis.db.AbstractFacade.getDataTypes(Z:3212)
>   at com.onseven.dbvis.db.AbstractFacade.runConnectionSetup(Z:1260)
>   at com.onseven.dbvis.db.A.I.ᅣᄋ(Z:3512)
>   at com.onseven.dbvis.db.A.B.execute(Z:2933)
>   at com.onseven.dbvis.K.B.Z.ᅣチ(Z:2285)
>   at com.onseven.dbvis.K.B.L.ᅣツ(Z:1374)
>   at com.onseven.dbvis.K.B.L.doInBackground(Z:1521)
>   at javax.swing.SwingWorker$1.call(SwingWorker.java:295)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at javax.swing.SwingWorker.run(SwingWorker.java:334)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-04-29 Thread Cao Manh Dat (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264923#comment-15264923 ]

Cao Manh Dat commented on SOLR-8492:


[~joel.bernstein] the mem leak appears in the LogitCall class, which creates a 
solrclient and never closes it.

> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Fix For: 6.1
>
> Attachments: SOLR-8492.diff, SOLR-8492.diff, SOLR-8492.patch, 
> SOLR-8492.patch, SOLR-8492.patch, SOLR-8492.patch, SOLR-8492.patch, 
> SOLR-8492.patch, SOLR-8492.patch, SOLR-8492.patch, logit.csv
>
>
> This ticket is to add a new query called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data 
> is collected from the shards and the weights are averaged and sent back to 
> the shards with the next iteration. Each call to read() returns a Tuple with 
> the averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream will return the EOF Tuple when it reaches the defined 
> maxIterations. When sent as a Streaming Expression to the Stream handler this 
> provides parallel iterative behavior. This same approach can be used to 
> implement other parallel iterative algorithms.
> The initial patch has a test which simply tests the mechanics of the 
> iteration. More work will need to be done to ensure the SGD is properly 
> implemented. The distributed approach of the SGD will also need to be 
> reviewed.  
> This implementation is designed for use cases with a small number of features 
> because each feature is its own discrete field.
> An implementation which supports a higher number of features would be 
> possible by packing features into a byte array and storing as binary 
> DocValues.
> This implementation is designed to support a large sample set. With a large 
> number of shards, a sample set into the billions may be possible.
> sample Streaming Expression Syntax:
> {code}
> logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 551 - Still Failing!

2016-04-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/551/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, MockDirectoryWrapper, SolrCore, MDCAwareThreadPoolExecutor]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, MockDirectoryWrapper, SolrCore, MDCAwareThreadPoolExecutor]
at __randomizedtesting.SeedInfo.seed([BA2AEC149F732CD7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=936, name=searcherExecutor-406-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=936, name=searcherExecutor-406-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 

[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2016-04-29 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264910#comment-15264910
 ] 

Hrishikesh Gadre commented on SOLR-5750:


[~dsmiley] 

Please take a look at the following pull request: 
https://github.com/apache/lucene-solr/pull/36

Unfortunately I wasn't able to apply your latest patch, hence I had to send a 
PR. I am still working on refactoring the "restore" API, but in the meantime 
any feedback would be great.

> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: David Smiley
> Fix For: 5.2, master
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command, which can create snapshots of 
> the index, but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud, i.e. no manual 
> steps required other than bringing nodes back up or setting up a new cloud 
> cluster.
> SOLR-5340 will be a part of this issue.
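
As a sketch of how such an API might eventually be invoked (the action names, 
parameters, and location handling were still being settled in this issue, so 
everything below is illustrative rather than the committed API):

{code}
# Hypothetical Collections API calls; action names and parameters are illustrative.
curl "http://localhost:8983/solr/admin/collections?action=BACKUP&name=nightly&collection=books&location=/backups/solr"
curl "http://localhost:8983/solr/admin/collections?action=RESTORE&name=nightly&collection=books_restored&location=/backups/solr"
{code}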



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection

2016-04-29 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264907#comment-15264907
 ] 

Hrishikesh Gadre commented on SOLR-9038:


[~dsmiley] I have created a pull request 
https://github.com/apache/lucene-solr/pull/36

Please take a look. Unfortunately I wasn't able to apply your latest patch. 
Hence I had to send a PR. 

I am still working on refactoring the "restore" API, but in the meantime any 
feedback would be great. 

> Ability to create/delete/list snapshots for a solr collection
> -
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently work is under way to implement a backup/restore API for SolrCloud 
> (SOLR-5750). SOLR-5750 is about providing the ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for a Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340, which implements core-level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages
> - We can use specialized data-copying tools for transferring Solr index 
> files. E.g. in a Hadoop environment, the 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is 
> typically used to copy files from one location to another. This tool 
> provides various options to configure the degree of parallelism and 
> bandwidth usage, as well as integration with different types and versions of 
> file systems (e.g. AWS S3, Azure Blob store etc.)
> - This separation of concerns would also help Solr focus on its key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to tools built for that purpose.
> - Users can decide if/when to copy the data files as opposed to creating a 
> snapshot. E.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, a schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, he can copy the files 
> associated with the snapshot and restore.
> Note that Apache Blur project is also providing a similar feature 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]
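
As a sketch, the decoupled workflow being proposed might look like the 
following (the snapshot API actions are hypothetical here, taken from the 
create/delete wording of the issue title; the distcp step is standard Hadoop 
usage):

{code}
# 1) Pin a commit point under a name (hypothetical API, per this proposal)
curl "http://localhost:8983/solr/admin/collections?action=CREATESNAPSHOT&collection=books&commitName=pre-experiment"
# 2) Copy the pinned index files with an external tool, e.g. distcp on HDFS
hadoop distcp hdfs://nn1/solr/books hdfs://backupnn/backups/books-pre-experiment
# 3) Release the snapshot once it is no longer needed
curl "http://localhost:8983/solr/admin/collections?action=DELETESNAPSHOT&collection=books&commitName=pre-experiment"
{code}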



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection

2016-04-29 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-9038:
---
Comment: was deleted

(was: [~dsmiley] I have created a pull request 
https://github.com/apache/lucene-solr/pull/36

Please take a look. Unfortunately I wasn't able to apply your latest patch. 
Hence I had to send a PR. 

I am still working on refactoring the "restore" API, but in the meantime any 
feedback would be great.)

> Ability to create/delete/list snapshots for a solr collection
> -
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently work is under way to implement a backup/restore API for SolrCloud 
> (SOLR-5750). SOLR-5750 is about providing the ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for a Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340, which implements core-level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages
> - We can use specialized data-copying tools for transferring Solr index 
> files. E.g. in a Hadoop environment, the 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is 
> typically used to copy files from one location to another. This tool 
> provides various options to configure the degree of parallelism and 
> bandwidth usage, as well as integration with different types and versions of 
> file systems (e.g. AWS S3, Azure Blob store etc.)
> - This separation of concerns would also help Solr focus on its key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to tools built for that purpose.
> - Users can decide if/when to copy the data files as opposed to creating a 
> snapshot. E.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, a schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, he can copy the files 
> associated with the snapshot and restore.
> Note that Apache Blur project is also providing a similar feature 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: post-branch_6x Jira version renaming(s) got overlooked?

2016-04-29 Thread Chris Hostetter

: OK. I'm not sure you're missing anything. But I think we'll all know
: for sure pretty quickly once we're doing it.

That sounds like a ringing vote of confidence!


: Do you want help with this? Seems like you have it under control, but
: if you want to split it somehow, I can help a bit this afternoon.

Not just yet, thanks ... I want to mull it a bit more, and maybe start on 
it Monday.

One thing I'll definitely do before I start changing anything is use the 
"Bulk Edit" feature to add *comments* with 2 new unique magic strings to 
all of the issues that match the queries against the current "master" or 
"6.0".

Once that's done, we should be able to merge the versions with 
confidence, because we should always be able to do searches to find those 
2 distinct sets again if something gets fucked up and needs manual fixing 
-- and the various audits we need/want to do can be done after the fact 
using searches on those magic strings.

When it's time for auditing those ~100 6.1 Jiras, it might be worth 
splitting up the work, but it should go pretty quickly.


: 
: On Fri, Apr 29, 2016 at 2:47 PM, Chris Hostetter
:  wrote:
: >
: > : Yeah, good point, I forgot about the permutations with backported issues.
: > :
: > : But it's not just master + 6.1,  it's also master + 6.0. That's why
: > : the query I sent out looked for issues that had "master", but not
: > : either of those versions. If it's marked for 6.0 and also master, then
: > : it's meant for 7.0 (eventually).
: >
: > Not necessarily -- we have no way of knowing when "master" was put in
: > fixVersion, so "6.0, master" might mean "committed to master=7.0 and
: > branch_6x=6.0" or it might mean "committed to master which was then later
: > forked to branch_6x but then someone also added 6.0 explicitly when
: > resolving"
: >
: > In general, if we're going to merge master->6.0 we don't have to worry
: > about any issues that *currently* list both -- that will be resolved when
: > they merge.
: >
: > I'm pretty sure we only have to worry about:
: >
: > a) issues that list both "master
: > + 6.1" and whether that really means "committed to branch_6_0=6.0 and
: > branch_6x=6.1" or "committed to master=7.0 and branch_6x=6.0" ... which is
: > why I suggested a manual audit based on a Jira query.
: >
: > b) issues that *should* only list "master" once we are all done ... which
: > should be a really straightforward audit of the 7.0 CHANGES.txt.
: >
: > ...or am I still missing something?
: >
: > : generally assumed. We could remove master from all issues that already
: > : have another fixVersion (except the forward ones, 6.0 and 6.1), and
: > : then just deal with that list. It's much more manageable:
: > :
: > : 
https://issues.apache.org/jira/browse/SOLR-9046?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20not%20in%20releasedVersions()
: >
: > How would we remove master from those issues? The "Bulk Edit
: > replaces whole field" problem would force us to remove all fixVersions in
: > that case, wouldn't it?
: >
: >
: >
: >
: >
: > : > : > for both the LUCENE and SOLR project...
: > : > : >
: > : > : > 1) Audit the list of Jiras with 'fixVersion=master AND
: > : > : > fixVersion=6.1' and manually remove master from all of them
: > : > : > (only ~100 total)
: > : > : > 2) merge "master" into "6.0"
: > : > : > 3) re-add a "master" version to Jira
: > : > : > 4) Audit CHANGES.txt and set fixVersion=master on the handful of
: > : > : > issues in the 7.0 section
: >
: > -Hoss
: > http://www.lucidworks.com/
: >
: > -
: > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: > For additional commands, e-mail: dev-h...@lucene.apache.org
: >
: 
: -
: To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Solr snapshots

2016-04-29 Thread hgadre
GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/36

Solr snapshots

Refactor the Solr collection backup implementation (refactoring of the 
"restore" implementation is in progress).

The following changes are introduced:
- Added the Solr/Lucene version to check compatibility between the backup 
version and the version of Solr on which it is being restored.
- Similarly, added a backup implementation version to check compatibility 
between the "restore" implementation and the backup format.
- Introduced a Strategy interface to define how the Solr index data is 
backed up (e.g. using a file-copy approach).
- Introduced a Repository interface to define the file system used to store 
the backup data (currently works only with the local file system, but can be 
extended).


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr solr_5750_refactor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/36.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #36


commit 2d94cbd143ac40d7a0fb5136062775ce3d555d55
Author: Hrishikesh Gadre 
Date:   2016-03-24T22:44:38Z

Solr snapshots

Change-Id: I26a3aad80dff5b4f2f035f6371fb82ec3b77e4fe




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Minor typo in javadocs for join package?

2016-04-29 Thread Jeff Evans
Hi,

In the join package summary page, the first sentence under "Index-time 
joins" starts with "The
index-time joining support joins while searching."  I believe it
should say "The index-time joining support joins while indexing,"
correct?  Just trying to verify my understanding; thanks.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9047) zkcli should allow alternative locations for log4j configuration

2016-04-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264850#comment-15264850
 ] 

ASF subversion and git services commented on SOLR-9047:
---

Commit 6e2d80d3a8f4434499bbeee81afa47a52252c143 in lucene-solr's branch 
refs/heads/master from [~gchanan]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6e2d80d ]

SOLR-9047: fix windows script


> zkcli should allow alternative locations for log4j configuration
> 
>
> Key: SOLR-9047
> URL: https://issues.apache.org/jira/browse/SOLR-9047
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 6.1, trunk
>
> Attachments: SOLR-9047.patch, SOLR-9047.patch
>
>
> zkcli uses the log4j configuration in the local directory:
> {code}
> sdir="`dirname \"$0\"`"
> PATH=$JAVA_HOME/bin:$PATH $JVM 
> -Dlog4j.configuration=file:$sdir/log4j.properties -classpath 
> "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" 
> org.apache.solr.cloud.ZkCLI ${1+"$@"}
> {code}
> which is a reasonable default, but often people want to use a "global" log4j 
> configuration.  For example, one may define a log4j configuration that writes 
> to an external log directory and want to point to this rather than copying it 
> to each source checkout.
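
One minimal way to support this (a sketch only, assuming an 
environment-variable override is acceptable; the committed patch may well 
differ) is to let callers point the script at any log4j file:

{code}
# Sketch: honor LOG4J_PROPS if set, else fall back to the bundled default.
sdir="`dirname \"$0\"`"
LOG4J_PROPS="${LOG4J_PROPS:-file:$sdir/log4j.properties}"
PATH=$JAVA_HOME/bin:$PATH $JVM -Dlog4j.configuration=$LOG4J_PROPS \
  -classpath "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" \
  org.apache.solr.cloud.ZkCLI ${1+"$@"}
{code}

A caller could then run, e.g., LOG4J_PROPS=file:/var/solr/log4j.properties 
zkcli.sh without touching each source checkout.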



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9047) zkcli should allow alternative locations for log4j configuration

2016-04-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264849#comment-15264849
 ] 

ASF subversion and git services commented on SOLR-9047:
---

Commit ad152d23d5e70121f5e6ddc4bae5dabb288b96c2 in lucene-solr's branch 
refs/heads/branch_6x from [~gchanan]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ad152d2 ]

SOLR-9047: fix windows script


> zkcli should allow alternative locations for log4j configuration
> 
>
> Key: SOLR-9047
> URL: https://issues.apache.org/jira/browse/SOLR-9047
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 6.1, trunk
>
> Attachments: SOLR-9047.patch, SOLR-9047.patch
>
>
> zkcli uses the log4j configuration in the local directory:
> {code}
> sdir="`dirname \"$0\"`"
> PATH=$JAVA_HOME/bin:$PATH $JVM 
> -Dlog4j.configuration=file:$sdir/log4j.properties -classpath 
> "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" 
> org.apache.solr.cloud.ZkCLI ${1+"$@"}
> {code}
> which is a reasonable default, but often people want to use a "global" log4j 
> configuration.  For example, one may define a log4j configuration that writes 
> to an external log directory and want to point to this rather than copying it 
> to each source checkout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1000 - Still Failing

2016-04-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1000/

8 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload

Error Message:
expected:<[{indexVersion=1461962376037,generation=2,filelist=[_2pi.cfe, 
_2pi.cfs, _2pi.si, _2pj.fdt, _2pj.fdx, _2pj.fnm, _2pj.nvd, _2pj.nvm, _2pj.si, 
_2pj_Lucene50_0.doc, _2pj_Lucene50_0.tim, _2pj_Lucene50_0.tip, _2pk.cfe, 
_2pk.cfs, _2pk.si, _2pl.cfe, _2pl.cfs, _2pl.si, _2pm.cfe, _2pm.cfs, _2pm.si, 
_2pn.cfe, _2pn.cfs, _2pn.si, _2po.cfe, _2po.cfs, _2po.si, _2pp.cfe, _2pp.cfs, 
_2pp.si, _2pq.cfe, _2pq.cfs, _2pq.si, _2pr.cfe, _2pr.cfs, _2pr.si, _2ps.cfe, 
_2ps.cfs, _2ps.si, _2pt.cfe, _2pt.cfs, _2pt.si, _2pu.cfe, _2pu.cfs, _2pu.si, 
_2pv.cfe, _2pv.cfs, _2pv.si, _2pw.cfe, _2pw.cfs, _2pw.si, _2px.cfe, _2px.cfs, 
_2px.si, _2py.cfe, _2py.cfs, _2py.si, _2pz.cfe, _2pz.cfs, _2pz.si, _2q0.cfe, 
_2q0.cfs, _2q0.si, segments_2]}]> but 
was:<[{indexVersion=1461962376037,generation=2,filelist=[_2pi.cfe, _2pi.cfs, 
_2pi.si, _2pj.fdt, _2pj.fdx, _2pj.fnm, _2pj.nvd, _2pj.nvm, _2pj.si, 
_2pj_Lucene50_0.doc, _2pj_Lucene50_0.tim, _2pj_Lucene50_0.tip, _2pk.cfe, 
_2pk.cfs, _2pk.si, _2pl.cfe, _2pl.cfs, _2pl.si, _2pm.cfe, _2pm.cfs, _2pm.si, 
_2pn.cfe, _2pn.cfs, _2pn.si, _2po.cfe, _2po.cfs, _2po.si, _2pp.cfe, _2pp.cfs, 
_2pp.si, _2pq.cfe, _2pq.cfs, _2pq.si, _2pr.cfe, _2pr.cfs, _2pr.si, _2ps.cfe, 
_2ps.cfs, _2ps.si, _2pt.cfe, _2pt.cfs, _2pt.si, _2pu.cfe, _2pu.cfs, _2pu.si, 
_2pv.cfe, _2pv.cfs, _2pv.si, _2pw.cfe, _2pw.cfs, _2pw.si, _2px.cfe, _2px.cfs, 
_2px.si, _2py.cfe, _2py.cfs, _2py.si, _2pz.cfe, _2pz.cfs, _2pz.si, _2q0.cfe, 
_2q0.cfs, _2q0.si, segments_2]}, 
{indexVersion=1461962376037,generation=3,filelist=[_2q0.cfe, _2q0.cfs, _2q0.si, 
_2q1.fdt, _2q1.fdx, _2q1.fnm, _2q1.nvd, _2q1.nvm, _2q1.si, _2q1_Lucene50_0.doc, 
_2q1_Lucene50_0.tim, _2q1_Lucene50_0.tip, segments_3]}]>

Stack Trace:
java.lang.AssertionError: 
expected:<[{indexVersion=1461962376037,generation=2,filelist=[_2pi.cfe, 
_2pi.cfs, _2pi.si, _2pj.fdt, _2pj.fdx, _2pj.fnm, _2pj.nvd, _2pj.nvm, _2pj.si, 
_2pj_Lucene50_0.doc, _2pj_Lucene50_0.tim, _2pj_Lucene50_0.tip, _2pk.cfe, 
_2pk.cfs, _2pk.si, _2pl.cfe, _2pl.cfs, _2pl.si, _2pm.cfe, _2pm.cfs, _2pm.si, 
_2pn.cfe, _2pn.cfs, _2pn.si, _2po.cfe, _2po.cfs, _2po.si, _2pp.cfe, _2pp.cfs, 
_2pp.si, _2pq.cfe, _2pq.cfs, _2pq.si, _2pr.cfe, _2pr.cfs, _2pr.si, _2ps.cfe, 
_2ps.cfs, _2ps.si, _2pt.cfe, _2pt.cfs, _2pt.si, _2pu.cfe, _2pu.cfs, _2pu.si, 
_2pv.cfe, _2pv.cfs, _2pv.si, _2pw.cfe, _2pw.cfs, _2pw.si, _2px.cfe, _2px.cfs, 
_2px.si, _2py.cfe, _2py.cfs, _2py.si, _2pz.cfe, _2pz.cfs, _2pz.si, _2q0.cfe, 
_2q0.cfs, _2q0.si, segments_2]}]> but 
was:<[{indexVersion=1461962376037,generation=2,filelist=[_2pi.cfe, _2pi.cfs, 
_2pi.si, _2pj.fdt, _2pj.fdx, _2pj.fnm, _2pj.nvd, _2pj.nvm, _2pj.si, 
_2pj_Lucene50_0.doc, _2pj_Lucene50_0.tim, _2pj_Lucene50_0.tip, _2pk.cfe, 
_2pk.cfs, _2pk.si, _2pl.cfe, _2pl.cfs, _2pl.si, _2pm.cfe, _2pm.cfs, _2pm.si, 
_2pn.cfe, _2pn.cfs, _2pn.si, _2po.cfe, _2po.cfs, _2po.si, _2pp.cfe, _2pp.cfs, 
_2pp.si, _2pq.cfe, _2pq.cfs, _2pq.si, _2pr.cfe, _2pr.cfs, _2pr.si, _2ps.cfe, 
_2ps.cfs, _2ps.si, _2pt.cfe, _2pt.cfs, _2pt.si, _2pu.cfe, _2pu.cfs, _2pu.si, 
_2pv.cfe, _2pv.cfs, _2pv.si, _2pw.cfe, _2pw.cfs, _2pw.si, _2px.cfe, _2px.cfs, 
_2px.si, _2py.cfe, _2py.cfs, _2py.si, _2pz.cfe, _2pz.cfs, _2pz.si, _2q0.cfe, 
_2q0.cfs, _2q0.si, segments_2]}, 
{indexVersion=1461962376037,generation=3,filelist=[_2q0.cfe, _2q0.cfs, _2q0.si, 
_2q1.fdt, _2q1.fdx, _2q1.fnm, _2q1.nvd, _2q1.nvm, _2q1.si, _2q1_Lucene50_0.doc, 
_2q1_Lucene50_0.tim, _2q1_Lucene50_0.tip, segments_3]}]>
at 
__randomizedtesting.SeedInfo.seed([DF535571EAA3B3F9:FA844E419AEBBDFA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload(TestReplicationHandler.java:1147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)

[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+115) - Build # 532 - Failure!

2016-04-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/532/
Java: 64bit/jdk-9-ea+115 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([97F00310105859F6:E1CE1C63516FF4D9]:0)
at sun.nio.ch.Net.bind0(java.base@9-ea/Native Method)
at sun.nio.ch.Net.bind(java.base@9-ea/Net.java:446)
at sun.nio.ch.Net.bind(java.base@9-ea/Net.java:438)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(java.base@9-ea/ServerSocketChannelImpl.java:225)
at 
sun.nio.ch.ServerSocketAdaptor.bind(java.base@9-ea/ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:326)
at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:244)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:384)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:327)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:352)
at 
org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll(TestMiniSolrCloudCluster.java:443)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264767#comment-15264767
 ] 

David Smiley commented on SOLR-8323:


Pardon the distraction from the fine work going on here, but I'd like to 
possibly emulate this code review process on other issue(s).  Is it necessary 
to create a branch on some other/personal repo and then issue a pull request 
(as was done here, I see), or is it possible for someone to review commits to 
a branch on our repo/mirror?  I'm thinking SOLR-5750 -- 
https://github.com/apache/lucene-solr/commits/solr-5750   (feel free to make 
a comment to test).

> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9047) zkcli should allow alternative locations for log4j configuration

2016-04-29 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-9047.
--
   Resolution: Fixed
Fix Version/s: trunk
   6.1

Thanks for taking a look, Mark and Christine.  Committed to trunk and 6.1.

> zkcli should allow alternative locations for log4j configuration
> 
>
> Key: SOLR-9047
> URL: https://issues.apache.org/jira/browse/SOLR-9047
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 6.1, trunk
>
> Attachments: SOLR-9047.patch, SOLR-9047.patch
>
>
> zkcli uses the log4j configuration in the local directory:
> {code}
> sdir="`dirname \"$0\"`"
> PATH=$JAVA_HOME/bin:$PATH $JVM 
> -Dlog4j.configuration=file:$sdir/log4j.properties -classpath 
> "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" 
> org.apache.solr.cloud.ZkCLI ${1+"$@"}
> {code}
> which is a reasonable default, but often people want to use a "global" log4j 
> configuration.  For example, one may define a log4j configuration that writes 
> to an external log directory and want to point to this rather than copying it 
> to each source checkout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9047) zkcli should allow alternative locations for log4j configuration

2016-04-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264760#comment-15264760
 ] 

ASF subversion and git services commented on SOLR-9047:
---

Commit 67ebfb1cc257808e53f74d9c38a9729ded87a330 in lucene-solr's branch 
refs/heads/branch_6x from [~gchanan]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=67ebfb1 ]

SOLR-9047: zkcli should allow alternative locations for log4j configuration


> zkcli should allow alternative locations for log4j configuration
> 
>
> Key: SOLR-9047
> URL: https://issues.apache.org/jira/browse/SOLR-9047
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: SOLR-9047.patch, SOLR-9047.patch
>
>
> zkcli uses the log4j configuration in the local directory:
> {code}
> sdir="`dirname \"$0\"`"
> PATH=$JAVA_HOME/bin:$PATH $JVM 
> -Dlog4j.configuration=file:$sdir/log4j.properties -classpath 
> "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" 
> org.apache.solr.cloud.ZkCLI ${1+"$@"}
> {code}
> which is a reasonable default, but often people want to use a "global" log4j 
> configuration.  For example, one may define a log4j configuration that writes 
> to an external log directory and want to point to this rather than copying it 
> to each source checkout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264743#comment-15264743
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61644721
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -491,19 +493,28 @@ private void refreshLegacyClusterState(Watcher 
watcher)
   final Stat stat = new Stat();
   final byte[] data = zkClient.getData(CLUSTER_STATE, watcher, stat, 
true);
   final ClusterState loadedData = ClusterState.load(stat.getVersion(), 
data, emptySet(), CLUSTER_STATE);
+  final Set liveNodes = new HashSet<>(this.liveNodes);
   synchronized (getUpdateLock()) {
 if (this.legacyClusterStateVersion >= stat.getVersion()) {
   // Nothing to do, someone else updated same or newer.
   return;
 }
-this.legacyCollectionStates = loadedData.getCollectionStates();
-this.legacyClusterStateVersion = stat.getVersion();
-for (Map.Entry entry : 
this.legacyCollectionStates.entrySet()) {
-  if (entry.getValue().isLazilyLoaded() == false) {
-// a watched collection - trigger notifications
-notifyStateWatchers(entry.getKey(), entry.getValue().get());
+LOG.info("Updating legacy cluster state - {} entries in 
legacyCollectionStates", legacyCollectionStates.size());
+for (Map.Entry watchEntry : 
this.collectionWatches.entrySet()) {
+  String coll = watchEntry.getKey();
+  CollectionWatch collWatch = watchEntry.getValue();
+  ClusterState.CollectionRef ref = 
this.legacyCollectionStates.get(coll);
+  if (ref == null)
+continue;
+  // watched collection, so this will always be local
+  DocCollection newState = ref.get();
+  if (!collWatch.stateWatchers.isEmpty()
+  && 
!Objects.equals(loadedData.getCollectionStates().get(coll).get(), newState)) {
+notifyStateWatchers(liveNodes, coll, newState);
--- End diff --

I just realized you don't want to call user code while holding the update 
lock.  I think you're going to need to move this out of the synchronized block. 
 In fact this is really nasty now that I think about it.  In general, 
you're going to want to defer calling any user code until the current 
constructState() operation finishes.  Otherwise, the user code is potentially 
going to see a stale copy of the state that you haven't finished updating yet.

I think we're going to have to build a queue of outstanding state watchers 
to notify and always call them later, probably in an executor.  I know that 
sounds like a bit of work, but I'm not sure I can see how it would be safe 
otherwise.

@markrmiller any thoughts?


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-8323

2016-04-29 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61644721
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -491,19 +493,28 @@ private void refreshLegacyClusterState(Watcher 
watcher)
   final Stat stat = new Stat();
   final byte[] data = zkClient.getData(CLUSTER_STATE, watcher, stat, 
true);
   final ClusterState loadedData = ClusterState.load(stat.getVersion(), 
data, emptySet(), CLUSTER_STATE);
+  final Set liveNodes = new HashSet<>(this.liveNodes);
   synchronized (getUpdateLock()) {
 if (this.legacyClusterStateVersion >= stat.getVersion()) {
   // Nothing to do, someone else updated same or newer.
   return;
 }
-this.legacyCollectionStates = loadedData.getCollectionStates();
-this.legacyClusterStateVersion = stat.getVersion();
-for (Map.Entry entry : 
this.legacyCollectionStates.entrySet()) {
-  if (entry.getValue().isLazilyLoaded() == false) {
-// a watched collection - trigger notifications
-notifyStateWatchers(entry.getKey(), entry.getValue().get());
+LOG.info("Updating legacy cluster state - {} entries in 
legacyCollectionStates", legacyCollectionStates.size());
+for (Map.Entry watchEntry : 
this.collectionWatches.entrySet()) {
+  String coll = watchEntry.getKey();
+  CollectionWatch collWatch = watchEntry.getValue();
+  ClusterState.CollectionRef ref = 
this.legacyCollectionStates.get(coll);
+  if (ref == null)
+continue;
+  // watched collection, so this will always be local
+  DocCollection newState = ref.get();
+  if (!collWatch.stateWatchers.isEmpty()
+  && 
!Objects.equals(loadedData.getCollectionStates().get(coll).get(), newState)) {
+notifyStateWatchers(liveNodes, coll, newState);
--- End diff --

I just realized you don't want to call user code while holding the update 
lock.  I think you're going to need to move this out of the synchronized block. 
 In fact this is really nasty now that I think about it.  In general, 
you're going to want to defer calling any user code until the current 
constructState() operation finishes.  Otherwise, the user code is potentially 
going to see a stale copy of the state that you haven't finished updating yet.

I think we're going to have to build a queue of outstanding state watchers 
to notify and always call them later, probably in an executor.  I know that 
sounds like a bit of work, but I'm not sure I can see how it would be safe 
otherwise.

@markrmiller any thoughts?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264726#comment-15264726
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61643749
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStatePredicate.java 
---
@@ -30,8 +30,9 @@
   /**
* Check the collection state matches a required state
*
-   * The collectionState parameter may be null if the collection does not 
exist
-   * or has been deleted
+   * @param liveNodes the current set of live nodes
+   * @param collectionState the latest collection state, or null if the 
collection
+   *does not exist
--- End diff --

I think this needs to be below the "Note" lines to get formatted right.


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264728#comment-15264728
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61643877
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -256,9 +257,10 @@ public void updateClusterState() throws 
KeeperException, InterruptedException {
   refreshLegacyClusterState(null);
   // Need a copy so we don't delete from what we're iterating over.
   Collection safeCopy = new 
ArrayList<>(watchedCollectionStates.keySet());
+  Set liveNodes = new HashSet<>(this.liveNodes);
--- End diff --

You don't actually need a copy here, since `liveNodes` is an immutable set.


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-8323

2016-04-29 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61643877
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -256,9 +257,10 @@ public void updateClusterState() throws 
KeeperException, InterruptedException {
   refreshLegacyClusterState(null);
   // Need a copy so we don't delete from what we're iterating over.
   Collection safeCopy = new 
ArrayList<>(watchedCollectionStates.keySet());
+  Set liveNodes = new HashSet<>(this.liveNodes);
--- End diff --

You don't actually need a copy here, since `liveNodes` is an immutable set.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-8323

2016-04-29 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61643749
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStatePredicate.java 
---
@@ -30,8 +30,9 @@
   /**
* Check the collection state matches a required state
*
-   * The collectionState parameter may be null if the collection does not 
exist
-   * or has been deleted
+   * @param liveNodes the current set of live nodes
+   * @param collectionState the latest collection state, or null if the 
collection
+   *does not exist
--- End diff --

I think this needs to be below the "Note" lines to get formatted right.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264723#comment-15264723
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61643539
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
--- End diff --

It seems like it would be nice to shield callers from doing any kind of 
similar mutexing.  If you don't want to bother right now, I can come back and 
see if I can do something not yucky looking here.


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 

[GitHub] lucene-solr pull request: SOLR-8323

2016-04-29 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61643539
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java ---
@@ -1066,32 +1079,201 @@ public static String getCollectionPath(String 
coll) {
 return COLLECTIONS_ZKNODE+"/"+coll + "/state.json";
   }
 
-  public void addCollectionWatch(String coll) {
-if (interestingCollections.add(coll)) {
-  LOG.info("addZkWatch [{}]", coll);
-  new StateWatcher(coll).refreshAndWatch(false);
+  /**
+   * Notify this reader that a local Core is a member of a collection, and 
so that collection
+   * state should be watched.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * The number of cores per-collection is tracked, and adding multiple 
cores from the same
+   * collection does not increase the number of watches.
+   *
+   * @param collection the collection that the core is a member of
+   *
+   * @see ZkStateReader#unregisterCore(String)
+   */
+  public void registerCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+reconstructState.set(true);
+v = new CollectionWatch();
+  }
+  v.coreRefCount++;
+  return v;
+});
+if (reconstructState.get()) {
+  new StateWatcher(collection).refreshAndWatch();
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Notify this reader that a local core that is a member of a collection 
has been closed.
+   *
+   * Not a public API.  This method should only be called from 
ZkController.
+   *
+   * If no cores are registered for a collection, and there are no {@link 
CollectionStateWatcher}s
+   * for that collection either, the collection watch will be removed.
+   *
+   * @param collection the collection that the core belongs to
+   */
+  public void unregisterCore(String collection) {
+AtomicBoolean reconstructState = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null)
+return null;
+  if (v.coreRefCount > 0)
+v.coreRefCount--;
+  if (v.canBeRemoved()) {
+watchedCollectionStates.remove(collection);
+lazyCollectionStates.put(collection, new 
LazyCollectionRef(collection));
+reconstructState.set(true);
+return null;
+  }
+  return v;
+});
+if (reconstructState.get()) {
+  synchronized (getUpdateLock()) {
+constructState();
+  }
+}
+  }
+
+  /**
+   * Register a CollectionStateWatcher to be called when the state of a 
collection changes
+   *
+   * A given CollectionStateWatcher will be only called once.  If you want 
to have a persistent watcher,
+   * it should register itself again in its {@link 
CollectionStateWatcher#onStateChanged(Set, DocCollection)}
+   * method.
+   */
+  public void registerCollectionStateWatcher(String collection, 
CollectionStateWatcher stateWatcher) {
+AtomicBoolean watchSet = new AtomicBoolean(false);
+collectionWatches.compute(collection, (k, v) -> {
+  if (v == null) {
+v = new CollectionWatch();
+watchSet.set(true);
+  }
+  v.stateWatchers.add(stateWatcher);
+  return v;
+});
+if (watchSet.get()) {
+  new StateWatcher(collection).refreshAndWatch();
   synchronized (getUpdateLock()) {
 constructState();
   }
 }
   }
 
+  /**
+   * Block until a CollectionStatePredicate returns true, or the wait 
times out
+   *
+   * Note that the predicate may be called again even after it has 
returned true, so
+   * implementors should avoid changing state within the predicate call 
itself.
--- End diff --

It seems like it would be nice to shield callers from doing any kind of 
similar mutexing.  If you don't want to bother right now, I can come back and 
see if I can do something not yucky looking here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2016-04-29 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264704#comment-15264704
 ] 

Hrishikesh Gadre commented on SOLR-5750:


[~dsmiley] I am almost done refactoring the patch. I will submit it in the 
next couple of hours. 

>>to avoid risk of confusion with "snapshot" possibly being a named commit 
>>(SOLR-9038) in the log statements and backup.properties I'll call it a 
>>backupName, not snapshotName.

I have already fixed this in my patch.

> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: David Smiley
> Fix For: 5.2, master
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command, which can create snapshots of 
> the index, but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud, i.e. no manual 
> steps required other than bringing nodes back up or setting up a new cloud 
> cluster.
> SOLR-5340 will be a part of this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-04-29 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264694#comment-15264694
 ] 

Shikha Somani commented on SOLR-8297:
-

I tested this fix with various types of joins, such as:
 - a simple join between two collections (A -> B)
 - a multi-hop join (A -> B -> C)
 - a multi-collection join (A -> B, A -> C) in a single query

All test cases passed with the fix.
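
For reference, those query shapes might look like the following sketch (all 
collection and field names are hypothetical; collections A, B, and C are 
sharded identically with router.field=key, and queries run against A using the 
standard join QParser syntax):

{noformat}
# simple join (A -> B)
q=*:*&fq={!join fromIndex=B from=key to=key}type:parent

# multi-hop join (A -> B -> C) via parameter dereferencing
q=*:*&fq={!join fromIndex=B from=key to=key v=$hop}&hop={!join fromIndex=C from=key to=key}type:root

# two joins in a single query (A -> B, A -> C)
q=*:*&fq={!join fromIndex=B from=key to=key}x:1&fq={!join fromIndex=C from=key to=key}y:2
{noformat}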

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the number of slices when we want to verify the 
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+115) - Build # 16617 - Still Failing!

2016-04-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16617/
Java: 64bit/jdk-9-ea+115 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 11963 lines...]
   [junit4] Suite: org.apache.solr.TestDistributedSearch
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_83EAB5C7C481F3B6-001/init-core-data-001
   [junit4]   2> 1330538 INFO  
(SUITE-TestDistributedSearch-seed#[83EAB5C7C481F3B6]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true)
   [junit4]   2> 1330539 INFO  
(SUITE-TestDistributedSearch-seed#[83EAB5C7C481F3B6]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
/crbxf/r
   [junit4]   2> 1330598 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_83EAB5C7C481F3B6-001/tempDir-001/control/cores/collection1
   [junit4]   2> 1330599 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] o.e.j.s.Server 
jetty-9.3.8.v20160314
   [junit4]   2> 1330600 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@2947df20{/crbxf/r,null,AVAILABLE}
   [junit4]   2> 1330601 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.e.j.u.s.SslContextFactory x509=X509@41754f60(solrtest,h=[],w=[]) for 
SslContextFactory@7b80fe9e(file:///home/jenkins/workspace/Lucene-Solr-master-Linux/solr/server/etc/test/solrtest.keystore,file:///home/jenkins/workspace/Lucene-Solr-master-Linux/solr/server/etc/test/solrtest.keystore)
   [junit4]   2> 1330602 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.e.j.s.ServerConnector Started ServerConnector@6c298011{SSL,[ssl, 
http/1.1]}{127.0.0.1:43624}
   [junit4]   2> 1330603 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] o.e.j.s.Server 
Started @1332733ms
   [junit4]   2> 1330603 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/crbxf/r, 
hostPort=43624, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_83EAB5C7C481F3B6-001/tempDir-001/control/cores}
   [junit4]   2> 1330603 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 
jdk.internal.loader.ClassLoaders$AppClassLoader@546a03af
   [junit4]   2> 1330603 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_83EAB5C7C481F3B6-001/tempDir-001/control'
   [junit4]   2> 1330603 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 1330603 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.c.SolrResourceLoader solr home defaulted to 'solr/' (could not find 
system property or JNDI)
   [junit4]   2> 1330603 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_83EAB5C7C481F3B6-001/tempDir-001/control/solr.xml
   [junit4]   2> 1330606 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.c.CorePropertiesLocator Config-defined core root directory: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_83EAB5C7C481F3B6-001/tempDir-001/control/cores
   [junit4]   2> 1330606 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.c.CoreContainer New CoreContainer 12648045
   [junit4]   2> 1330606 INFO  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.c.CoreContainer Loading cores into CoreContainer 
[instanceDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_83EAB5C7C481F3B6-001/tempDir-001/control]
   [junit4]   2> 1330607 WARN  
(TEST-TestDistributedSearch.test-seed#[83EAB5C7C481F3B6]) [] 
o.a.s.c.CoreContainer Couldn't add files from 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_83EAB5C7C481F3B6-001/tempDir-001/control/lib
 to classpath: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_83EAB5C7C481F3B6-001/tempDir-001/control/lib
   

[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2016-04-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264676#comment-15264676
 ] 

David Smiley commented on SOLR-5750:


I'll give some more time for review, maybe until Monday, unless there are 
further changes to be done from any review/feedback.  Some things I think I 
want to change (which I will do today):
* remove the Overseer.processMessage case statements for RESTORE & BACKUP, as 
they simply aren't used.  This resolves a nocommit.
* to avoid risk of confusion with "snapshot" possibly being a named commit 
(SOLR-9038) in the log statements and backup.properties I'll call it a 
backupName, not snapshotName.

Tentative CHANGES.txt is as follows:
{noformat}
* SOLR-5750: Add /admin/collections?action=BACKUP and RESTORE assuming access 
to a shared file system.
  (Varun Thacker, David Smiley)
{noformat}
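
As a usage sketch (hypothetical collection, backup name, and location; exact 
parameter names may differ in the final patch):

{noformat}
# 'location' must be a shared path that every node can reach
/admin/collections?action=BACKUP&name=mybackup&collection=techproducts&location=/mnt/shared-backups
/admin/collections?action=RESTORE&name=mybackup&collection=techproducts_restored&location=/mnt/shared-backups
{noformat}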

About the "shared file system" requirement, it occurred to me this isn't really 
tested; it'd be nice it if failed fast if not all shards can see the backup 
location's ZK backup export.  I'm working on ensuring the backup fails if all 
slices don't see the backup directory that should be created at the start of 
the backup process.  This seems a small matter of ensuring that 
SnapShooter.validateCreateSnapshot call mkdir (which will fail if the parent 
dir isn't there) and not mkdirs but I'm testing to ensure the replication 
handler's use of SnapShooter is fine with this; I think it is.
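
The mkdir-vs-mkdirs distinction is the crux. A minimal sketch of the fail-fast 
idea (illustrative only, with a hypothetical path; not the actual SnapShooter 
code):

{code}
import java.io.File;
import java.io.IOException;

public class FailFastMkdir {
  public static void main(String[] args) throws IOException {
    // Hypothetical backup target on a shared mount. File.mkdir() creates only
    // the leaf directory and returns false when the parent is missing -- i.e.
    // when this node cannot see the shared file system. File.mkdirs() would
    // silently create the whole chain on local disk, masking the problem.
    File dir = new File("/mnt/shared-backups/mybackup/shard1");
    if (!dir.mkdir()) {
      throw new IOException("Backup location not reachable from this node: " + dir);
    }
  }
}
{code}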

> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: David Smiley
> Fix For: 5.2, master
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-04-29 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264662#comment-15264662
 ] 

Mikhail Khludnev commented on SOLR-8297:


To be honest, this fix exceeds my understanding of the SolrCloud. Can you extend 
existing {{DistribJoinFromCollectionTest}} to cover this scenario?

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the number of slices when we want to verify the 
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-04-29 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264662#comment-15264662
 ] 

Mikhail Khludnev edited comment on SOLR-8297 at 4/29/16 8:13 PM:
-

To be honest, this fix exceeds my understanding of the SolrCloud. Can you 
extend existing {{DistribJoinFromCollectionTest}} to cover this scenario?


was (Author: mkhludnev):
To be honest, this fix exceed my understanding of the SolrCloud. Can you extend 
existing {{DistribJoinFromCollectionTest}} to cover this scenario?

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the number of slices when we want to verify the 
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: post-branch_6x Jira version renaming(s) got overlooked?

2016-04-29 Thread Cassandra Targett
OK. I'm not sure you're missing anything. But, I think we'll all know
for sure pretty quickly once we're doing it.

Do you want help with this? Seems like you have it under control, but
if you want to split it somehow, I can help a bit this afternoon.

On Fri, Apr 29, 2016 at 2:47 PM, Chris Hostetter
 wrote:
>
> : Yeah, good point, I forgot about the permutations with backported issues.
> :
> : But it's not just master + 6.1,  it's also master + 6.0. That's why
> : the query I sent out looked for issues that had "master", but not
> : either of those versions. If it's marked for 6.0 and also master, then
> : it's meant for 7.0 (eventually).
>
> Not necessarily -- we have no way of knowing when "master" was put in
> fixVersion, so "6.0, master" might mean "committed to master=7.0 and
> branch_6x=6.0" or it might mean "committed to master which was then later
> forked to branch_6x but then someone also added 6.0 explicitly when
> resolving"
>
> in general, if we're going to merge master->6.0 we don't have to worry
> about any issues that *currently* list both -- that will be resolved when
> they merge.
>
> I'm pretty sure we only have to worry about:
>
> a) issues that list both "master
> + 6.1" and wether that really means "commited to branch_6_0=6.0 and
> branch_6x=6.1" or "commited to master=7.0 and branch_6x=6.0" ... which is
> why i suggested a manual audit based on jira query.
>
> b) issues that *should* only list "master" once we are all done ... which
> should be a really straightforward audit of the 7.0 CHANGES.txt.
>
> ...or am i still missing something?
>
> : generally assumed. We could remove master from all issues that already
> : have another fixVersion (except the forward ones, 6.0 and 6.1), and
> : then just deal with that list. It's much more manageable:
> :
> : 
> https://issues.apache.org/jira/browse/SOLR-9046?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20not%20in%20releasedVersions()
>
> how would we remove master from those issues? the "Bulk Edit
> replaces whole field" problem would force us to remove all fixVersions in
> that case wouldn't it?
>
>
>
>
>
> : > : > for both the LUCENE and SOLR project...
> : > : >
> : > : > 1) Audit the list of Jira's with 'fixVersion=master AND 
> fixVersion=6.1' and
> : > : > manually remove master from all of them (only ~100 total)
> : > : > 2) merge "master" into "6.0"
> : > : > 3) re add a "master" version to Jira
> : > : > 3) Audit CHANGES.txt and set fixVersion=master on the handful of 
> issues in
> : > : > the 7.0 section
>
> -Hoss
> http://www.lucidworks.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9050) IndexFetcher not retrying after SocketTimeoutException correctly, which leads to trying a full download again

2016-04-29 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-9050:
-
Attachment: SOLR-9050.patch

Updated patch (against the 5.3.1 branch, since that's the one I'm having issues 
with in prod). I've replicated the SocketTimeoutException locally, and the 
IndexFetcher retries as expected there, but I'm not seeing that in my prod 
server logs?

This patch uses the distribUpdate timeouts from UpdateShardHandler (configured 
in solr.xml) and adds better logging so we can get a clearer picture of what's 
happening with these re-downloads after an STE.
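
For reference, these are the solr.xml settings in question, in the <solrcloud> 
section (values shown are the usual defaults as I understand them; verify 
against your own config):

{code}
<solrcloud>
  <!-- connect/read timeouts (ms) used for distributed updates; the patch
       reuses them for replication file fetches as well -->
  <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
  <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
</solrcloud>
{code}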

> IndexFetcher not retrying after SocketTimeoutException correctly, which leads 
> to trying a full download again
> -
>
> Key: SOLR-9050
> URL: https://issues.apache.org/jira/browse/SOLR-9050
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 5.3.1
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-9050.patch, SOLR-9050.patch
>
>
> I'm seeing a problem where reading a large file from the leader (in SolrCloud 
> mode) during index replication leads to a SocketTimeoutException:
> {code}
> 2016-04-28 16:22:23.568 WARN  (RecoveryThread-foo_shard11_replica2) [c:foo 
> s:shard11 r:core_node139 x:foo_shard11_replica2] o.a.s.h.IndexFetcher Error 
> in fetching file: _405k.cfs (downloaded 7314866176 of 9990844536 bytes)
> java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:150)
> at java.net.SocketInputStream.read(SocketInputStream.java:121)
> at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
> at 
> org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
> at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
> at 
> org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:253)
> at 
> org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:227)
> at 
> org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:186)
> at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
> at 
> org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:80)
> at 
> org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89)
> at 
> org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:140)
> at 
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:167)
> at 
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:161)
> at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchPackets(IndexFetcher.java:1312)
> at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchFile(IndexFetcher.java:1275)
> at 
> org.apache.solr.handler.IndexFetcher.downloadIndexFiles(IndexFetcher.java:800)
> {code}
> and this leads to the following error in cleanup:
> {code}
> 2016-04-28 16:26:04.332 ERROR (RecoveryThread-foo_shard11_replica2) [c:foo 
> s:shard11 r:core_node139 x:foo_shard11_replica2] o.a.s.h.ReplicationHandler 
> Index fetch failed :org.apache.solr.common.SolrException: Unable to download 
> _405k.cfs completely. Downloaded 7314866176!=9990844536
> at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.cleanup(IndexFetcher.java:1406)
> at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchFile(IndexFetcher.java:1286)
> at 
> org.apache.solr.handler.IndexFetcher.downloadIndexFiles(IndexFetcher.java:800)
> at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:423)
> at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:254)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:380)
> at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:162)
> at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:437)
> at 
> org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:227)
> 2016-04-28 16:26:04.332 ERROR (RecoveryThread-foo_shard11_replica2) [c:foo 
> s:shard11 r:core_node139 x:foo_shard11_replica2] o.a.s.c.RecoveryStrategy 
> Error while trying to recover:org.apache.solr.common.SolrException: 
> Replication for recovery failed.
> at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:165)
> at 
> 

Re: post-branch_6x Jira version renaming(s) got overlooked?

2016-04-29 Thread Chris Hostetter

: Yeah, good point, I forgot about the permutations with backported issues.
: 
: But it's not just master + 6.1,  it's also master + 6.0. That's why
: the query I sent out looked for issues that had "master", but not
: either of those versions. If it's marked for 6.0 and also master, then
: it's meant for 7.0 (eventually).

Not necessarily -- we have no way of knowing when "master" was put in 
fixVersion, so "6.0, master" might mean "committed to master=7.0 and 
branch_6x=6.0" or it might mean "committed to master which was then later 
forked to branch_6x but then someone also added 6.0 explicitly when 
resolving"

in general, if we're going to merge master->6.0 we don't have to worry 
about any issues that *currently* list both -- that will be resolved when 
they merge.

I'm pretty sure we only have to worry about:

a) issues that list both "master 
+ 6.1" and wether that really means "commited to branch_6_0=6.0 and 
branch_6x=6.1" or "commited to master=7.0 and branch_6x=6.0" ... which is 
why i suggested a manual audit based on jira query.

b) issues that *should* only list "master" once we are all done ... which 
should be a really straightforward audit of the 7.0 CHANGES.txt.

...or am i still missing something?

: generally assumed. We could remove master from all issues that already
: have another fixVersion (except the forward ones, 6.0 and 6.1), and
: then just deal with that list. It's much more manageable:
: 
: 
https://issues.apache.org/jira/browse/SOLR-9046?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20not%20in%20releasedVersions()

how would we remove master from those issues? the "Bulk Edit 
replaces whole field" problem would force us to remove all fixVersions in 
that case wouldn't it?





: > : > for both the LUCENE and SOLR project...
: > : >
: > : > 1) Audit the list of Jira's with 'fixVersion=master AND fixVersion=6.1' 
and
: > : > manually remove master from all of them (only ~100 total)
: > : > 2) merge "master" into "6.0"
: > : > 3) re add a "master" version to Jira
: > : > 3) Audit CHANGES.txt and set fixVersion=master on the handful of issues 
in
: > : > the 7.0 section

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: post-branch_6x Jira version renaming(s) got overlooked?

2016-04-29 Thread Cassandra Targett
Yeah, good point, I forgot about the permutations with backported issues.

But it's not just master + 6.1,  it's also master + 6.0. That's why
the query I sent out looked for issues that had "master", but not
either of those versions. If it's marked for 6.0 and also master, then
it's meant for 7.0 (eventually).

David did bring up a good point, though, which is that if it has a
prior version (4.x, 5.x) then, the fact it's also in 5.x or 6.x is
generally assumed. We could remove master from all issues that already
have another fixVersion (except the forward ones, 6.0 and 6.1), and
then just deal with that list. It's much more manageable:

https://issues.apache.org/jira/browse/SOLR-9046?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20not%20in%20releasedVersions()


On Fri, Apr 29, 2016 at 2:18 PM, Chris Hostetter
 wrote:
>
> : The biggest problem seems to be that bulk edit to set the version number
> : overrides any *additional* version numbers in those issues (they'd get
> : removed).  Assuming we can set multiple versions in bulk-edit, maybe we
> : only need to do this command once for every 5.x release? -- i.e. find all
> : issues with fix version master & 5.2, then replace it with 6.0 & 5.2.  Or
> : just replace with 5.2 for that matter -- code in 5.x is assumed to be in
> : all versions after (whatever "master" is).  When I close issues, I don't
>
> that doesn't really help for things that currently say "Fix Version: 5.3,
> 5.2.2, master" ... if you are running 5.2.0, it's important to know that
> if you aren't ready to upgrade to 6.0, but you need a fix to that bug, you
> can upgrade to either 5.3 or 5.2.2 -- but it wasn't fixed in 5.2.1.
>
> So just doing one bulk edit for every "5.x, master" pair isn't enough ...
> you can't even do *one* bulk edit for every 5.x.y, you'd have to do one
> bulk edit for every permutation of all possible 5.x.y combos ... Example:
> some bugs are "Fix Version: 5.3.2, 5.5, master, 5.4.1" while other bugs
> are "5.3.2, 5.4, master" (depending on when they were fixed/backported)
> ...
>
> ...all in all this would probably be 10x more tedious than just abandoning
> "master" and manually editing every issue in CHANGES.txt -- which in
> itself would already be more tedious than my current favorite idea of
> doing a jira "merge versions" and manually auditing the ~100 issues that
> already have master+6.1 ... which is probably as tedious as i'm willing to
> volunteer to be at this point (if other people want to volunteer for
> something more tedious i'm happy to let them)
>
>
> : On Fri, Apr 29, 2016 at 2:11 PM Chris Hostetter 
> : wrote:
> :
> : >
> : > : Is it possible there are 2100 of these?
> : >
> : > Possible or not, that's certainly what it looks like (1665 more in LUCENE)
> : >
> : > I woke up this morning thinking "Oh wait - doesn't jira have a way to
> : > merge Versions?" ... and the answer is "Yes" so i was going to suggest the
> : > following...
> : >
> : > for both the LUCENE and SOLR project...
> : >
> : > 1) Audit the list of Jira's with 'fixVersion=master AND fixVersion=6.1' and
> : > manually remove master from all of them (only ~100 total)
> : > 2) merge "master" into "6.0"
> : > 3) re add a "master" version to Jira
> : > 3) Audit CHANGES.txt and set fixVersion=master on the handful of issues in
> : > the 7.0 section
> : >
> : > ...but that was before i really looked at Cassandra's Jira queries...
> : >
> : > : I did the below JIRA query, only in the Solr project, looking for
> : > : Resolved or Closed issues with fixVersion of "master", but not with
> : > : fixVersion of 6.0 nor 6.1, resolved before 8 Apr 2016 (the release
> : > : date of Lucene/Solr 6).
> : > :
> : > :
> : > 
> https://issues.apache.org/jira/browse/SOLR-7712?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20!%3D%206.0%20AND%20fixVersion%20!%3D%206.1%20AND%20resolved%20%3C%20%222016%2F04%2F08%22
> : >
> : > ...if you sort by Resolved Date, it becomes really clear that we've fucked
> : > up on renaming/dealing with "master" for longer than just the 6.0 release
> : > ... it seems like we didn't do something correctly for 5.0 either.
> : >
> : > So i'm kind of at a loss now as to what the optimal solution would be.
> : >
> : > : It seems it would be easier to make some sort of "rename master" sort
> : > : of change and go back and fix the ones that shouldn't be changed
> : > : because they have been finished post-6.0 release, but I'm not seeing a
> : > : good way to make a single query for those.
> : >
> : > that kind of fits with my "Merge Version" idea ... but i'm not sure if/how
> : > to care about the really old issues 4.x which will start saying "Fixed in:
> : > ...,6.0" ... will that confuse people?  Will users see "Fixed in:
> : > 4.0-ALPHA, 6.0" and think there was a regression in 

Re: post-branch_6x Jira version renaming(s) got overlooked?

2016-04-29 Thread David Smiley
Yeah good point; nevermind.  Sorry for the noise.
+1 to your "current favorite idea of doing a jira 'merge versions' and
manually auditing the issues that etc."

On Fri, Apr 29, 2016 at 3:18 PM Chris Hostetter 
wrote:

>
> : The biggest problem seems to be that bulk edit to set the version number
> : overrides any *additional* version numbers in those issues (they'd get
> : removed).  Assuming we can set multiple versions in bulk-edit, maybe we
> : only need to do this command once for every 5.x release? -- i.e. find all
> : issues with fix version master & 5.2, then replace it with 6.0 & 5.2.  Or
> : just replace with 5.2 for that matter -- code in 5.x is assumed to be in
> : all versions after (whatever "master" is).  When I close issues, I don't
>
> that doesn't really help for things that currently say "Fix Version: 5.3,
> 5.2.2, master" ... if you are running 5.2.0, it's important to know that
> if you aren't ready to upgrade to 6.0, but you need a fix to that bug, you
> can upgrade to either 5.3 or 5.2.2 -- but it wasn't fixed in 5.2.1.
>
> So just doing one bulk edit for every "5.x, master" pair isn't enough ...
> you can't even do *one* bulk edit for every 5.x.y, you'd have to do one
> bulk edit for every permutation of all possible 5.x.y combos ... Example:
> some bugs are "Fix Version: 5.3.2, 5.5, master, 5.4.1" while other bugs
> are "5.3.2, 5.4, master" (depending on when they were fixed/backported)
> ...
>
> ...all in all this would probably be 10x more tedious than just abandoning
> "master" and manually editing every issue in CHANGES.txt -- which in
> itself would already be more tedious than my current favorite idea of
> doing a jira "merge versions" and manually auditing the ~100 issues that
> already have master+6.1 ... which is probably as tedious as i'm willing to
> volunteer to be at this point (if other people want to volunteer for
> something more tedious i'm happy to let them)
>
>
> : On Fri, Apr 29, 2016 at 2:11 PM Chris Hostetter <
> hossman_luc...@fucit.org>
> : wrote:
> :
> : >
> : > : Is it possible there are 2100 of these?
> : >
> : > Possible or not, that's certainly what it looks like (1665 more in
> LUCENE)
> : >
> : > I woke up this morning thinking "Oh wait - doesn't jira have a way to
> : > merge Versions?" ... and the answer is "Yes" so i was going to suggest
> the
> : > following...
> : >
> : > for both the LUCENE and SOLR project...
> : >
> : > 1) Audit the list of Jira's with 'fixVersion=master AND fixVersion=6.1'
> and
> : > manually remove master from all of them (only ~100 total)
> : > 2) merge "master" into "6.0"
> : > 3) re add a "master" version to Jira
> : > 3) Audit CHANGES.txt and set fixVersion=master on the handful of
> issues in
> : > the 7.0 section
> : >
> : > ...but that was before i really looked at Cassandra's Jira queries...
> : >
> : > : I did the below JIRA query, only in the Solr project, looking for
> : > : Resolved or Closed issues with fixVersion of "master", but not with
> : > : fixVersion of 6.0 nor 6.1, resolved before 8 Apr 2016 (the release
> : > : date of Lucene/Solr 6).
> : > :
> : > :
> : >
> https://issues.apache.org/jira/browse/SOLR-7712?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20!%3D%206.0%20AND%20fixVersion%20!%3D%206.1%20AND%20resolved%20%3C%20%222016%2F04%2F08%22
> : >
> : > ...if you sort by Resolved Date, it becomes really clear that we've
> fucked
> : > up on renaming/dealing with "master" for longer than just the 6.0
> release
> : > ... it seems like we didn't do something correctly for 5.0 either.
> : >
> : > So i'm kind of at a loss now as to what the optimal solution would be.
> : >
> : > : It seems it would be easier to make some sort of "rename master" sort
> : > : of change and go back and fix the ones that shouldn't be changed
> : > : because they have been finished post-6.0 release, but I'm not seeing
> a
> : > : good way to make a single query for those.
> : >
> : > that kind of fits with my "Merge Version" idea ... but i'm not sure
> if/how
> : > to care about the really old issues 4.x which will start saying "Fixed
> in:
> : > ...,6.0" ... will that confuse people?  Will users see "Fixed in:
> : > 4.0-ALPHA, 6.0" and think there was a regression in 5.x? ... or am i
> just
> : > over thinking things?
> : >
> : >
> : >
> : > The other option: straight up delete "master" so it disappears from
> all of
> : > these issues (we can add a new "master" back later) and then explicitly
> : > add 6.0 to every issue mentioned in the 6.0 CHANGES sections ...
> writing a
> : > little perl script to pull them out and build up a few jira search urls
> : > like "id in (SOLR-3085, SOLR-7560, SOLR-7707, SOLR-7707, ...)"
> wouldn't be
> : > too painful, and once we had those search URLs matching a few hundred
> : > issues each, we can use the "Bulk Edit" to add 6.0...
> : >
> : > ...oh fuck ... right, i forgot about this part...

Re: post-branch_6x Jira version renaming(s) got overlooked?

2016-04-29 Thread Chris Hostetter

: The biggest problem seems to be that bulk edit to set the version number
: overrides any *additional* version numbers in those issues (they'd get
: removed).  Assuming we can set multiple versions in bulk-edit, maybe we
: only need to do this command once for every 5.x release? -- i.e. find all
: issues with fix version master & 5.2, then replace it with 6.0 & 5.2.  Or
: just replace with 5.2 for that matter -- code in 5.x is assumed to be in
: all versions after (whatever "master" is).  When I close issues, I don't

that doesn't really help for things that currently say "Fix Version: 5.3, 
5.2.2, master" ... if you are running 5.2.0, it's important to know that 
if you aren't ready to upgrade to 6.0, but you need a fix to that bug, you 
can upgrade to either 5.3 or 5.2.2 -- but it wasn't fixed in 5.2.1.

So just doing one bulk edit for every "5.x, master" pair isn't enough ... 
you can't even do *one* bulk edit for every 5.x.y, you'd have to do one 
bulk edit for every permutation of all possible 5.x.y combos ... Example: 
some bugs are "Fix Version: 5.3.2, 5.5, master, 5.4.1" while other bugs 
are "5.3.2, 5.4, master" (depending on when they were fixed/backported) 
...

...all in all this would probably be 10x more tedious than just abandoning 
"master" and manually editing every issue in CHANGES.txt -- which in 
itself would already be more tedious than my current favorite idea of 
doing a jira "merge versions" and manually auditing the ~100 issues that 
already have master+6.1 ... which is probably as tedious as i'm willing to 
volunteer to be at this point (if other people want to volunteer for 
something more tedious i'm happy to let them)


: On Fri, Apr 29, 2016 at 2:11 PM Chris Hostetter 
: wrote:
: 
: >
: > : Is it possible there are 2100 of these?
: >
: > Possible or not, that's certainly what it looks like (1665 more in LUCENE)
: >
: > I woke up this morning thinking "Oh wait - doesn't jira have a way to
: > merge Versions?" ... and the answer is "Yes" so i was going to suggest the
: > following...
: >
: > for both the LUCENE and SOLR project...
: >
: > 1) Audit the list of Jira's with 'fixVersion=master AND fixVersion=6.1' and
: > manually remove master from all of them (only ~100 total)
: > 2) merge "master" into "6.0"
: > 3) re add a "master" version to Jira
: > 3) Audit CHANGES.txt and set fixVersion=master on the handful of issues in
: > the 7.0 section
: >
: > ...but that was before i really looked at Cassandra's Jira queries...
: >
: > : I did the below JIRA query, only in the Solr project, looking for
: > : Resolved or Closed issues with fixVersion of "master", but not with
: > : fixVersion of 6.0 nor 6.1, resolved before 8 Apr 2016 (the release
: > : date of Lucene/Solr 6).
: > :
: > :
: > 
https://issues.apache.org/jira/browse/SOLR-7712?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20!%3D%206.0%20AND%20fixVersion%20!%3D%206.1%20AND%20resolved%20%3C%20%222016%2F04%2F08%22
: >
: > ...if you sort by Resolved Date, it becomes really clear that we've fucked
: > up on renaming/dealing with "master" for longer than just the 6.0 release
: > ... it seems like we didn't do something correctly for 5.0 either.
: >
: > So i'm kind of at a loss now as to what the optimal solution would be.
: >
: > : It seems it would be easier to make some sort of "rename master" sort
: > : of change and go back and fix the ones that shouldn't be changed
: > : because they have been finished post-6.0 release, but I'm not seeing a
: > : good way to make a single query for those.
: >
: > that kind of fits with my "Merge Version" idea ... but i'm not sure if/how
: > to care about the really old issues 4.x which will start saying "Fixed in:
: > ...,6.0" ... will that confuse people?  Will users see "Fixed in:
: > 4.0-ALPHA, 6.0" and think there was a regression in 5.x? ... or am i just
: > over thinking things?
: >
: >
: >
: > The other option: straight up delete "master" so it disappears from all of
: > these issues (we can add a new "master" back later) and then explicitly
: > add 6.0 to every issue mentioned in the 6.0 CHANGES sections ... writing a
: > little perl script to pull them out and build up a few jira search urls
: > like "id in (SOLR-3085, SOLR-7560, SOLR-7707, SOLR-7707, ...)" wouldn't be
: > too painful, and once we had those search URLs matching a few hundred
: > issues each, we can use the "Bulk Edit" to add 6.0...
: >
: > ...oh fuck ... right, i forgot about this part...
: >
: > : Additionally, and sadly, in JIRA any bulk update to a field overwrites
: > : the existing value in the field. So if the fixVersion is "master" and
: > : "5.3", then doing a bulk update to "master" only would remove "5.3".
: >
: >
: > ...so i guess i'm back to my "Merge master -> 6.0" idea, and oh well to
: > any confusion there might be for those really old issues.
: >
: >
: > Anybody have a better suggestion?
: >
: >

[jira] [Commented] (SOLR-9050) IndexFetcher not retrying after SocketTimeoutException correctly, which leads to trying a full download again

2016-04-29 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264572#comment-15264572
 ] 

Timothy Potter commented on SOLR-9050:
--

hmmm ... I reproduced the STE locally, and the request gets retried multiple 
times there (as expected), but I didn't see that in my prod env? Or maybe I 
just got incomplete logs from my ops team :P

> IndexFetcher not retrying after SocketTimeoutException correctly, which leads 
> to trying a full download again
> -
>
> Key: SOLR-9050
> URL: https://issues.apache.org/jira/browse/SOLR-9050
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 5.3.1
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-9050.patch
>
>
> I'm seeing a problem where reading a large file from the leader (in SolrCloud 
> mode) during index replication leads to a SocketTimeoutException:
> {code}
> 2016-04-28 16:22:23.568 WARN  (RecoveryThread-foo_shard11_replica2) [c:foo 
> s:shard11 r:core_node139 x:foo_shard11_replica2] o.a.s.h.IndexFetcher Error 
> in fetching file: _405k.cfs (downloaded 7314866176 of 9990844536 bytes)
> java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:150)
> at java.net.SocketInputStream.read(SocketInputStream.java:121)
> at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
> at 
> org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
> at 
> org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
> at 
> org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:253)
> at 
> org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:227)
> at 
> org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:186)
> at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
> at 
> org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:80)
> at 
> org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89)
> at 
> org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:140)
> at 
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:167)
> at 
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:161)
> at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchPackets(IndexFetcher.java:1312)
> at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchFile(IndexFetcher.java:1275)
> at 
> org.apache.solr.handler.IndexFetcher.downloadIndexFiles(IndexFetcher.java:800)
> {code}
> and this leads to the following error in cleanup:
> {code}
> 2016-04-28 16:26:04.332 ERROR (RecoveryThread-foo_shard11_replica2) [c:foo 
> s:shard11 r:core_node139 x:foo_shard11_replica2] o.a.s.h.ReplicationHandler 
> Index fetch failed :org.apache.solr.common.SolrException: Unable to download 
> _405k.cfs completely. Downloaded 7314866176!=9990844536
> at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.cleanup(IndexFetcher.java:1406)
> at 
> org.apache.solr.handler.IndexFetcher$FileFetcher.fetchFile(IndexFetcher.java:1286)
> at 
> org.apache.solr.handler.IndexFetcher.downloadIndexFiles(IndexFetcher.java:800)
> at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:423)
> at 
> org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:254)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:380)
> at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:162)
> at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:437)
> at 
> org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:227)
> 2016-04-28 16:26:04.332 ERROR (RecoveryThread-foo_shard11_replica2) [c:foo 
> s:shard11 r:core_node139 x:foo_shard11_replica2] o.a.s.c.RecoveryStrategy 
> Error while trying to recover:org.apache.solr.common.SolrException: 
> Replication for recovery failed.
> at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:165)
> at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:437)
> at 
> org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:227)
> {code}
> So a simple read timeout exception leads to re-downloading the whole index 
> again, and 

[jira] [Commented] (SOLR-6584) Export handler causes bug in prefetch with very small indexes.

2016-04-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264559#comment-15264559
 ] 

Joel Bernstein commented on SOLR-6584:
--

I was looking for which release this went in, but it wasn't added to 
CHANGES.txt. I suspect this was an oversight at the time, along with not 
closing the ticket. This was a really small bug that only affected indexes that 
had fewer documents than the default rows param (10, I believe). But I'll see 
if I can track down which release it went in.
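
For context, the fix amounts to something like this sketch (hypothetical 
variable names; not the committed patch itself):

{code}
// Sketch only: cap the dummy docList used by the export handler at the index
// size, so prefetch never sees more results than there are documents.
int maxDocs = searcher.getIndexReader().numDocs();
int dummySize = Math.min(rows, maxDocs);
{code}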

> Export handler causes bug in prefetch with very small indexes.
> --
>
> Key: SOLR-6584
> URL: https://issues.apache.org/jira/browse/SOLR-6584
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-6584.patch
>
>
> When there are very few documents in the index the ExportQParserPlugin is 
> creating a dummy docList which is larger than the number of documents in the 
> index. This causes a bug during the prefetch stage of the QueryComponent.
> There really needs to be two fixes here.
> 1) The dummy docList should never be larger than the number of documents in 
> the index.
> 2) Prefetch should be turned off during exports as it's not doing anything 
> useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5750) Backup/Restore API for SolrCloud

2016-04-29 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-5750:
--

Assignee: David Smiley  (was: Varun Thacker)

> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: David Smiley
> Fix For: 5.2, master
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6584) Export handler causes bug in prefetch with very small indexes.

2016-04-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264533#comment-15264533
 ] 

Joel Bernstein commented on SOLR-6584:
--

Yes, I'll close it.

> Export handler causes bug in prefetch with very small indexes.
> --
>
> Key: SOLR-6584
> URL: https://issues.apache.org/jira/browse/SOLR-6584
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-6584.patch
>
>
> When there are very few documents in the index the ExportQParserPlugin is 
> creating a dummy docList which is larger than the number of documents in the 
> index. This causes a bug during the prefetch stage of the QueryComponent.
> There really needs to be two fixes here.
> 1) The dummy docList should never be larger than the number of documents in 
> the index.
> 2) Prefetch should be turned off during exports as it's not doing anything 
> useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6584) Export handler causes bug in prefetch with very small indexes.

2016-04-29 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-6584.

Resolution: Resolved

> Export handler causes bug in prefetch with very small indexes.
> --
>
> Key: SOLR-6584
> URL: https://issues.apache.org/jira/browse/SOLR-6584
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-6584.patch
>
>
> When there are very few documents in the index the ExportQParserPlugin is 
> creating a dummy docList which is larger than the number of documents in the 
> index. This causes a bug during the prefetch stage of the QueryComponent.
> There really needs to be two fixes here.
> 1) The dummy docList should never be larger than the number of documents in 
> the index.
> 2) Prefetch should be turned off during exports as it's not doing anything 
> useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7267) Field with an explicit TokenStream must be tokenized and then uses the default Analyzer offset gaps

2016-04-29 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-7267:
---

 Summary: Field with an explicit TokenStream must be tokenized and 
then uses the default Analyzer offset gaps
 Key: LUCENE-7267
 URL: https://issues.apache.org/jira/browse/LUCENE-7267
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Priority: Minor


This took me somewhat by surprise. We have some pretty complex code that uses 
multi-valued fields with explicit token streams (which provide their own offset 
data).

It was surprising to see that offsets for subsequent values were shifted by 1 
compared to what was explicitly provided in the OffsetAttribute. A bit of 
debugging showed this code inside {{PerField.invert}}:

{code}
  if (analyzed) {
invertState.position += 
docState.analyzer.getPositionIncrementGap(fieldInfo.name);
invertState.offset += docState.analyzer.getOffsetGap(fieldInfo.name);
  }
{code}

A field with an explicit token stream must still be declared as tokenized and 
PerField then thinks that this field must have come from an analyzer (where in 
fact it didn't):

{code}
  final boolean analyzed = fieldType.tokenized() && docState.analyzer != 
null;
{code}

While the default position increment gap is 0, the default offset gap isn't -- 
it's 1, causing the shift.

Thoughts?
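
A minimal sketch of the setup that triggers it (hypothetical field name; 
firstTokenStream/secondTokenStream stand in for our streams that carry their 
own OffsetAttribute data):

{code}
FieldType ft = new FieldType(TextField.TYPE_NOT_STORED);
ft.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
ft.setTokenized(true); // required for a Field built from an explicit TokenStream

Document doc = new Document();
doc.add(new Field("f", firstTokenStream, ft));  // offsets taken from the stream
doc.add(new Field("f", secondTokenStream, ft)); // indexed offsets end up shifted by
                                                // analyzer.getOffsetGap("f"), i.e. +1
{code}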



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9047) zkcli should allow alternative locations for log4j configuration

2016-04-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264495#comment-15264495
 ] 

ASF subversion and git services commented on SOLR-9047:
---

Commit 0dec8f9415a9d97a93870a416e96366db60a72fa in lucene-solr's branch 
refs/heads/master from [~gchanan]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0dec8f9 ]

SOLR-9047: zkcli should allow alternative locations for log4j configuration


> zkcli should allow alternative locations for log4j configuration
> 
>
> Key: SOLR-9047
> URL: https://issues.apache.org/jira/browse/SOLR-9047
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: SOLR-9047.patch, SOLR-9047.patch
>
>
> zkcli uses the log4j configuration in the local directory:
> {code}
> sdir="`dirname \"$0\"`"
> PATH=$JAVA_HOME/bin:$PATH $JVM 
> -Dlog4j.configuration=file:$sdir/log4j.properties -classpath 
> "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" 
> org.apache.solr.cloud.ZkCLI ${1+"$@"}
> {code}
> which is a reasonable default, but often people want to use a "global" log4j 
> configuration.  For example, one may define a log4j configuration that writes 
> to an external log directory and want to point to this rather than copying it 
> to each source checkout.
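
A workaround sketch in the meantime (same invocation as the script above, with 
a hypothetical global log4j path substituted for the local default):

{code}
sdir="`dirname \"$0\"`"
PATH=$JAVA_HOME/bin:$PATH $JVM \
  -Dlog4j.configuration=file:/etc/solr/log4j.properties \
  -classpath "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" \
  org.apache.solr.cloud.ZkCLI ${1+"$@"}
{code}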



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6584) Export handler causes bug in prefetch with very small indexes.

2016-04-29 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15264490#comment-15264490
 ] 

Shalin Shekhar Mangar commented on SOLR-6584:
-

I think this can be closed?

> Export handler causes bug in prefetch with very small indexes.
> --
>
> Key: SOLR-6584
> URL: https://issues.apache.org/jira/browse/SOLR-6584
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-6584.patch
>
>
> When there are very few documents in the index the ExportQParserPlugin is 
> creating a dummy docList which is larger than the number of documents in the 
> index. This causes a bug during the prefetch stage of the QueryComponent.
> There really needs to be two fixes here.
> 1) The dummy docList should never be larger than the number of documents in 
> the index.
> 2) Prefetch should be turned off during exports as it's not doing anything 
> useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: post-branch_6x Jira version renaming(s) got overlooked?

2016-04-29 Thread David Smiley
The biggest problem seems to be that bulk edit to set the version number
overrides any *additional* version numbers in those issues (they'd get
removed).  Assuming we can set multiple versions in bulk-edit, maybe we
only need to do this command once for every 5.x release? -- i.e. find all
issues with fix version master & 5.2, then replace it with 6.0 & 5.2.  Or
just replace with 5.2 for that matter -- code in 5.x is assumed to be in
all versions after (whatever "master" is).  When I close issues, I don't
mark them with one version number even though I have to commit it to both
branches.   Hmm; but a 4x backport would be an additional version number...
maybe we could handle those issues manually?

On Fri, Apr 29, 2016 at 2:11 PM Chris Hostetter 
wrote:

>
> : Is it possible there are 2100 of these?
>
> Possible or not, that's certainly what it looks like (1665 more in LUCENE)
>
> I woke up this morning thinking "Oh wait - doesn't jira have a way to
> merge Versions?" ... and the answer is "Yes" so i was going to suggest the
> following...
>
> for both the LUCENE and SOLR project...
>
> 1) Audit the list of Jira's with 'fixVersion=master AND fixVersion=6.1' and
> manually remove master from all of them (only ~100 total)
> 2) merge "master" into "6.0"
> 3) re add a "master" version to Jira
> 3) Audit CHANGES.txt and set fixVersion=master on the handful of issues in
> the 7.0 section
>
> ...but that was before i really looked at Cassandra's Jira queries...
>
> : I did the below JIRA query, only in the Solr project, looking for
> : Resolved or Closed issues with fixVersion of "master", but not with
> : fixVersion of 6.0 nor 6.1, resolved before 8 Apr 2016 (the release
> : date of Lucene/Solr 6).
> :
> :
> https://issues.apache.org/jira/browse/SOLR-7712?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20!%3D%206.0%20AND%20fixVersion%20!%3D%206.1%20AND%20resolved%20%3C%20%222016%2F04%2F08%22
>
> ...if you sort by Resolved Date, it becomes really clear that we've fucked
> up on renaming/dealing with "master" for longer than just the 6.0 release
> ... it seems like we didn't do something correctly for 5.0 either.
>
> So i'm kind of at a loss now as to what the optimal solution would be.
>
> : It seems it would be easier to make some sort of "rename master" sort
> : of change and go back and fix the ones that shouldn't be changed
> : because they have been finished post-6.0 release, but I'm not seeing a
> : good way to make a single query for those.
>
> that kind of fits with my "Merge Version" idea ... but i'm not sure if/how
> to care about the really old 4.x issues which will start saying "Fixed in:
> ...,6.0" ... will that confuse people?  Will users see "Fixed in:
> 4.0-ALPHA, 6.0" and think there was a regression in 5.x? ... or am i just
> overthinking things?
>
>
>
> The other option: straight up delete "master" so it disappears from all of
> these issues (we can add a new "master" back later) and then explicitly
> add 6.0 to every issue mentioned in the 6.0 CHANGES sections ... writing a
> little perl script to pull them out and build up a few jira search urls
> like "id in (SOLR-3085, SOLR-7560, SOLR-7707, SOLR-7707, ...)" wouldn't be
> too painful, and once we had those search URLs matching a few hundred
> issues each, we can use the "Bulk Edit" to add 6.0...
>
> ...oh fuck ... right, i forgot about this part...
>
> : Additionally, and sadly, in JIRA any bulk update to a field overwrites
> : the existing value in the field. So if the fixVersion is "master" and
> : "5.3", then doing a bulk update to "master" only would remove "5.3".
>
>
> ...so i guess i'm back to my "Merge master -> 6.0" idea, and oh well to
> any confusion there might be for those really old issues.
>
>
> Anybody have a better suggestion?
>
>
>
> -Hoss
> http://www.lucidworks.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-04-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264473#comment-15264473
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61622572
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStateWatcher.java ---
@@ -0,0 +1,42 @@
+package org.apache.solr.common.cloud;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+import java.util.Set;
+
+/**
+ * Callback registered with {@link 
ZkStateReader#registerCollectionStateWatcher(String, CollectionStateWatcher)}
+ * and called whenever the collection state changes.
+ */
--- End diff --

Not sure! ¯\_(ツ)_/¯


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.
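A minimal usage sketch inferred from the javadoc in the diff above; the callback signature and the "return false to keep watching" convention are assumptions here, since the excerpt cuts off before the method declaration:

{code}
// Hypothetical sketch: watch "collection1" and log each state change.
ZkStateReader reader = cloudSolrClient.getZkStateReader();
reader.registerCollectionStateWatcher("collection1",
    (liveNodes, collectionState) -> {
      System.out.println("collection1 changed: " + collectionState);
      return false;  // assumed convention: false = keep watching
    });
{code}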



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-8323

2016-04-29 Thread dragonsinth
Github user dragonsinth commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/32#discussion_r61622572
  
--- Diff: 
solr/solrj/src/java/org/apache/solr/common/cloud/CollectionStateWatcher.java ---
@@ -0,0 +1,42 @@
+package org.apache.solr.common.cloud;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+import java.util.Set;
+
+/**
+ * Callback registered with {@link 
ZkStateReader#registerCollectionStateWatcher(String, CollectionStateWatcher)}
+ * and called whenever the collection state changes.
+ */
--- End diff --

Not sure! ¯\_(ツ)_/¯


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: post-branch_6x Jira version renaming(s) got overlooked?

2016-04-29 Thread Chris Hostetter

: Is it possible there are 2100 of these?

Possible or not, that's certainly what it looks like (1665 more in LUCENE)

I woke up this morning thinking "Oh wait - doesn't jira have a way to 
merge Versions?" ... and the answer is "Yes" so i was going to suggest the 
following...

for both the LUCENE and SOLR project...

1) Audit the list of JIRA issues with 'fixVersion=master AND fixVersion=6.1' and 
manually remove master from all of them (only ~100 total)
2) merge "master" into "6.0"
3) re-add a "master" version to Jira
4) Audit CHANGES.txt and set fixVersion=master on the handful of issues in 
the 7.0 section

...but that was before i really looked at Cassandra's Jira queries...

: I did the below JIRA query, only in the Solr project, looking for
: Resolved or Closed issues with fixVersion of "master", but not with
: fixVersion of 6.0 nor 6.1, resolved before 8 Apr 2016 (the release
: date of Lucene/Solr 6).
: 
: 
https://issues.apache.org/jira/browse/SOLR-7712?jql=project%20%3D%20SOLR%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20!%3D%206.0%20AND%20fixVersion%20!%3D%206.1%20AND%20resolved%20%3C%20%222016%2F04%2F08%22

...if you sort by Resolved Date, it becomes really clear that we've fucked 
up on renaming/dealing with "master" for longer than just the 6.0 release 
... it seems like we didn't do something correctly for 5.0 either.

So i'm kind of at a loss now as to what the optimal solution would be.

: It seems it would be easier to make some sort of "rename master" sort
: of change and go back and fix the ones that shouldn't be changed
: because they have been finished post-6.0 release, but I'm not seeing a
: good way to make a single query for those.

that kind of fits with my "Merge Version" idea ... but i'm not sure if/how 
to care about the really old 4.x issues which will start saying "Fixed in: 
...,6.0" ... will that confuse people?  Will users see "Fixed in: 
4.0-ALPHA, 6.0" and think there was a regression in 5.x? ... or am i just 
overthinking things?



The other option: straight up delete "master" so it disappears from all of 
these issues (we can add a new "master" back later) and then explicitly 
add 6.0 to every issue mentioned in the 6.0 CHANGES sections ... writing a 
little perl script to pull them out and build up a few jira search urls 
like "id in (SOLR-3085, SOLR-7560, SOLR-7707, SOLR-7707, ...)" wouldn't be 
too painful, and once we had those search URLs matching a few hundred 
issues each, we can use the "Bulk Edit" to add 6.0...

...oh fuck ... right, i forgot about this part...

: Additionally, and sadly, in JIRA any bulk update to a field overwrites
: the existing value in the field. So if the fixVersion is "master" and
: "5.3", then doing a bulk update to "master" only would remove "5.3".


...so i guess i'm back to my "Merge master -> 6.0" idea, and oh well to 
any confusion there might be for those really old issues.


Anybody have a better suggestion?



-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 16616 - Still Failing!

2016-04-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16616/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Fri Apr 29 17:52:16 
UTC 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Fri Apr 29 17:52:16 UTC 2016
at 
__randomizedtesting.SeedInfo.seed([B6BD91F430C23795:6D16913235EA5E26]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1426)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:778)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11498 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   

[jira] [Commented] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection

2016-04-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264461#comment-15264461
 ] 

David Smiley commented on SOLR-9038:


bq. If we are going to allow the "backup" operation to use this snapshot commit 
in future, then I think we need to make sure that that snapshot commit is 
preserved during collection configuration changes. If the snapshot commit is 
created on all replicas for a shard, then it probably is OK to delete one or 
more replicas. But I am not sure how we would handle the case when a shard 
containing one or more snapshot commits is deleted.

There's no issue, I think, if a replica is deleted.  If a whole shard is 
deleted, then I think it's okay too -- it won't be backed up -- there's nothing 
left :-)

bq. I agree that requiring replicas to transfer snapshot commits during 
recovery may not be a good idea since in case of large collections it will 
increase the size of data transferred over the network.

I don't think it's a blocker to the approach... it's just the price one pays to 
recover in the presence of snapshot commits.  Other improvements around how 
Lucene segments merge might make more sense to optimize this, such that segments 
can only be merged if the IndexCommits pointing to them are consistent.  If 
this idea were implemented, and if one were to do an optimize (as a 
hypothetical example to explain the effect), they would have a segment for each 
snapshot commit, with disjoint documents (no duplication).  Pretty good, I 
think.  But this would clearly be its own issue :-)

> Ability to create/delete/list snapshots for a solr collection
> -
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to other. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files as against creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment failed, then he can copy the 
> files associated with the snapshot and restore.
> Note that Apache Blur project is also providing a similar feature 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]
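For reference, a minimal sketch of the Lucene-level mechanism the description alludes to, using SnapshotDeletionPolicy (the in-memory sibling of PersistentSnapshotIndexDeletionPolicy); the directory, the analyzer, and the usual org.apache.lucene.index imports are assumed:

{code}
// Sketch: pin a commit point so its files survive until explicitly released.
SnapshotDeletionPolicy snapshotter =
    new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
IndexWriterConfig iwc = new IndexWriterConfig(analyzer)
    .setIndexDeletionPolicy(snapshotter);
IndexWriter writer = new IndexWriter(directory, iwc);
writer.commit();                                // snapshot() needs at least one commit

IndexCommit snapshot = snapshotter.snapshot();  // the "named" consistent state
// ... copy snapshot.getFileNames() with an external tool (e.g. distcp) ...
snapshotter.release(snapshot);                  // allow those files to be deleted
writer.deleteUnusedFiles();
{code}

This illustrates the decoupling the description argues for: snapshotting only pins files in place, and the actual copy can happen later with whatever tool suits the deployment.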



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3238 - Still Failing!

2016-04-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3238/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'params/c' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{ "a":"A 
val", "b":"B val", "wt":"json", "useParams":""},   "context":{ 
"webapp":"/_/o", "path":"/dump1", "httpMethod":"GET"}},  from server:  
https://127.0.0.1:58884/_/o/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'params/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{
"a":"A val",
"b":"B val",
"wt":"json",
"useParams":""},
  "context":{
"webapp":"/_/o",
"path":"/dump1",
"httpMethod":"GET"}},  from server:  https://127.0.0.1:58884/_/o/collection1
at 
__randomizedtesting.SeedInfo.seed([BC7D274F75C35C18:34291895DB3F31E0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:172)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: Lucene/Solr 5.5.1

2016-04-29 Thread Anshum Gupta
Something seems to be going on with TestManagedSchemaAPI as it's been
consistently failing.
I woke up with a fever today, so I'll try to debug it some time later if
I'm unable to get an RC built; but if I do get the RC, I'll get it out to
vote and, in parallel, see if it's something that needs fixing, unless
someone else beats me to it.

On Fri, Apr 29, 2016 at 9:26 AM, Anshum Gupta 
wrote:

> That makes sense considering there are those checks for ignoring 1 missing
> version.
>
> On Fri, Apr 29, 2016 at 6:53 AM, Steve Rowe  wrote:
>
>> Anshum,
>>
>> TL;DR: When there is only one release in flight, I think it’s okay to run
>> addVersion.py on all branches at the start of the release process for all
>> types of releases.
>>
>> When we chatted last night I said backcompat index testing was a problem
>> on non-release branches in the interval between adding a not-yet-released
>> version to o.a.l.util.Version and when a backcompat index is committed on
>> the branch.  I was wrong.
>>
>> Here are the places where there are back-compat coverage tests:
>>
>> 1. smokeTestRelease.py's confirmAllReleasesAreTestedForBackCompat() will
>> succeed until release artifacts have been published - see
>> getAllLuceneReleases() for where they are scraped off the lucene release
>> list page on archive.apache.org.  So back-compat indexes should be
>> generated and committed as soon as possible after publishing artifacts.
>>
>> 2. backward-codec’s TestBackwardsCompatibility.testAllVersionsTested()
>> will still succeed if a single version is not tested.  Here’s the code:
>>
>>   // we could be missing up to 1 file, which may be due to a release that
>> is in progress
>>   if (missingFiles.size() <= 1 && extraFiles.isEmpty()) {
>>
>> The above test could be improved by checking for the presence of
>> published release artifacts for each release like smokeTestRelease.py does,
>> and then not requiring the backcompat index be present for those that are
>> not yet published; this would allow for multiple in-flight releases.
>>
>> Steve
>>
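A rough sketch of the improvement Steve describes above, where fetchPublishedVersions() is a hypothetical helper scraping the same archive.apache.org listing smokeTestRelease.py uses, and missingFiles/extraFiles are the sets from the quoted test (entries treated as version strings for simplicity):

{code}
// Hypothetical: excuse a missing backcompat index only when the version has
// no published release artifacts yet (i.e. the release is still in flight).
Set<String> published = fetchPublishedVersions();
List<String> untested = new ArrayList<>();
for (String version : missingFiles) {
  if (published.contains(version)) {
    untested.add(version);  // published but untested: a real failure
  }
}
assertTrue("missing backcompat indexes for published releases: " + untested,
           untested.isEmpty());
assertTrue("extra backcompat files: " + extraFiles, extraFiles.isEmpty());
{code}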
>> > On Apr 28, 2016, at 10:44 PM, Anshum Gupta 
>> wrote:
>> >
>> > I've updated the "Update Version Numbers in the Source Code" section on
>> the ReleaseToDo page. It'd be good to have some one else also take a look
>> at it.
>> >
>> > Here is what I've changed (only bug fix release):
>> > * Only bump up the version on the release branch using addVersion.py
>> > * Don't bump it up on the non-release branches in case of a bug fix
>> release.
>> > * As part of the post-release process, use the commit hash from the
>> release branch version bump up, to increment the version on the non-release
>> branches.
>> >
>> > I thought we could do this for non bug-fix releases too, but I was
>> wrong. Minor versions need to be bumped up on stable branches (and trunk)
>> because during the release process for, say, version 6.1, there might be
>> commits for 6.2, and we'd need both the stable branches and master to
>> support those commits.
>> > We could debate not needing something like this for major versions,
>> but I don't think it's worth the pain of a different release process for
>> each branch; I'm not stuck on this.
>> >
>> >
>> > On Thu, Apr 28, 2016 at 5:31 PM, Anshum Gupta 
>> wrote:
>> > That's fixed (about to commit the fix from LUCENE-7265) though.
>> >
>> > While discussing the release process, Steve mentioned that we should
>> document the failing back-compat index test on the non-release branches due
>> to the missing index for the unreleased version.
>> > On discussing further, he suggested that we instead move the process of
>> adding the version to non-release branches as a post-release task. This
>> way, we wouldn't have failing tests until the release goes through and the
>> back-compat indexes are checked in.
>> >
>> > We still would have failing tests for the release branch but there's no
>> way around that.
>> >
>> > So, I'll change the documentation to move those steps as post-release
>> tasks.
>> >
>> >
>> > On Thu, Apr 28, 2016 at 11:40 AM, Anshum Gupta 
>> wrote:
>> > Seems like LUCENE-6938 removed the merge logic that used the change id.
>> Now the merge doesn't happen, and there's no logic that replaces it.
>> >
>> > I certainly can do with some help on this one.
>> >
>> > On Thu, Apr 28, 2016 at 11:24 AM, Anshum Gupta 
>> wrote:
>> > Just wanted to make sure I wasn't missing something here again. While
>> trying to update the version on 5x, after having done that on 5.5, using
>> the addVersion.py script and following the instructions, the command
>> consistently fails. Here's what I've been trying to do:
>> >
>> > python3 -u dev-tools/scripts/addVersion.py --changeid 49ba147 5.5.1
>> >
>> > Seems like addVersion.py is broken for minor version releases so I'd
>> need some help from someone who has a better understanding of python than 

[jira] [Commented] (SOLR-8257) DELETEREPLICA command shouldn't delete the last replica of a shard

2016-04-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264419#comment-15264419
 ] 

David Smiley commented on SOLR-8257:


+1 to fail; one can call DELETESHARD.   And DELETESHARD may stop you as well 
(e.g. if you're not using the ImplicitRouter and this isn't an inactive 
pre-split shard).

FYI I discovered this while reading about [~jwartes]'s cool 
https://github.com/whitepages/solrcloud_manager which adds some safety at the 
client end when working with SolrCloud.

> DELETEREPLICA command shouldn't delete the last replica of a shard
> -
>
> Key: SOLR-8257
> URL: https://issues.apache.org/jira/browse/SOLR-8257
> Project: Solr
>  Issue Type: Bug
>Reporter: Yago Riveiro
>Priority: Minor
>
> The DELETEREPLICA command shouldn't remove the last replica of a shard.
> The original thread in the mailing list 
> http://lucene.472066.n3.nabble.com/DELETEREPLICA-command-shouldn-t-delete-de-last-replica-of-a-shard-td4239054.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-04-29 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264402#comment-15264402
 ] 

Karl Wright commented on LUCENE-7241:
-

[~mikemccand]: There is a new Geo3DPoint.makeLargePolygon() method now in place 
that builds large polygons.  If you hook it up in luceneutil, it will probably 
load the OSM London boroughs polygons without trouble (I would guess), but it 
may still blow up trying to do the perf test.  I'd like to kick it around more 
before declaring it ready for prime time.  Also, the BKD implementation as it 
stands now will need to obtain the bounds for the polygon, which may be 
expensive for a borough, so that may impact performance and may be worth 
getting rid of if this is the solution we want.

> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is determining whether a given plane 
> intersects the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.
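As a tiny illustration of the two ideas above (crossing parity from the "truth" point, and z-organized edge lookup); Edge with minZ/maxZ fields and the TreeMap keying are assumptions, and a real structure would also carry per-subtree max-z bounds to prune better:

{code}
// Sketch: membership flips with each edge crossing on the path from the
// pole (whose membership is known) to the point being tested.
static boolean isInside(boolean poleIsInside, int crossingsFromPole) {
  return (crossingsFromPole % 2 == 0) ? poleIsInside : !poleIsInside;
}

// Sketch: candidate edges for a query z are those whose [minZ, maxZ]
// interval contains it; keying a TreeMap on minZ lets headMap() skip
// every edge that starts above z.
static List<Edge> candidateEdges(TreeMap<Double, List<Edge>> edgesByMinZ, double z) {
  List<Edge> result = new ArrayList<>();
  for (List<Edge> bucket : edgesByMinZ.headMap(z, true).values()) {
    for (Edge e : bucket) {
      if (e.maxZ >= z) result.add(e);  // minZ <= z is guaranteed by headMap
    }
  }
  return result;
}
{code}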



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-04-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264398#comment-15264398
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit 228aebe82d8f0b4820ec6d61124b661bd77607cf in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=228aebe ]

LUCENE-7241: Add public functionality for handling large polygons in geo3d.


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is determining whether a given plane 
> intersects the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-04-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264395#comment-15264395
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit 595a55bbb54bdcf671e9563246302a93ee1d1f80 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=595a55b ]

LUCENE-7241: Add public functionality for handling large polygons in geo3d.


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is determining whether a given plane 
> intersects the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4509) Move to non deprecated HttpClient impl classes to remove stale connection check on every request and move connection lifecycle management towards the client.

2016-04-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264391#comment-15264391
 ] 

Hoss Man commented on SOLR-4509:


FWIW: It doesn't seem likely that this issue (SOLR-4509) is going to be 
backported to 6x since it has some incompatible solrj level changes 
(Configurer->Builder+SchemaProvider), but if I'm wrong and someone does decide 
to try and backport it, please note that SOLR-9028 has already been backported 
from master->6x, and quite a few conflicts due to SOLR-4509 changes were 
resolved there, which might cause new conflicts here.

If SOLR-4509 is backported, it might be easiest to:
# revert the branch_6x changes related to SOLR-9028
# backport SOLR-4509
# rebackport the master changes for SOLR-9028

> Move to non deprecated HttpClient impl classes to remove stale connection 
> check on every request and move connection lifecycle management towards the 
> client.
> -
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: master
>
> Attachments: 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> IsStaleTime.java, SOLR-4509-4_4_0.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and a latency reduction of over 100ms.  This patch was made in 
> the context of a project I'm leading, called Yokozuna, which relies on 
> distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/
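For context, a minimal sketch of turning the stale check off with the plain HttpClient 4.x builder API (the exact Solr wiring in the patch is not shown here; background eviction of idle connections is one common replacement for the per-request check):

{code}
// Sketch: disable the per-request stale connection check and instead
// evict idle connections in the background.
RequestConfig config = RequestConfig.custom()
    .setStaleConnectionCheckEnabled(false)
    .build();
CloseableHttpClient client = HttpClients.custom()
    .setDefaultRequestConfig(config)
    .evictIdleConnections(30, TimeUnit.SECONDS)
    .build();
{code}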



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9051) Read Solr 4.10 indexes from Solr 6.x

2016-04-29 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-9051:
--

 Summary: Read Solr 4.10 indexes from Solr 6.x
 Key: SOLR-9051
 URL: https://issues.apache.org/jira/browse/SOLR-9051
 Project: Solr
  Issue Type: Improvement
Reporter: Yonik Seeley


This is something I need to look into as part of my $day_job anyway,
and I've heard that others are interested in this.  There are a lot of people 
on 4.x (esp 4.10), and providing them an index upgrade path that doesn't 
involve going through 5.x would be nice.
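For reference, the path being avoided is the two-hop rewrite sketched below with Lucene's IndexUpgrader (the index path is a placeholder; each hop must run with that major version's core and backward-codecs jars on the classpath):

{code}
// Hop 1, run with 5.x jars: rewrite the 4.10 index in the 5.x format.
new IndexUpgrader(FSDirectory.open(Paths.get("/path/to/index"))).upgrade();
// Hop 2, run with 6.x jars: rewrite the result in the 6.x format.
new IndexUpgrader(FSDirectory.open(Paths.get("/path/to/index"))).upgrade();
{code}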




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264382#comment-15264382
 ] 

Hoss Man commented on SOLR-9028:


bq. Maybe SOLR-4509 will be backported to 6.x? If so, couldn't backporting this 
issue wait for that?

That doesn't seem likely - it involves a lot of incompatible changes to the 
solrj level client APIs (it completely eliminated HttpConfigurer in favor of 
the new Builder stuff).

I've already got the 6x changes for SOLR-9028 ready (just hammering tests 
locally) so i'd rather go ahead and commit so we have the tests in place -- if 
SOLR-4509 does get backported it should be fairly easy to just revert the 6x 
commits for this issue & re-merge the master commits. (I'll make a note to that 
effect in SOLR-4509 once i commit)

> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, 
> SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 i realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc..)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264374#comment-15264374
 ] 

ASF subversion and git services commented on SOLR-9028:
---

Commit 7aecf344b15fb7f1a3136198ca590efd9eec7164 in lucene-solr's branch 
refs/heads/branch_6x from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7aecf34 ]

SOLR-9028: Fixed some test related bugs preventing SSL + ClientAuth from ever 
being tested
(cherry picked from commit 791d1e7)

Conflicts:
solr/core/src/test/org/apache/solr/cloud/SSLMigrationTest.java

solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpClientUtil.java
solr/test-framework/src/java/org/apache/solr/SolrTestCaseJ4.java
solr/test-framework/src/java/org/apache/solr/util/SSLTestConfig.java


> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, 
> SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 i realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc..)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5638) Collection creation partially works, but results in unusable configuration due to missing config in ZK

2016-04-29 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-5638.

Resolution: Fixed

I'm closing this issue since it has already been fixed in some Solr version or 
another; not sure which.  I verified for sure in 6.x... and I see the check in 
5.4 and it probably got added even prior to that.
(not sure what JIRA resolution is right for this scenario but I'll just use 
"Fixed")

> Collection creation partially works, but results in unusable configuration 
> due to missing config in ZK
> --
>
> Key: SOLR-5638
> URL: https://issues.apache.org/jira/browse/SOLR-5638
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6
>Reporter: Nathan Neulinger
> Attachments: SOLR-5638.patch
>
>
> Need help properly recovering from 'collection gets created without config 
> being defined'.
> Right now, if you submit a collection create and the config is missing, it 
> will proceed with partially creating cores, but then the cores fail to load. 
> This requires manual intervention on the server to fix unless you pick a new 
> collection name:
> What's worse - if you retry the create a second time, it will usually try to 
> create the replicas in the opposite order, resulting in TWO broken cores on 
> each box, one for each attempted replica. 
> beta1-newarch_hive1_v12_shard1_replica1: 
> org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
>  Specified config does not exist in ZooKeeper:hivepoint-unknown
> beta1-newarch_hive1_v12_shard1_replica2: 
> org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
>  Specified config does not exist in ZooKeeper:hivepoint-unknown
> I already know how to clear this up manually, but this is something where 
> solr is allowing a condition in an external service to result in a 
> corrupted/partial configuration. 
> I can see an easy option for resolving this as a workaround - allow a 
> collection CREATE operation to specify "reuseCores"  - i.e. allow it to use 
> an existing core of the proper name if it already exists. 
> Right now you wind up getting:
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error 
> CREATEing SolrCore 'beta1-newarch_hive1_v12_shard1_replica1': Could not 
> create a new core in solr/beta1-newarch_hive1_v12_shard1_replica1/as another 
> core is already defined there
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error 
> CREATEing SolrCore 'beta1-newarch_hive1_v12_shard1_replica2': Could not 
> create a new core in solr/beta1-newarch_hive1_v12_shard1_replica2/as another 
> core is already defined there



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-5638) Collection creation partially works, but results in unusable configuration due to missing config in ZK

2016-04-29 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-5638.
--

> Collection creation partially works, but results in unusable configuration 
> due to missing config in ZK
> --
>
> Key: SOLR-5638
> URL: https://issues.apache.org/jira/browse/SOLR-5638
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6
>Reporter: Nathan Neulinger
> Attachments: SOLR-5638.patch
>
>
> Need help properly recovering from 'collection gets created without config 
> being defined'.
> Right now, if you submit a collection create and the config is missing, it 
> will proceed with partially creating cores, but then the cores fail to load. 
> This requires manual intervention on the server to fix unless you pick a new 
> collection name:
> What's worse - if you retry the create a second time, it will usually try to 
> create the replicas in the opposite order, resulting in TWO broken cores on 
> each box, one for each attempted replica. 
> beta1-newarch_hive1_v12_shard1_replica1: 
> org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
>  Specified config does not exist in ZooKeeper:hivepoint-unknown
> beta1-newarch_hive1_v12_shard1_replica2: 
> org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
>  Specified config does not exist in ZooKeeper:hivepoint-unknown
> I already know how to clear this up manually, but this is something where 
> solr is allowing a condition in an external service to result in a 
> corrupted/partial configuration. 
> I can see an easy option for resolving this as a workaround - allow a 
> collection CREATE operation to specify "reuseCores"  - i.e. allow it to use 
> an existing core of the proper name if it already exists. 
> Right now you wind up getting:
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error 
> CREATEing SolrCore 'beta1-newarch_hive1_v12_shard1_replica1': Could not 
> create a new core in solr/beta1-newarch_hive1_v12_shard1_replica1/as another 
> core is already defined there
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error 
> CREATEing SolrCore 'beta1-newarch_hive1_v12_shard1_replica2': Could not 
> create a new core in solr/beta1-newarch_hive1_v12_shard1_replica2/as another 
> core is already defined there



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9026) Design Facet Telemetry for non-JSON field facet

2016-04-29 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264340#comment-15264340
 ] 

Michael Sun commented on SOLR-9026:
---

Just uploaded a new patch with all these issues addressed. cc [~yo...@apache.org]

> Design Facet Telemetry for non-JSON field facet
> ---
>
> Key: SOLR-9026
> URL: https://issues.apache.org/jira/browse/SOLR-9026
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: master
>
> Attachments: SOLR-9026.patch, SOLR-9026.patch
>
>
> Non-JSON faceting is widely used and telemetry is helpful in diagnosing 
> expensive queries. As a first step, this JIRA is to design telemetry for field 
> facets.
> Example: (using films)
> {code}
> $curl 
> 'http://localhost:8228/solr/films/select?debugQuery=true&facet.field=genre&facet.field=directed_by&facet=true&indent=on&q=*:*&wt=json'
> ...
> "facet-trace":{
>   "elapse":1,
>   "sub-facet":[{
>   "processor":"SimpleFacets",
>   "elapse":1,
>   "action":"field facet",
>   "maxThreads":0,
>   "sub-facet":[{
>   "method":"FC",
>   "inputDocSetSize":1100,
>   "field":"genre",
>   "numBuckets":213},
> {
>   "method":"FC",
>   "inputDocSetSize":1100,
>   "field":"directed_by",
>   "numBuckets":1053}]}]},
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-7263) xmlparser: Allow SpanQueryBuilder to be used by derived classes

2016-04-29 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned LUCENE-7263:
---

Assignee: Christine Poerschke

> xmlparser: Allow SpanQueryBuilder to be used by derived classes
> ---
>
> Key: LUCENE-7263
> URL: https://issues.apache.org/jira/browse/LUCENE-7263
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: master
>Reporter: Daniel Collins
>Assignee: Christine Poerschke
> Attachments: LUCENE-7263.patch
>
>
> Following on from LUCENE-7210 (and others), the xml queryparser has different 
> factories, one for creating normal queries and one for creating span queries.
> The former is a protected variable so can be used by derived classes, the 
> latter isn't.
> This makes the spanFactory a variable that can be used more easily.  No 
> functional changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9026) Design Facet Telemetry for non-JSON field facet

2016-04-29 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-9026:
--
Attachment: SOLR-9026.patch

> Design Facet Telemetry for non-JSON field facet
> ---
>
> Key: SOLR-9026
> URL: https://issues.apache.org/jira/browse/SOLR-9026
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: master
>
> Attachments: SOLR-9026.patch, SOLR-9026.patch
>
>
> Non-JSON faceting is widely used and telemetry is helpful in diagnosing 
> expensive queries. As a first step, this JIRA is to design telemetry for field 
> facets.
> Example: (using films)
> {code}
> $ curl 
> 'http://localhost:8228/solr/films/select?debugQuery=true&facet.field=genre&facet.field=directed_by&facet=true&indent=on&q=*:*&wt=json'
> ...
> "facet-trace":{
>   "elapse":1,
>   "sub-facet":[{
>   "processor":"SimpleFacets",
>   "elapse":1,
>   "action":"field facet",
>   "maxThreads":0,
>   "sub-facet":[{
>   "method":"FC",
>   "inputDocSetSize":1100,
>   "field":"genre",
>   "numBuckets":213},
> {
>   "method":"FC",
>   "inputDocSetSize":1100,
>   "field":"directed_by",
>   "numBuckets":1053}]}]},
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264331#comment-15264331
 ] 

ASF subversion and git services commented on SOLR-9028:
---

Commit 48f2b2a3bbfacd5d2a6d2b395ab573305e8c6612 in lucene-solr's branch 
refs/heads/master from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=48f2b2a ]

SOLR-9028: relax the SSLHandshakeException expectation - in some 
platforms/java# diff IOExceptions are thrown
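
For reference, a minimal sketch of the relaxation in a LuceneTestCase-based
test; the request in the lambda is illustrative:

{code}
// Before: only the TLS handshake failure was accepted.
// SSLHandshakeException e = expectThrows(SSLHandshakeException.class,
//     () -> client.query(new SolrQuery("*:*")));

// After: SSLHandshakeException extends IOException, and some JVM/OS
// combinations surface other IOExceptions (e.g. SocketException) for the
// same expected certificate failure, so catch the common supertype.
IOException e = expectThrows(IOException.class,
    () -> client.query(new SolrQuery("*:*")));
{code}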


> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, 
> SOLR-9028.patch, SOLR-9028.patch, SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 I realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc..)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9038) Ability to create/delete/list snapshots for a solr collection

2016-04-29 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264323#comment-15264323
 ] 

Hrishikesh Gadre commented on SOLR-9038:


[~dsmiley] Thanks for the comments :)

>>I think so. Not doing so might be a pain, and it's not evident to me it's 
>>important to worry about it.

If we are going to allow the "backup" operation to use this snapshot commit in 
the future, then I think we need to make sure that the snapshot commit is 
preserved during collection configuration changes. If the snapshot commit is 
created on all replicas for a shard, then it probably is OK to delete one or 
more replicas. But I am not sure how we would handle the case when a shard 
containing one or more snapshot commits is deleted.

>>Perhaps a snapshot commit needs to block for all replicas to not be in 
>>recovery first? That seems much easier than trying to get replicas in 
>>recovery to somehow get IndexCommit data which I think is kinda impossible / 
>>infeasible. However, another bad situation is when there are already 
>>successful snapshot commits, and then for whatever reason a replica goes into 
>>recovery – full recovery, and thus only grabs the latest commit (which might 
>>not even be a snapshot commit. So perhaps recovering replicas need to ask to 
>>replicate not just the latest commit but all snapshot commits as well. Seems 
>>pretty doable. One would hope that the commits would share lots of big 
>>segments, but they might not. I don't think this scenario would block an 
>>initial release. Possible but too bad.

I agree that requiring replicas to transfer snapshot commits during recovery 
may not be a good idea, since for large collections it will increase the 
size of data transferred over the network. I am also not very sure if we should 
block for all replicas to be "active" before creating a snapshot, since on a 
large cluster it is more likely that one or more replicas would be "down" or 
"recovering". 

I do have an alternative design in mind, but just want to make sure that we are 
on the same page regarding overall semantics before diving into details :)

Thoughts?
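
For concreteness, a minimal sketch of the core-level mechanism the issue
description refers to, using Lucene's stock SnapshotDeletionPolicy (the
persistent variant works the same way but survives restarts); the path is
illustrative:

{code}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;
import org.apache.lucene.index.SnapshotDeletionPolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SnapshotSketch {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(Paths.get("/path/to/core/index")); // illustrative
    SnapshotDeletionPolicy snapshotter =
        new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    iwc.setIndexDeletionPolicy(snapshotter);

    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
      writer.commit();

      // "Snapshot" = pin this commit point so the deletion policy cannot remove it.
      IndexCommit snapshot = snapshotter.snapshot();
      // snapshot.getFileNames() lists the files an external copy tool may now transfer.

      // Releasing the pin makes the commit's files deletable again.
      snapshotter.release(snapshot);
    }
  }
}
{code}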

> Ability to create/delete/list snapshots for a solr collection
> -
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to other. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files as opposed to creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, then he can copy the 
> files associated with the snapshot and restore.
> Note that Apache Blur project is also providing a similar feature 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5806 - Still Failing!

2016-04-29 Thread Chris Hostetter

I'm looking into these SSL exceptions ... looks like diff JVMs/OSs cause 
diff exceptions in these (expected) certificate failure situations ... 
I'll relax the expectThrows calls to account for this.  (see SOLR-9028)

: Date: Thu, 28 Apr 2016 23:21:51 + (UTC)
: From: Policeman Jenkins Server 
: Reply-To: dev@lucene.apache.org
: To: hoss...@apache.org, yo...@apache.org, jbern...@apache.org,
: jpou...@gmail.com, dev@lucene.apache.org
: Subject: [JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build #
: 5806 - Still Failing!
: 
: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5806/
: Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC
: 
: 1 tests failed.
: FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndClientAuth
: 
: Error Message:
: Unexpected exception type, expected SSLHandshakeException
: 
: Stack Trace:
: junit.framework.AssertionFailedError: Unexpected exception type, expected 
SSLHandshakeException
:   at 
__randomizedtesting.SeedInfo.seed([92AB971A6B3DF963:412F7ED7F9F0A29F]:0)
:   at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2682)
:   at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterJettys(TestMiniSolrCloudClusterSSL.java:283)
:   at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:185)
:   at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:147)
:   at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslAndClientAuth(TestMiniSolrCloudClusterSSL.java:129)
:   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
:   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
:   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
:   at java.lang.reflect.Method.invoke(Method.java:498)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
:   at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
:   at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
:   at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
:   at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
:   at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
:   at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
:   at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
:   at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-5.5-Windows (32bit/jdk1.7.0_80) - Build # 61 - Still Failing!

2016-04-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/61/
Java: 32bit/jdk1.7.0_80 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:59370/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:59370/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([6B5014E5668C86C0:E3042B3FC870EB38]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Commented] (LUCENE-7258) Tune DocIdSetBuilder allocation rate

2016-04-29 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264309#comment-15264309
 ] 

Jeff Wartes commented on LUCENE-7258:
-

I'm not sure I understand how the dangers of large FBS size would be any 
different with a pooling mechanism than they are right now. If a query needs 
several of them, then it needs several of them, whether they're freshly 
allocated or not. The only real difference I see might be whether that memory 
exists in the tenured space, rather than thrashing the eden space every time. 

I don't think it'd need to be per-thread. I don't mind points of 
synchronization if they're tight and well understood. Allocation rate by count 
is generally lower here. One thought:
https://gist.github.com/randomstatistic/87caefdea8435d6af4ad13a3f92d2698

To anticipate some objections, there are likely lockless data structures you 
could use, and yes, you might prefer to control size in terms of memory instead 
of count. I can think of a dozen improvements per minute I spend looking at 
this. But you get the idea. Anyone anywhere who knows for *sure* they're done 
with a FBS can offer it up for reuse, and anyone can potentially get some reuse 
by just changing their "new" to "request". 
If everybody does this, you end up with a fairly steady pool of FBS instances 
large enough for most uses. If only some places use it, there's no chance of an 
unbounded leak, you might get some gain, and worst-case you haven't lost much. 
If nobody uses it, you've lost nothing.
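
In the spirit of that gist, a stripped-down sketch of the request/offer idea;
sizing policy, bucketing by length, and eviction are all elided:

{code}
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.lucene.util.FixedBitSet;

/** Sketch only: one global pool, no size bucketing, arbitrary cap. */
final class FixedBitSetPool {
  private static final ConcurrentLinkedQueue<FixedBitSet> POOL = new ConcurrentLinkedQueue<>();
  private static final int MAX_POOLED = 16; // arbitrary cap so the pool stays bounded

  /** Replaces "new FixedBitSet(numBits)": reuse a big-enough instance if available. */
  static FixedBitSet request(int numBits) {
    for (FixedBitSet bits = POOL.poll(); bits != null; bits = POOL.poll()) {
      if (bits.length() >= numBits) {
        bits.clear(0, bits.length()); // reset all bits before handing it out
        return bits;
      }
      // too small for this caller: drop it and keep looking
    }
    return new FixedBitSet(numBits);
  }

  /** Callers that know for *sure* they are done with a set can offer it back. */
  static void offer(FixedBitSet bits) {
    if (POOL.size() < MAX_POOLED) {
      POOL.offer(bits);
    }
  }
}
{code}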

Last I checked, something like a full 50% of (my) allocations by size were 
FixedBitSets despite a low allocation rate by count, or I wouldn't be harping 
on the subject. As a matter of principle, I'd gladly pay heap to reduce GC. The 
fastest search algorithm in the world doesn't help me if I'm stuck waiting for 
the collector to finish all the time.


> Tune DocIdSetBuilder allocation rate
> 
>
> Key: LUCENE-7258
> URL: https://issues.apache.org/jira/browse/LUCENE-7258
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: Jeff Wartes
> Attachments: 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> allocation_plot.jpg
>
>
> LUCENE-7211 converted IntersectsPrefixTreeQuery to use DocIdSetBuilder, but 
> didn't actually reduce garbage generation for my Solr index.
> Since something like 40% of my garbage (by space) is now attributed to 
> DocIdSetBuilder.growBuffer, I charted a few different allocation strategies 
> to see if I could tune things more. 
> See here: http://i.imgur.com/7sXLAYv.jpg 
> The jump-then-flatline at the right would be where DocIdSetBuilder gives up 
> and allocates a FixedBitSet for a 100M-doc index. (The 1M-doc index 
> curve/cutoff looked similar)
> Perhaps unsurprisingly, the 1/8th growth factor in ArrayUtil.oversize is 
> terrible from an allocation standpoint if you're doing a lot of expansions, 
> and is especially terrible when used to build a short-lived data structure 
> like this one.
> By the time it goes with the FBS, it's allocated around twice as much memory 
> for the buffer as it would have needed for just the FBS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 5.5.1

2016-04-29 Thread Anshum Gupta
That makes sense considering there are those checks for ignoring 1 missing
version.
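
A speculative sketch of the tightened check Steve describes below; isPublished()
and versionOf() are hypothetical helpers that would consult the
archive.apache.org release list the way smokeTestRelease.py's
getAllLuceneReleases() does:

{code}
// Speculative: require a backcompat index for every *published* release and
// tolerate gaps only for in-flight (unpublished) versions. isPublished() and
// versionOf() are hypothetical helpers, not existing methods.
List<String> unexplained = new ArrayList<>();
for (String missing : missingFiles) {
  if (isPublished(versionOf(missing))) {
    unexplained.add(missing);
  }
}
assertTrue("missing backcompat indexes for published releases: " + unexplained,
    unexplained.isEmpty());
{code}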

On Fri, Apr 29, 2016 at 6:53 AM, Steve Rowe  wrote:

> Anshum,
>
> TL;DR: When there is only one release in flight, I think it’s okay to run
> addVersion.py on all branches at the start of the release process for all
> types of releases.
>
> When we chatted last night I said backcompat index testing was a problem
> on non-release branches in the interval between adding a not-yet-released
> version to o.a.l.util.Version and when a backcompat index is committed on
> the branch.  I was wrong.
>
> Here are the places where there are back-compat coverage tests:
>
> 1. smokeTestRelease.py's confirmAllReleasesAreTestedForBackCompat() will
> succeed until release artifacts have been published - see
> getAllLuceneReleases() for where they are scraped off the lucene release
> list page on archive.apache.org.  So back-compat indexes should be
> generated and committed as soon as possible after publishing artifacts.
>
> 2. backward-codec’s TestBackwardsCompatibility.testAllVersionsTested()
> will still succeed if a single version is not tested.  Here’s the code:
>
>   // we could be missing up to 1 file, which may be due to a release that
> is in progress
>   if (missingFiles.size() <= 1 && extraFiles.isEmpty()) {
>
> The above test could be improved by checking for the presence of published
> release artifacts for each release like smokeTestRelease.py does, and then
> not requiring the backcompat index be present for those that are not yet
> published; this would allow for multiple in-flight releases.
>
> Steve
>
> > On Apr 28, 2016, at 10:44 PM, Anshum Gupta 
> wrote:
> >
> > I've updated the "Update Version Numbers in the Source Code" section on
> the ReleaseToDo page. It'd be good to have someone else also take a look
> at it.
> >
> > Here is what I've changed (only bug fix release):
> > * Only bump up the version on the release branch using addVersion.py
> > * Don't bump it up on the non-release versions in case of bug fix
> release.
> > * As part of the post-release process, use the commit hash from the
> release branch version bump up, to increment the version on the non-release
> branches.
> >
> > I thought we could do this for non bug-fix releases too, but I was
> wrong. Minor versions need to be bumped up on stable branches (and trunk)
> because during the release process for say version 6.1, there might be
> commits for 6.2 and we'd need stable branches and master, both to support
> those commits.
> > We could debate not needing something like this for major versions,
> but I don't think it's worth the pain of different release processes
> for each branch; I'm not hung up on this, though.
> >
> >
> > On Thu, Apr 28, 2016 at 5:31 PM, Anshum Gupta 
> wrote:
> > That's fixed (about to commit the fix from LUCENE-7265) though.
> >
> > While discussing the release process, Steve mentioned that we should
> document the failing back-compat index test on the non-release branches due
> to the missing index for the unreleased version.
> > On discussing further, he suggested that we instead move the process of
> adding the version to non-release branches as a post-release task. This
> way, we wouldn't have failing tests until the release goes through and the
> back-compat indexes are checked in.
> >
> > We still would have failing tests for the release branch but there's no
> way around that.
> >
> > So, I'll change the documentation to move those steps as post-release
> tasks.
> >
> >
> > On Thu, Apr 28, 2016 at 11:40 AM, Anshum Gupta 
> wrote:
> > Seems like LUCENE-6938 removed the merge logic that used the change id.
> Now the merge doesn't happen, and there's no logic that replaces it.
> >
> > I certainly can do with some help on this one.
> >
> > On Thu, Apr 28, 2016 at 11:24 AM, Anshum Gupta 
> wrote:
> > Just wanted to make sure I wasn't missing something here again. While
> trying to update the version on 5x, after having done that on 5.5, using
> the addVersion.py script and following the instructions, the command
> consistently fails. Here's what I've been trying to do:
> >
> > python3 -u dev-tools/scripts/addVersion.py --changeid 49ba147 5.5.1
> >
> > Seems like addVersion.py is broken for minor version releases so I'd
> need some help from someone who has a better understanding of python than I
> do. I observed that 5.5.1 Version gets added to Version.java but also gets
> marked as deprecated.
> >
> >
> >
> > On Thu, Apr 28, 2016 at 9:27 AM, Anshum Gupta 
> wrote:
> > Too much going on! Thanks Yonik.
> > I'll start working on the RC now.
> >
> > NOTE: Please don't back port any more issues right now. In case of
> exceptions, please raise them here.
> >
> > On Thu, Apr 28, 2016 at 9:09 AM, Yonik Seeley  wrote:
> > On Thu, Apr 28, 2016 at 12:04 PM, Anshum Gupta 

Re: Lucene/Solr 5.5.1

2016-04-29 Thread Anshum Gupta
Hi Upayavira,

I've already started the release process and I'm creating an RC. It
would've been created last night but the tests failed so I'm just creating
it again.
Feel free to commit this to 5.5 so that if we re-spin for whatever reason,
this would get automatically included unless you think this qualifies as a
blocker and we wait/re-spin just for these issues.

On Fri, Apr 29, 2016 at 5:52 AM, Upayavira  wrote:

> I would like to include at least one, possibly two, trivial but
> significant fixes to the Solr Admin UI - SOLR-9032 is one of them, where
> the create alias feature fails without telling you.
>
> I'll try to get this committed by the end of the weekend.
>
> Upayavira
>
> On Fri, 29 Apr 2016, at 03:44 AM, Anshum Gupta wrote:
>
> I've updated the "Update Version Numbers in the Source Code" section on
> the ReleaseToDo page. It'd be good to have someone else also take a look
> at it.
>
> Here is what I've changed (only bug fix release):
> * Only bump up the version on the release branch using addVersion.py
> * Don't bump it up on the non-release versions in case of bug fix release.
> * As part of the post-release process, use the commit hash from the
> release branch version bump up, to increment the version on the non-release
> branches.
>
> I thought we could do this for non bug-fix releases too, but I was wrong.
> Minor versions need to be bumped up on stable branches (and trunk) because
> during the release process for say version 6.1, there might be commits for
> 6.2 and we'd need stable branches and master, both to support those commits.
> We could debate not needing something like this for major versions,
> but I don't think it's worth the pain of different release processes
> for each branch; I'm not hung up on this, though.
>
>
> On Thu, Apr 28, 2016 at 5:31 PM, Anshum Gupta 
> wrote:
>
> That's fixed (about to commit the fix from LUCENE-7265) though.
>
> While discussing the release process, Steve mentioned that we should
> document the failing back-compat index test on the non-release branches due
> to the missing index for the unreleased version.
> On discussing further, he suggested that we instead move the process of
> adding the version to non-release branches as a post-release task. This
> way, we wouldn't have failing tests until the release goes through and the
> back-compat indexes are checked in.
>
> We still would have failing tests for the release branch but there's no
> way around that.
>
> So, I'll change the documentation to move those steps as post-release
> tasks.
>
>
> On Thu, Apr 28, 2016 at 11:40 AM, Anshum Gupta 
> wrote:
>
> Seems like LUCENE-6938 removed the merge logic that used the change id.
> Now the merge doesn't happen, and there's no logic that replaces it.
>
> I certainly can do with some help on this one.
>
> On Thu, Apr 28, 2016 at 11:24 AM, Anshum Gupta 
> wrote:
>
> Just wanted to make sure I wasn't missing something here again. While
> trying to update the version on 5x, after having done that on 5.5, using
> the addVersion.py script and following the instructions, the command
> consistently fails. Here's what I've been trying to do:
>
>
> python3 -u dev-tools/scripts/addVersion.py --changeid 49ba147 5.5.1
>
>
> Seems like addVersion.py is broken for minor version releases so I'd need
> some help from someone who has a better understanding of python than I do.
> I observed that 5.5.1 Version gets added to Version.java but also gets
> marked as deprecated.
>
>
>
> On Thu, Apr 28, 2016 at 9:27 AM, Anshum Gupta 
> wrote:
>
> Too much going on! Thanks Yonik.
> I'll start working on the RC now.
>
> NOTE: Please don't back port any more issues right now. In case of
> exceptions, please raise them here.
>
> On Thu, Apr 28, 2016 at 9:09 AM, Yonik Seeley  wrote:
>
> On Thu, Apr 28, 2016 at 12:04 PM, Anshum Gupta 
> wrote:
> > Thanks. I'm waiting for the last back port of SOLR-8865.
>
> It should be already be there... I closed it yesterday.
>
> -Yonik
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
>
>
>
> --
> Anshum Gupta
>
>
>
>
>
> --
> Anshum Gupta
>
>
>
>
>
> --
> Anshum Gupta
>
>
>
>
>
> --
> Anshum Gupta
>
>
>
>
>
> --
> Anshum Gupta
>
>
>



-- 
Anshum Gupta


[jira] [Commented] (SOLR-9032) Alias creation fails in new UI

2016-04-29 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264285#comment-15264285
 ] 

Upayavira commented on SOLR-9032:
-

Thx for spotting, [~ctargett]

> Alias creation fails in new UI
> --
>
> Key: SOLR-9032
> URL: https://issues.apache.org/jira/browse/SOLR-9032
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 6.0
>Reporter: Upayavira
>Assignee: Upayavira
> Fix For: 6.0.1
>
> Attachments: SOLR-9032.patch
>
>
> Using the Collections UI to create an alias makes a call like this:
> http://$HOST:8983/solr/admin/collections?_=1461358635047&action=CREATEALIAS&collections=%5Bobject+Object%5D&name=assets&wt=json
> The collections param is effectively [object Object], which is clearly wrong; 
> it should be a comma-separated list of collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9026) Design Facet Telemetry for non-JSON field facet

2016-04-29 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264235#comment-15264235
 ] 

Michael Sun commented on SOLR-9026:
---

Thanks [~yo...@apache.org] for reviewing. Here are my thoughts.

bq. "facet-trace" is the same name used for JSON Facet API, right? If both 
faceting components are used at the same time, does this work?
That's a good point. Let me separate them.

bq. it seems like one would really want elapsed time per facet.field (i.e 
per-sub-facet)?
I think so; particularly when the query facets on multiple fields, it's good 
to know which field facet causes the problem.


> Design Facet Telemetry for non-JSON field facet
> ---
>
> Key: SOLR-9026
> URL: https://issues.apache.org/jira/browse/SOLR-9026
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: master
>
> Attachments: SOLR-9026.patch
>
>
> Non-JSON faceting is widely used and telemetry is helpful in diagnosing 
> expensive queries. As a first step, this JIRA is to design telemetry for field 
> facets.
> Example: (using films)
> {code}
> $ curl 
> 'http://localhost:8228/solr/films/select?debugQuery=true&facet.field=genre&facet.field=directed_by&facet=true&indent=on&q=*:*&wt=json'
> ...
> "facet-trace":{
>   "elapse":1,
>   "sub-facet":[{
>   "processor":"SimpleFacets",
>   "elapse":1,
>   "action":"field facet",
>   "maxThreads":0,
>   "sub-facet":[{
>   "method":"FC",
>   "inputDocSetSize":1100,
>   "field":"genre",
>   "numBuckets":213},
> {
>   "method":"FC",
>   "inputDocSetSize":1100,
>   "field":"directed_by",
>   "numBuckets":1053}]}]},
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 16615 - Still Failing!

2016-04-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16615/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:42332/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:42332/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([8C1E3B997D636B7C:44A0443D39F0684]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:661)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1073)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:962)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:898)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Commented] (SOLR-9026) Design Facet Telemetry for non-JSON field facet

2016-04-29 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264221#comment-15264221
 ] 

Yonik Seeley commented on SOLR-9026:


Looking good... a couple of quick points:
- "facet-trace" is the same name used for JSON Facet API, right?  If both 
faceting components are used at the same time, does this work?
- it seems like one would really want elapsed time per facet.field (i.e 
per-sub-facet)?
- instead of changing "long elapse" to "Long elapse", perhaps just use "-1" to 
detect if it's been set?
- "fdebugCurrentTermCount" is oddly named... perhaps just "fdebug"?

> Design Facet Telemetry for non-JSON field facet
> ---
>
> Key: SOLR-9026
> URL: https://issues.apache.org/jira/browse/SOLR-9026
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: master
>
> Attachments: SOLR-9026.patch
>
>
> Non-JSON faceting is widely used and telemetry is helpful in diagnosing 
> expensive queries. As a first step, this JIRA is to design telemetry for field 
> facets.
> Example: (using films)
> {code}
> $ curl 
> 'http://localhost:8228/solr/films/select?debugQuery=true&facet.field=genre&facet.field=directed_by&facet=true&indent=on&q=*:*&wt=json'
> ...
> "facet-trace":{
>   "elapse":1,
>   "sub-facet":[{
>   "processor":"SimpleFacets",
>   "elapse":1,
>   "action":"field facet",
>   "maxThreads":0,
>   "sub-facet":[{
>   "method":"FC",
>   "inputDocSetSize":1100,
>   "field":"genre",
>   "numBuckets":213},
> {
>   "method":"FC",
>   "inputDocSetSize":1100,
>   "field":"directed_by",
>   "numBuckets":1053}]}]},
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7263) xmlparser: Allow SpanQueryBuilder to be used by derived classes

2016-04-29 Thread Daniel Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264216#comment-15264216
 ] 

Daniel Collins commented on LUCENE-7263:


Don't think there is anything controversial here, it's just allowing derived 
classes access to the span builder, but if anyone sees any issues, let me know.
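
A minimal sketch of the kind of subclass this enables, assuming the patch just
widens the field's visibility to protected; the element name and
MySpanQueryBuilder are hypothetical:

{code}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.queryparser.xml.CoreParser;

// Illustrative subclass: once spanFactory is protected, a derived parser can
// register its own span query builders, mirroring what queryFactory (already
// protected) allows for plain queries.
public class MyXmlQueryParser extends CoreParser {
  public MyXmlQueryParser(String defaultField, Analyzer analyzer) {
    super(defaultField, analyzer);
    spanFactory.addBuilder("MySpanQuery", new MySpanQueryBuilder()); // hypothetical builder
  }
}
{code}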

> xmlparser: Allow SpanQueryBuilder to be used by derived classes
> ---
>
> Key: LUCENE-7263
> URL: https://issues.apache.org/jira/browse/LUCENE-7263
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: master
>Reporter: Daniel Collins
> Attachments: LUCENE-7263.patch
>
>
> Following on from LUCENE-7210 (and others), the xml queryparser has different 
> factories, one for creating normal queries and one for creating span queries.
> The former is a protected variable so can be used by derived classes, the 
> latter isn't.
> This makes the spanFactory a variable that can be used more easily.  No 
> functional changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7263) xmlparser: Allow SpanQueryBuilder to be used by derived classes

2016-04-29 Thread Daniel Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Collins updated LUCENE-7263:
---
Attachment: LUCENE-7263.patch

> xmlparser: Allow SpanQueryBuilder to be used by derived classes
> ---
>
> Key: LUCENE-7263
> URL: https://issues.apache.org/jira/browse/LUCENE-7263
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: master
>Reporter: Daniel Collins
> Attachments: LUCENE-7263.patch
>
>
> Following on from LUCENE-7210 (and others), the xml queryparser has different 
> factories, one for creating normal queries and one for creating span queries.
> The former is a protected variable so can be used by derived classes, the 
> latter isn't.
> This makes the spanFactory a variable that can be used more easily.  No 
> functional changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7258) Tune DocIdSetBuilder allocation rate

2016-04-29 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264201#comment-15264201
 ] 

Adrien Grand commented on LUCENE-7258:
--

I played with the scaling factor and the "poly 5" geo benchmark by temporarily 
switching from MatchingPoints to DocIdSetBuilder in LatLonPointInPolygonQuery. 
I got the following QPS:

||configuration||QPS||
|scaling factor = 9/8 (like in master)|48.3|
|scaling factor = 5/4|49.3|
|scaling factor = 3/2|50.2|
|scaling factor = 2|50.9|
|MatchingPoints|51.7|

This gets DocIdSetBuilder closer to the throughput of MatchingPoints in spite 
of the fact that it tries to better deal with the sparse case. Given that wasting 
space is not a big deal for this class (the data will be trashed once the query 
finishes running), I would be in favor of moving to a scaling factor of 3/2 or 
2.

Regarding reusing fixed bitsets, I think the only way would be to keep state on 
the index searcher and then have access to the cache in {{Query.createWeight}}. 
But I don't think I would like it: this looks quite dangerous to me as bit sets 
can take a lot of memory and you need a different cache per thread (if your 
index has 1B documents, you would need 120MB per thread for a single 
FixedBitSet, while a single query may need to create several of them).
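
For intuition, the candidate growth factors differ only in how much headroom
each reallocation adds (a simplified sketch; the real ArrayUtil.oversize also
rounds for pointer size and element alignment):

{code}
// Simplified growth functions; ArrayUtil.oversize additionally aligns sizes.
static int growNinthEighths(int size) { return size + (size >>> 3); } // ~9/8, master
static int growThreeHalves(int size)  { return size + (size >>> 1); } // ~3/2
static int growDouble(int size)       { return size << 1; }           // 2x

// Growing a buffer from 1k entries toward ~100M doc ids takes roughly
// 98 reallocations at 9/8, 29 at 3/2, and 17 at 2x; fewer reallocations
// means far less transient garbage for a short-lived builder.
{code}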

> Tune DocIdSetBuilder allocation rate
> 
>
> Key: LUCENE-7258
> URL: https://issues.apache.org/jira/browse/LUCENE-7258
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: Jeff Wartes
> Attachments: 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> allocation_plot.jpg
>
>
> LUCENE-7211 converted IntersectsPrefixTreeQuery to use DocIdSetBuilder, but 
> didn't actually reduce garbage generation for my Solr index.
> Since something like 40% of my garbage (by space) is now attributed to 
> DocIdSetBuilder.growBuffer, I charted a few different allocation strategies 
> to see if I could tune things more. 
> See here: http://i.imgur.com/7sXLAYv.jpg 
> The jump-then-flatline at the right would be where DocIdSetBuilder gives up 
> and allocates a FixedBitSet for a 100M-doc index. (The 1M-doc index 
> curve/cutoff looked similar)
> Perhaps unsurprisingly, the 1/8th growth factor in ArrayUtil.oversize is 
> terrible from an allocation standpoint if you're doing a lot of expansions, 
> and is especially terrible when used to build a short-lived data structure 
> like this one.
> By the time it goes with the FBS, it's allocated around twice as much memory 
> for the buffer as it would have needed for just the FBS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-04-29 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264167#comment-15264167
 ] 

Joel Bernstein commented on SOLR-8996:
--

[~dpgove], I'm going to do one last round of manual testing at scale, then I'm 
ready to backport. 

We'll need to apply the cherry-picks in order. It's probably easier for one of us to 
do them all. If you can provide a list of commits you want backported, I'll create 
an ordered list of commits. Then either one of us can do them.

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7262) Add back the "estimate match count" optimization

2016-04-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264162#comment-15264162
 ] 

David Smiley commented on LUCENE-7262:
--

+1 and nice testing.  I think you can use the new constructor accepting Terms 
for stats in more places (judging from a find-usages on DocIdSetBuilder).

> Add back the "estimate match count" optimization
> 
>
> Key: LUCENE-7262
> URL: https://issues.apache.org/jira/browse/LUCENE-7262
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7262.patch, LUCENE-7262.patch
>
>
> Follow-up to my last message on LUCENE-7051: I removed this optimization a 
> while ago because it made things a bit more complicated but did not seem to 
> help with point queries. However the reason why it did not seem to help was 
> that the benchmark only runs queries that match 25% of the dataset. This 
> makes the run time completely dominated by calls to FixedBitSet.set so the 
> call to FixedBitSet.cardinality() looks free. However with slightly sparser 
> queries like the geo benchmark generates (dense enough to trigger the 
> creation of a FixedBitSet but sparse enough so that FixedBitSet.set does not 
> dominate the run time), one can notice speed-ups when this call is skipped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-04-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264151#comment-15264151
 ] 

ASF subversion and git services commented on SOLR-9027:
---

Commit 3d3c3fb5fc2db39f433c5f449d0bee81ef89a189 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3d3c3fb ]

SOLR-9027: Pull the TermsEnum once for each segment
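
A minimal sketch of that per-segment pattern combined with the docFreq cutoff
described in the issue below; the field name and threshold variable are
illustrative:

{code}
// Pull the TermsEnum once per segment, then test each query term's docFreq
// against the cutoff; terms above the cutoff are dropped from the traversal.
Terms terms = leafReader.terms("node_id");        // illustrative field
if (terms != null) {
  TermsEnum termsEnum = terms.iterator();
  for (BytesRef term : queryTerms) {
    if (termsEnum.seekExact(term) && termsEnum.docFreq() <= maxDocFreq) {
      // include this term's postings in the graph traversal
    }
  }
}
{code}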


> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


