[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+116) - Build # 610 - Still Failing!

2016-05-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/610/
Java: 32bit/jdk-9-ea+116 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:34439/_tq/q

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:34439/_tq/q
at 
__randomizedtesting.SeedInfo.seed([7EE1F80BF1C1289E:F6B5C7D15F3D4566]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:601)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:399)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.deletePartiallyCreatedCollection(CollectionsAPIDistributedZkTest.java:231)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:181)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1010 - Failure

2016-05-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1010/

11 tests failed.
FAILED:  org.apache.solr.core.OpenCloseCoreStressTest.test10MinutesOld

Error Message:
Captured an uncaught exception in thread: Thread[id=37848, name=Lucene Merge 
Thread #421, state=RUNNABLE, group=TGRP-OpenCloseCoreStressTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=37848, name=Lucene Merge Thread #421, 
state=RUNNABLE, group=TGRP-OpenCloseCoreStressTest]
Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.IllegalStateException: this writer hit an unrecoverable error; cannot 
complete merge
at __randomizedtesting.SeedInfo.seed([915457D6CDC341D8]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
Caused by: java.lang.IllegalStateException: this writer hit an unrecoverable 
error; cannot complete merge
at 
org.apache.lucene.index.IndexWriter.commitMerge(IndexWriter.java:3487)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4248)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3679)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
Caused by: java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/solr/build/solr-core/test/J1/temp/solr.core.OpenCloseCoreStressTest_915457D6CDC341D8-001/index-NIOFSDirectory-033/_3n1_FSTOrd50_0.tix:
 Too many open files
at 
org.apache.lucene.mockfile.HandleLimitFS.onOpen(HandleLimitFS.java:48)
at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:81)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:197)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at 
org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:81)
at 
org.apache.lucene.util.LuceneTestCase.slowFileExists(LuceneTestCase.java:2695)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:737)
at 
org.apache.lucene.store.Directory.openChecksumInput(Directory.java:113)
at 
org.apache.lucene.store.MockDirectoryWrapper.openChecksumInput(MockDirectoryWrapper.java:1060)
at 
org.apache.lucene.codecs.memory.FSTOrdTermsReader.&lt;init&gt;(FSTOrdTermsReader.java:89)
at 
org.apache.lucene.codecs.memory.FSTOrdPostingsFormat.fieldsProducer(FSTOrdPostingsFormat.java:69)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.&lt;init&gt;(PerFieldPostingsFormat.java:261)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:341)
at 
org.apache.lucene.index.SegmentCoreReaders.&lt;init&gt;(SegmentCoreReaders.java:106)
at org.apache.lucene.index.SegmentReader.&lt;init&gt;(SegmentReader.java:66)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at 
org.apache.lucene.index.BufferedUpdatesStream$SegmentState.&lt;init&gt;(BufferedUpdatesStream.java:379)
at 
org.apache.lucene.index.BufferedUpdatesStream.openSegmentStates(BufferedUpdatesStream.java:411)
at 
org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:256)
at org.apache.lucene.index.IndexWriter._mergeInit(IndexWriter.java:3858)
at org.apache.lucene.index.IndexWriter.mergeInit(IndexWriter.java:3816)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3670)
... 2 more
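
The root cause above is Lucene's test-only handle-limiting filesystem (HandleLimitFS), which caps how many file handles may be open at once and fails fast with "Too many open files" when the cap is hit. A minimal sketch of the same idea in Python (names are hypothetical, not Lucene's API):

```python
import threading

class HandleLimitError(OSError):
    """Raised when the open-handle cap is exceeded."""

class HandleLimiter:
    """Count open handles and fail once a cap is reached, mimicking
    what a handle-limiting test filesystem does on every open()."""
    def __init__(self, max_open):
        self.max_open = max_open
        self._open = 0
        self._lock = threading.Lock()

    def on_open(self):
        with self._lock:
            if self._open >= self.max_open:
                raise HandleLimitError(
                    "Too many open files (limit=%d)" % self.max_open)
            self._open += 1

    def on_close(self):
        with self._lock:
            self._open -= 1

# Usage: call on_open()/on_close() around every file open and close;
# a leak (missing on_close) eventually trips the limit, as in this build.
```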


FAILED:  org.apache.solr.response.transform.TestSubQueryTransformerDistrib.test

Error Message:
Error from server at http://127.0.0.1:42542: Cannot create collection 
departments. Value of maxShardsPerNode is 9, and the number of nodes currently 
live or live and part of your createNodeSet is 5. This allows a maximum of 45 
to be created. Value of numShards is 6 and value of replicationFactor is 9. 
This requires 54 shards to be created (higher than the allowed number)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:42542: Cannot create collection departments. 
Value of maxShardsPerNode is 9, and the number of nodes currently live or live 
and part of your createNodeSet is 5. This allows a maximum of 45 to be created. 
Value of numShards is 6 and value of replicationFactor is 9. This requires 54 
shards to be created (higher than the allowed number)
at 
__randomizedtesting.SeedInfo.seed([915457D6CDC341D8:1900680C633F2C20]:0)
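
The collection-creation error above is pure arithmetic: maxShardsPerNode (9) times live nodes (5) allows 45 shard replicas cluster-wide, while numShards (6) times replicationFactor (9) requests 54. A sketch of that capacity check (function name is hypothetical):

```python
def check_collection_capacity(num_shards, replication_factor,
                              max_shards_per_node, live_nodes):
    """Return (required, allowed); creation is valid only if required <= allowed."""
    required = num_shards * replication_factor   # total shard replicas requested
    allowed = max_shards_per_node * live_nodes   # cluster-wide replica capacity
    return required, allowed

# The failing request from this build: 54 required > 45 allowed, so rejected.
required, allowed = check_collection_capacity(6, 9, 9, 5)
assert (required, allowed) == (54, 45)
```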

[jira] [Commented] (SOLR-9078) Let Parallel SQL support offset or start

2016-05-09 Thread lingya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277543#comment-15277543
 ] 

lingya commented on SOLR-9078:
--

In the next release, will Calcite be used instead of Presto to parse SQL?

> Let Parallel SQL support offset or start
> 
>
> Key: SOLR-9078
> URL: https://issues.apache.org/jira/browse/SOLR-9078
> Project: Solr
>  Issue Type: Bug
>Reporter: lingya
>Priority: Minor
>
> In Solr 6, the Parallel SQL Interface doesn't support offset or start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
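
Until Parallel SQL gains OFFSET/start support, one workaround is to emulate paging client-side: over-fetch with LIMIT and discard the first `offset` rows locally. A sketch (wasteful for deep pages; `rows_for_limit` is a hypothetical stand-in for running the query with `LIMIT n`):

```python
def paged(rows_for_limit, offset, page_size):
    """Emulate OFFSET client-side: request offset+page_size rows via LIMIT,
    then drop the first `offset` rows after they arrive."""
    rows = rows_for_limit(offset + page_size)
    return rows[offset:offset + page_size]

# Example against an in-memory stand-in for the SQL endpoint:
data = list(range(100))
fetch = lambda n: data[:n]   # pretend this runs `... ORDER BY id LIMIT n`
assert paged(fetch, offset=20, page_size=5) == [20, 21, 22, 23, 24]
```

The query must have a stable ORDER BY for this to return consistent pages.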



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 609 - Still Failing!

2016-05-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/609/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, 
TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MockDirectoryWrapper, TransactionLog]
at __randomizedtesting.SeedInfo.seed([B7650CA29295C2C9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
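
The suite-level assertion above comes from Solr's ObjectTracker, which records each tracked resource at creation and expects a matching release before the suite ends; anything left over fails afterClass with the message shown. A minimal sketch of that pattern (names are hypothetical, not Solr's API):

```python
class ObjectTracker:
    """Track live resources by label; assert_released() mirrors the
    afterClass check that failed in this build."""
    def __init__(self):
        self._live = {}  # id(obj) -> type name

    def track(self, obj):
        self._live[id(obj)] = type(obj).__name__

    def release(self, obj):
        self._live.pop(id(obj), None)

    def assert_released(self):
        if self._live:
            raise AssertionError(
                "ObjectTracker found %d object(s) that were not released!!! %s"
                % (len(self._live), sorted(self._live.values())))
```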




Build Log:
[...truncated 10857 lines...]
   [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J1/temp/solr.schema.TestManagedSchemaAPI_B7650CA29295C2C9-001/init-core-data-001
   [junit4]   2> 464087 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B7650CA29295C2C9]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 464088 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B7650CA29295C2C9]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 464088 INFO  (Thread-1113) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 464088 INFO  (Thread-1113) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 464188 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B7650CA29295C2C9]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:40070
   [junit4]   2> 464188 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B7650CA29295C2C9]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 464189 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B7650CA29295C2C9]-worker) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 464195 INFO  (zkCallback-19314-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@252970 name:ZooKeeperConnection 
Watcher:127.0.0.1:40070 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 464195 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B7650CA29295C2C9]-worker) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 464195 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B7650CA29295C2C9]-worker) [] 
o.a.s.c.c.SolrZkClient Using 

[jira] [Updated] (SOLR-9097) Refactor shortestPath streaming expression

2016-05-09 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9097:
---
Attachment: SOLR-9097.patch

[~joel.bernstein] Please review.

> Refactor shortestPath streaming expression
> --
>
> Key: SOLR-9097
> URL: https://issues.apache.org/jira/browse/SOLR-9097
> Project: Solr
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
>Priority: Minor
> Attachments: SOLR-9097.patch
>
>
> Refactor ShortestPathStream to make it more compact/clean.






[jira] [Created] (SOLR-9097) Refactor shortestPath streaming expression

2016-05-09 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-9097:
--

 Summary: Refactor shortestPath streaming expression
 Key: SOLR-9097
 URL: https://issues.apache.org/jira/browse/SOLR-9097
 Project: Solr
  Issue Type: Improvement
Reporter: Cao Manh Dat
Priority: Minor


Refactor ShortestPathStream to make it more compact/clean.






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 486 - Still Failing

2016-05-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/486/

No tests ran.

Build Log:
[...truncated 40521 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (18.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 28.6 MB in 0.03 sec (994.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 63.0 MB in 0.05 sec (1167.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 73.5 MB in 0.06 sec (1185.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6003 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6003 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 218 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.5.1
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1414, in &lt;module&gt;
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1358, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1396, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 590, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 736, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1351, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/build.xml:536:
 exec returned: 1

Total time: 30 minutes 16 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
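
The smoke tester aborts because release 5.5.1 has no back-compat coverage in TestBackwardsCompatibility; the check itself reduces to a set difference between known releases and tested releases. A sketch (function name is hypothetical):

```python
def untested_releases(all_releases, tested_releases):
    """Return past releases that lack back-compat test coverage, oldest first."""
    missing = set(all_releases) - set(tested_releases)
    return sorted(missing, key=lambda v: tuple(int(p) for p in v.split(".")))

# The failing run above had 5.5.1 released but not covered:
assert untested_releases(["5.5.0", "5.5.1", "6.0.0"],
                         ["5.5.0", "6.0.0"]) == ["5.5.1"]
```

A non-empty result is what triggers the script's RuntimeError.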




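
The repeated "verify md5/sha1 digests" steps in the smoker output boil down to hashing the downloaded artifact and comparing against the published digest file. A stdlib sketch of that step:

```python
import hashlib

def verify_digest(data, expected_hex, algo="sha1"):
    """Raise if the artifact's digest does not match the published value."""
    actual = hashlib.new(algo, data).hexdigest()
    if actual != expected_hex.strip().lower():
        raise RuntimeError("%s digest mismatch: got %s, expected %s"
                           % (algo, actual, expected_hex))

# Self-consistent example (stands in for lucene-*.tgz and its .sha1 file):
payload = b"lucene artifact bytes"
verify_digest(payload, hashlib.sha1(payload).hexdigest())
verify_digest(payload, hashlib.md5(payload).hexdigest(), "md5")
```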

[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 55 - Still Failing

2016-05-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/55/

No tests ran.

Build Log:
[...truncated 40520 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (14.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.1.0-src.tgz...
   [smoker] 28.6 MB in 0.03 sec (1097.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.1.0.tgz...
   [smoker] 63.0 MB in 0.05 sec (1176.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.1.0.zip...
   [smoker] 73.5 MB in 0.07 sec (1115.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.1.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5999 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.1.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5999 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.1.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 218 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (58.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.1.0-src.tgz...
   [smoker] 37.8 MB in 0.04 sec (1054.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.1.0.tgz...
   [smoker] 132.1 MB in 0.13 sec (1052.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.1.0.zip...
   [smoker] 140.7 MB in 0.13 sec (1080.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.1.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.1.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.1.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.1.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.1.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.1.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.1.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983

[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+116) - Build # 608 - Failure!

2016-05-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/608/
Java: 64bit/jdk-9-ea+116 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Mon May 09 18:32:37 
ACT 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Mon May 09 18:32:37 ACT 2016
at 
__randomizedtesting.SeedInfo.seed([1818DB4CFE202E8D:C3B3DB8AFB08473E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1506)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:858)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)
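
The failing watchCoreStartAt helper polls the core's startAt timestamp until it moves past a given time, and raises the "timed out waiting for collection1 startAt time to exceed" assertion otherwise. A sketch of that poll loop with injectable clock and sleep so it can be tested deterministically (names are hypothetical):

```python
import time

def wait_for_start_at(get_start_at, min_time, timeout_s=30.0, poll_s=0.1,
                      clock=time.monotonic, sleep=time.sleep):
    """Poll get_start_at() until it returns a value > min_time, or fail."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        start_at = get_start_at()
        if start_at > min_time:
            return start_at
        sleep(poll_s)
    raise AssertionError(
        "timed out waiting for startAt time to exceed: %r" % (min_time,))

# Injected fakes make the success path deterministic:
values = iter([100, 100, 250])
assert wait_for_start_at(lambda: next(values), 200,
                         timeout_s=10, sleep=lambda s: None) == 250
```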


[jira] [Updated] (LUCENE-7271) Cleanup jira's concept of 'master' and '6.0'

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7271:
-
Attachment: LUCENE-7271_S6_report.txt
LUCENE-7271_S5_hoss_todo.txt


Status update...


* S5
** Making progress manually reviewing issues from the S1 report -- about 50 
issues left to review.
*** attaching {{LUCENE-7271_S5_hoss_todo.txt}} which is my personal checklist 
i'm working through (deleting as i go)
* S6
** SOLR-4509 was the only issue in either CHANGES.txt 7.0 section, so i went 
ahead and updated it in jira to {{master (7.0)}}
** Attaching {{LUCENE-7271_S6_report.txt}} containing the GIT SHAs on master 
but not 6.0.0 that still need to be reviewed
*** I won't bother worrying about this until Step # S5 is done, but I wanted to 
go ahead and generate this report now so the list of commits wouldn't keep 
growing with stuff i didn't need to worry about.


> Cleanup jira's concept of 'master' and '6.0'
> 
>
> Key: LUCENE-7271
> URL: https://issues.apache.org/jira/browse/LUCENE-7271
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: LUCENE-7271_S1_report.csv, LUCENE-7271_S1_report.csv, 
> LUCENE-7271_S1_report.xls, LUCENE-7271_S1_report.xls, 
> LUCENE-7271_S2_6.0_report.xml, LUCENE-7271_S2_master_report.tgz, 
> LUCENE-7271_S5_hoss_todo.txt, LUCENE-7271_S6_report.txt, jira_export.pl
>
>
> Jira's concept of "Fix Version: master" is currently screwed up, as noted & 
> discussed in this mailing list thread...
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201604.mbox/%3Calpine.DEB.2.11.1604131529140.15570@tray%3E
> The current best plan of attack (summary) is:
> * use Jira's "Merge Versions" feature to merge {{master}} into {{6.0}}
> * add a new {{master (7.0)}} to use moving forward
> * manually audit/fix the fixVersion of some clean up issues as needed.
> I'm using this issue to track this work.
> 
> Detailed Check list of planned steps:
> * S1: Generate a CSV report listing all resolved/closed Jira's with 
> 'fixVersion=master AND fixVersion=6.1'
> ** 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20status%20in%20%28Resolved%2C%20Closed%29%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20%3D%206.1%20ORDER%20BY%20resolved%20DESC%2C%20key%20DESC
> *** currently ~100 issues
> ** The operating assumption is that any non-resolved issues should have the 
> fixVersion set correctly if/when they are resolved in the future
> * S2: Generate two CSV reports containing all issues that match these 2 
> queries for fixVersion=master and fixVersion=6.0
> *** master: 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%20master%20ORDER%20BY%20key%20DESC
> *** 6.0: 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%206.0%20ORDER%20BY%20key%20DESC
> ** these reports can be attached to this issue (LUCENE-7271) for posterity in 
> case people want to later review what the state of any issue was before this 
> whole process was started and versions were changed/merged
> * S3: Use Jira's "Merge Versions" feature to merge "master" into "6.0"
> ** This needs to be done distinctly for both LUCENE and SOLR
> * S4: Add a new "master (7.0)" version to Jira
> ** This needs to be done distinctly for both LUCENE and SOLR
> * S5: audit every issue in the CSV file from S1 above to determine if/when 
> the issue should get fixVersion="master (7.0)" *added* to it or 
> fixVersion="6.0" *removed* from it:
> ** if it only ever had commits to master (ie: before branch_6x was made on 
> March 2nd) then no action needed.
> ** if it has distinct commits to both master and branch_6x after branch_6x was 
> made on March 2nd, then fixVersion="master (7.0)" should definitely be added.
> ** if it has no commits to branch_6_0, and the only commits to branch_6x are 
> after branch_6_0 was created on March 3rd, then fixVersion="6.0" should be 
> removed.
> ** if there are no obvious commits in the issue comments, then sanity check 
> why it has any fixVersion at all (can't reproduce? dup? etc...)
> * S6: Audit CHANGES.txt & git commits and *replace* fixVersion=6.0 with 
> fixVersion="master (7.0)" on the handful of issues where appropriate in case 
> they fell through the cracks in S5:
> ** Look at the 7.0 section of lucene/CHANGES.txt & solr/CHANGES.txt for new 
> features
> *** currently only 1 Jira is mentioned in either file's 7.0 section
> ** Use {{git co releases/lucene-solr/6.0.0 && git cherry -v master | egrep 
> '^\+'}} to find changesets on master that were not included in 6.0
> *** currently ~40 commits
> ** before removing fixVersion=6.0 from any of these issues, sanity check if 
> this is a 
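The {{git cherry}} audit described in step S6 above ({{git co}} being a common alias for {{git checkout}}) can be tried out on a throwaway repository before running it against a real lucene-solr checkout. A minimal sketch with hypothetical branch names:

```shell
# git cherry lists commits reachable from one branch but with no equivalent
# on another; lines prefixed with '+' are missing from the upstream branch.
set -e
work=$(mktemp -d) && cd "$work"
git init -q repo && cd repo
git -c user.name=t -c user.email=t@x commit -q --allow-empty -m "shared commit"
git branch release-6.0.0            # snapshot the shared history
git -c user.name=t -c user.email=t@x commit -q --allow-empty -m "master-only fix"
# Commits on HEAD that never made it onto release-6.0.0:
missing=$(git cherry -v release-6.0.0 HEAD | grep '^+')
echo "$missing"
```

Against the real checkout, the same '+'-prefixed lines are the changesets whose issues need their fixVersion double-checked.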

[jira] [Updated] (LUCENE-7168) Remove geo3d test leniency

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7168:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Remove geo3d test leniency
> --
>
> Key: LUCENE-7168
> URL: https://issues.apache.org/jira/browse/LUCENE-7168
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7168.patch, LUCENE-7168.patch, LUCENE-7168.patch, 
> LUCENE-7168.patch, LUCENE-7168.patch, LUCENE-7168.patch, LUCENE-7168.patch
>
>
> Today the test hides possible failures by leniently handling quantization 
> issues.
> We should fix it to do what geo2d tests now do: pre-quantize indexed points, 
> but don't quantize query shapes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8967) UI should not show the replication tab in the core selector panel in cloud mode

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8967:
---
Fix Version/s: (was: 6.0)
   master (7.0)

> UI should not show the replication tab in the core selector panel in cloud 
> mode
> ---
>
> Key: SOLR-8967
> URL: https://issues.apache.org/jira/browse/SOLR-8967
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8967.patch
>
>
> When running Solr in cloud mode, the UI has a 'Replication' tab under 'Core 
> Selector'. 
> I think we should not display this when Solr is running in cloud mode. It 
> doesn't add any value, as replication is only relevant in master-slave 
> setups. It could also be harmful if someone accidentally clicks on 'Disable 
> Replication' in the UI. 






[jira] [Updated] (LUCENE-7184) Add GeoEncodingUtils to core

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7184:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Add GeoEncodingUtils to core
> 
>
> Key: LUCENE-7184
> URL: https://issues.apache.org/jira/browse/LUCENE-7184
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7184.patch, LUCENE-7184.patch, LUCENE-7184.patch
>
>
> This is part 1 for LUCENE-7165. This task will add a {{GeoEncodingUtils}} 
> helper class to {{o.a.l.geo}} in the core module for reusing lat/lon encoding 
> methods. Existing encoding methods in {{LatLonPoint}} will be refactored to 
> the new helper class so a new numerically stable morton encoding can be added 
> that reuses the same encoding methods.






[jira] [Commented] (SOLR-8967) UI should not show the replication tab in the core selector panel in cloud mode

2016-05-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277328#comment-15277328
 ] 

Hoss Man commented on SOLR-8967:



Manually correcting fixVersion per Step #S5 of LUCENE-7271


> UI should not show the replication tab in the core selector panel in cloud 
> mode
> ---
>
> Key: SOLR-8967
> URL: https://issues.apache.org/jira/browse/SOLR-8967
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8967.patch
>
>
> When running Solr in cloud mode, the UI has a 'Replication' tab under 'Core 
> Selector'. 
> I think we should not display this when Solr is running in cloud mode. It 
> doesn't add any value, as replication is only relevant in master-slave 
> setups. It could also be harmful if someone accidentally clicks on 'Disable 
> Replication' in the UI. 






[jira] [Updated] (SOLR-8902) ReturnFields can return fields that were not requested

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8902:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> ReturnFields can return fields that were not requested
> --
>
> Key: SOLR-8902
> URL: https://issues.apache.org/jira/browse/SOLR-8902
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Reporter: Ryan McKinley
>Assignee: Ryan McKinley
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8902.diff
>
>
> It looks like something changed that now returns all fields requested from 
> Lucene, not just the ones requested from Solr.
> This is the difference between 'fields' and 'okFieldNames' in 
> SolrReturnFields.
> The logic here:
> https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/SolrReturnFields.java#L141
> adds all the 'fields' to 'okFieldName'
> I think that should be removed






[jira] [Updated] (LUCENE-7188) IllegalStateException in NRTCachingDirectory.listAll

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7188:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> IllegalStateException in NRTCachingDirectory.listAll
> 
>
> Key: LUCENE-7188
> URL: https://issues.apache.org/jira/browse/LUCENE-7188
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.2.1
> Environment: Production, QA
>Reporter: Semion Mc Alice
>Assignee: Yonik Seeley
> Fix For: 6.1, 5.5.1, master (7.0)
>
> Attachments: LUCENE-7188.patch
>
>
> Hey,
> we are getting IllegalStateException in 2 different circumstances. The first 
> one is on Status calls:
> {noformat}
> ERROR - 2016-02-01 22:32:43.164; [   ] org.apache.solr.common.SolrException; 
> org.apache.solr.common.SolrException: Error handling 'status' action 
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:748)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:228)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:193)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:660)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:431)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at 
> org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:95)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1129)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.IllegalStateException: file: 
> MMapDirectory@D:\Solr\server\solr\Prod_Core1_shard1_replica2\data\index 
> lockFactory=org.apache.lucene.store.NativeFSLockFactory@65d307e5 appears both 
> in delegate and in cache: cache=[_a0.fdt, _9t_7.liv, _a0.fdx, 
> _a0_Lucene50_0.tip, _a0.nvm, _a0_Lucene50_0.doc, _a0_Lucene50_0.tim, _a0.fnm, 
> _a0_Lucene50_0.pos, _a0.si],delegate=[pending_segments_93, segments_92, 
> write.lock, _9t.fdt, _9t.fdx, _9t.fnm, _9t.nvd, _9t.nvm, _9t.si, _9t_6.liv, 
> _9t_Lucene50_0.doc, _9t_Lucene50_0.pos, _9t_Lucene50_0.tim, 
> _9t_Lucene50_0.tip, _9u.fdt, _9u.fdx, _9u.fnm, _9u.nvd, _9u.nvm, _9u.si, 
> _9u_Lucene50_0.doc, _9u_Lucene50_0.pos, _9u_Lucene50_0.tim, 
> _9u_Lucene50_0.tip, _9v.fdt, _9v.fdx, _9v.fnm, _9v.nvd, _9v.nvm, _9v.si, 
> _9v_Lucene50_0.doc, _9v_Lucene50_0.pos, _9v_Lucene50_0.tim, 
> _9v_Lucene50_0.tip, _9w.fdt, _9w.fdx, _9w.fnm, _9w.nvd, _9w.nvm, _9w.si, 
> _9w_Lucene50_0.doc, _9w_Lucene50_0.pos, _9w_Lucene50_0.tim, 
> _9w_Lucene50_0.tip, 

[jira] [Updated] (SOLR-4509) Move to non deprecated HttpClient impl classes to remove stale connection check on every request and move connection lifecycle management towards the client.

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4509:
---
Fix Version/s: (was: 6.0)
   master (7.0)

> Move to non deprecated HttpClient impl classes to remove stale connection 
> check on every request and move connection lifecycle management towards the 
> client.
> -
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> IsStaleTime.java, SOLR-4509-4_4_0.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and a latency reduction of over 100 ms. This patch 
> was made in 
> the context of a project I'm leading, called Yokozuna, which relies on 
> distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/






[jira] [Commented] (SOLR-4509) Move to non deprecated HttpClient impl classes to remove stale connection check on every request and move connection lifecycle management towards the client.

2016-05-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277308#comment-15277308
 ] 

Hoss Man commented on SOLR-4509:



Manually correcting fixVersion per Step #S6 of LUCENE-7271


> Move to non deprecated HttpClient impl classes to remove stale connection 
> check on every request and move connection lifecycle management towards the 
> client.
> -
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> 0001-SOLR-4509-Move-to-non-deprecated-HttpClient-impl-cla.patch, 
> IsStaleTime.java, SOLR-4509-4_4_0.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and a latency reduction of over 100 ms. This patch 
> was made in 
> the context of a project I'm leading, called Yokozuna, which relies on 
> distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/






[jira] [Updated] (LUCENE-7214) Remove two-phase iteration from LatLonPoint.newDistanceQuery

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7214:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Remove two-phase iteration from LatLonPoint.newDistanceQuery
> 
>
> Key: LUCENE-7214
> URL: https://issues.apache.org/jira/browse/LUCENE-7214
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7214.patch
>
>
> This was a nice crutch for tons of expensive per-document methods, but it's 
> no longer needed. After LUCENE-7147 these are truly only boundary cases 
> and we aren't doing a ton of per doc checks anymore. See LUCENE-7212 for 
> inspiration.
> The extra bitset needed, the 64-bit docvalues fetch, etc.: this cost is no 
> longer worth it.






[jira] [Updated] (SOLR-8976) Add SolrJ support for REBALANCELEADERS Collections API

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8976:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Add SolrJ support for REBALANCELEADERS Collections API
> --
>
> Key: SOLR-8976
> URL: https://issues.apache.org/jira/browse/SOLR-8976
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8976.patch
>
>
> We should have SolrJ supporting REBALANCELEADERS API directly.






[jira] [Updated] (LUCENE-7069) Add LatLonPoint.nearest to find closest indexed point to a given query point

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7069:
-
Fix Version/s: (was: 6.0)
   master (7.0)


https://issues.apache.org/jira/browse/LUCENE-7215

> Add LatLonPoint.nearest to find closest indexed point to a given query point
> 
>
> Key: LUCENE-7069
> URL: https://issues.apache.org/jira/browse/LUCENE-7069
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7069.patch, LUCENE-7069.patch, LUCENE-7069.patch, 
> LUCENE-7069.patch
>
>
> KD trees (used by Lucene's new dimensional points) excel at finding "nearest 
> neighbors" to a given query point ... I think we should add this to Lucene's 
> sandbox as:
> {noformat}
>   public static Document nearest(IndexReader r, String field, double lat, 
> double lon) throws IOException
> {noformat}
> I only implemented the 1 nearest neighbor for starters ... I think we can 
> easily generalize this in the future to K nearest.
> It could also be generalized to more than 2 dimensions, but for now I'm 
> making the class package private in sandbox for just the geo2d (lat/lon) use 
> case.
> I don't think this should go into 6.0.0, but should go into 6.1: it's a new 
> feature, and we need to wrap up and ship 6.0.0 already ;)






[jira] [Comment Edited] (SOLR-8962) Add sort Streaming Expression

2016-05-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277283#comment-15277283
 ] 

Hoss Man edited comment on SOLR-8962 at 5/9/16 11:16 PM:
-


Manually correcting fixVersion per Step #S5 of LUCENE-7271



was (Author: hossman):

https://issues.apache.org/jira/browse/LUCENE-7215

> Add sort Streaming Expression
> -
>
> Key: SOLR-8962
> URL: https://issues.apache.org/jira/browse/SOLR-8962
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Dennis Gove
>Priority: Critical
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8962.patch, SOLR-8962.patch
>
>
> The sort Streaming Expression does an in-memory sort of the Tuples returned 
> by its underlying stream. This is intended to be used for sorting sets 
> gathered during local graph traversals. This will make it easy to gather sets 
> during a traversal and use all of the sort based set operations (merge, 
> innerJoin, outerJoin, reduce, complement, intersect). 
> This will be particularly useful with the gatherNodes expression (SOLR-8925). 
> Sample syntax:
> {code}
> intersect(
>sort(gatherNodes(...), "fieldA asc"),
>sort(gatherNodes(...), "fieldA asc"),
>on)
> {code}






[jira] [Comment Edited] (LUCENE-7069) Add LatLonPoint.nearest to find closest indexed point to a given query point

2016-05-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277287#comment-15277287
 ] 

Hoss Man edited comment on LUCENE-7069 at 5/9/16 11:15 PM:
---


Manually correcting fixVersion per Step #S5 of LUCENE-7271



was (Author: hossman):

https://issues.apache.org/jira/browse/LUCENE-7215

> Add LatLonPoint.nearest to find closest indexed point to a given query point
> 
>
> Key: LUCENE-7069
> URL: https://issues.apache.org/jira/browse/LUCENE-7069
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7069.patch, LUCENE-7069.patch, LUCENE-7069.patch, 
> LUCENE-7069.patch
>
>
> KD trees (used by Lucene's new dimensional points) excel at finding "nearest 
> neighbors" to a given query point ... I think we should add this to Lucene's 
> sandbox as:
> {noformat}
>   public static Document nearest(IndexReader r, String field, double lat, 
> double lon) throws IOException
> {noformat}
> I only implemented the 1 nearest neighbor for starters ... I think we can 
> easily generalize this in the future to K nearest.
> It could also be generalized to more than 2 dimensions, but for now I'm 
> making the class package private in sandbox for just the geo2d (lat/lon) use 
> case.
> I don't think this should go into 6.0.0, but should go into 6.1: it's a new 
> feature, and we need to wrap up and ship 6.0.0 already ;)






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 60 - Failure

2016-05-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/60/

2 tests failed.
FAILED:  
org.apache.lucene.codecs.perfield.TestPerFieldDocValuesFormat.testBinaryFixedLengthVsStoredFields

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([771AFF4B9A67369E]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.perfield.TestPerFieldDocValuesFormat

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([771AFF4B9A67369E]:0)
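The randomizedtesting harness embeds the master seed in the stack trace above; the usual "reproduce with" line is not included in this truncated report, but for a 6.x checkout it is conventionally reassembled from the seed roughly like so (a sketch of the convention, not a line printed by this build):

```shell
# Reassemble the conventional ant reproduce line from the seed printed in
# the stack trace above. Actually running it requires an ant-based
# lucene-solr checkout; here we only build the command string.
seed=771AFF4B9A67369E
cmd="ant test -Dtestcase=TestPerFieldDocValuesFormat \
 -Dtests.method=testBinaryFixedLengthVsStoredFields -Dtests.seed=$seed"
echo "$cmd"
```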




Build Log:
[...truncated 1721 lines...]
   [junit4] Suite: org.apache.lucene.codecs.perfield.TestPerFieldDocValuesFormat
   [junit4]   2> May 09, 2016 4:14:36 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> WARNING: Suite execution timed out: 
org.apache.lucene.codecs.perfield.TestPerFieldDocValuesFormat
   [junit4]   2>1) Thread[id=4934, 
name=TEST-TestPerFieldDocValuesFormat.testBinaryFixedLengthVsStoredFields-seed#[771AFF4B9A67369E],
 state=TIMED_WAITING, group=TGRP-TestPerFieldDocValuesFormat]
   [junit4]   2> at java.lang.Object.wait(Native Method)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.doWait(IndexWriter.java:4328)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1802)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1736)
   [junit4]   2> at 
org.apache.lucene.index.RandomIndexWriter.forceMerge(RandomIndexWriter.java:421)
   [junit4]   2> at 
org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestBinaryVsStoredFields(BaseDocValuesFormatTestCase.java:1392)
   [junit4]   2> at 
org.apache.lucene.index.BaseDocValuesFormatTestCase.testBinaryFixedLengthVsStoredFields(BaseDocValuesFormatTestCase.java:1412)
   [junit4]   2> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2> at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2> at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2> at java.lang.reflect.Method.invoke(Method.java:498)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
   [junit4]   2> at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
   [junit4]   2> at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
   [junit4]   2> at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
   [junit4]   2> at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
   [junit4]   2> at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
   [junit4]   2> at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   [junit4]   2> at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
   [junit4]   2> at 

[jira] [Updated] (SOLR-8962) Add sort Streaming Expression

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8962:
---
Fix Version/s: (was: 6.0)
   master (7.0)


https://issues.apache.org/jira/browse/LUCENE-7215

> Add sort Streaming Expression
> -
>
> Key: SOLR-8962
> URL: https://issues.apache.org/jira/browse/SOLR-8962
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Dennis Gove
>Priority: Critical
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8962.patch, SOLR-8962.patch
>
>
> The sort Streaming Expression does an in-memory sort of the Tuples returned 
> by its underlying stream. This is intended to be used for sorting sets 
> gathered during local graph traversals. This will make it easy to gather sets 
> during a traversal and use all of the sort based set operations (merge, 
> innerJoin, outerJoin, reduce, complement, intersect). 
> This will be particularly useful with the gatherNodes expression (SOLR-8925). 
> Sample syntax:
> {code}
> intersect(
>sort(gatherNodes(...), "fieldA asc"),
>sort(gatherNodes(...), "fieldA asc"),
>on)
> {code}
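The set operations listed above are merge-based, which is why their inputs must arrive sorted. A minimal sketch of a merge-style intersect (Python, illustrative only; the tuple streams and the fieldA key are modeled as plain lists of dicts, not Solr's actual stream classes):

```python
def intersect_sorted(left, right, key):
    """Merge-style intersect of two streams sorted ascending on `key`.

    Emits tuples from `left` whose key also appears in `right`, reading
    each stream only once -- which is only correct because both inputs
    are already sorted."""
    li, ri = 0, 0
    out = []
    while li < len(left) and ri < len(right):
        lk, rk = left[li][key], right[ri][key]
        if lk == rk:
            out.append(left[li])
            li += 1
        elif lk < rk:
            li += 1
        else:
            ri += 1
    return out

a = [{"fieldA": 1}, {"fieldA": 2}, {"fieldA": 4}]
b = [{"fieldA": 2}, {"fieldA": 3}, {"fieldA": 4}]
print(intersect_sorted(a, b, "fieldA"))  # [{'fieldA': 2}, {'fieldA': 4}]
```

The same single-pass structure underlies merge, innerJoin, and complement, which is why an unsorted gatherNodes result has to go through sort first.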



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-8962) Add sort Streaming Expression

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-8962:


> Add sort Streaming Expression
> -
>
> Key: SOLR-8962
> URL: https://issues.apache.org/jira/browse/SOLR-8962
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Dennis Gove
>Priority: Critical
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8962.patch, SOLR-8962.patch
>
>
> The sort Streaming Expression does an in-memory sort of the Tuples returned 
> by its underlying stream. This is intended to be used for sorting sets 
> gathered during local graph traversals. This will make it easy to gather sets 
> during a traversal and use all of the sort based set operations (merge, 
> innerJoin, outerJoin, reduce, complement, intersect). 
> This will be particularly useful with the gatherNodes expression (SOLR-8925). 
> Sample syntax:
> {code}
> intersect(
>sort(gatherNodes(...), "fieldA asc"),
>sort(gatherNodes(...), "fieldA asc"),
>on)
> {code}






[jira] [Resolved] (SOLR-8962) Add sort Streaming Expression

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-8962.

Resolution: Fixed

> Add sort Streaming Expression
> -
>
> Key: SOLR-8962
> URL: https://issues.apache.org/jira/browse/SOLR-8962
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Dennis Gove
>Priority: Critical
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8962.patch, SOLR-8962.patch
>
>
> The sort Streaming Expression does an in-memory sort of the Tuples returned 
> by its underlying stream. This is intended to be used for sorting sets 
> gathered during local graph traversals. This will make it easy to gather sets 
> during a traversal and use all of the sort based set operations (merge, 
> innerJoin, outerJoin, reduce, complement, intersect). 
> This will be particularly useful with the gatherNodes expression (SOLR-8925). 
> Sample syntax:
> {code}
> intersect(
>sort(gatherNodes(...), "fieldA asc"),
>sort(gatherNodes(...), "fieldA asc"),
>on)
> {code}






[jira] [Updated] (LUCENE-7215) don't invoke full haversin for LatLonPoint.newDistanceQuery

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7215:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> don't invoke full haversin for LatLonPoint.newDistanceQuery
> ---
>
> Key: LUCENE-7215
> URL: https://issues.apache.org/jira/browse/LUCENE-7215
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7215.patch
>
>
> For tree traversals and edge cases we still sometimes invoke full haversin 
> (with asin() call and everything). This is not necessary: we just need to 
> compute the exact sort key needed for comparisons.
> While not a huge optimization, it's obviously less work and keeps the overhead 
> of the BKD traversal as low as possible. And it removes the slow asin call 
> from any hot path (it's already done for sorting too), with its large tables 
> and so on.
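The optimization rests on a monotonicity argument: the full distance is d = 2R*asin(sqrt(h)), and since both asin and sqrt are increasing on the relevant range, comparing the intermediate value h orders points exactly as comparing d would. A sketch of the idea (Python, illustrative only; the function names are hypothetical, not Lucene's API):

```python
from math import radians, sin, cos, asin, sqrt

def haversin_sort_key(lat1, lon1, lat2, lon2):
    # Monotonic in the true distance: cheap to compute, no sqrt or asin.
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    return (sin(dlat / 2) ** 2
            + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)

def haversin_meters(lat1, lon1, lat2, lon2):
    # Full haversin distance; only needed when the real value is required.
    earth_radius = 6_371_000.0  # mean radius in meters
    return 2 * earth_radius * asin(sqrt(haversin_sort_key(lat1, lon1, lat2, lon2)))

# Sorting by the cheap key orders points exactly like the full distance does.
origin = (48.86, 2.35)
pts = [(52.52, 13.40), (50.85, 4.35), (41.90, 12.50)]
print(sorted(pts, key=lambda p: haversin_sort_key(*origin, *p)))
```

On the hot path a distance-query radius can be converted to a sort-key threshold once, up front, so every per-point comparison skips the asin entirely.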






[jira] [Updated] (SOLR-8938) add optional --excluderegex argument to ZkCLI

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8938:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> add optional --excluderegex argument to ZkCLI
> -
>
> Key: SOLR-8938
> URL: https://issues.apache.org/jira/browse/SOLR-8938
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.6, 6.1, master (7.0)
>
> Attachments: SOLR-8938-part2.patch, SOLR-8938.patch
>
>
> Add optional {{--excluderegex}} argument to 
> [ZkCLI.java|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/cloud/ZkCLI.java]
>  class.
> This change preserves existing behavior (files whose name starts with a . 
> will not be uploaded to ZK) if the new optional argument is not specified. If 
> an {{--excluderegex}} argument is specified then files matching the regular 
> expression won’t be uploaded to ZK.
> Additionally, {{ZkConfigManager.uploadToZK}} now info logs the names of the 
> files that were skipped from uploading to ZK.
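The before/after behavior can be sketched like this (Python, illustrative only; the real change is in ZkCLI/ZkConfigManager, and the exact matching semantics of the new argument are an assumption):

```python
import re

def files_to_upload(names, exclude_regex=None):
    """Default: skip dot-files; with a regex, skip matching names instead."""
    if exclude_regex is None:
        pattern = re.compile(r"^\.")  # existing behavior: names starting with '.'
    else:
        pattern = re.compile(exclude_regex)
    kept, skipped = [], []
    for name in names:
        # Assumes the regex is anchored at the start of the filename.
        (skipped if pattern.match(name) else kept).append(name)
    return kept, skipped

names = [".gitignore", "schema.xml", "solrconfig.xml", "notes.bak"]
print(files_to_upload(names))               # skips '.gitignore' only
print(files_to_upload(names, r".*\.bak$"))  # skips 'notes.bak' only
```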






[jira] [Updated] (LUCENE-7222) Improve Polygon.contains()

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7222:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Improve Polygon.contains()
> --
>
> Key: LUCENE-7222
> URL: https://issues.apache.org/jira/browse/LUCENE-7222
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7222.patch
>
>
> The current PIP algorithm could use some improvements. I think we should swap 
> in the algorithm here: 
> https://www.ecse.rpi.edu/~wrf/Research/Short_Notes/pnpoly.html
> It is a bit faster for complex polygons:
> {noformat}
> n=50   
> 19.3 QPS -> 20.4 QPS
> n=500   
> 9.8 QPS -> 11.2 QPS
> n=1000 
> 6.3 QPS -> 7.4 QPS
> {noformat}
> It also has some nice properties:
> {quote}
>  if you partition a region of the plane into polygons, i.e., form a planar 
> graph, then PNPOLY will locate each point into exactly one polygon. In other 
> words, PNPOLY considers each polygon to be topologically a semi-open set. 
> This makes things simpler, i.e., causes fewer special cases, if you use 
> PNPOLY as part of a larger system. Examples of this include locating a point 
> in a planar graph, and intersecting two planar graphs. 
> {quote}
> You can see the current issues here by writing tests that pick numbers that 
> won't suffer from rounding errors, to see how the edges behave. For a 
> rectangle as an example, the current code will treat all edges and corners as 
> "contains=true", except for the top edge. This means that if you tried to 
> e.g. form a grid of rectangles (like described above), some points would 
> exist in more than one square.
> On the other hand if you port this same test to java.awt.Polygon, you will 
> see that only the bottom left corner, bottom side, and left side are treated 
> as "contains=true". So then your grid would work without any corner cases. 
> This algorithm behaves the same way.
> Finally, it supports multiple components and holes directly. This is nice for 
> the future because for a complex multipolygon, we can just have one tight 
> loop.
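For reference, the linked PNPOLY algorithm is a short ray-crossing test. A direct Python port of Franklin's C routine (illustrative; Lucene's Polygon.contains() integrates it differently):

```python
def pnpoly(xs, ys, testx, testy):
    """Ray-crossing point-in-polygon test (port of Franklin's PNPOLY).

    Flips `inside` at each edge the rightward ray crosses; the strict
    ">" / implicit "<=" asymmetry is what makes each polygon behave as a
    semi-open set, so adjacent grid cells never double-count a point."""
    inside = False
    j = len(xs) - 1
    for i in range(len(xs)):
        if ((ys[i] > testy) != (ys[j] > testy)) and \
           (testx < (xs[j] - xs[i]) * (testy - ys[i]) / (ys[j] - ys[i]) + xs[i]):
            inside = not inside
        j = i
    return inside

square_x = [0.0, 1.0, 1.0, 0.0]
square_y = [0.0, 0.0, 1.0, 1.0]
print(pnpoly(square_x, square_y, 0.5, 0.5))  # True  (interior)
print(pnpoly(square_x, square_y, 1.5, 0.5))  # False (exterior)
```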






[jira] [Updated] (LUCENE-7223) Add "store" hint to Points javadocs

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7223:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Add "store" hint to Points javadocs
> ---
>
> Key: LUCENE-7223
> URL: https://issues.apache.org/jira/browse/LUCENE-7223
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7223.patch
>
>
> We had this for e.g. docvalues fields from the beginning:
> {code}
>  * If you also need to store the value, you should add a
>  * separate {@link StoredField} instance.
> {code}
> We should add this to the points types too; it will prevent confusion.






[jira] [Updated] (LUCENE-7170) move BaseGeoPointTestCase to test-framework

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7170:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> move BaseGeoPointTestCase to test-framework
> ---
>
> Key: LUCENE-7170
> URL: https://issues.apache.org/jira/browse/LUCENE-7170
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7170.patch
>
>
> This abstract test class has hooks for basic operations:
> {code}
>   protected abstract void addPointToDoc(String field, Document doc, double 
> lat, double lon);
>   protected abstract Query newRectQuery(String field, double minLat, double 
> maxLat, double minLon, double maxLon);
>   protected abstract Query newDistanceQuery(String field, double centerLat, 
> double centerLon, double radiusMeters);
>   protected abstract Query newPolygonQuery(String field, Polygon... polygon);
> {code}
> and hooks for quantization (quantizeLat/quantizeLon) so it can demand exact 
> answers.
> We currently have 3 subclasses, one is in the sandbox. I don't think the 
> sandbox/ should have to depend on spatial/ just for this base test class; 
> test-framework is a better place for it.






[jira] [Updated] (SOLR-8662) SchemaManager doesn't wait correctly for replicas to update

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8662:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> SchemaManager doesn't wait correctly for replicas to update
> ---
>
> Key: SOLR-8662
> URL: https://issues.apache.org/jira/browse/SOLR-8662
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 5.5.1, 6.1, master (7.0)
>
> Attachments: SOLR-8662.patch, SOLR-8662.patch, SOLR-8662.patch, 
> SOLR-8662.patch
>
>
> Currently in SchemaManager, waitForOtherReplicasToUpdate doesn't execute 
> since it gets passed the timeout value instead of the end time.
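The bug pattern is passing a relative timeout where an absolute end time is expected, so the wait loop's condition is false immediately. A minimal sketch (Python, illustrative only; names are hypothetical, not the SchemaManager code):

```python
import time

def wait_until(predicate, end_time):
    # Polls until predicate() is true or the wall-clock end time passes.
    while time.time() < end_time:
        if predicate():
            return True
        time.sleep(0.01)
    return predicate()

timeout_s = 5.0
# Buggy call: time.time() is already billions of seconds past the epoch,
# so `time.time() < 5.0` is false and the loop body never executes.
buggy_loop_runs = time.time() < timeout_s
# Fixed call: convert the timeout into an absolute end time first.
end_time = time.time() + timeout_s
fixed_loop_runs = time.time() < end_time
print(buggy_loop_runs, fixed_loop_runs)  # False True
```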






[jira] [Updated] (SOLR-9004) films example's name field is created multiValued despite FAQ

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9004:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> films example's name field is created multiValued despite FAQ
> -
>
> Key: SOLR-9004
> URL: https://issues.apache.org/jira/browse/SOLR-9004
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Alexandre Rafalovitch
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
>
> In the README.txt for the *films* example, it says:
> bq. Without overriding those field types, the _name_ field would have been 
> guessed as a multi-valued string field type; it makes more sense with this 
> particular data set domain to have the movie name be a single-valued, 
> general full-text searchable field.
> However, the actual Schema API call does not specify multiValued=false, just 
> that it is of *text_general* type. That type is defined as multiValued, so 
> the end result is multiValued as well, opposite to the explanation given.
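The likely fix is to make the override explicit in the Schema API payload. A hedged sketch of such a call body (the field name comes from the issue; the exact payload used by the films example script is an assumption):

```json
{
  "add-field": {
    "name": "name",
    "type": "text_general",
    "multiValued": false
  }
}
```

With the explicit {{multiValued: false}}, the field-level setting overrides the default inherited from the {{text_general}} field type.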






[jira] [Updated] (SOLR-8349) Allow sharing of large in memory data structures across cores

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8349:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Allow sharing of large in memory data structures across cores
> -
>
> Key: SOLR-8349
> URL: https://issues.apache.org/jira/browse/SOLR-8349
> Project: Solr
>  Issue Type: Sub-task
>  Components: Server
>Affects Versions: 5.3
>Reporter: Gus Heck
>Assignee: Noble Paul
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8349.patch, SOLR-8349.patch, SOLR-8349.patch, 
> SOLR-8349.patch, SOLR-8349.patch, SOLR-8349.patch, SOLR-8349.patch
>
>
> In some cases search components or analysis classes may utilize a large 
> dictionary or other in-memory structure. When multiple cores are loaded with 
> identical configurations utilizing this large in-memory structure, each core 
> holds its own copy in memory. This has been noted in the past and a specific 
> case reported in SOLR-3443. This patch provides a generalized capability, and 
> if accepted, this capability will then be used to fix SOLR-3443.






[jira] [Updated] (SOLR-9007) SolrCLI still mentions managed_schema_configs as valid config option for SolrCloud example

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9007:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> SolrCLI still mentions managed_schema_configs as valid config option for 
> SolrCloud example
> --
>
> Key: SOLR-9007
> URL: https://issues.apache.org/jira/browse/SOLR-9007
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 5.5.1, 6.1, master (7.0)
>
> Attachments: SOLR-9007.patch
>
>
> SolrCLI still mentions managed_schema_configs as valid config option for the 
> SolrCloud example. It should be removed as an option to avoid giving a bad 
> out-of-box experience.






[jira] [Commented] (SOLR-8990) UI: query links from the "Top Terms" table on the Schema Browser page should use the "term" parser

2016-05-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277259#comment-15277259
 ] 

Hoss Man commented on SOLR-8990:



Manually correcting fixVersion per Step #S5 of LUCENE-7271


> UI: query links from the "Top Terms" table on the Schema Browser page should 
> use the "term" parser
> --
>
> Key: SOLR-8990
> URL: https://issues.apache.org/jira/browse/SOLR-8990
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8990.patch
>
>
> If you are using a StrField, or a TextField with a Keyword tokenizer then 
> it's very possible your indexed terms will include white space.
> But the links created by the Schema Browser UI screen to search for a term 
> in the "Top Terms" list assume that just prepending the term with the 
> fieldname (ie: {{$fieldname + ":" $term}}) will be valid -- and instead they 
> don't match the correct term.
> 
> Example: 
> Load the {{example/films}} data into a "films" collection, and then load the 
> Schema Browser page for the "genre" field...
> http://127.0.1.1:8983/solr/#/films/schema?field=genre
> The "Top Terms" list includes terms such as {{Romance Film}} but clicking on 
> this term takes you to this URL...
> http://127.0.1.1:8983/solr/#/films/query?q=genre:Romance%20Film
> ...which is just doing a search for "genre:Romance" OR "Film" (in the default 
> field)
> Instead it should link to...
> http://127.0.1.1:8983/solr/#/gettingstarted/query?q=%7B!term+f=genre%7DRomance+Film
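The corrected link simply URL-encodes a {!term} local-params query, so the whole value, whitespace included, is matched as one indexed term. The encoding can be reproduced with a stdlib call (Python sketch, illustrative only):

```python
from urllib.parse import quote_plus

# The raw query uses the "term" query parser; the braces, '!', and '='
# must all be percent-encoded when placed in a URL query string.
raw_q = "{!term f=genre}Romance Film"
encoded = quote_plus(raw_q)
print(encoded)  # %7B%21term+f%3Dgenre%7DRomance+Film
```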






[jira] [Updated] (SOLR-8990) UI: query links from the "Top Terms" table on the Schema Browser page should use the "term" parser

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8990:
---
Fix Version/s: (was: 6.0)
   master (7.0)

> UI: query links from the "Top Terms" table on the Schema Browser page should 
> use the "term" parser
> --
>
> Key: SOLR-8990
> URL: https://issues.apache.org/jira/browse/SOLR-8990
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8990.patch
>
>
> If you are using a StrField, or a TextField with a Keyword tokenizer then 
> it's very possible your indexed terms will include white space.
> But the links created by the Schema Browser UI screen to search for a term 
> in the "Top Terms" list assume that just prepending the term with the 
> fieldname (ie: {{$fieldname + ":" $term}}) will be valid -- and instead they 
> don't match the correct term.
> 
> Example: 
> Load the {{example/films}} data into a "films" collection, and then load the 
> Schema Browser page for the "genre" field...
> http://127.0.1.1:8983/solr/#/films/schema?field=genre
> The "Top Terms" list includes terms such as {{Romance Film}} but clicking on 
> this term takes you to this URL...
> http://127.0.1.1:8983/solr/#/films/query?q=genre:Romance%20Film
> ...which is just doing a search for "genre:Romance" OR "Film" (in the default 
> field)
> Instead it should link to...
> http://127.0.1.1:8983/solr/#/gettingstarted/query?q=%7B!term+f=genre%7DRomance+Film






[jira] [Updated] (SOLR-8971) ShardHandlerFactory error handling throws away exception details

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8971:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> ShardHandlerFactory error handling throws away exception details
> 
>
> Key: SOLR-8971
> URL: https://issues.apache.org/jira/browse/SOLR-8971
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 6.1, master (7.0)
>
>
> ShardHandlerFactory.newInstance catches any Exception from initializing the 
> configured ShardHandlerFactory class as a plugin, and then throws a new 
> SolrException without wrapping the original exception, losing all useful 
> context of why the plugin couldn't be loaded.






[jira] [Updated] (SOLR-7729) ConcurrentUpdateSolrClient ignoring the collection parameter in some methods

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-7729:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> ConcurrentUpdateSolrClient ignoring the collection parameter in some methods
> 
>
> Key: SOLR-7729
> URL: https://issues.apache.org/jira/browse/SOLR-7729
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.1
>Reporter: Jorge Luis Betancourt Gonzalez
>Assignee: Mark Miller
>  Labels: client, solrj
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-7729-ConcurrentUpdateSolrClient-collection.patch, 
> SOLR-7729.patch
>
>
> Some of the methods in {{ConcurrentUpdateSolrClient}} accept an additional 
> {{collection}} parameter; some of these methods are {{add(String collection, 
> SolrInputDocument doc)}} and {{request(SolrRequest, String collection)}}. 
> This collection parameter is being ignored in these cases but works for others 
> like {{commit(String collection)}}.
> [~elyograg] noted that:
> {quote} 
> Looking into how an update request actually gets added to the background
> queue in ConcurrentUpdateSolrClient, it appears that the "collection"
> information is ignored before the request is added to the queue.
> {quote}
> From the source, when a commit is issued or the 
> {{UpdateParams.WAIT_SEARCHER}} is set in the request params the collection 
> parameter is used; otherwise the request {{UpdateRequest req}} is queued 
> without regard to the collection.






[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2016-05-09 Thread Nick Coult (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277256#comment-15277256
 ] 

Nick Coult commented on SOLR-7495:
--

Is this still a bug in Solr 6?

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch, SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud, 
> I'm getting an IllegalStateException when I try to facet an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> 

[jira] [Resolved] (SOLR-9015) Add SelectStream as a default function in the StreamHandler

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-9015.

Resolution: Fixed

> Add SelectStream as a default function in the StreamHandler
> ---
>
> Key: SOLR-9015
> URL: https://issues.apache.org/jira/browse/SOLR-9015
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0, 6.1
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Trivial
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9015.patch
>
>
> This adds the select(...) streaming expression as a default function in the 
> StreamHandler. This was always intended to be the case but for some reason I 
> neglected to ever add it.






[jira] [Updated] (SOLR-8929) Add an idea module for solr/server to enable launching start.jar

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8929:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Add an idea module for solr/server to enable launching start.jar
> 
>
> Key: SOLR-8929
> URL: https://issues.apache.org/jira/browse/SOLR-8929
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Scott Blum
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8929-bin-solr-run-configuration.patch, 
> SOLR-8929.patch, SOLR-8929.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Currently in IntelliJ, it's difficult to create a launch config to run Solr 
> in the same way that the bin/solr script would, because there aren't any 
> modules that reference the jetty start.jar that it uses.
> I want to create a simple solr/server IJ module that can be referenced from a 
> launch config.  I've created it manually in the past, but then I always lose 
> it when I have to regenerate idea on branch switch.






[jira] [Updated] (SOLR-9015) Add SelectStream as a default function in the StreamHandler

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9015:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Add SelectStream as a default function in the StreamHandler
> ---
>
> Key: SOLR-9015
> URL: https://issues.apache.org/jira/browse/SOLR-9015
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0, 6.1
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Trivial
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9015.patch
>
>
> This adds the select(...) streaming expression as a default function in the 
> StreamHandler. This was always intended to be the case but for some reason I 
> neglected to ever add it.






[jira] [Reopened] (SOLR-9015) Add SelectStream as a default function in the StreamHandler

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-9015:


> Add SelectStream as a default function in the StreamHandler
> ---
>
> Key: SOLR-9015
> URL: https://issues.apache.org/jira/browse/SOLR-9015
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0, 6.1
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Trivial
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9015.patch
>
>
> This adds the select(...) streaming expression as a default function in the 
> StreamHandler. This was always intended to be the case but for some reason I 
> neglected to ever add it.






[jira] [Updated] (LUCENE-7234) Add InetAddressPoint.nextUp/nextDown

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7234:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Add InetAddressPoint.nextUp/nextDown
> 
>
> Key: LUCENE-7234
> URL: https://issues.apache.org/jira/browse/LUCENE-7234
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7234.patch, LUCENE-7234.patch
>
>
> This can be useful for dealing with exclusive bounds.
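The exclusive-bounds idea can be sketched in a few lines. This is an illustrative stand-in for what nextUp/nextDown do conceptually, not Lucene's actual InetAddressPoint code; the class and method signatures here are hypothetical:

```java
import java.util.Arrays;

// Illustrative sketch (not Lucene's InetAddressPoint code): treat the
// address bytes as one big-endian unsigned integer and add/subtract 1,
// so an exclusive bound can be converted into an inclusive one.
class IpNext {
    // Next-higher address, or null on overflow (address was all 0xFF).
    static byte[] nextUp(byte[] addr) {
        byte[] out = Arrays.copyOf(addr, addr.length);
        for (int i = out.length - 1; i >= 0; i--) {
            if (++out[i] != 0) return out; // no carry left: done
        }
        return null; // wrapped past the maximum address
    }

    // Next-lower address, or null on underflow (address was all 0x00).
    static byte[] nextDown(byte[] addr) {
        byte[] out = Arrays.copyOf(addr, addr.length);
        for (int i = out.length - 1; i >= 0; i--) {
            if (out[i]-- != 0) return out; // no borrow needed: done
        }
        return null; // wrapped past the minimum address
    }
}
```

With such a helper, an exclusive bound `ip < X` can be rewritten as the inclusive bound `ip <= nextDown(X)`.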






[jira] [Updated] (LUCENE-7232) InetAddressPoint.newPrefixQuery is not correct when prefixLength is not a multiple of 8

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7232:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> InetAddressPoint.newPrefixQuery is not correct when prefixLength is not a 
> multiple of 8
> ---
>
> Key: LUCENE-7232
> URL: https://issues.apache.org/jira/browse/LUCENE-7232
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, 6.0.1, master (7.0)
>
> Attachments: LUCENE-7232.patch
>
>
> The reason is that it applies masks on individual bytes in the wrong order: 
> it goes from the lower bits to the upper bits instead of the opposite.
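To illustrate the correct order, here is a hypothetical helper (not the actual patch) that derives the inclusive [lower, upper] range for a CIDR prefix: the upper prefixLength bits are kept, and the remaining low bits are cleared for the lower bound and set for the upper bound, masking from the most-significant byte downward:

```java
import java.util.Arrays;

// Hypothetical sketch of correct prefix masking (not Lucene's code):
// the mask must be applied from the upper (most-significant) bits
// downward, keeping the first prefixLength bits of the address.
class PrefixRange {
    // Returns {lower, upper}: the inclusive address range of the prefix.
    static byte[][] range(byte[] addr, int prefixLength) {
        byte[] lower = Arrays.copyOf(addr, addr.length);
        byte[] upper = Arrays.copyOf(addr, addr.length);
        for (int i = 0; i < addr.length; i++) {
            // How many of this byte's bits belong to the prefix (0..8).
            int bitsKept = Math.max(0, Math.min(8, prefixLength - 8 * i));
            int mask = (0xFF << (8 - bitsKept)) & 0xFF; // high bits kept
            lower[i] &= mask;                            // clear low bits
            upper[i] = (byte) (upper[i] & mask | ~mask); // set low bits
        }
        return new byte[][] { lower, upper };
    }
}
```

For example, a /20 prefix on 192.168.255.7 yields the range 192.168.240.0 through 192.168.255.255.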






[jira] [Updated] (SOLR-8913) When using a shared filesystem we should store data dir and tlog dir locations in the clusterstate.

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8913:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> When using a shared filesystem we should store data dir and tlog dir 
> locations in the clusterstate.
> ---
>
> Key: SOLR-8913
> URL: https://issues.apache.org/jira/browse/SOLR-8913
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.1, master (7.0)
>
>
> Spinning this out of SOLR-6237. I'll put up an initial patch.






[jira] [Updated] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8914:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> ZkStateReader's refreshLiveNodes(Watcher) is not thread safe
> 
>
> Key: SOLR-8914
> URL: https://issues.apache.org/jira/browse/SOLR-8914
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1, 5.5, 6.0
>Reporter: Hoss Man
>Assignee: Scott Blum
> Fix For: 5.5.1, 5.6, 6.0.1, 6.1, master (7.0)
>
> Attachments: SOLR-8914.patch, SOLR-8914.patch, SOLR-8914.patch, 
> SOLR-8914.patch, jenkins.thetaphi.de_Lucene-Solr-6.x-Solaris_32.log.txt, 
> live_node_mentions_port56361_with_threadIds.log.txt, 
> live_nodes_mentions.log.txt
>
>
> Jenkins encountered a failure in TestTolerantUpdateProcessorCloud over the 
> weekend
> {noformat}
> http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/32/consoleText
> Checking out Revision c46d7686643e7503304cb35dfe546bce9c6684e7 
> (refs/remotes/origin/branch_6x)
> Using Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
> {noformat}
> The failure happened during the static setup of the test, when a 
> MiniSolrCloudCluster & several clients are initialized -- before any code 
> related to TolerantUpdateProcessor is ever used.
> I can't reproduce this, or really make sense of what i'm (not) seeing here in 
> the logs, so i'm filing this jira with my analysis in the hopes that someone 
> else can help make sense of it.






[jira] [Updated] (SOLR-8973) TX-frenzy on Zookeeper when collection is put to use

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8973:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> TX-frenzy on Zookeeper when collection is put to use
> 
>
> Key: SOLR-8973
> URL: https://issues.apache.org/jira/browse/SOLR-8973
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 6.0
>Reporter: Janmejay Singh
>Assignee: Scott Blum
>  Labels: collections, patch-available, solrcloud, zookeeper
> Fix For: 5.5.1, 5.6, 6.1, master (7.0)
>
> Attachments: SOLR-8973-ZkStateReader.patch, SOLR-8973.patch, 
> SOLR-8973.patch, SOLR-8973.patch
>
>
> This is to do with a distributed data race. Core creation happens at a time 
> when the collection is not yet visible to the node. In this case a fallback 
> code path is used which dereferences the collection state lazily (on demand) 
> as opposed to setting a watch and keeping it cached locally.
> Because of this, as requests towards the core mount, the node generates a 
> proportionate number of ZK fetches of the collection state. On a large 
> SolrCloud cluster this generates several Gbps of TX traffic on the ZK nodes. 
> This floors indexing throughput in addition to running the ZK nodes out of 
> network bandwidth.
> On smaller SolrCloud clusters this is hard to run into, because the 
> probability of the race materializing is lower.
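A toy model of the two code paths (hypothetical names; the real logic lives in ZkStateReader) shows why the fallback path multiplies ZK reads:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the two paths described above (hypothetical classes;
// not Solr's code). The lazy fallback path fetches collection state
// from ZK on every request; the watched path fetches once and relies
// on the watch for invalidation.
class StateFetchModel {
    final AtomicInteger zkReads = new AtomicInteger();
    private String cached; // null until the watched path caches it

    String lazyGet() {     // fallback path: one ZK read per request
        zkReads.incrementAndGet();
        return "state";
    }

    String watchedGet() {  // normal path: read once, then serve cached
        if (cached == null) {
            zkReads.incrementAndGet();
            cached = "state";
        }
        return cached;
    }
}
```

Under request load, the lazy path's ZK read count grows linearly with traffic while the watched path's stays constant, which is the asymmetry the description attributes the Gbps of TX traffic to.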






[jira] [Updated] (LUCENE-7235) Avoid taking the lock in LRUQueryCache when not necessary

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7235:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Avoid taking the lock in LRUQueryCache when not necessary
> -
>
> Key: LUCENE-7235
> URL: https://issues.apache.org/jira/browse/LUCENE-7235
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7235.patch
>
>
> LRUQueryCache's CachingWrapperWeight works this way:
>  - first it looks up the cache to see if there is an entry for the query in 
> the current leaf
>  - if yes, it returns it
>  - otherwise it checks whether the query should be cached on this leaf
>  - if yes, it builds a cache entry and returns it
>  - otherwise it returns a scorer built from the wrapped weight
> The potential issue is that the first step always takes the lock, and I have 
> seen a couple of cases where indices were small and/or queries were very cheap 
> and this showed up as a bottleneck. On the other hand, we have checks in step 
> 3 that tell the cache not to cache on a particular segment regardless of the 
> query. So I would like to move that part before step 1 so that we do not even 
> take the lock in that case.
> For instance right now we require that segments have at least 10k documents 
> and 3% of all docs in the index to be cached. I just looked at a random index 
> that contains 1.7m documents, and only 4 segments out of 29 met this 
> criterion (yet they contain 1.1m documents: 65% of the total index size). So 
> in the case of that index, we would take the lock 7x less often.
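A minimal sketch of the proposed reordering, assuming the thresholds quoted above (illustrative only, not the actual LRUQueryCache patch; all names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not Lucene's code): test the cheap, lock-free
// per-segment criterion first, and only take the lock for segments
// that could be cached at all. Thresholds mirror the ones quoted in
// the description: >= 10k docs and >= 3% of all docs in the index.
class CacheGate {
    private final Object lock = new Object();
    private final Map<String, Object> cache = new HashMap<>();

    static boolean segmentEligible(int segmentDocs, int totalDocs) {
        return segmentDocs >= 10_000 && segmentDocs >= 0.03 * totalDocs;
    }

    Object scorer(String query, int segmentDocs, int totalDocs) {
        if (!segmentEligible(segmentDocs, totalDocs)) {
            return uncachedScorer(query); // no lock taken at all
        }
        synchronized (lock) {             // only eligible segments lock
            return cache.computeIfAbsent(query, q -> uncachedScorer(q));
        }
    }

    Object uncachedScorer(String query) { return "scorer(" + query + ")"; }
}
```

On the 1.7m-doc index from the description, a 50k-doc segment fails the 3% test (51k docs), so queries against it would never touch the lock.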






[jira] [Updated] (LUCENE-7238) MemoryIndex.createSearcher should disable caching explicitly

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7238:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> MemoryIndex.createSearcher should disable caching explicitly
> 
>
> Key: LUCENE-7238
> URL: https://issues.apache.org/jira/browse/LUCENE-7238
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
>
> Follow-up of LUCENE-7235: In practice, nothing will be cached with a 
> reasonable cache implementation given the size of the index (a single 
> document). But it would still be better to explicitly disable caching so that 
> we don't eg. take unnecessary locks.






[jira] [Updated] (SOLR-8728) Splitting a shard of a collection created with a rule fails with NPE

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8728:
---
Fix Version/s: master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Splitting a shard of a collection created with a rule fails with NPE
> 
>
> Key: SOLR-8728
> URL: https://issues.apache.org/jira/browse/SOLR-8728
> Project: Solr
>  Issue Type: Bug
>Reporter: Shai Erera
>Assignee: Noble Paul
> Fix For: 5.5.1, 6.0, 6.1, master (7.0)
>
> Attachments: SOLR-8728.patch, SOLR-8728.patch
>
>
> Spinoff from this discussion: http://markmail.org/message/f7liw4hqaagxo7y2
> I wrote a short test which reproduces, will upload shortly.






[jira] [Updated] (SOLR-9096) Add PartitionStream to Streaming Expressions

2016-05-09 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-9096:
--
Description: 
The basic idea of a PartitionStream is to take one or more input streams of 
tuples, partition them out to a set of workers such that each worker can work 
with a subset of the tuples, and then bring them all back into a single stream. 
This differs from a ParallelStream because in a ParallelStream the data is 
partitioned at the source, whereas with a PartitionStream one can take an 
existing stream and spread it out across workers.

The use-case here is for when one has a source stream (or more) which cannot be 
parallelized at the source but which can be parallelized after some level of 
processing. I see this being used most for parallelized sort, rollups, or graph 
searches.

{code}
                    /--- sort ---\
                   /  sort ------ \               /--- Collection A
Client <--- rollup <               <- innerJoin <
                   \  sort ------ /               \--- Collection B
                    \--- sort ---/
{code}

{code}
                      /--- sort -- rollup ---\
                     /   sort -- rollup ----- \               /--- Collection A
Client <-- innerJoin <                         <- innerJoin <
           \         \   sort -- rollup ----- /               \--- Collection B
            \         \--- sort -- rollup ---/
             \
              \ <--- jdbc source
{code}

{code}
                      /--- sort -- innerJoin ---\
                     /   sort -- innerJoin ----- \  <--- jdbc source
Client <-- innerJoin <                            <
           \         \   sort -- innerJoin ----- /  <--- rollup <--- Collection A
            \         \--- sort -- innerJoin ---/
             \
              \ <--- jdbc source
{code}



I imagine a partition expression would look something like this:

{code}
partition(
  inputA=,
  inputB=,
  work=,
  over="fieldA,fieldB",
  workers=6,
  zkHost=
)
{code}

for example

{code}
innerJoin(
  partition(
inputA=jdbc(database1),
inputB=rollup(
  search(collectionA, ...),
  ...
),
work=sort(
  innerJoin(
inputA,
inputB,
on="fieldA,fieldB"
  ),
  by="jdbcFieldC asc, collectionAFieldB desc"
),
workers=6,
zkHost=localhost:12345
  ),
  jdbc(database2),
  on="fieldZ"
)
{code}


  was:
The basic idea of a PartitionStream is to take one or more input streams of 
tuples, partition them out to a set of workers such that each worker can work 
with a subset of the tuples, and then bring them all back into a single stream. 
This differs from a ParallelStream because in a ParallelStream the data is 
partitioned at the source, whereas with a PartitionStream one can take an 
existing stream and spread it out across workers.

{code}
                    /--- sort ---\
                   /  sort ------ \               /--- Collection A
Client <--- rollup <               <- innerJoin <
                   \  sort ------ /               \--- Collection B
                    \--- sort ---/
{code}

{code}
                      /--- sort -- rollup ---\
                     /   sort -- rollup ----- \               /--- Collection A
Client <-- innerJoin <                         <- innerJoin <
           \         \   sort -- rollup ----- /               \--- Collection B
            \         \--- sort -- rollup ---/
             \
              \ <--- jdbc source
{code}

{code}
                      /--- sort -- innerJoin ---\
                     /   sort -- innerJoin ----- \  <--- jdbc source
Client <-- innerJoin <                            <
           \         \   sort -- innerJoin ----- /  <--- rollup <--- Collection A
            \         \--- sort -- innerJoin --/
             \
              \ <--- jdbc source
{code}

[jira] [Updated] (LUCENE-7239) Speed up LatLonPoint's polygon queries when there are many vertices

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7239:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271

FWIW: This issue has no commits linked in comments, so I'm only assuming 
"fix=6.0" should be replaced with "fix=master" based on the timeframe the issue 
was created/resolved in.

> Speed up LatLonPoint's polygon queries when there are many vertices
> ---
>
> Key: LUCENE-7239
> URL: https://issues.apache.org/jira/browse/LUCENE-7239
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7239.patch, LUCENE-7239.patch
>
>
> This is inspired by the "reliability and numerical stability" recommendations 
> at the end of http://www-ma2.upc.es/geoc/Schirra-pointPolygon.pdf.
> Basically our polys need to answer two questions that are slow today:
> contains(point)
> crosses(rectangle)
> Both of these ops only care about a subset of edges: the ones overlapping a y 
> interval range. We can organize these edges in an interval tree to speed 
> things up a lot in practice. The worst case is still O(n), but solutions that 
> guarantee better are more complex to do.
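The interval-tree organization can be sketched as follows (illustrative, with hypothetical names; not the LatLonTree code): each polygon edge is indexed by its [minY, maxY] span, nodes are augmented with their subtree's maximum, and a query visits only edges whose y-interval overlaps the query range:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative augmented interval tree over edge y-spans (hypothetical
// names; not Lucene's code). Nodes are keyed on the interval's low
// endpoint and carry the max high endpoint of their subtree, which
// lets whole subtrees below the query range be pruned.
class EdgeIntervalTree {
    static class Node {
        final double low, high; final int edge; double max;
        Node left, right;
        Node(double low, double high, int edge) {
            this.low = low; this.high = high; this.edge = edge; this.max = high;
        }
    }

    private Node root;

    void add(double minY, double maxY, int edgeId) {
        root = insert(root, new Node(minY, maxY, edgeId));
    }

    private Node insert(Node n, Node x) {
        if (n == null) return x;
        if (x.low < n.low) n.left = insert(n.left, x);
        else n.right = insert(n.right, x);
        n.max = Math.max(n.max, x.max); // keep subtree max up to date
        return n;
    }

    // Ids of edges whose y-interval overlaps [lo, hi].
    List<Integer> overlapping(double lo, double hi) {
        List<Integer> out = new ArrayList<>();
        collect(root, lo, hi, out);
        return out;
    }

    private void collect(Node n, double lo, double hi, List<Integer> out) {
        if (n == null || n.max < lo) return;       // whole subtree below range
        collect(n.left, lo, hi, out);
        if (n.low <= hi && n.high >= lo) out.add(n.edge);
        if (n.low <= hi) collect(n.right, lo, hi, out); // right lows only grow
    }
}
```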






[jira] [Updated] (LUCENE-7229) Improve Polygon.relate

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7229:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Improve Polygon.relate
> --
>
> Key: LUCENE-7229
> URL: https://issues.apache.org/jira/browse/LUCENE-7229
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7229.patch, LUCENE-7229.patch
>
>
> This method is currently quite slow and in many cases does more work than is 
> required. The speed directly impacts queries (tree traversal) and bounds the 
> grid size to something tiny, making it less effective.
> I think we should replace it with line intersections based on the orientation 
> methods described here: http://www.cs.berkeley.edu/~jrs/meshpapers/robnotes.pdf 
> and https://www.cs.cmu.edu/~quake/robust.html
> For one, a naive implementation is considerably faster than the method today: 
> both because it reduces the cost of BKD tree traversals and also because it 
> makes grid construction cheaper. This means we can increase its level of 
> detail with similar or lower startup cost. Now it's more like a Mario Brothers 
> 2 picture of your polygon instead of Space Invaders.
> Synthetic polygons from luceneUtil
> ||vertices||old QPS||new QPS||old startup cost||new startup cost||
> |50|20.4|21.7|1ms|1ms|
> |500|11.2|14.4|5ms|4ms|
> |1000|7.4|10.0|9ms|8ms|
> Real polygons (33 london districts: 
> http://data.london.gov.uk/2011-boundary-files)
> ||vertices||old QPS||new QPS||old startup cost||new startup cost||
> |avg 5.6k|4.9|8.6|94ms|85ms|
> But I also like using this method because it's possible to extend it to remove 
> floating point error completely in the future with the techniques described in 
> those links. This may be necessary if we want to do smarter things (e.g. not 
> linear time).
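The orientation method referenced above boils down to the sign of a 2x2 determinant. A naive (non-robust) sketch, not the Lucene patch, looks like this:

```java
// Illustrative orientation predicate (hypothetical names; not the
// actual patch). The sign of the determinant tells whether point c
// lies left of, right of, or on the directed line a->b; two segments
// properly cross iff the endpoints of each straddle the other.
class Orientation {
    // > 0: counter-clockwise (c left of a->b), < 0: clockwise, 0: collinear.
    static double orient(double ax, double ay, double bx, double by,
                         double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    static boolean segmentsCross(double ax, double ay, double bx, double by,
                                 double cx, double cy, double dx, double dy) {
        double d1 = orient(cx, cy, dx, dy, ax, ay);
        double d2 = orient(cx, cy, dx, dy, bx, by);
        double d3 = orient(ax, ay, bx, by, cx, cy);
        double d4 = orient(ax, ay, bx, by, dx, dy);
        return d1 * d2 < 0 && d3 * d4 < 0; // proper crossings only
    }
}
```

The robust variants in the linked papers keep the same structure but replace the plain floating-point determinant with adaptive exact arithmetic, which is the "remove floating point error completely" direction mentioned above.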






[jira] [Created] (SOLR-9096) Add PartitionStream to Streaming Expressions

2016-05-09 Thread Dennis Gove (JIRA)
Dennis Gove created SOLR-9096:
-

 Summary: Add PartitionStream to Streaming Expressions
 Key: SOLR-9096
 URL: https://issues.apache.org/jira/browse/SOLR-9096
 Project: Solr
  Issue Type: New Feature
Reporter: Dennis Gove


The basic idea of a PartitionStream is to take one or more input streams of 
tuples, partition them out to a set of workers such that each worker can work 
with a subset of the tuples, and then bring them all back into a single stream. 
This differs from a ParallelStream because in a ParallelStream the data is 
partitioned at the source, whereas with a PartitionStream one can take an 
existing stream and spread it out across workers.

{code}
                    /--- sort ---\
                   /  sort ------ \               /--- Collection A
Client <--- rollup <               <- innerJoin <
                   \  sort ------ /               \--- Collection B
                    \--- sort ---/
{code}

{code}
                      /--- sort -- rollup ---\
                     /   sort -- rollup ----- \               /--- Collection A
Client <-- innerJoin <                         <- innerJoin <
           \         \   sort -- rollup ----- /               \--- Collection B
            \         \--- sort -- rollup ---/
             \
              \ <--- jdbc source
{code}

{code}
                      /--- sort -- innerJoin ---\
                     /   sort -- innerJoin ----- \  <--- jdbc source
Client <-- innerJoin <                            <
           \         \   sort -- innerJoin ----- /  <--- rollup <--- Collection A
            \         \--- sort -- innerJoin ---/
             \
              \ <--- jdbc source
{code}



I imagine a partition expression would look something like this:

{code}
partition(
  inputA=,
  inputB=,
  work=,
  over="fieldA,fieldB",
  workers=6,
  zkHost=
)
{code}

for example

{code}
innerJoin(
  partition(
inputA=jdbc(database1),
inputB=rollup(
  search(collectionA, ...),
  ...
),
work=sort(
  innerJoin(
inputA,
inputB,
on="fieldA,fieldB"
  ),
  by="jdbcFieldC asc, collectionAFieldB desc"
),
workers=6,
zkHost=localhost:12345
  ),
  jdbc(database2),
  on="fieldZ"
)
{code}







[jira] [Updated] (LUCENE-7243) Remove LeafReaderContext from QueryCachingPolicy.shouldCache

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7243:
-
Fix Version/s: master (7.0)

> Remove LeafReaderContext from QueryCachingPolicy.shouldCache
> 
>
> Key: LUCENE-7243
> URL: https://issues.apache.org/jira/browse/LUCENE-7243
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7243.patch
>
>
> Now that the heuristic to not cache on small segments has been moved to the 
> cache, we don't need the LeafReaderContext in QueryCachingPolicy.shouldCache.






[jira] [Updated] (LUCENE-7243) Remove LeafReaderContext from QueryCachingPolicy.shouldCache

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7243:
-
Fix Version/s: (was: 6.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Remove LeafReaderContext from QueryCachingPolicy.shouldCache
> 
>
> Key: LUCENE-7243
> URL: https://issues.apache.org/jira/browse/LUCENE-7243
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7243.patch
>
>
> Now that the heuristic to not cache on small segments has been moved to the 
> cache, we don't need the LeafReaderContext in QueryCachingPolicy.shouldCache.






[jira] [Updated] (LUCENE-7242) LatLonTree should build a balanced tree

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7242:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> LatLonTree should build a balanced tree
> ---
>
> Key: LUCENE-7242
> URL: https://issues.apache.org/jira/browse/LUCENE-7242
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7242.patch
>
>
> [~rjernst]'s idea: currently we create an interval tree of edges, but in 
> randomized order.
> Instead we can speed things up more by creating a balanced tree up front.
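The balanced construction can be sketched as sorting the keys once and recursing on midpoints (illustrative, with hypothetical names; not the actual LatLonTree patch), which guarantees a tree of height O(log n) rather than relying on randomized insertion order:

```java
// Illustrative balanced build (hypothetical names; not Lucene's code):
// pick the midpoint of a sorted array as the root and recurse on the
// two halves, yielding a tree of minimal height.
class BalancedBuild {
    static class Node {
        final double key;
        Node left, right;
        Node(double key) { this.key = key; }
    }

    static Node build(double[] sortedKeys) {
        return build(sortedKeys, 0, sortedKeys.length - 1);
    }

    private static Node build(double[] keys, int lo, int hi) {
        if (lo > hi) return null;
        int mid = (lo + hi) >>> 1;          // midpoint becomes the subtree root
        Node n = new Node(keys[mid]);
        n.left = build(keys, lo, mid - 1);
        n.right = build(keys, mid + 1, hi);
        return n;
    }

    static int height(Node n) {
        return n == null ? 0 : 1 + Math.max(height(n.left), height(n.right));
    }
}
```

For 1023 sorted keys this yields a perfect tree of height 10, whereas a degenerate insertion order could produce height 1023.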






[jira] [Updated] (LUCENE-7237) LRUQueryCache should rather not cache than wait on a lock

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7237:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> LRUQueryCache should rather not cache than wait on a lock
> -
>
> Key: LUCENE-7237
> URL: https://issues.apache.org/jira/browse/LUCENE-7237
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7237.patch
>
>
> This is an idea Robert just mentioned to me: currently the cache is using a 
> lock to keep various data-structures in sync. It is a pity that you might 
> have contention because of caching. So something we could do would be to not 
> cache when the lock is already taken.
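A minimal sketch of the "rather not cache than wait" idea using ReentrantLock.tryLock (hypothetical names; not the actual patch): if another thread already holds the cache lock, the query is served uncached instead of blocking on cache bookkeeping.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch (not Lucene's LRUQueryCache code): attempt the
// lock without blocking; on contention, skip the cache entirely for
// this query rather than making it wait on bookkeeping.
class OpportunisticCache {
    private final ReentrantLock lock = new ReentrantLock();
    private final Map<String, String> cache = new HashMap<>();

    String get(String query) {
        if (lock.tryLock()) {             // non-blocking acquisition attempt
            try {
                return cache.computeIfAbsent(query, q -> "scorer(" + q + ")");
            } finally {
                lock.unlock();
            }
        }
        // Lock contended: serve an uncached scorer for this query.
        return "scorer(" + query + ")";
    }
}
```

The caller always gets a usable scorer either way; only the cache-population side effect is skipped under contention.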






[jira] [Updated] (SOLR-9025) add SolrCoreTest.testImplicitPlugins test

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9025:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> add SolrCoreTest.testImplicitPlugins test
> -
>
> Key: SOLR-9025
> URL: https://issues.apache.org/jira/browse/SOLR-9025
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.6, 6.1, master (7.0)
>
> Attachments: SOLR-9025.patch
>
>
> Various places in the code assume that certain implicit handlers are 
> configured on certain paths (e.g. {{/replication}} is referenced by 
> {{RecoveryStrategy}} and {{IndexFetcher}}). This test tests that the 
> {{ImplicitPlugins.json}} content configures the expected paths and class 
> names.






[jira] [Updated] (LUCENE-7249) LatLonPoint polygon should use tree relate()

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7249:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> LatLonPoint polygon should use tree relate()
> 
>
> Key: LUCENE-7249
> URL: https://issues.apache.org/jira/browse/LUCENE-7249
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7249.patch
>
>
> Built and tested this method on LUCENE-7239 but forgot to actually cut the 
> code over to use it.
> Using our tree relation methods speeds up BKD traversal. It is not important 
> for tiny polygons but matters as complexity increases:
> Synthetic polygons from luceneUtil
> ||vertices||old QPS||new QPS||
> |5|40.9|40.5|
> |50|33.0|33.1|
> |500|31.5|31.9|
> |5000|24.6|29.4|
> |5|7.0|20.4|
> Real polygons (33 london districts: 
> http://data.london.gov.uk/2011-boundary-files)
> ||vertices||old QPS||new QPS||
> |avg 5.6k|84.3|113.8|






[jira] [Comment Edited] (SOLR-9034) Atomic updates not work with CopyField

2016-05-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277188#comment-15277188
 ] 

Przemysław Szeremiota edited comment on SOLR-9034 at 5/9/16 10:27 PM:
--

Yonik,

I was fighting SOLR 5.5 useDocValuesAsStored/copyField issue on our company's 
SOLR installation, and sufficient fix seems to be simple:
{code:title=solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java|borderStyle=solid}
  -   searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
  +   searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));
{code}
getNonStoredDVs(false) returns all non-stored docValues fields; 
getNonStoredDVs(true) returns only the non-stored docValues fields that are 
used as stored, either explicitly or implicitly (in schema 1.6). Doesn't 
masking the "implicitly use all docValues as stored, configured or not" 
behavior with copyField target detection defeat the whole purpose of choosing 
the docValues/stored behavior?


was (Author: przemosz):
Yonik,

I was fighting SOLR 5.5 useDocValuesAsStored/copyField issue on our company's 
SOLR installation, and sufficient fix seems to be simple:
{code:title=solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java|borderStyle=solid}
  -searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
  +   searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));
{code}
getNonStoredDVs(false) returns all docValues fields, getNonStoredDVs(true) 
returns only docValues fields used as stored, either explicitly or implicitly 
(in schema 1.6). Doesn't masking "implicitly use all docvalues as stored, 
configured or not"  behavior with copyField target detection defeats whole 
purpose of choosing docValues/stored behavior?

> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Fix For: 6.1
>
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates does not work when CopyField has docValues enabled.  Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}






[jira] [Comment Edited] (SOLR-9034) Atomic updates not work with CopyField

2016-05-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277188#comment-15277188
 ] 

Przemysław Szeremiota edited comment on SOLR-9034 at 5/9/16 10:24 PM:
--

Yonik,

I was fighting SOLR 5.5 useDocValuesAsStored/copyField issue on our company's 
SOLR installation, and sufficient fix seems to be simple:
{code:title=solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java|borderStyle=solid}
  -searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
  +   searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));
{code}
getNonStoredDVs(false) returns all docValues fields; getNonStoredDVs(true) 
returns only the docValues fields that are used as stored, either explicitly 
or implicitly (in schema 1.6). Doesn't masking the "implicitly use all 
docValues as stored, configured or not" behavior with copyField target 
detection defeat the whole purpose of choosing the docValues/stored behavior?


was (Author: przemosz):
Yonik,

I was fighting SOLR 5.5 useDocValuesAsStored/copyField issue on our company's 
SOLR installation, and sufficient fix seems to be simple:
  -searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
  +   searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));
(edit: in 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java)

getNonStoredDVs(false) returns all docValues fields; getNonStoredDVs(true) 
returns only docValues fields used as stored, either explicitly or implicitly 
(in schema 1.6). Doesn't masking the "implicitly use all docValues as stored, 
configured or not" behavior with copyField target detection defeat the whole 
purpose of choosing docValues/stored behavior?

> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Fix For: 6.1
>
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled. Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}






[jira] [Comment Edited] (SOLR-9034) Atomic updates not work with CopyField

2016-05-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277188#comment-15277188
 ] 

Przemysław Szeremiota edited comment on SOLR-9034 at 5/9/16 10:22 PM:
--

Yonik,

I was fighting a Solr 5.5 useDocValuesAsStored/copyField issue on our company's 
Solr installation, and a sufficient fix seems to be simple:
  -searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
  +   searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));
(edit: in 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java)

getNonStoredDVs(false) returns all docValues fields; getNonStoredDVs(true) 
returns only docValues fields used as stored, either explicitly or implicitly 
(in schema 1.6). Doesn't masking the "implicitly use all docValues as stored, 
configured or not" behavior with copyField target detection defeat the whole 
purpose of choosing docValues/stored behavior?


was (Author: przemosz):
Yonik,

I was fighting a Solr 5.5 useDocValuesAsStored/copyField issue on our company's 
Solr installation, and a sufficient fix seems to be simple:
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
+searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));
(edit: in 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java)

getNonStoredDVs(false) returns all docValues fields; getNonStoredDVs(true) 
returns only docValues fields used as stored, either explicitly or implicitly 
(in schema 1.6). Doesn't masking the "implicitly use all docValues as stored, 
configured or not" behavior with copyField target detection defeat the whole 
purpose of choosing docValues/stored behavior?

> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Fix For: 6.1
>
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled. Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}






[jira] [Comment Edited] (SOLR-9034) Atomic updates not work with CopyField

2016-05-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277188#comment-15277188
 ] 

Przemysław Szeremiota edited comment on SOLR-9034 at 5/9/16 10:21 PM:
--

Yonik,

I was fighting a Solr 5.5 useDocValuesAsStored/copyField issue on our company's 
Solr installation, and a sufficient fix seems to be simple:
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
+searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));
(edit: in 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java)

getNonStoredDVs(false) returns all docValues fields; getNonStoredDVs(true) 
returns only docValues fields used as stored, either explicitly or implicitly 
(in schema 1.6). Doesn't masking the "implicitly use all docValues as stored, 
configured or not" behavior with copyField target detection defeat the whole 
purpose of choosing docValues/stored behavior?


was (Author: przemosz):
Yonik,

I was fighting a Solr 5.5 useDocValuesAsStored/copyField issue on our company's 
Solr installation, and a sufficient fix seems to be simple:
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));
(edit: in 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java)

getNonStoredDVs(false) returns all docValues fields; getNonStoredDVs(true) 
returns only docValues fields used as stored, either explicitly or implicitly 
(in schema 1.6). Doesn't masking the "implicitly use all docValues as stored, 
configured or not" behavior with copyField target detection defeat the whole 
purpose of choosing docValues/stored behavior?

> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Fix For: 6.1
>
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled. Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}






[jira] [Updated] (LUCENE-7251) remove LatLonGrid

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7251:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> remove LatLonGrid
> -
>
> Key: LUCENE-7251
> URL: https://issues.apache.org/jira/browse/LUCENE-7251
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7251.patch, LUCENE-7251.patch
>
>
> This crutch doesn't speed up most polygons anymore, only some very complex 
> ones with many components/holes.
> Instead as a simple step, we can use a tree of components (organized by 
> bounding box x-intervals just like edges). This makes things less trappy for 
> crazy polygons like the Russia one.
> Synthetic polygons from luceneUtil
> ||vertices||old QPS||new QPS||
> |5|40.5|43.8|
> |50|33.1|32.8|
> |500|31.9|31.9|
> |5000|29.4|29.6|
> |50000|20.4|22.8|
> |500000|4.0|6.9|
> Real polygons (33 london districts: 
> http://data.london.gov.uk/2011-boundary-files)
> ||vertices||old QPS||new QPS||
> |avg 5.6k|113.8|105.4|
> Russia geonames polygon (> 1000 components, crosses dateline, hugs poles, you 
> name it)
> ||vertices||old QPS||new QPS||
> |11598|1.17|5.35|
> The grid hurts Russia (keeping it around -> 4 QPS), and you can see it also 
> hurts all the synthetic ones. Those London boundaries hit a sweet spot where 
> it helps just a tad, but I think we should remove it and its startup cost 
> along with it.
> We can probably organize the tree better to be more efficient with many 
> components: for contains() we could just pack them all into one poly. But I'm 
> worried what this will do for relations (there would be fake edges between 
> components, I think?), and it would be complicated.






[jira] [Updated] (LUCENE-7240) Remove DocValues from LatLonPoint, add DocValuesField for that

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7240:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Remove DocValues from LatLonPoint, add DocValuesField for that
> --
>
> Key: LUCENE-7240
> URL: https://issues.apache.org/jira/browse/LUCENE-7240
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7240.patch, LUCENE-7240.patch
>
>
> LatLonPoint needed two-phase intersection initially because of big 
> inefficiencies, but as of LUCENE-7239 all of its query operations:  
> {{newBoxQuery()}}, {{newDistanceQuery()}}, {{newPolygonQuery()}} and 
> {{nearest()}} only need the points data structure (BKD).
> If you want to do {{newDistanceSort()}} then you need docvalues for that, but 
> I think it should be moved to a separate field: e.g. docvalues is optional 
> just like any other field in Lucene. We can add other methods that make sense 
> to that new docvalues field (e.g. facet by distance/region, expressions 
> support, whatever). It is really disjoint from the core query support: and 
> also currently has a heavyish cost of ~64-bits per value in space.






[jira] [Comment Edited] (SOLR-9034) Atomic updates not work with CopyField

2016-05-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277188#comment-15277188
 ] 

Przemysław Szeremiota edited comment on SOLR-9034 at 5/9/16 10:19 PM:
--

Yonik,

I was fighting a Solr 5.5 useDocValuesAsStored/copyField issue on our company's 
Solr installation, and a sufficient fix seems to be simple:
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));
(edit: in 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java)

getNonStoredDVs(false) returns all docValues fields; getNonStoredDVs(true) 
returns only docValues fields used as stored, either explicitly or implicitly 
(in schema 1.6). Doesn't masking the "implicitly use all docValues as stored, 
configured or not" behavior with copyField target detection defeat the whole 
purpose of choosing docValues/stored behavior?


was (Author: przemosz):
Yonik,

I was fighting a Solr 5.5 useDocValuesAsStored/copyField issue on our company's 
Solr installation, and a sufficient fix seems to be simple:
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));

getNonStoredDVs(false) returns all docValues fields; getNonStoredDVs(true) 
returns only docValues fields used as stored, either explicitly or implicitly 
(in schema 1.6). Doesn't masking the "implicitly use all docValues as stored, 
configured or not" behavior with copyField target detection defeat the whole 
purpose of choosing docValues/stored behavior?

> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Fix For: 6.1
>
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled. Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}






[jira] [Commented] (SOLR-9034) Atomic updates not work with CopyField

2016-05-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277188#comment-15277188
 ] 

Przemysław Szeremiota commented on SOLR-9034:
-

Yonik,

I was fighting a Solr 5.5 useDocValuesAsStored/copyField issue on our company's 
Solr installation, and a sufficient fix seems to be simple:
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(false));
-searcher.decorateDocValueFields(sid, docid, 
searcher.getNonStoredDVs(true));

getNonStoredDVs(false) returns all docValues fields; getNonStoredDVs(true) 
returns only docValues fields used as stored, either explicitly or implicitly 
(in schema 1.6). Doesn't masking the "implicitly use all docValues as stored, 
configured or not" behavior with copyField target detection defeat the whole 
purpose of choosing docValues/stored behavior?

> Atomic updates not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Fix For: 6.1
>
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled. Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}






[jira] [Updated] (LUCENE-7159) improve spatial point/rect vs. polygon performance

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7159:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> improve spatial point/rect vs. polygon performance
> --
>
> Key: LUCENE-7159
> URL: https://issues.apache.org/jira/browse/LUCENE-7159
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7159.patch, LUCENE-7159.patch
>
>
> Now that we can query on complex polygons without going OOM (LUCENE-7153), we 
> should do something to address the current  performance.
> Currently, we use a basic crossings test ({{O\(n)}}) for boundary cases. We 
> defer these expensive per-doc checks on boundary cases to a two phase 
> iterator (LUCENE-7019, LUCENE-7109), so that it can be avoided if e.g. 
> excluded by filters, conjunctions, deleted doc, and so on. This is currently 
> important for performance, but basically it's shoving the problem under the 
> rug and hoping it goes away. At least for point in poly, there are a number 
> of faster techniques described here: 
> http://erich.realtimerendering.com/ptinpoly/
> Additionally, I am not sure how costly our "tree traversal" (rectangle 
> intersection algorithms) is. Maybe it's nothing to be worried about, but likely 
> it too gets bad if the thing gets complex enough. These don't need to be 
> perfect but need to behave like java's Shape#contains (can conservatively 
> return false), and Shape#intersects (can conservatively return true). Of 
> course, if they are too inaccurate, then things can get slower.
> In cases of precomputed structures we should also consider memory usage: e.g. 
> we shouldn't make a horrible tradeoff there.
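The O(n) crossings test the issue refers to is the classic ray-casting
point-in-polygon algorithm. A minimal standalone sketch (not Lucene's actual
Polygon code) looks like this:

```java
public class Crossings {
    // Ray-casting point-in-polygon test: shoot a ray from (px, py) toward
    // +infinity in x and count how many polygon edges it crosses; an odd
    // count means the point is inside. O(n) in the number of vertices, which
    // is why complex polygons motivate faster per-doc structures.
    static boolean contains(double[] xs, double[] ys, double px, double py) {
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            boolean crossesY = (ys[i] > py) != (ys[j] > py);
            if (crossesY
                    && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                inside = !inside; // each crossing toggles inside/outside
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        double[] xs = {0, 1, 1, 0}, ys = {0, 0, 1, 1}; // unit square
        System.out.println(contains(xs, ys, 0.5, 0.5)); // true
        System.out.println(contains(xs, ys, 1.5, 0.5)); // false
    }
}
```

The techniques on the linked ptinpoly page trade precomputation (grids, slabs,
trees) for a cheaper per-query test than this linear scan.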






[jira] [Updated] (SOLR-9029) regular fails since ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9029:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> regular fails since  
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy 
> 
>
> Key: SOLR-9029
> URL: https://issues.apache.org/jira/browse/SOLR-9029
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Scott Blum
> Fix For: 6.1, master (7.0)
>
>
> Jenkins started to semi-regularly complain about 
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy on March 7 (53 
> failures in 45 days at current count).
> March 7th is, not coincidentally, when commit 
> 093a8ce57c06f1bf2f71ddde52dcc7b40cbd6197 for SOLR-8745 was made, modifying 
> both the test & a bunch of ClusterState code.
> 
> Sample failure...
> https://builds.apache.org/job/Lucene-Solr-Tests-master/1096
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ZkStateReaderTest 
> -Dtests.method=testStateFormatUpdateWithExplicitRefreshLazy 
> -Dtests.seed=78F99EDE682EC04B -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=tr-TR -Dtests.timezone=Europe/Tallinn -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.45s J0 | 
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy <<<
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: Could 
> not find collection : c1
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([78F99EDE682EC04B:13B63EA311211D71]:0)
>[junit4]>  at 
> org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:135)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy(ZkStateReaderTest.java:46)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> ...I've also seen this fail locally, but I've never been able to reproduce it 
> with the same seed.






[jira] [Updated] (SOLR-9041) create a well known permission for core-admin-read and core-admin-edit

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9041:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> create a well known permission for core-admin-read and core-admin-edit
> --
>
> Key: SOLR-9041
> URL: https://issues.apache.org/jira/browse/SOLR-9041
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
>
> We have missed this very important operation. Any admin operation would need 
> to restrict this.






[jira] [Commented] (SOLR-9091) Solr index restore silently copies the corrupt segments in the backup

2016-05-09 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277184#comment-15277184
 ] 

Hrishikesh Gadre commented on SOLR-9091:


[~thetaphi]

bq. The identifiers are exactly there to prevent the problem you are describing. 
So please use them for that, no need to revisit this again. You can be 
99.999...% sure that 2 segment files with identical filename, identical 
identifier and identical hash are the same files.

Thanks for the comment. Good to know :)

> Solr index restore silently copies the corrupt segments in the backup
> -
>
> Key: SOLR-9091
> URL: https://issues.apache.org/jira/browse/SOLR-9091
> Project: Solr
>  Issue Type: Bug
>Reporter: Hrishikesh Gadre
>
> The Solr core restore functionality uses the following criteria to decide 
> whether a given file is copied from the backup directory or from the current 
> index directory.
> Case 1] File is available in both the backup and the current index directory
> --> Compare the checksum and file length
>   --> If the checksum and length match, copy the file from the current 
> working directory.
>   --> If the checksum and length don't match, copy the file from the backup 
> directory.
> Case 2] File is available only in the backup directory (this can happen for a 
> newly created core without any data).
> --> Copy the file from the backup directory.
> Now the problem here is that we intentionally catch and ignore the error 
> while reading the checksum for a file in the backup directory. Hence in case 
> (2), it results in the restoration of a file without an appropriate "checksum".
> Here is the relevant code snippet,
> https://github.com/apache/lucene-solr/blob/a5586d29b23f7d032e6d8f0cf8758e56b09e0208/solr/core/src/java/org/apache/solr/handler/RestoreCore.java#L82-L95
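The decision tree described above can be sketched as follows. The method shape
and the use of a null checksum to model the swallowed read error are
illustrative assumptions, not Solr's actual RestoreCore API:

```java
public class RestoreDecision {
    // Sketch of the restore copy decision. A null backupChecksum models the
    // intentionally-ignored checksum read error from the issue description;
    // null localChecksum/localLength mean the file is absent locally.
    static boolean copyFromBackup(Long backupChecksum, long backupLength,
                                  Long localChecksum, Long localLength) {
        // Case 2: file exists only in the backup directory -> always copied,
        // even when its checksum could not be read (the silent-corruption bug).
        if (localChecksum == null || localLength == null) {
            return true;
        }
        // Case 1: file exists in both; reuse the local copy only when both
        // checksum and length match.
        boolean matches = backupChecksum != null
                && backupChecksum.equals(localChecksum)
                && backupLength == localLength;
        return !matches;
    }

    public static void main(String[] args) {
        // Corrupt backup file (unreadable checksum), absent locally: copied anyway.
        System.out.println(copyFromBackup(null, 100L, null, null)); // true
        // Identical file already present locally: reused, not copied.
        System.out.println(copyFromBackup(42L, 100L, 42L, 100L));   // false
    }
}
```

The sketch makes the trap visible: in case 2 nothing ever validates
backupChecksum, so a corrupt backup segment is restored without complaint.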






[jira] [Updated] (LUCENE-7259) speed up MatchingPoints cost() estimation

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7259:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> speed up MatchingPoints cost() estimation
> -
>
> Key: LUCENE-7259
> URL: https://issues.apache.org/jira/browse/LUCENE-7259
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7259.patch
>
>
> MatchingPoints currently tracks a counter in the super-hot add() loop. While 
> not a big deal, we can easily just use the grow() API for this instead (which 
> is currently only called e.g. every 1k docs).
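A hedged sketch of the idea (names are illustrative, not Lucene's actual
MatchingPoints internals): accumulate the cost estimate in grow(), which runs
once per batch, instead of incrementing a counter on every add():

```java
public class MatchingPointsSketch {
    private long costEstimate;

    // grow() reserves room for up to `count` more docs and is invoked roughly
    // once per 1k docs, so counting here amortizes the bookkeeping. The result
    // is an upper bound, which is fine for a cost() estimate.
    void grow(int count) {
        costEstimate += count;
    }

    // Super-hot per-document call: no counter increment here anymore.
    void add(int docId) {
        // ... set the doc's bit in the result set ...
    }

    long cost() {
        return costEstimate;
    }

    public static void main(String[] args) {
        MatchingPointsSketch m = new MatchingPointsSketch();
        m.grow(1024);
        for (int doc = 0; doc < 1024; doc++) m.add(doc);
        System.out.println(m.cost()); // 1024
    }
}
```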






[jira] [Updated] (LUCENE-7257) PointValues aggregated stats fail if the provided field does not have points on one of the leaves

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7257:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> PointValues aggregated stats fail if the provided field does not have points 
> on one of the leaves
> -
>
> Key: LUCENE-7257
> URL: https://issues.apache.org/jira/browse/LUCENE-7257
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7257.patch
>
>
> The static helpers on PointValues to get aggregated 
> size/docCount/minPackedValue/maxPackedValue fail if a leaf has points indexed 
> (so that getPointValues() returns a non-null value) but not for the given 
> field. In that case PointValues.size() throws an exception.






[jira] [Updated] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8082:
---
Fix Version/s: master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 5.5.1, 5.6, 6.0, 6.1, master (7.0)
>
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues-based queries get built for single-valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
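For context on why negative values are a special case here: Lucene stores
numeric docValues as sortable integers, and the float-to-sortable-int mapping
must flip bits for negative values so that signed-integer order matches float
order. The helper below re-implements that mapping standalone for illustration
(the real one is Lucene's NumericUtils.floatToSortableInt); it is background on
the class of bug, not the actual SOLR-8082 fix:

```java
public class SortableFloat {
    // For negative floats (sign bit set), flip the 31 non-sign bits so that
    // more-negative values map to smaller ints; positive floats pass through
    // unchanged. Mirrors Lucene's NumericUtils.floatToSortableInt, rewritten
    // here purely for illustration.
    static int floatToSortableInt(float value) {
        int bits = Float.floatToIntBits(value);
        return bits ^ ((bits >> 31) & 0x7fffffff);
    }

    public static void main(String[] args) {
        // Signed-int order now matches float order, including across zero:
        System.out.println(floatToSortableInt(-4.3f) < floatToSortableInt(-4.2f)); // true
        System.out.println(floatToSortableInt(-4.3f) < floatToSortableInt(0f));    // true
        System.out.println(floatToSortableInt(0f) < floatToSortableInt(4.3f));     // true
    }
}
```

A range-query builder that skips this encoding step (or applies it on only one
end of the range) would produce exactly the symptom above: positive values
match, negative values silently return numFound=0.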






[jira] [Updated] (SOLR-9037) replace multiple "/replication" strings with one static constant

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9037:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> replace multiple "/replication" strings with one static constant
> 
>
> Key: SOLR-9037
> URL: https://issues.apache.org/jira/browse/SOLR-9037
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.6, 6.1, master (7.0)
>
> Attachments: SOLR-9037.patch
>
>
> proposed patch to follow






[jira] [Commented] (SOLR-9091) Solr index restore silently copies the corrupt segments in the backup

2016-05-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277172#comment-15277172
 ] 

Uwe Schindler commented on SOLR-9091:
-

bq. Is it possible that the segment identifiers generated in core B may have an 
overlap with those in core A?

Unlikely, but theoretically possible - this can be compared to the possibility 
that 2 different files could have the same SHA1 hash. If it ever happens, we have 
to revisit the random number generator behind it.

Just to note: the identifiers are exactly there to prevent the problem you are 
describing. So please use them for that, no need to revisit this again. You can 
be 99.999...% sure that 2 segment files with identical filename, 
identical identifier and identical hash are the same files.

> Solr index restore silently copies the corrupt segments in the backup
> -
>
> Key: SOLR-9091
> URL: https://issues.apache.org/jira/browse/SOLR-9091
> Project: Solr
>  Issue Type: Bug
>Reporter: Hrishikesh Gadre
>
> The Solr core restore functionality uses the following criteria to decide 
> whether a given file is copied from the backup directory or from the current 
> index directory.
> case 1] File is available in both the backup and current index directory
> --> Compare the checksum and file length
>   --> If the checksum and length match, copy the file from the current 
> working directory.
>   --> If the checksum and length don't match, copy the file from the backup 
> directory. 
> case 2] File is available only in the backup directory (this can happen for 
> a newly created core without any data).
> --> Copy the file from the backup directory. 
> Now the problem here is that we intentionally catch and ignore the error 
> while reading the checksum for a file in the backup directory. Hence in case 
> (2), it results in the restoration of a file without an appropriate "checksum".
> Here is the relevant code snippet,
> https://github.com/apache/lucene-solr/blob/a5586d29b23f7d032e6d8f0cf8758e56b09e0208/solr/core/src/java/org/apache/solr/handler/RestoreCore.java#L82-L95






[jira] [Updated] (SOLR-9046) solr.cmd wrongly assumes Jetty will always listen on 0.0.0.0

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9046:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> solr.cmd wrongly assumes Jetty will always listen on 0.0.0.0
> 
>
> Key: SOLR-9046
> URL: https://issues.apache.org/jira/browse/SOLR-9046
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
> Environment: Windows
>Reporter: Bram Van Dam
>Assignee: Uwe Schindler
> Fix For: 5.5.1, 6.1, master (7.0)
>
> Attachments: SOLR-9045.patch, SOLR-9046.patch, SOLR-9046.patch, 
> SOLR-9046.patch, SOLR-9046.patch
>
>
> The Windows solr.cmd script makes the (incorrect) assumption that Solr will 
> always be listening on 0.0.0.0 (all interfaces). When you change the interface
> address, say to 127.0.0.1, then the status and stop commands will fail.
> This patch adds a property in solr.in.cmd, which is passed to SOLR_OPTS as 
> -Djetty.host, and replaces the instances of 0.0.0.0 in solr.cmd.
> The patch includes some changes in the netstat logic used in solr.cmd to find 
> the correct Solr process(es). 
> Tested on Solr 5.5 on Windows 7 and 10. 
> Note: Untested on Solr 6. Currently using Solr 5.5






[jira] [Updated] (LUCENE-7261) Speed up LSBRadixSorter

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7261:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Speed up LSBRadixSorter
> ---
>
> Key: LUCENE-7261
> URL: https://issues.apache.org/jira/browse/LUCENE-7261
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7261.patch
>
>
> Currently it always does 4 passes over the data (one per byte, since ints 
> have 4 bytes). However, most of the time, we know {{maxDoc}}, so we can use 
> this information to do fewer passes when they are not necessary. For 
> instance, if maxDoc is less than or equal to 2^24, we only need 3 passes, 
> and if maxDoc is less than or equal to 2^16, we only need 2 passes.
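The pass-count arithmetic described above can be sketched as follows. This is a minimal illustration of the idea, not the actual patch; the class and method names are hypothetical.

```java
// Sketch: an LSB radix sort over 32-bit ints needs one pass per byte,
// but doc IDs bounded by maxDoc only need enough passes to cover the
// bytes that can actually be non-zero.
public class RadixPassCount {
    static int passesNeeded(int maxDoc) {
        int passes = 0;
        long bound = 1; // after k iterations, bound == 2^(8*k)
        while (bound < maxDoc) { // doc IDs are in [0, maxDoc)
            bound <<= 8;
            passes++;
        }
        return passes;
    }

    public static void main(String[] args) {
        if (passesNeeded(1 << 16) != 2) throw new AssertionError();
        if (passesNeeded(1 << 24) != 3) throw new AssertionError();
        if (passesNeeded(Integer.MAX_VALUE) != 4) throw new AssertionError();
        System.out.println("ok");
    }
}
```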






[jira] [Updated] (LUCENE-7264) Fewer conditionals in DocIdSetBuilder.add

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7264:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Fewer conditionals in DocIdSetBuilder.add
> -
>
> Key: LUCENE-7264
> URL: https://issues.apache.org/jira/browse/LUCENE-7264
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7264.patch
>
>
> As reported in LUCENE-7254, DocIdSetBuilder.add has several conditionals that 
> slow down the construction of the DocIdSet.






[jira] [Updated] (SOLR-9047) zkcli should allow alternative locations for log4j configuration

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9047:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> zkcli should allow alternative locations for log4j configuration
> 
>
> Key: SOLR-9047
> URL: https://issues.apache.org/jira/browse/SOLR-9047
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9047.patch, SOLR-9047.patch
>
>
> zkcli uses the log4j configuration in the local directory:
> {code}
> sdir="`dirname \"$0\"`"
> PATH=$JAVA_HOME/bin:$PATH $JVM 
> -Dlog4j.configuration=file:$sdir/log4j.properties -classpath 
> "$sdir/../../solr-webapp/webapp/WEB-INF/lib/*:$sdir/../../lib/ext/*" 
> org.apache.solr.cloud.ZkCLI ${1+"$@"}
> {code}
> which is a reasonable default, but often people want to use a "global" log4j 
> configuration.  For example, one may define a log4j configuration that writes 
> to an external log directory and want to point to this rather than copying it 
> to each source checkout.






[jira] [Updated] (LUCENE-7262) Add back the "estimate match count" optimization

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7262:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Add back the "estimate match count" optimization
> 
>
> Key: LUCENE-7262
> URL: https://issues.apache.org/jira/browse/LUCENE-7262
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7262.patch, LUCENE-7262.patch, LUCENE-7262.patch
>
>
> Follow-up to my last message on LUCENE-7051: I removed this optimization a 
> while ago because it made things a bit more complicated but did not seem to 
> help with point queries. However the reason why it did not seem to help was 
> that the benchmark only runs queries that match 25% of the dataset. This 
> makes the run time completely dominated by calls to FixedBitSet.set so the 
> call to FixedBitSet.cardinality() looks free. However with slightly sparser 
> queries like the geo benchmark generates (dense enough to trigger the 
> creation of a FixedBitSet but sparse enough so that FixedBitSet.set does not 
> dominate the run time), one can notice speed-ups when this call is skipped.
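The cost asymmetry described above can be sketched like this (hypothetical code, not the Lucene patch): keeping a running upper bound while setting bits is O(1) per call, whereas an exact cardinality() must scan every word of the bit set.

```java
// Sketch: a bit set that tracks a cheap running estimate of its
// cardinality, so a full counting scan can be skipped.
public class CountingBitSet {
    final long[] words;
    long setCalls; // upper bound on cardinality (re-set docs counted twice)

    CountingBitSet(int maxDoc) {
        words = new long[(maxDoc + 63) >>> 6];
    }

    void set(int doc) {
        words[doc >>> 6] |= 1L << doc; // Java masks the shift to doc & 63
        setCalls++;
    }

    // The exact count requires a full O(maxDoc / 64) scan of the words.
    long exactCardinality() {
        long count = 0;
        for (long w : words) count += Long.bitCount(w);
        return count;
    }

    public static void main(String[] args) {
        CountingBitSet bits = new CountingBitSet(1 << 20);
        for (int doc = 0; doc < 1000; doc += 2) bits.set(doc);
        bits.set(0); // duplicate set: the estimate over-counts, the scan does not
        if (bits.exactCardinality() != 500) throw new AssertionError();
        if (bits.setCalls != 501) throw new AssertionError();
        System.out.println("ok");
    }
}
```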






[jira] [Updated] (SOLR-8992) Restore Schema API GET method functionality removed by SOLR-8736

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8992:
---
Fix Version/s: (was: 6.0)
   master (7.0)
   6.0.1


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Restore Schema API GET method functionality removed by SOLR-8736
> 
>
> Key: SOLR-8992
> URL: https://issues.apache.org/jira/browse/SOLR-8992
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Noble Paul
> Fix For: 6.0.1, 6.1, master (7.0)
>
> Attachments: SOLR-8992.patch, SOLR-8992.patch, SOLR-8992.patch
>
>
> The following schema API GET functionality was removed under SOLR-8736; some 
> of this functionality should be restored:
> * {{schema/copyfields}}:
> ** The following information is no longer output:
> *** {{destDynamicBase}}: the matching dynamic field pattern for the 
> destination
> *** {{sourceDynamicBase}}: the matching dynamic field pattern for the source
> ** The following request parameters are no longer supported:
> *** {{dest.fl}}: include only copyFields that have one of these as a 
> destination
> *** {{source.fl}}: include only copyFields that have one of these as a source
> * {{schema/dynamicfields}}:
> ** The following request parameters are no longer supported:
> *** {{fl}}: a comma and/or space separated list of dynamic field patterns to 
> include 
> * {{schema/fields}} and {{schema/fields/_fieldname_}}:
> ** The following information is no longer output:
> *** {{dynamicBase}}: the matching dynamic field pattern, if the 
> {{includeDynamic}} param is given (see below) 
> ** The following request parameters are no longer supported:
> *** {{fl}}: (only supported without {{/_fieldname_}}): a comma and/or space 
> separated list of fields to include 
> *** {{includeDynamic}}: output the matching dynamic field pattern as 
> {{dynamicBase}}, if {{_fieldname_}}, or field(s) listed in {{fl}} param, are 
> not explicitly declared in the schema
> * {{schema/fieldtypes}} and {{schema/fieldtypes/_typename_}}:
> ** The following information is no longer output: 
> *** {{fields}}: the fields with the given field type
> *** {{dynamicFields}}: the dynamic fields with the given field type  






[jira] [Updated] (SOLR-9049) RuleBasedAuthorizationPlugin should use regex in params instead of just String.equal()

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9049:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> RuleBasedAuthorizationPlugin should use regex in params instead of just 
> String.equal()
> --
>
> Key: SOLR-9049
> URL: https://issues.apache.org/jira/browse/SOLR-9049
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9049.patch, SOLR-9049.patch
>
>
> Params can have complex values which will be difficult to capture in a single 
> string. So, a user can specify a full regex if it is prefixed with a "REGEX:"
> example:
> {code:javascript}
> {"params" : {"action":"REGEX:(?i)create"}}
> {code}
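A minimal sketch of the proposed matching rule (illustrative only; the class and method names are not the actual plugin code). Note that the case-insensitive inline flag in java.util.regex is {{(?i)}}.

```java
import java.util.regex.Pattern;

public class ParamRuleMatcher {
    // A rule value prefixed with "REGEX:" is treated as a pattern;
    // anything else falls back to a plain String.equals() comparison.
    static boolean matches(String ruleValue, String actualValue) {
        if (ruleValue.startsWith("REGEX:")) {
            String regex = ruleValue.substring("REGEX:".length());
            return Pattern.compile(regex).matcher(actualValue).matches();
        }
        return ruleValue.equals(actualValue);
    }

    public static void main(String[] args) {
        if (!matches("REGEX:(?i)create", "CREATE")) throw new AssertionError();
        if (!matches("REGEX:(?i)create", "create")) throw new AssertionError();
        if (matches("create", "CREATE")) throw new AssertionError(); // plain equals
        System.out.println("ok");
    }
}
```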






[jira] [Updated] (LUCENE-7263) xmlparser: Allow SpanQueryBuilder to be used by derived classes

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7263:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> xmlparser: Allow SpanQueryBuilder to be used by derived classes
> ---
>
> Key: LUCENE-7263
> URL: https://issues.apache.org/jira/browse/LUCENE-7263
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: 6.0
>Reporter: Daniel Collins
>Assignee: Christine Poerschke
> Fix For: 5.x, 6.1, master (7.0)
>
> Attachments: LUCENE-7263.patch
>
>
> Following on from LUCENE-7210 (and others), the xml queryparser has different 
> factories, one for creating normal queries and one for creating span queries.
> The former is a protected variable so can be used by derived classes, the 
> latter isn't.
> This makes the spanFactory a variable that can be used more easily.  No 
> functional changes.






[jira] [Updated] (LUCENE-7269) TestPointQueries failures

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7269:
-
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> TestPointQueries failures
> -
>
> Key: LUCENE-7269
> URL: https://issues.apache.org/jira/browse/LUCENE-7269
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Steve Rowe
>Assignee: Michael McCandless
> Fix For: 6.1, master (7.0)
>
>
> My Jenkins found a reproducing seed on master:
> {noformat}
> Checking out Revision a48245a1bfbef0259d38ef36fec814f3891ab80c 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: org.apache.lucene.search.TestPointQueries
>[junit4] IGNOR/A 0.00s J1 | TestPointQueries.testRandomBinaryBig
>[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>[junit4]   2> maj 02, 2016 3:29:13 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[T0,5,TGRP-TestPointQueries]
>[junit4]   2> java.lang.AssertionError
>[junit4]   2>  at 
> __randomizedtesting.SeedInfo.seed([61528898A1A30059]:0)
>[junit4]   2>  at 
> org.apache.lucene.util.DocIdSetBuilder.(DocIdSetBuilder.java:110)
>[junit4]   2>  at 
> org.apache.lucene.util.DocIdSetBuilder.(DocIdSetBuilder.java:98)
>[junit4]   2>  at 
> org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
>[junit4]   2>  at 
> org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
>[junit4]   2>  at 
> org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>[junit4]   2>  at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
>[junit4]   2>  at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
>[junit4]   2>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
>[junit4]   2>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]   2>  at 
> org.apache.lucene.search.TestPointQueries$2._run(TestPointQueries.java:805)
>[junit4]   2>  at 
> org.apache.lucene.search.TestPointQueries$2.run(TestPointQueries.java:758)
>[junit4]   2> 
>[junit4]   2> maj 02, 2016 3:29:13 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[T1,5,TGRP-TestPointQueries]
>[junit4]   2> java.lang.AssertionError
>[junit4]   2>  at 
> __randomizedtesting.SeedInfo.seed([61528898A1A30059]:0)
>[junit4]   2>  at 
> org.apache.lucene.util.DocIdSetBuilder.(DocIdSetBuilder.java:110)
>[junit4]   2>  at 
> org.apache.lucene.util.DocIdSetBuilder.(DocIdSetBuilder.java:98)
>[junit4]   2>  at 
> org.apache.lucene.search.PointRangeQuery$1.buildMatchingDocIdSet(PointRangeQuery.java:109)
>[junit4]   2>  at 
> org.apache.lucene.search.PointRangeQuery$1.scorer(PointRangeQuery.java:213)
>[junit4]   2>  at 
> org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
>[junit4]   2>  at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:683)
>[junit4]   2>  at 
> org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:766)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:68)
>[junit4]   2>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]   2>  at 
> org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:91)
>[junit4]   2>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]   2>  at 
> org.apache.lucene.search.TestPointQueries$2._run(TestPointQueries.java:805)
>[junit4]   2>  at 
> org.apache.lucene.search.TestPointQueries$2.run(TestPointQueries.java:758)
>[junit4]   2> 
>[junit4]   2> maj 02, 2016 3:29:13 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  

[jira] [Updated] (SOLR-8933) SolrDispatchFilter::consumeInput logs "Stream Closed" IOException

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8933:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> SolrDispatchFilter::consumeInput logs "Stream Closed" IOException
> -
>
> Key: SOLR-8933
> URL: https://issues.apache.org/jira/browse/SOLR-8933
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Mike Drob
>Assignee: Mark Miller
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8933.patch, SOLR-8933.patch, SOLR-8933.patch, 
> SOLR-8933.patch, SOLR-8933.patch, SOLR-8933.patch, SOLR-8933.patch, 
> SOLR-8933.patch, SOLR-8933.patch, SOLR-8933.patch, SOLR-8933.patch
>
>
> After SOLR-8453 we started seeing some IOExceptions coming out of 
> SolrDispatchFilter with "Stream Closed" messages.
> It looks like we are indeed closing the request stream in several places when 
> we really need to be letting the web container handle their life cycle. I've 
> got a preliminary patch ready and am working on testing it to make sure there 
> are no regressions.
> A very strange piece of this is that I have been entirely unable to reproduce 
> it in a unit test, but have seen it quite consistently on cluster deployments.






[jira] [Resolved] (SOLR-9064) UpdateStream Explanation should include the explanation for the incoming stream

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-9064.

Resolution: Fixed

> UpdateStream Explanation should include the explanation for the incoming 
> stream
> ---
>
> Key: SOLR-9064
> URL: https://issues.apache.org/jira/browse/SOLR-9064
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0, 6.1
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9064.patch
>
>
> The explanation for an UpdateStream does not include a child explanation of 
> the incoming stream. This results in the UpdateStream explanation not being 
> all that informative.
> Simple fix, line 191 should add
> {code}
> child.addChild(tupleSource.toExplanation(factory));
> {code}






[jira] [Reopened] (SOLR-9064) UpdateStream Explanation should include the explanation for the incoming stream

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-9064:


> UpdateStream Explanation should include the explanation for the incoming 
> stream
> ---
>
> Key: SOLR-9064
> URL: https://issues.apache.org/jira/browse/SOLR-9064
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0, 6.1
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9064.patch
>
>
> The explanation for an UpdateStream does not include a child explanation of 
> the incoming stream. This results in the UpdateStream explanation not being 
> all that informative.
> Simple fix, line 191 should add
> {code}
> child.addChild(tupleSource.toExplanation(factory));
> {code}






[jira] [Updated] (SOLR-9064) UpdateStream Explanation should include the explanation for the incoming stream

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9064:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> UpdateStream Explanation should include the explanation for the incoming 
> stream
> ---
>
> Key: SOLR-9064
> URL: https://issues.apache.org/jira/browse/SOLR-9064
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0, 6.1
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9064.patch
>
>
> The explanation for an UpdateStream does not include a child explanation of 
> the incoming stream. This results in the UpdateStream explanation not being 
> all that informative.
> Simple fix, line 191 should add
> {code}
> child.addChild(tupleSource.toExplanation(factory));
> {code}






[jira] [Commented] (SOLR-9030) The 'downnode' command can trip asserts in ZkStateWriter

2016-05-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277139#comment-15277139
 ] 

Hoss Man commented on SOLR-9030:



Manually correcting fixVersion per Step #S5 of LUCENE-7271


> The 'downnode' command can trip asserts in ZkStateWriter
> 
>
> Key: SOLR-9030
> URL: https://issues.apache.org/jira/browse/SOLR-9030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9030.patch, SOLR-9030.patch
>
>
> While working on SOLR-9014 I came across a strange test failure.
> {code}
>[junit4] ERROR   16.9s | 
> AsyncCallRequestStatusResponseTest.testAsyncCallStatusResponse <<<
>[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=46, 
> name=OverseerStateUpdate-95769832112259076-127.0.0.1:51135_z_oeg%2Ft-n_00,
>  state=RUNNABLE, group=Overseer state updater.]
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3:CBF7E84BCF328A1A]:0)
>[junit4]> Caused by: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3]:0)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateWriter.writePendingUpdates(ZkStateWriter.java:231)
>[junit4]>  at 
> org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:240)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {code}
> The underlying problem can manifest by tripping the above assert or a 
> BadVersionException as well. I found that this was introduced in SOLR-7281 
> where a new 'downnode' command was added.






[jira] [Updated] (SOLR-9030) The 'downnode' command can trip asserts in ZkStateWriter

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9030:
---
Fix Version/s: (was: 6.0)
   master (7.0)

> The 'downnode' command can trip asserts in ZkStateWriter
> 
>
> Key: SOLR-9030
> URL: https://issues.apache.org/jira/browse/SOLR-9030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9030.patch, SOLR-9030.patch
>
>
> While working on SOLR-9014 I came across a strange test failure.
> {code}
>[junit4] ERROR   16.9s | 
> AsyncCallRequestStatusResponseTest.testAsyncCallStatusResponse <<<
>[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=46, 
> name=OverseerStateUpdate-95769832112259076-127.0.0.1:51135_z_oeg%2Ft-n_00,
>  state=RUNNABLE, group=Overseer state updater.]
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3:CBF7E84BCF328A1A]:0)
>[junit4]> Caused by: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([91F68DA7E10807C3]:0)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateWriter.writePendingUpdates(ZkStateWriter.java:231)
>[junit4]>  at 
> org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:240)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {code}
> The underlying problem can manifest by tripping the above assert or a 
> BadVersionException as well. I found that this was introduced in SOLR-7281 
> where a new 'downnode' command was added.






[jira] [Commented] (SOLR-9014) Deprecate and reduce usage of ClusterState methods which may make calls to ZK via the lazy collection reference

2016-05-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277138#comment-15277138
 ] 

Hoss Man commented on SOLR-9014:



Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Deprecate and reduce usage of ClusterState methods which may make calls to ZK 
> via the lazy collection reference
> ---
>
> Key: SOLR-9014
> URL: https://issues.apache.org/jira/browse/SOLR-9014
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9014-deprecate-getCollections.patch, SOLR-9014.patch
>
>
> ClusterState has a bunch of methods such as getSlice and getReplica which 
> internally call getCollectionOrNull that ends up making a call to ZK via the 
> lazy collection reference. Many classes use these methods even though a 
> DocCollection object is available. In such cases, multiple redundant calls to 
> ZooKeeper can happen if the collection is not watched locally. This is 
> especially true for Overseer classes which operate on all collections.
> We should audit all usages of these methods and replace them with calls to 
> appropriate DocCollection methods.






[jira] [Updated] (SOLR-9074) solrj CloudSolrClient.directUpdate tweak

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9074:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> solrj CloudSolrClient.directUpdate tweak
> 
>
> Key: SOLR-9074
> URL: https://issues.apache.org/jira/browse/SOLR-9074
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Trivial
> Fix For: 5.6, 6.1, master (7.0)
>
> Attachments: SOLR-9074.patch
>
>
> Defer two NamedList allocations and give one of them an initialCapacity.






[jira] [Updated] (SOLR-9014) Deprecate and reduce usage of ClusterState methods which may make calls to ZK via the lazy collection reference

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9014:
---
Fix Version/s: (was: 6.0)
   master (7.0)

> Deprecate and reduce usage of ClusterState methods which may make calls to ZK 
> via the lazy collection reference
> ---
>
> Key: SOLR-9014
> URL: https://issues.apache.org/jira/browse/SOLR-9014
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9014-deprecate-getCollections.patch, SOLR-9014.patch
>
>
> ClusterState has a bunch of methods such as getSlice and getReplica which 
> internally call getCollectionOrNull that ends up making a call to ZK via the 
> lazy collection reference. Many classes use these methods even though a 
> DocCollection object is available. In such cases, multiple redundant calls to 
> ZooKeeper can happen if the collection is not watched locally. This is 
> especially true for Overseer classes which operate on all collections.
> We should audit all usages of these methods and replace them with calls to 
> appropriate DocCollection methods.






[jira] [Updated] (SOLR-8918) Add Streaming Expressions to the admin page

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8918:
---
Fix Version/s: (was: 6.0)
   master (7.0)


Manually correcting fixVersion per Step #S5 of LUCENE-7271


> Add Streaming Expressions to the admin page
> ---
>
> Key: SOLR-8918
> URL: https://issues.apache.org/jira/browse/SOLR-8918
> Project: Solr
>  Issue Type: New Feature
>  Components: UI, web gui
>Reporter: Dennis Gove
>Assignee: Dennis Gove
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8918.patch, SOLR-8918.patch, SOLR-8918.patch, 
> SOLR-8918.patch, SOLR-8918.patch, sample-display.png, sample-display.png
>
>
> Add to the admin page the ability to work with and view Streaming Expressions.
> This tab will appear under the Collection selection section and will work 
> similarly to the Query tab. On this page the user will be able to enter a 
> streaming expression, execute it against the collection, and view the results 
> as they are returned from the stream handler. The user will also be able to 
> view the structure of the expression in a graph-like layout. 
> If the user wishes only to view the expression structure without executing 
> it, an "Explain" button will show the structure, including information about 
> each node (the expression for that node).
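As a rough sketch of what such a tab would submit, a streaming expression is sent to a collection's /stream endpoint as a URL-encoded `expr` parameter. The base URL, collection name, and expression below are placeholder values, not details from this issue:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class StreamRequest {
    // Builds the request URL the admin tab would issue against the
    // stream handler. All argument values are caller-supplied placeholders.
    static String streamUrl(String baseUrl, String collection, String expr) {
        String encoded = URLEncoder.encode(expr, StandardCharsets.UTF_8);
        return baseUrl + "/" + collection + "/stream?expr=" + encoded;
    }

    public static void main(String[] args) {
        String expr = "search(gettingstarted, q=\"*:*\", fl=\"id\", sort=\"id asc\")";
        System.out.println(
            streamUrl("http://localhost:8983/solr", "gettingstarted", expr));
    }
}
```

The expression must be URL-encoded because it contains spaces, quotes, and parentheses; the handler then streams result tuples back, which a UI could render incrementally.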






[jira] [Reopened] (SOLR-8918) Add Streaming Expressions to the admin page

2016-05-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-8918:







