[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18415 - Unstable!

2016-11-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18415/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
    at __randomizedtesting.SeedInfo.seed([8FF33CCFC6611A1E:1507412D58FB8622]:0)
    at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:818)
    at org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:270)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound=1]
xml response was:
00

request was:q=id:529=standard=0=20=2.2
    at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:811)
    ... 40 more
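For context on what testMaxTime exercises: AutoCommitTest drives Solr's autoCommit maxTime behavior, which is configured in solrconfig.xml, and then asserts that an added document becomes visible once the commit window has elapsed; that timing dependency is what makes the test flaky on loaded Jenkins nodes. A hedged sketch of the relevant configuration (the 15000/true values are illustrative, not the test's actual settings):

```
<!-- Illustrative solrconfig.xml fragment; values are examples only. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- hard-commit at most this many ms after the first uncommitted doc -->
    <maxTime>15000</maxTime>
    <!-- open a new searcher so committed docs become visible to queries -->
    <openSearcher>true</openSearcher>
  </autoCommit>
</updateHandler>
```

With openSearcher=true, a query issued after maxTime plus searcher-warmup should see numFound=1 for the added doc; the failure above means it still saw 0 at assertion time.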




Build Log:
[...truncated 10739 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]  

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1169 - Still unstable

2016-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1169/

10 tests failed.
FAILED:  org.apache.lucene.index.TestIndexWriterOnDiskFull.testAddDocumentOnDiskFull

Error Message:
this IndexWriter is closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
    at __randomizedtesting.SeedInfo.seed([38CD05CC56B9:B4D53DB3AB26E5FD]:0)
    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:748)
    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:762)
    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1566)
    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1315)
    at org.apache.lucene.index.TestIndexWriterOnDiskFull.addDoc(TestIndexWriterOnDiskFull.java:568)
    at org.apache.lucene.index.TestIndexWriterOnDiskFull.testAddDocumentOnDiskFull(TestIndexWriterOnDiskFull.java:78)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: fake disk full at 28873 bytes when writing _5.scf (file length=0)
    at org.apache.lucene.store.MockIndexOutputWrapper.checkDiskFull(MockIndexOutputWrapper.java:87)
    at org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:133)
    at 
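The "fake disk full" cause comes from the test framework's mock directory machinery: an output wrapper counts bytes written and injects an IOException once a randomized budget is spent, so the test can verify IndexWriter's behavior when the disk fills. A minimal sketch of that fault-injection idea (the class and budget here are hypothetical, not Lucene's actual MockIndexOutputWrapper):

```java
import java.io.IOException;
import java.io.OutputStream;

// Hedged sketch of byte-budget fault injection: wrap a real output stream
// and throw "fake disk full" once the configured budget is exhausted.
public class DiskFullSimulatingOutput extends OutputStream {
    private final OutputStream delegate;
    private long remainingBytes;

    public DiskFullSimulatingOutput(OutputStream delegate, long byteBudget) {
        this.delegate = delegate;
        this.remainingBytes = byteBudget;
    }

    @Override
    public void write(int b) throws IOException {
        if (remainingBytes <= 0) {
            // Mirrors the failure mode reported above, at whatever byte
            // offset the randomized budget happens to land on.
            throw new IOException("fake disk full");
        }
        remainingBytes--;
        delegate.write(b);
    }
}
```

The AlreadyClosedException above is then a secondary symptom: the injected failure aborted and closed the writer, and a later addDocument call hit the closed writer.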

[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 594 - Unstable!

2016-11-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/594/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6536, name=SocketProxy-Request-52701:52336, state=RUNNABLE, group=TGRP-HttpPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6536, name=SocketProxy-Request-52701:52336, state=RUNNABLE, group=TGRP-HttpPartitionTest]
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is closed
    at __randomizedtesting.SeedInfo.seed([CC62AE27F57E4879]:0)
    at org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
    at java.net.Socket.setSoTimeout(Socket.java:1137)
    at org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)
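The root cause here is plain JDK behavior: java.net.Socket.setSoTimeout throws SocketException once the socket has been closed, so a proxy pump thread that loses the race with test teardown dies exactly as captured above. A minimal reproduction (the class name is mine, not from the test):

```java
import java.net.Socket;
import java.net.SocketException;

// Demonstrates the JDK behavior behind the Pump thread's death:
// configuring a socket after it has been closed throws
// SocketException("Socket is closed").
public class ClosedSocketDemo {
    /** Returns true iff setSoTimeout on an already-closed socket throws. */
    public static boolean setTimeoutAfterCloseThrows() throws Exception {
        Socket s = new Socket();  // unconnected; close() still marks it closed
        s.close();
        try {
            s.setSoTimeout(1000);
            return false;
        } catch (SocketException expected) {
            return true;
        }
    }
}
```

SocketProxy's partition simulation closes sockets from another thread on purpose, so this exception is expected during teardown; the test only fails because the runner treats it as an uncaught exception.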




Build Log:
[...truncated 11275 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.HttpPartitionTest_CC62AE27F57E4879-001\init-core-data-001
   [junit4]   2> 1073809 INFO  
(SUITE-HttpPartitionTest-seed#[CC62AE27F57E4879]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 1073809 INFO  
(SUITE-HttpPartitionTest-seed#[CC62AE27F57E4879]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1073811 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1073812 INFO  (Thread-1700) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1073812 INFO  (Thread-1700) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1073912 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.ZkTestServer start zk server on port:52309
   [junit4]   2> 1073940 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 1073945 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 1073949 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1073952 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 1073956 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 1073959 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 1073963 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 1073966 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 1073969 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\mapping-ISOLatin1Accent.txt
 to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 1073974 INFO  
(TEST-HttpPartitionTest.test-seed#[CC62AE27F57E4879]) [] 
o.a.s.c.AbstractZkTestCase put 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\test-files\solr\collection1\conf\old_synonyms.txt

[jira] [Updated] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-11-30 Thread Judith Silverman (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Judith Silverman updated SOLR-6203:
---
Attachment: SOLR-6203.patch

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 'http://localhost:8983/solr/admin/collections?action=CREATE=test=2=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; java.lang.ClassCastException: java.lang.Double cannot be cast to org.apache.lucene.util.BytesRef
> at org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-11-30 Thread Judith Silverman (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710884#comment-15710884 ]

Judith Silverman edited comment on SOLR-6203 at 12/1/16 5:30 AM:
-

Hi, in tonight's patch I finished uncommenting your new signatures and made the 
changes necessary to get the code to compile and the existing tests to pass.


was (Author: judith):
Hi, in tonight's patch I uncommented the last of your new signatures and made 
the changes necessary to get the code to compile and the existing tests to pass.




[jira] [Updated] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-11-30 Thread Judith Silverman (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Judith Silverman updated SOLR-6203:
---
Attachment: SOLR-6203.patch

Hi, in tonight's patch I uncommented the last of your new signatures and made 
the changes necessary to get the code to compile and the existing tests to pass.




Re: Welcome Ishan Chattopadhyaya as Lucene/Solr committer

2016-11-30 Thread Noble Paul
Congrats ishan. Long overdue

On Dec 1, 2016 3:14 AM, "Jan Høydahl"  wrote:

> Congrats Ishan, and welcome!
>
> --
> Jan Høydahl
>
> On 29 Nov 2016, at 18:17, Mark Miller wrote:
>
> I'm pleased to announce that Ishan Chattopadhyaya has accepted the PMC's
> invitation to become a committer.
>
> Ishan, it's tradition that you introduce yourself with a brief bio /
> origin story, explaining how you arrived here.
>
> Your handle "ishan" has already been added to the "lucene" LDAP group, so
> you now have commit privileges.
>
> Please celebrate this rite of passage, and confirm that the right
> karma has in fact been enabled, by embarking on the challenge of adding
> yourself to the committers section of the Who We Are page on the
> website: http://lucene.apache.org/whoweare.html (use the ASF CMS
> bookmarklet
> at the bottom of the page here: https://cms.apache.org/#bookmark -
> more info here http://www.apache.org/dev/cms.html).
>
> Congratulations and welcome!
> --
> - Mark
> about.me/markrmiller
>
>


[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 526 - Unstable!

2016-11-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/526/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
    at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
    at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
    at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
    at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
    at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
    at sun.reflect.GeneratedConstructorAccessor157.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:705)
    at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:767)
    at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1006)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:871)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:775)
    at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
    at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
    at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
    at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
    at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
    at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
    at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
    at sun.reflect.GeneratedConstructorAccessor157.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:705)
    at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:767)
    at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1006)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:871)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:775)
    at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
    at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

    at __randomizedtesting.SeedInfo.seed([4D3004017B7ACEC0]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.junit.Assert.assertNull(Assert.java:551)
    at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:260)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at 
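An ObjectTracker failure like this means a resource (here an HdfsTransactionLog) was constructed but never released before suite teardown. The bookkeeping behind it is simple to sketch (a hedged model, not Solr's actual ObjectReleaseTracker; the real one also reports the captured allocation stack, which is what appears in the error message above):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal model of track/release leak detection: every tracked object is
// recorded with an exception capturing its allocation site; release()
// removes it; anything left at teardown is reported as a leak.
public class SimpleReleaseTracker {
    private final Map<Object, Exception> open = new ConcurrentHashMap<>();

    public void track(Object resource) {
        open.put(resource, new Exception("allocation site"));  // stack captured here
    }

    public void release(Object resource) {
        open.remove(resource);
    }

    /** Number of tracked objects never released; non-zero fails the suite. */
    public int unreleasedCount() {
        return open.size();
    }
}
```

In SolrTestCaseJ4.teardownTestCases, a non-empty tracker turns into the assertNull failure seen in the trace.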

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 982 - Still Unstable!

2016-11-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/982/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.stats.TestDistribIDF

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.search.stats.TestDistribIDF:
   1) Thread[id=54257, name=OverseerHdfsCoreFailoverThread-97029437477158917-127.0.0.1:44203_solr-n_02, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
    at java.lang.Thread.sleep(Native Method)
    at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
    at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.search.stats.TestDistribIDF:
   1) Thread[id=54257, name=OverseerHdfsCoreFailoverThread-97029437477158917-127.0.0.1:44203_solr-n_02, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
    at java.lang.Thread.sleep(Native Method)
    at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
    at java.lang.Thread.run(Thread.java:745)
    at __randomizedtesting.SeedInfo.seed([D55A06E42A659E6F]:0)




Build Log:
[...truncated 12557 lines...]
   [junit4] Suite: org.apache.solr.search.stats.TestDistribIDF
   [junit4]   2> Creating dataDir: /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J1/temp/solr.search.stats.TestDistribIDF_D55A06E42A659E6F-001/init-core-data-001
   [junit4]   2> 2468325 INFO  (SUITE-TestDistribIDF-seed#[D55A06E42A659E6F]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 2468329 INFO  (TEST-TestDistribIDF.testMultiCollectionQuery-seed#[D55A06E42A659E6F]) [] o.a.s.SolrTestCaseJ4 ###Starting testMultiCollectionQuery
   [junit4]   2> 2468329 INFO  (TEST-TestDistribIDF.testMultiCollectionQuery-seed#[D55A06E42A659E6F]) [] o.a.s.c.MiniSolrCloudCluster Starting cluster of 3 servers in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J1/temp/solr.search.stats.TestDistribIDF_D55A06E42A659E6F-001/tempDir-001
   [junit4]   2> 2468329 INFO  (TEST-TestDistribIDF.testMultiCollectionQuery-seed#[D55A06E42A659E6F]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2468329 INFO  (Thread-12347) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2468329 INFO  (Thread-12347) [] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 2468429 INFO  (TEST-TestDistribIDF.testMultiCollectionQuery-seed#[D55A06E42A659E6F]) [] o.a.s.c.ZkTestServer start zk server on port:62536
   [junit4]   2> 2468449 INFO  (jetty-launcher-10582-thread-1) [] o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 2468449 INFO  (jetty-launcher-10582-thread-2) [] o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 2468449 INFO  (jetty-launcher-10582-thread-3) [] o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 2468451 INFO  (jetty-launcher-10582-thread-2) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@c276d45{/solr,null,AVAILABLE}
   [junit4]   2> 2468452 INFO  (jetty-launcher-10582-thread-3) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@4885c814{/solr,null,AVAILABLE}
   [junit4]   2> 2468452 INFO  (jetty-launcher-10582-thread-2) [] o.e.j.s.AbstractConnector Started ServerConnector@673ad272{HTTP/1.1,[http/1.1]}{127.0.0.1:60562}
   [junit4]   2> 2468452 INFO  (jetty-launcher-10582-thread-2) [] o.e.j.s.Server Started @2471965ms
   [junit4]   2> 2468452 INFO  (jetty-launcher-10582-thread-2) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=60562}
   [junit4]   2> 2468453 ERROR (jetty-launcher-10582-thread-2) [] o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 2468453 INFO  (jetty-launcher-10582-thread-3) [] o.e.j.s.AbstractConnector Started ServerConnector@4d384cd6{HTTP/1.1,[http/1.1]}{127.0.0.1:60141}
   [junit4]   2> 2468453 INFO  (jetty-launcher-10582-thread-3) [] o.e.j.s.Server Started @2471965ms
   [junit4]   2> 2468453 INFO  (jetty-launcher-10582-thread-2) [] o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 7.0.0
   [junit4]   2> 2468453 INFO  (jetty-launcher-10582-thread-3) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=60141}
   [junit4]   2> 2468453 INFO  (jetty-launcher-10582-thread-2) [] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 2468453 ERROR (jetty-launcher-10582-thread-3) [] 

[jira] [Comment Edited] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-11-30 Thread Judith Silverman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709543#comment-15709543
 ] 

Judith Silverman edited comment on SOLR-6203 at 12/1/16 12:33 AM:
--

Thanks for the update, Christine.  I am happy to pursue your incremental 
approach.  I have made a patch to your branch in which I uncommented a couple 
of your signature changes and modified calls to the affected methods to use the 
new signatures.  I also took this opportunity to  start standardizing field and 
method names related to the field variously known throughout the codebase as 
"sortWithinGroup" and "withinGroupSort".  The latter fits better with related 
field and method names, and since we are already deprecating 
GroupingSpecification's accessors for Sorts in favor of accessors of  
SortSpecs, this seems to me like a good time to make the change.  I renamed the 
new public accessors and also renamed private fields in all the files I was 
already modifying for this commit.  If you approve of this change, I will 
rename private fields in other files.  In the meantime, I will keep going in 
the direction you indicated.   
Thanks,
Judith 


was (Author: judith):
Thanks for the update, Christine.  I am happy to pursue your   
incremental approach.  I have made a patch to your branch in which I   
uncommented your signature changes and modified calls to the affected
methods to use the new signatures.  I also took this opportunity to  
start standardizing field and method names related to the field  
variously known throughout the codebase as "sortWithinGroup" and 
"withinGroupSort".  The latter fits better with related field and
method names, and since we are already deprecating 
GroupingSpecification's accessors for Sorts in favor of accessors of 
SortSpecs, this seems to me like a good time to make the change.  I   
renamed the new public accessors and also renamed private fields in 
all the files I was already modifying for this commit.  If you approve 
of this change, I will rename private fields in other files.  In the
meantime, I will start fleshing out utility functions as you indicated. 
  
Thanks,
Judith 

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> # Create sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> 

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+140) - Build # 2309 - Unstable!

2016-11-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2309/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at __randomizedtesting.SeedInfo.seed([9948F3F46B1CCD3E:6005605B576980B4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:284)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Resolved] (SOLR-7021) Leader will not publish core as active without recovering first, but never recovers

2016-11-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-7021.
--
Resolution: Cannot Reproduce

Looks to me like this has been fixed by other JIRAs according to the comments, 
so closing. We can open new ones for issues in 6x. 

> Leader will not publish core as active without recovering first, but never 
> recovers
> ---
>
> Key: SOLR-7021
> URL: https://issues.apache.org/jira/browse/SOLR-7021
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: James Hardwick
>Priority: Critical
>  Labels: recovery, solrcloud, zookeeper
>
> A little background: 1 core solr-cloud cluster across 3 nodes, each with its 
> own shard and each shard with a single replica hence each replica is itself a 
> leader. 
> For reasons we won't get into, we witnessed a shard go down in our cluster. 
> We restarted the cluster but our core/shards still did not come back up. 
> After inspecting the logs, we found this:
> {code}
> 2015-01-21 15:51:56,494 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
> - We are http://xxx.xxx.xxx.35:8081/solr/xyzcore/ and leader is 
> http://xxx.xxx.xxx.35:8081/solr/xyzcore/
> 2015-01-21 15:51:56,496 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
> - No LogReplay needed for core=xyzcore baseURL=http://xxx.xxx.xxx.35:8081/solr
> 2015-01-21 15:51:56,496 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
> - I am the leader, no recovery necessary
> 2015-01-21 15:51:56,496 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
> - publishing core=xyzcore state=active collection=xyzcore
> 2015-01-21 15:51:56,497 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
> - numShards not found on descriptor - reading it from system property
> 2015-01-21 15:51:56,498 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
> - publishing core=xyzcore state=down collection=xyzcore
> 2015-01-21 15:51:56,498 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
> - numShards not found on descriptor - reading it from system property
> 2015-01-21 15:51:56,501 [coreZkRegister-1-thread-2] ERROR core.ZkContainer  - 
> :org.apache.solr.common.SolrException: Cannot publish state of core 'xyzcore' 
> as active without recovering first!
>   at org.apache.solr.cloud.ZkController.publish(ZkController.java:1075)
> {code}
> And at this point the necessary shards never recover correctly and hence our 
> core never returns to a functional state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Julian Hyde (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710183#comment-15710183
 ] 

Julian Hyde commented on SOLR-8593:
---

Calcite's operators are logical. A 'Filter' operator might turn into operator 
instances running on multiple nodes or threads, each processing a partition of 
the data.
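The logical/physical split described above can be sketched in plain Python (purely illustrative, not Calcite's API): a single logical Filter in the plan fans out into one physical instance per data partition, and the union of the per-partition outputs is the same relation the logical operator describes.

```python
# Illustrative sketch (not Calcite code): one logical Filter, many
# physical filter instances, each scanning one partition of the data.

def logical_filter(predicate, partitions):
    # Plan-time view: a single Filter node over the whole relation.
    return [row for part in partitions for row in part if predicate(row)]

def physical_filter_instances(predicate, partitions):
    # Run-time view: one filter instance per partition (a node or a
    # thread); the per-partition results are unioned afterwards.
    outputs = [[row for row in part if predicate(row)] for part in partitions]
    return [row for out in outputs for row in out]

partitions = [[1, 5, 9], [2, 6], [3, 7, 8]]   # hypothetical partitioned data
keep = lambda x: x > 4
same = sorted(logical_filter(keep, partitions)) == sorted(
    physical_filter_instances(keep, partitions))
```

Ordering aside, both views produce the same relation, which is what lets the planner reason about a single logical operator while the runtime parallelizes it.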

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.





[jira] [Commented] (LUCENE-7542) Release smoker should fail when CHANGES.txt has a release section for a future release

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709952#comment-15709952
 ] 

ASF subversion and git services commented on LUCENE-7542:
-

Commit 071f554e8a9f269a701f926e6beeaffcd60b82fc in lucene-solr's branch 
refs/heads/branch_6_3 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=071f554 ]

LUCENE-7542: Remove debug printing of parsed versions


> Release smoker should fail when CHANGES.txt has a release section for a 
> future release
> --
>
> Key: LUCENE-7542
> URL: https://issues.apache.org/jira/browse/LUCENE-7542
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 6.3.1
>
> Attachments: LUCENE-7542.patch
>
>
> In the first 6.3.0 RC, Solr's CHANGES.txt had a section for 7.0.0.  
> smokeTestRelease.py should add a new check for future release sections and 
> fail if any are found.






[jira] [Commented] (LUCENE-7542) Release smoker should fail when CHANGES.txt has a release section for a future release

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709953#comment-15709953
 ] 

ASF subversion and git services commented on LUCENE-7542:
-

Commit f1e402e39175757295157ff647298069a96a0d3f in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f1e402e ]

LUCENE-7542: Remove debug printing of parsed versions


> Release smoker should fail when CHANGES.txt has a release section for a 
> future release
> --
>
> Key: LUCENE-7542
> URL: https://issues.apache.org/jira/browse/LUCENE-7542
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 6.3.1
>
> Attachments: LUCENE-7542.patch
>
>
> In the first 6.3.0 RC, Solr's CHANGES.txt had a section for 7.0.0.  
> smokeTestRelease.py should add a new check for future release sections and 
> fail if any are found.






[jira] [Commented] (LUCENE-7542) Release smoker should fail when CHANGES.txt has a release section for a future release

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709949#comment-15709949
 ] 

ASF subversion and git services commented on LUCENE-7542:
-

Commit 53ac93d2dedf2cab384d748c1e38f0360cf48470 in lucene-solr's branch 
refs/heads/branch_6_0 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=53ac93d ]

LUCENE-7542: Remove debug printing of parsed versions


> Release smoker should fail when CHANGES.txt has a release section for a 
> future release
> --
>
> Key: LUCENE-7542
> URL: https://issues.apache.org/jira/browse/LUCENE-7542
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 6.3.1
>
> Attachments: LUCENE-7542.patch
>
>
> In the first 6.3.0 RC, Solr's CHANGES.txt had a section for 7.0.0.  
> smokeTestRelease.py should add a new check for future release sections and 
> fail if any are found.






[jira] [Commented] (LUCENE-7542) Release smoker should fail when CHANGES.txt has a release section for a future release

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709955#comment-15709955
 ] 

ASF subversion and git services commented on LUCENE-7542:
-

Commit 98f75723f3bc6a718f1a7b47a50b820c4fb408f6 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=98f7572 ]

LUCENE-7542: Remove debug printing of parsed versions


> Release smoker should fail when CHANGES.txt has a release section for a 
> future release
> --
>
> Key: LUCENE-7542
> URL: https://issues.apache.org/jira/browse/LUCENE-7542
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 6.3.1
>
> Attachments: LUCENE-7542.patch
>
>
> In the first 6.3.0 RC, Solr's CHANGES.txt had a section for 7.0.0.  
> smokeTestRelease.py should add a new check for future release sections and 
> fail if any are found.






[jira] [Commented] (LUCENE-7542) Release smoker should fail when CHANGES.txt has a release section for a future release

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709948#comment-15709948
 ] 

ASF subversion and git services commented on LUCENE-7542:
-

Commit 8e84bcd0fb3223120d8d86a6428ffc4adf41d265 in lucene-solr's branch 
refs/heads/branch_5x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8e84bcd ]

LUCENE-7542: Remove debug printing of parsed versions


> Release smoker should fail when CHANGES.txt has a release section for a 
> future release
> --
>
> Key: LUCENE-7542
> URL: https://issues.apache.org/jira/browse/LUCENE-7542
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 6.3.1
>
> Attachments: LUCENE-7542.patch
>
>
> In the first 6.3.0 RC, Solr's CHANGES.txt had a section for 7.0.0.  
> smokeTestRelease.py should add a new check for future release sections and 
> fail if any are found.






[jira] [Commented] (LUCENE-7542) Release smoker should fail when CHANGES.txt has a release section for a future release

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709951#comment-15709951
 ] 

ASF subversion and git services commented on LUCENE-7542:
-

Commit a01504749fefd648e623b742483c175c9a57410e in lucene-solr's branch 
refs/heads/branch_6_2 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a015047 ]

LUCENE-7542: Remove debug printing of parsed versions


> Release smoker should fail when CHANGES.txt has a release section for a 
> future release
> --
>
> Key: LUCENE-7542
> URL: https://issues.apache.org/jira/browse/LUCENE-7542
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 6.3.1
>
> Attachments: LUCENE-7542.patch
>
>
> In the first 6.3.0 RC, Solr's CHANGES.txt had a section for 7.0.0.  
> smokeTestRelease.py should add a new check for future release sections and 
> fail if any are found.






[jira] [Commented] (LUCENE-7542) Release smoker should fail when CHANGES.txt has a release section for a future release

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709950#comment-15709950
 ] 

ASF subversion and git services commented on LUCENE-7542:
-

Commit 7e566ec5914f394caaba902c92b893d5090b9459 in lucene-solr's branch 
refs/heads/branch_6_1 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7e566ec ]

LUCENE-7542: Remove debug printing of parsed versions


> Release smoker should fail when CHANGES.txt has a release section for a 
> future release
> --
>
> Key: LUCENE-7542
> URL: https://issues.apache.org/jira/browse/LUCENE-7542
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 6.3.1
>
> Attachments: LUCENE-7542.patch
>
>
> In the first 6.3.0 RC, Solr's CHANGES.txt had a section for 7.0.0.  
> smokeTestRelease.py should add a new check for future release sections and 
> fail if any are found.






[jira] [Commented] (LUCENE-7542) Release smoker should fail when CHANGES.txt has a release section for a future release

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709947#comment-15709947
 ] 

ASF subversion and git services commented on LUCENE-7542:
-

Commit 4cdad182b8918fcc35d98601923425c122391f1d in lucene-solr's branch 
refs/heads/branch_5_5 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4cdad18 ]

LUCENE-7542: Remove debug printing of parsed versions


> Release smoker should fail when CHANGES.txt has a release section for a 
> future release
> --
>
> Key: LUCENE-7542
> URL: https://issues.apache.org/jira/browse/LUCENE-7542
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.0.2, 6.1.1, 5.6, 5.5.4, 6.2.2, 6.4, 6.3.1
>
> Attachments: LUCENE-7542.patch
>
>
> In the first 6.3.0 RC, Solr's CHANGES.txt had a section for 7.0.0.  
> smokeTestRelease.py should add a new check for future release sections and 
> fail if any are found.






[jira] [Closed] (SOLR-4924) indices getting out of sync with SolrCloud

2016-11-30 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-4924.
---
Resolution: Won't Fix

As Mark notes, a lot has changed here to make this less likely to still be a 
problem (or manifests very differently than described here).

> indices getting out of sync with SolrCloud
> --
>
> Key: SOLR-4924
> URL: https://issues.apache.org/jira/browse/SOLR-4924
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java), SolrCloud
>Affects Versions: 4.2
> Environment: Linux 2.6.18-308.16.1.el5 #1 SMP Tue Oct 2 22:01:43 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
> CentOS release 5.8 (Final)
> Solr 4.2.1
>Reporter: Ricardo Merizalde
>
> We are experiencing an issue in our production servers where the indices get 
> out of sync. Customers will see different results/result sorting depending of 
> the instance that serves the request.
> We currently have 2 instances with a single shard. This is our update handler 
> configuration
> 
>   
> 
> 60
> 
> 5000
> 
> false
>   
>   
> 
> 5000
>   
>   
> ${solr.data.dir:}
>   
> 
> When the indices get out of sync, the follower replica ends up with a higher 
> version than the master. Optimizing the leader or reloading the follower core 
> does not help. The only way to get the indices back in sync is to restart the 
> server.
> This is an example state of the leader:
> version: 1102541
> numDocs: 214007
> maxDoc: 370861
> deletedDocs: 156854 
> While the follower core has the following state:
> version: 1109143
> numDocs: 213890
> maxDoc: 341585
> deletedDocs: 127695 






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709887#comment-15709887
 ] 

Joel Bernstein commented on SOLR-8593:
--

There would only be an advantage when grouping on high-cardinality fields, for 
example a multi-dimension aggregate that produces millions of distinct 
aggregations. In this scenario we can push the HAVING expression to the worker 
nodes, so all the aggregation tuples don't need to be sent back to the 
SQLHandler. If the HAVING expression eliminates a significant number of tuples, 
we can eliminate a lot of network traffic and a bottleneck at the SQLHandler.
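The saving can be sketched in Python (hypothetical, not Solr code). The sketch assumes tuples are partitioned by the group-by key so each group is complete on a single worker (as Solr's parallel streams arrange via partitionKeys); under that assumption each worker can apply the HAVING predicate locally, and only the surviving (group, count) tuples cross the network.

```python
from collections import Counter

def route(rows, n_workers):
    # Partition tuples by the group key so each group lands wholly on
    # one worker (analogous to partitionKeys in a parallel stream).
    parts = [[] for _ in range(n_workers)]
    for r in rows:
        parts[hash(r["dim"]) % n_workers].append(r)
    return parts

def worker(rows, having):
    # Aggregate one partition, then apply HAVING locally: only the
    # surviving (group, count) tuples travel back to the SQLHandler.
    counts = Counter(r["dim"] for r in rows)
    return {g: c for g, c in counts.items() if having(c)}

rows = [{"dim": d} for d in "aabccadbe"]   # toy high-cardinality dimension
having = lambda c: c >= 2                  # e.g. HAVING COUNT(*) >= 2

merged = {}
for part in route(rows, 3):
    merged.update(worker(part, having))    # groups are disjoint across workers
```

Here the workers ship back only the groups that survive the predicate; without the pushdown, every distinct group's tuple would be sent to the SQLHandler before filtering.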



> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-11-30 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709882#comment-15709882
 ] 

Shikha Somani commented on SOLR-8297:
-

This patch has also been tested on 6.x and can be applied to 6.x.

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
> Attachments: SOLR-8297.patch
>
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: in my use case, I have a join on a facet.query, and when 
> my results are found in only one shard and the facet.query with the join is 
> querying the last replica of the last slice, the exception is not thrown.
> I believe it's better to verify the number of slices when checking for the 
> "multiple shards not yet supported" condition (so the exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size() > 1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, these assumptions seem to be a "normal" use case for 
> the cross-core join in SolrCloud.
> Hope this helps.






Re: Welcome Ishan Chattopadhyaya as Lucene/Solr committer

2016-11-30 Thread Jan Høydahl
Congrats Ishan, and welcome!

--
Jan Høydahl

> Den 29. nov. 2016 kl. 18.17 skrev Mark Miller :
> 
> I'm pleased to announce that Ishan Chattopadhyaya has accepted the PMC's
> invitation to become a committer.
> 
> Ishan, it's tradition that you introduce yourself with a brief bio /
> origin story, explaining how you arrived here.
> 
> Your handle "ishan" has already been added to the "lucene" LDAP group, so
> you now have commit privileges.
> 
> Please celebrate this rite of passage, and confirm that the right
> karma has in fact been enabled, by embarking on the challenge of adding
> yourself to the committers section of the Who We Are page on the
> website: http://lucene.apache.org/whoweare.html (use the ASF CMS
> bookmarklet
> at the bottom of the page here: https://cms.apache.org/#bookmark -
> more info here http://www.apache.org/dev/cms.html).
> 
> Congratulations and welcome!
> -- 
> - Mark 
> about.me/markrmiller


[jira] [Commented] (LUCENE-7578) UnifiedHighlighter: Convert PhraseHelper to use SpanCollector API

2016-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709804#comment-15709804
 ] 

David Smiley commented on LUCENE-7578:
--

Yes definitely; I didn't mention slop but we should only expose a virtual 
single position if slop is 0; perhaps configurable. Exposing a single virtual 
position seems like a separate issue too.

> UnifiedHighlighter: Convert PhraseHelper to use SpanCollector API
> -
>
> Key: LUCENE-7578
> URL: https://issues.apache.org/jira/browse/LUCENE-7578
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>
> The PhraseHelper of the UnifiedHighlighter currently collects position-spans 
> per SpanQuery (and it knows which terms are in which SpanQuery), and then it 
> filters PostingsEnum based on that.  It's similar to how the original 
> Highlighter WSTE works.  The main problem with this approach is that it can 
> be inaccurate for some nested span queries -- LUCENE-2287, LUCENE-5455 (has 
> the clearest example), LUCENE-6796.  Non-nested SpanQueries (e.g. that which 
> is converted from a PhraseQuery or MultiPhraseQuery) are _not_ a problem.






[jira] [Comment Edited] (LUCENE-7578) UnifiedHighlighter: Convert PhraseHelper to use SpanCollector API

2016-11-30 Thread Timothy M. Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709795#comment-15709795
 ] 

Timothy M. Rodriguez edited comment on LUCENE-7578 at 11/30/16 9:15 PM:


Some care would have to be taken with spans, especially with significant slop.  
It's arguably worse to have a single highlight across it.  But otherwise, this 
definitely is a desired improvement.


was (Author: timothy055):
Some care would have to be taken with spans, especially with significant slop.  
It's arguably worse to have a single highlight across it.

> UnifiedHighlighter: Convert PhraseHelper to use SpanCollector API
> -
>
> Key: LUCENE-7578
> URL: https://issues.apache.org/jira/browse/LUCENE-7578
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>
> The PhraseHelper of the UnifiedHighlighter currently collects position-spans 
> per SpanQuery (and it knows which terms are in which SpanQuery), and then it 
> filters PostingsEnum based on that.  It's similar to how the original 
> Highlighter WSTE works.  The main problem with this approach is that it can 
> be inaccurate for some nested span queries -- LUCENE-2287, LUCENE-5455 (has 
> the clearest example), LUCENE-6796.  Non-nested SpanQueries (e.g. that which 
> is converted from a PhraseQuery or MultiPhraseQuery) are _not_ a problem.






[jira] [Commented] (LUCENE-7578) UnifiedHighlighter: Convert PhraseHelper to use SpanCollector API

2016-11-30 Thread Timothy M. Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709795#comment-15709795
 ] 

Timothy M. Rodriguez commented on LUCENE-7578:
--

Some care would have to be taken with spans, especially with significant slop.  
It's arguably worse to have a single highlight across it.

> UnifiedHighlighter: Convert PhraseHelper to use SpanCollector API
> -
>
> Key: LUCENE-7578
> URL: https://issues.apache.org/jira/browse/LUCENE-7578
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>
> The PhraseHelper of the UnifiedHighlighter currently collects position-spans 
> per SpanQuery (and it knows which terms are in which SpanQuery), and then it 
> filters PostingsEnum based on that.  It's similar to how the original 
> Highlighter WSTE works.  The main problem with this approach is that it can 
> be inaccurate for some nested span queries -- LUCENE-2287, LUCENE-5455 (has 
> the clearest example), LUCENE-6796.  Non-nested SpanQueries (e.g. that which 
> is converted from a PhraseQuery or MultiPhraseQuery) are _not_ a problem.






[jira] [Updated] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8593:
-
Attachment: SOLR-8593.patch

New patch with the beginnings of a refactoring of SolrTable and the initial 
implementation of the HavingStream and its accompanying BooleanOperations.

This patch is just meant for review. I'll push changes out to the branch as it 
matures.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-9660) in GroupingSpecification factor [group](sort|offset|limit) into [group](sortSpec)

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709703#comment-15709703
 ] 

ASF subversion and git services commented on SOLR-9660:
---

Commit cf8d0e1ccbb06edc8830b7c270b90984c1e287af in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cf8d0e1 ]

SOLR-9660: in GroupingSpecification factor [group](sort|offset|limit) into 
[group](sortSpec) (Judith Silverman, Christine Poerschke)


> in GroupingSpecification factor [group](sort|offset|limit) into 
> [group](sortSpec)
> -
>
> Key: SOLR-9660
> URL: https://issues.apache.org/jira/browse/SOLR-9660
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9660.patch, SOLR-9660.patch, SOLR-9660.patch, 
> SOLR-9660.patch
>
>
> This is split out and adapted from and towards the SOLR-6203 changes.






[jira] [Commented] (LUCENE-7575) UnifiedHighlighter: add requireFieldMatch=false support

2016-11-30 Thread Timothy M. Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709621#comment-15709621
 ] 

Timothy M. Rodriguez commented on LUCENE-7575:
--

Looks good to me too.  Some additional suggestions:

UnifiedHighlighter:
  * +1 on the suggestion to use HighlightFlags instead.

PhraseHelper:
  * It's clearer in my opinion to change the boolean branch to something like 
{code}if (!requireFieldMatch) {} else {}{code} instead of checking 
{code}requireFieldMatch == false{code}.  Even better would be swapping the 
branches so it's {code}if (requireFieldMatch) {} else {}{code}
  * Similar point for line 287 {code} if (requireFieldMatch && 
fieldName.equals(queryTerm.field()) == false) {} {code}

TestUnifiedHighlighter:
  * I think it'd be clearer to separate the cases for 
term/phrase/multi-term queries into separate tests.  This makes it easier to 
chase bugs down the line if only one fails.  (And provides more information if 
all three fail.)
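The style point about the boolean branches can be shown with a tiny standalone 
sketch ({{describe}} and its arguments are hypothetical, not the actual 
PhraseHelper code):

```java
// Sketch of the suggested refactor: test the boolean positively and put the
// positive branch first, rather than comparing against false.
public class BranchStyleDemo {

    // Before: if (requireFieldMatch == false) { ... } else { ... }
    // After (suggested):
    static String describe(boolean requireFieldMatch, String queryField, String highlightField) {
        if (requireFieldMatch) {
            // only terms from the highlighted field participate
            return queryField.equals(highlightField) ? "match" : "skip";
        } else {
            // any field's terms may highlight
            return "match";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(true, "title", "body"));  // skip
        System.out.println(describe(false, "title", "body")); // match
    }
}
```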

> UnifiedHighlighter: add requireFieldMatch=false support
> ---
>
> Key: LUCENE-7575
> URL: https://issues.apache.org/jira/browse/LUCENE-7575
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: LUCENE-7575.patch
>
>
> The UnifiedHighlighter (like the PostingsHighlighter) only supports 
> highlighting queries for the same fields that are being highlighted.  The 
> original Highlighter and FVH support loosening this, AKA 
> requireFieldMatch=false.






[jira] [Commented] (LUCENE-7578) UnifiedHighlighter: Convert PhraseHelper to use SpanCollector API

2016-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709615#comment-15709615
 ] 

David Smiley commented on LUCENE-7578:
--

_disclaimer: I'm merely filing this issue at this time; no time to do it._

Perhaps a separate issue, or do it here as well if it would be overall less 
work than doing it separately: instead of PhraseHelper filtering a provided 
PostingsEnum, I think it should produce one OffsetsEnum per top-level 
SpanQuery.  A redesigned, half-rewritten PhraseHelper that uses the 
SpanCollector API could do this in the same amount of code, whereas trying to 
change the current design to do this would add a lot of complexity, I think.  
The outcome would improve passage relevancy for position-sensitive clauses.  It 
could be further tweaked so that _some_ SpanQueries (namely those converted 
from PhraseQuery) yield one virtual position (with the earliest startOffset and 
the last endOffset) instead of exposing each word position separately.  That 
would eliminate intra-phrase highlight delimiters, and it would probably 
indirectly improve passage relevancy too.  The reported freq() would be the 
smallest freq of the provided terms.  Also, moving to this design would 
eliminate the position-span caching going on in PhraseHelper.

> UnifiedHighlighter: Convert PhraseHelper to use SpanCollector API
> -
>
> Key: LUCENE-7578
> URL: https://issues.apache.org/jira/browse/LUCENE-7578
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>
> The PhraseHelper of the UnifiedHighlighter currently collects position-spans 
> per SpanQuery (and it knows which terms are in which SpanQuery), and then it 
> filters PostingsEnum based on that.  It's similar to how the original 
> Highlighter WSTE works.  The main problem with this approach is that it can 
> be inaccurate for some nested span queries -- LUCENE-2287, LUCENE-5455 (has 
> the clearest example), LUCENE-6796.  Non-nested SpanQueries (e.g. that which 
> is converted from a PhraseQuery or MultiPhraseQuery) are _not_ a problem.






[jira] [Created] (LUCENE-7578) UnifiedHighlighter: Convert PhraseHelper to use SpanCollector API

2016-11-30 Thread David Smiley (JIRA)
David Smiley created LUCENE-7578:


 Summary: UnifiedHighlighter: Convert PhraseHelper to use 
SpanCollector API
 Key: LUCENE-7578
 URL: https://issues.apache.org/jira/browse/LUCENE-7578
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/highlighter
Reporter: David Smiley


The PhraseHelper of the UnifiedHighlighter currently collects position-spans 
per SpanQuery (and it knows which terms are in which SpanQuery), and then it 
filters PostingsEnum based on that.  It's similar to how the original 
Highlighter WSTE works.  The main problem with this approach is that it can be 
inaccurate for some nested span queries -- LUCENE-2287, LUCENE-5455 (has the 
clearest example), LUCENE-6796.  Non-nested SpanQueries (e.g. that which is 
converted from a PhraseQuery or MultiPhraseQuery) are _not_ a problem.






[jira] [Resolved] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-11-30 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved SOLR-9616.

   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

Thanks [~timo.schmidt]!

> Solr throws exception when expand=true on empty result
> --
>
> Key: SOLR-9616
> URL: https://issues.apache.org/jira/browse/SOLR-9616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.2.1, 6.3
>Reporter: Timo Hund
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
> Fix For: master (7.0), 6.4
>
>
> When I run a query with expand=true and field collapsing, and the result set 
> is empty, an exception is thrown:
> solr:8984/solr/core_en/select?={!collapse 
> field=pid}=true=10
> Produces:
>   "error":{
> "msg":"Index: 0, Size: 0",
> "trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
> java.util.ArrayList.get(ArrayList.java:429)\n\tat 
> java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code":500}}
> Instead I would expect an empty result. 
> Is this a bug?






[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709559#comment-15709559
 ] 

ASF subversion and git services commented on SOLR-4735:
---

Commit f489bb8566985174111d4e91df2d6ec03ffcb01e in lucene-solr's branch 
refs/heads/feature/metrics from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f489bb8 ]

SOLR-4735 This method may actually remove several metrics.


> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.






[jira] [Commented] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709552#comment-15709552
 ] 

ASF subversion and git services commented on SOLR-9616:
---

Commit 0c3fb754454d5bb43c4511a68ae4d362c9fb40bf in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c3fb75 ]

SOLR-9616 Solr throws exception when expand=true on empty index


> Solr throws exception when expand=true on empty result
> --
>
> Key: SOLR-9616
> URL: https://issues.apache.org/jira/browse/SOLR-9616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.2.1, 6.3
>Reporter: Timo Hund
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
>
> When I run a query with expand=true and field collapsing, and the result set 
> is empty, an exception is thrown:
> solr:8984/solr/core_en/select?={!collapse 
> field=pid}=true=10
> Produces:
>   "error":{
> "msg":"Index: 0, Size: 0",
> "trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
> java.util.ArrayList.get(ArrayList.java:429)\n\tat 
> java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code":500}}
> Instead I would expect an empty result. 
> Is this a bug?






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709550#comment-15709550
 ] 

Kevin Risden commented on SOLR-8593:


[~joel.bernstein] - Not sure I understand the need for a having stream...

To quote [~julianhyde]:
{quote}
In fact, Calcite will convert query into a Scan -> Filter -> Aggregate -> 
Filter -> Project logical plan (the first Filter is the WHERE clause, the 
second Filter is the HAVING clause), 
{quote}

Since a having clause is really just a filter on an aggregate, I'm not sure 
what we could really gain from pushing it down much further. The 
Avatica/Calcite JDBC implementation supports the having clause if we don't 
optimize for it.
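The quoted plan shape — HAVING as a second Filter on top of the Aggregate — 
can be mimicked with a purely illustrative sketch (nothing here is Calcite or 
Solr code):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: a HAVING clause is just a filter applied after
// aggregation, mirroring the Scan -> Filter -> Aggregate -> Filter -> Project
// logical plan quoted above (the second Filter is the HAVING clause).
public class HavingAsFilterDemo {

    // GROUP BY value, COUNT(*) ... HAVING COUNT(*) >= minCount
    static Map<String, Long> groupCountHaving(List<String> rows, long minCount) {
        return rows.stream()
                .collect(Collectors.groupingBy(v -> v, Collectors.counting())) // Aggregate
                .entrySet().stream()
                .filter(e -> e.getValue() >= minCount)                         // HAVING filter
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        List<String> rows = List.of("a", "b", "a", "c", "a", "b");
        System.out.println(groupCountHaving(rows, 2)); // {a=3, b=2} (map order may vary)
    }
}
```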

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Updated] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-11-30 Thread Judith Silverman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Judith Silverman updated SOLR-6203:
---
Attachment: SOLR-6203.patch

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> # Create sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE=test=2=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.






[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-11-30 Thread Judith Silverman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709543#comment-15709543
 ] 

Judith Silverman commented on SOLR-6203:


Thanks for the update, Christine.  I am happy to pursue your   
incremental approach.  I have made a patch to your branch in which I   
uncommented your signature changes and modified calls to the affected
methods to use the new signatures.  I also took this opportunity to  
start standardizing field and method names related to the field  
variously known throughout the codebase as "sortWithinGroup" and 
"withinGroupSort".  The latter fits better with related field and
method names, and since we are already deprecating 
GroupingSpecification's accessors for Sorts in favor of accessors of 
SortSpecs, this seems to me like a good time to make the change.  I   
renamed the new public accessors and also renamed private fields in 
all the files I was already modifying for this commit.  If you approve 
of this change, I will rename private fields in other files.  In the
meantime, I will start fleshing out utility functions as you indicated. 
  
Thanks,
Judith 

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.






[jira] [Commented] (SOLR-7282) Cache config or index schema objects by configset and share them across cores

2016-11-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709534#comment-15709534
 ] 

Scott Blum commented on SOLR-7282:
--

Thanks Kevin, we'll use that.

> Cache config or index schema objects by configset and share them across cores
> -
>
> Key: SOLR-7282
> URL: https://issues.apache.org/jira/browse/SOLR-7282
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7282.patch
>
>
> Sharing schema and config objects has been known to improve startup 
> performance when a large number of cores are on the same box (See 
> http://wiki.apache.org/solr/LotsOfCores). Damien also saw improvements to 
> cluster startup speed upon caching the index schema in SOLR-7191.
> Now that SolrCloud configuration is based on config sets in ZK, we should 
> explore how we can minimize config/schema parsing for each core in a way that 
> is compatible with the recent/planned changes in the config and schema APIs.






[jira] [Commented] (SOLR-7282) Cache config or index schema objects by configset and share them across cores

2016-11-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709533#comment-15709533
 ] 

Scott Blum commented on SOLR-7282:
--

OH, it was just added in 6.2

https://issues.apache.org/jira/browse/SOLR-9216

> Cache config or index schema objects by configset and share them across cores
> -
>
> Key: SOLR-7282
> URL: https://issues.apache.org/jira/browse/SOLR-7282
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7282.patch
>
>
> Sharing schema and config objects has been known to improve startup 
> performance when a large number of cores are on the same box (See 
> http://wiki.apache.org/solr/LotsOfCores). Damien also saw improvements to 
> cluster startup speed upon caching the index schema in SOLR-7191.
> Now that SolrCloud configuration is based on config sets in ZK, we should 
> explore how we can minimize config/schema parsing for each core in a way that 
> is compatible with the recent/planned changes in the config and schema APIs.






[jira] [Commented] (LUCENE-7575) UnifiedHighlighter: add requireFieldMatch=false support

2016-11-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709524#comment-15709524
 ] 

David Smiley commented on LUCENE-7575:
--

Thanks for this contribution Jim! You're the first one to improve the UH 
outside of those who first worked on it.

Overall patch looks pretty good.

UnifiedHighlighter:
* What do you think of adding this to the HighlightFlags instead?  It's 
intended to be a single point to capture various boolean options.  As an aside, 
I'm kinda wondering if the default* and DEFAULT* boolean fields shouldn't exist 
and instead simply have a highlightFlags enumSet field.
* I think the results of filterExtractedTerms might now contain duplicated 
terms (BytesRefs)?  (see my note later about testing the same term and varying 
the field). We could simply collect those bytes into a HashSet, then extract to 
an array and then sort.

PhraseHelper:
* You applied SingleFieldFilterLeafReader at the top of getTermToSpans but I 
think this should be done by the caller so it happens just once, not per 
SpanQuery.
* FieldRewritingTermHashSet is so close to the other one... hmm, what if 
we had just one, removed "static" from the class (thus giving it access to fieldName & 
requireFieldMatch), and then implemented add() appropriately?

Tests:
* you used the same test input string for both the "field" and 
"field_require_field_match" fields. To make this clearer, can you vary them, 
even if only a little?
* in no test queries do I see the same term BytesRef across more than one 
field.  For example, maybe add a test incorporating something like {{field:test 
OR field_require_field_match:test}} -- granted the results might not be 
interesting but let's hope it doesn't puke.  Do the same for phrase as well.

I agree this requireFieldMatch=false should not be the default. It'll add some 
overhead -- especially for phrase and other position sensitive queries since we 
aren't de-duplicating them.  Besides, it's more accurate as-is.

As an aside... it'd be interesting if, instead of a simple boolean toggle, it 
were a {{Predicate<String>}} fieldMatchPredicate, so that only some fields in the 
query could be collected but not all.  Just an idea.
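The HashSet-then-sort de-duplication suggested above for filterExtractedTerms can be sketched as follows. This is an illustrative stand-alone snippet, not code from the patch: Strings stand in for org.apache.lucene.util.BytesRef (which is likewise Comparable), and dedupAndSort is a hypothetical helper name.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the suggested fix: collect extracted terms into a HashSet to
// drop duplicates, then extract to an array and sort, preserving the
// sorted-terms invariant callers expect.
public class DedupTerms {
    static String[] dedupAndSort(List<String> extracted) {
        Set<String> unique = new HashSet<>(extracted); // drops duplicate terms
        String[] terms = unique.toArray(new String[0]);
        Arrays.sort(terms);                            // restore sorted order
        return terms;
    }

    public static void main(String[] args) {
        // "test" is extracted twice when it appears under two fields
        System.out.println(Arrays.toString(
                dedupAndSort(Arrays.asList("test", "foo", "test"))));
        // prints [foo, test]
    }
}
```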

> UnifiedHighlighter: add requireFieldMatch=false support
> ---
>
> Key: LUCENE-7575
> URL: https://issues.apache.org/jira/browse/LUCENE-7575
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: LUCENE-7575.patch
>
>
> The UnifiedHighlighter (like the PostingsHighlighter) only supports 
> highlighting queries for the same fields that are being highlighted.  The 
> original Highlighter and FVH support loosening this, AKA 
> requireFieldMatch=false.






[jira] [Commented] (SOLR-7282) Cache config or index schema objects by configset and share them across cores

2016-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709514#comment-15709514
 ] 

Shawn Heisey commented on SOLR-7282:


The way that I was always aware of for changing this in cloud mode is zkcli, 
using the linkconfig command.  Followed of course by a collection reload.

I wasn't even really aware of MODIFYCOLLECTION until just now ... the docs do 
say that you can update collection.configName.  SOLR-5132 (which implemented 
MODIFYCOLLECTION) says that the intent was to automatically trigger a reload in the 
event the configname was modified.  I don't know if the automatic reload was 
implemented.

https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-modifycoll


> Cache config or index schema objects by configset and share them across cores
> -
>
> Key: SOLR-7282
> URL: https://issues.apache.org/jira/browse/SOLR-7282
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7282.patch
>
>
> Sharing schema and config objects has been known to improve startup 
> performance when a large number of cores are on the same box (See 
> http://wiki.apache.org/solr/LotsOfCores). Damien also saw improvements to 
> cluster startup speed upon caching the index schema in SOLR-7191.
> Now that SolrCloud configuration is based on config sets in ZK, we should 
> explore how we can minimize config/schema parsing for each core in a way that 
> is compatible with the recent/planned changes in the config and schema APIs.






[jira] [Commented] (SOLR-7282) Cache config or index schema objects by configset and share them across cores

2016-11-30 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709484#comment-15709484
 ] 

Kevin Risden commented on SOLR-7282:


[~dragonsinth] - Not sure if it's in the COLLECTIONS API, but there is a 
linkconfig operation on the zkcli.sh script. Look under "Link a collection to a 
configuration set" on this page: 
https://cwiki.apache.org/confluence/display/solr/Command+Line+Utilities

> Cache config or index schema objects by configset and share them across cores
> -
>
> Key: SOLR-7282
> URL: https://issues.apache.org/jira/browse/SOLR-7282
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7282.patch
>
>
> Sharing schema and config objects has been known to improve startup 
> performance when a large number of cores are on the same box (See 
> http://wiki.apache.org/solr/LotsOfCores). Damien also saw improvements to 
> cluster startup speed upon caching the index schema in SOLR-7191.
> Now that SolrCloud configuration is based on config sets in ZK, we should 
> explore how we can minimize config/schema parsing for each core in a way that 
> is compatible with the recent/planned changes in the config and schema APIs.






[jira] [Commented] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709476#comment-15709476
 ] 

ASF subversion and git services commented on SOLR-9616:
---

Commit e64bcb37ffe9ccbe1c88cb451ff147de774aec8e in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e64bcb3 ]

SOLR-9616 Solr throws exception when expand=true on empty index


> Solr throws exception when expand=true on empty result
> --
>
> Key: SOLR-9616
> URL: https://issues.apache.org/jira/browse/SOLR-9616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.2.1, 6.3
>Reporter: Timo Hund
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
>
> When i run a query with expand=true with field collapsing and the result set 
> is empty an exception is thrown:
> solr:8984/solr/core_en/select?fq={!collapse 
> field=pid}&expand=true&rows=10
> Produces:
>   "error":{
> "msg":"Index: 0, Size: 0",
> "trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
> java.util.ArrayList.get(ArrayList.java:429)\n\tat 
> java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code":500}}
> Instead i would assume to get an empty result. 
> Is this a bug?
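The failure above comes down to calling get(0) on an empty list; a guard along the following lines avoids it. This is a stand-alone sketch, not ExpandComponent's actual code or the committed fix; names are illustrative.

```java
import java.util.Collections;
import java.util.List;

// Sketch of the guard: return an empty expanded section instead of
// indexing into an empty collapsed result (which throws
// IndexOutOfBoundsException, as in the trace above).
public class ExpandGuard {
    static List<String> expandedGroups(List<String> collapsedGroups) {
        if (collapsedGroups.isEmpty()) {
            return Collections.emptyList(); // empty result: nothing to expand
        }
        return collapsedGroups.subList(0, 1); // placeholder for real expansion
    }

    public static void main(String[] args) {
        System.out.println(expandedGroups(Collections.emptyList())); // prints []
    }
}
```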






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-11-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709464#comment-15709464
 ] 

Scott Blum commented on SOLR-9811:
--

Yeah I'll give that a shot next time, didn't know it existed before today.

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-11-30 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709458#comment-15709458
 ] 

Ishan Chattopadhyaya commented on SOLR-9811:


[~dragonsinth], I think FORCELEADER is supposed to recover from situations 
where a shard has lost a leader and a new leader is not elected due to some 
race condition. To fix a DOWN replica and bring to ACTIVE, there is 
REQUESTRECOVERY; can you try that to see if it fixes the replica?

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-11-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709449#comment-15709449
 ] 

Scott Blum commented on SOLR-9811:
--

Seems fine to me.  I was mostly posting what I'd done for [~mdrob] who needs to 
do something similar.  I've tried FORCELEADER a few times but for me it never 
puts a replica erroneously marked as DOWN into an ACTIVE state.

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Commented] (SOLR-7282) Cache config or index schema objects by configset and share them across cores

2016-11-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709440#comment-15709440
 ] 

Scott Blum commented on SOLR-7282:
--

Erick, apologies, I'm sick so I'm not explaining this well.

I'm not talking about doing anything contrary to what the user asked for.  I am 
only talking about a memory-saving implementation detail.  If the internal 
object representing a configset is in fact immutable and sharable, then there 
is no user-facing difference as to whether two configsets with identical 
content are internally represented by the same immutable sharable object or two 
different but indistinguishable identical objects.
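That implementation detail can be sketched as a simple content-based interner; the names here are illustrative (a String stands in for a parsed, immutable config object), not Solr's actual classes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative interner: configsets with identical content map to the same
// immutable shared instance, so 4000 identical configsets cost one object.
public class ConfigSetInterner {
    private final Map<String, String> byContent = new ConcurrentHashMap<>();

    // Returns one canonical instance per distinct content value.
    String intern(String parsedConfig) {
        return byContent.computeIfAbsent(parsedConfig, c -> c);
    }

    public static void main(String[] args) {
        ConfigSetInterner interner = new ConfigSetInterner();
        String a = interner.intern(new String("<schema/>"));
        String b = interner.intern(new String("<schema/>"));
        System.out.println(a == b); // prints true: same shared instance
    }
}
```

Since the cached object is immutable, callers cannot observe whether they received a private copy or the shared one, which is exactly the point being made above.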

To answer your other questions: we're not on an old Solr, we're in SolrCloud.  
core.properties doesn't apply; the config is in ZK.  The problem is we have one 
configset per collection.  4000 collections, 4000 configsets, all content 
identical.  Does MODIFYCOLLECTION actually allow you to change the configset a 
particular collection points to?  I swear last time I looked at that doc, 
collection.configName wasn't in the list of things you could mutate.

> Cache config or index schema objects by configset and share them across cores
> -
>
> Key: SOLR-7282
> URL: https://issues.apache.org/jira/browse/SOLR-7282
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7282.patch
>
>
> Sharing schema and config objects has been known to improve startup 
> performance when a large number of cores are on the same box (See 
> http://wiki.apache.org/solr/LotsOfCores). Damien also saw improvements to 
> cluster startup speed upon caching the index schema in SOLR-7191.
> Now that SolrCloud configuration is based on config sets in ZK, we should 
> explore how we can minimize config/schema parsing for each core in a way that 
> is compatible with the recent/planned changes in the config and schema APIs.






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-11-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709398#comment-15709398
 ] 

Mark Miller commented on SOLR-9811:
---

[~dragonsinth], I'm not saying there aren't cases where trying something drastic 
will help in an emergency - I'm saying those types of efforts should be put 
into the forceLeader API that we already have. That command itself really 
should have been named something more like "try to make things work; don't 
worry if data might be lost".

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Commented] (SOLR-7282) Cache config or index schema objects by configset and share them across cores

2016-11-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709391#comment-15709391
 ] 

Erick Erickson commented on SOLR-7282:
--

Scott:

My concern has nothing to do with technical capabilities/correctness and 
everything to do with surprising the users and, as an aside, introducing the 
possibility of error. Introducing such code to support an old version of 
Solr is even less appealing; this sounds like a local patch if you think it's 
worth it. And I'm a little puzzled. You mention "collections", which have 
always had the notion of configsets. Or are you dealing with stand-alone?

If you're really thinking cores and working with stand-alone, if/when you 
upgrade to a Solr that _does_ respect configsets, you should be able to change 
your core.properties files and add the configSet parameter, see: 
https://cwiki.apache.org/confluence/display/solr/Defining+core.properties and 
point them all at the same configset. Then the normal processing we're talking 
about here should work without deciding to reuse based on hashing rather than 
the user's express intent. Changing 4,000 properties files, while inelegant, 
seems like a _lot_ less work than coding/debugging/maintaining some kind of 
"you said to use X but we're ignoring that because we know better".
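Such a core.properties might look like the fragment below (property names follow the referenced "Defining core.properties" page; the core and configset names are hypothetical):

```properties
# core.properties for each of the 4,000 cores, all pointing at one configset
name=collection1_shard1_replica1
configSet=shared_configset
```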

Reusing the same internal object based on identical specifications (i.e. the 
user named a particular configset) seems like a fine idea. Doing something 
other than what the user specified because we think we know better to support 
an edge case that there should be other ways of addressing seems unnecessary.

IMO of course.



> Cache config or index schema objects by configset and share them across cores
> -
>
> Key: SOLR-7282
> URL: https://issues.apache.org/jira/browse/SOLR-7282
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7282.patch
>
>
> Sharing schema and config objects has been known to improve startup 
> performance when a large number of cores are on the same box (See 
> http://wiki.apache.org/solr/LotsOfCores). Damien also saw improvements to 
> cluster startup speed upon caching the index schema in SOLR-7191.
> Now that SolrCloud configuration is based on config sets in ZK, we should 
> explore how we can minimize config/schema parsing for each core in a way that 
> is compatible with the recent/planned changes in the config and schema APIs.






[jira] [Assigned] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-11-30 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-9616:
--

Assignee: Ishan Chattopadhyaya

> Solr throws exception when expand=true on empty result
> --
>
> Key: SOLR-9616
> URL: https://issues.apache.org/jira/browse/SOLR-9616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.2.1, 6.3
>Reporter: Timo Hund
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
>
> When I run a query with expand=true and field collapsing, and the result set 
> is empty, an exception is thrown:
> solr:8984/solr/core_en/select?={!collapse 
> field=pid}=true=10
> Produces:
>   "error":{
> "msg":"Index: 0, Size: 0",
> "trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
> java.util.ArrayList.get(ArrayList.java:429)\n\tat 
> java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code":500}}
> I would expect an empty result instead. 
> Is this a bug?
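The trace above points at an unguarded `get(0)` on an empty (unmodifiable) list. A minimal sketch of that failure mode, and the kind of early-return guard a fix would add, follows; the names (`EmptyResultGuard`, `expand`) are hypothetical, not Solr's actual code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class EmptyResultGuard {
    // Sketch of the failure: get(0) on an empty unmodifiable ArrayList throws
    // IndexOutOfBoundsException ("Index: 0, Size: 0"), matching the trace.
    public static boolean throwsOnEmpty() {
        List<Integer> docs = Collections.unmodifiableList(new ArrayList<>());
        try {
            docs.get(0);
            return false;
        } catch (IndexOutOfBoundsException e) {
            return true;
        }
    }

    // The kind of guard a fix would add: skip expansion entirely when the
    // collapsed result set is empty instead of indexing into it.
    public static List<Integer> expand(List<Integer> collapsedDocs) {
        if (collapsedDocs.isEmpty()) {
            return Collections.emptyList(); // empty result, no exception
        }
        // ... expansion logic would start from collapsedDocs.get(0) ...
        return collapsedDocs;
    }
}
```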






[jira] [Commented] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-11-30 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709390#comment-15709390
 ] 

Ishan Chattopadhyaya commented on SOLR-9616:


[~hossman], I can take a look.

> Solr throws exception when expand=true on empty result
> --
>
> Key: SOLR-9616
> URL: https://issues.apache.org/jira/browse/SOLR-9616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.2.1, 6.3
>Reporter: Timo Hund
>Priority: Critical
>
> When I run a query with expand=true and field collapsing, and the result set 
> is empty, an exception is thrown:
> solr:8984/solr/core_en/select?={!collapse 
> field=pid}=true=10
> Produces:
>   "error":{
> "msg":"Index: 0, Size: 0",
> "trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
> java.util.ArrayList.get(ArrayList.java:429)\n\tat 
> java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code":500}}
> I would expect an empty result instead. 
> Is this a bug?






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-11-30 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709384#comment-15709384
 ] 

Shalin Shekhar Mangar commented on SOLR-9811:
-

It is an internal API, so it is not publicly documented, but it is useful to 
know about when fixing misbehaved clusters.

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Commented] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-11-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709359#comment-15709359
 ] 

Hoss Man commented on SOLR-9616:


I'm not in a position to apply/test the patch right now, but a quick read looks 
straightforward and the test seems solid: +1 from me.

> Solr throws exception when expand=true on empty result
> --
>
> Key: SOLR-9616
> URL: https://issues.apache.org/jira/browse/SOLR-9616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.2.1, 6.3
>Reporter: Timo Hund
>Priority: Critical
>
> When I run a query with expand=true and field collapsing, and the result set 
> is empty, an exception is thrown:
> solr:8984/solr/core_en/select?={!collapse 
> field=pid}=true=10
> Produces:
>   "error":{
> "msg":"Index: 0, Size: 0",
> "trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
> java.util.ArrayList.get(ArrayList.java:429)\n\tat 
> java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code":500}}
> I would expect an empty result instead. 
> Is this a bug?






[jira] [Commented] (SOLR-9817) Make Solr server startup directory configurable

2016-11-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709283#comment-15709283
 ] 

Shawn Heisey commented on SOLR-9817:


Why do you want to do this?  I'm not being facetious; I'd really like to know.  
IMHO, directory locations are the kind of thing that we (on the development 
side) must be able to rely on NOT to change.  Support becomes extremely 
difficult if we cannot be sure of the relative location of pretty much 
everything other than the solr home or the core root directory.


> Make Solr server startup directory configurable
> ---
>
> Key: SOLR-9817
> URL: https://issues.apache.org/jira/browse/SOLR-9817
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Priority: Minor
>
> The solr startup script (bin/solr) is hardcoded to use the 
> /server directory as the working directory during the 
> startup. 
> https://github.com/apache/lucene-solr/blob/9eaea79f5c89094c08f52245b9473ca14f368f57/solr/bin/solr#L1652
> This jira is to make the "current working directory" for Solr configurable.






[jira] [Assigned] (LUCENE-7575) UnifiedHighlighter: add requireFieldMatch=false support

2016-11-30 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned LUCENE-7575:


Assignee: David Smiley

> UnifiedHighlighter: add requireFieldMatch=false support
> ---
>
> Key: LUCENE-7575
> URL: https://issues.apache.org/jira/browse/LUCENE-7575
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: LUCENE-7575.patch
>
>
> The UnifiedHighlighter (like the PostingsHighlighter) only supports 
> highlighting queries for the same fields that are being highlighted.  The 
> original Highlighter and FVH support loosening this, AKA 
> requireFieldMatch=false.






[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 196 - Failure

2016-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/196/

No tests ran.

Build Log:
[...truncated 40550 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (16.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker] 6.4.0: ('6', '4', '0')
   [smoker] 6.4.0: ('6', '4', '0')
   [smoker] 6.3.0  [2016-11-08]: ('6', '3', '0')
   [smoker] 6.2.1  [2016-09-20]: ('6', '2', '1')
   [smoker] 6.2.0  [2016-08-25]: ('6', '2', '0')
   [smoker] 6.1.0  [2016-06-17]: ('6', '1', '0')
   [smoker] 6.0.1  [2016-05-28]: ('6', '0', '1')
   [smoker] 6.0.0  [2016-04-08]: ('6', '0', '0')
   [smoker] 5.5.3  [2016-09-09]: ('5', '5', '3')
   [smoker] 5.5.2  [2016-06-25]: ('5', '5', '2')
   [smoker] 5.5.1  [2016-05-05]: ('5', '5', '1')
   [smoker] 5.5.0  [2016-02-22]: ('5', '5', '0')
   [smoker] 5.4.1  [2016-01-23]: ('5', '4', '1')
   [smoker] 5.4.0  [2015-12-14]: ('5', '4', '0')
   [smoker] 5.3.2  [2016-01-23]: ('5', '3', '2')
   [smoker] 5.3.1  [2015-09-24]: ('5', '3', '1')
   [smoker] 5.3.0  [2015-08-21]: ('5', '3', '0')
   [smoker] 5.2.1  [2015-06-15]: ('5', '2', '1')
   [smoker] 5.2.0  [2015-06-07]: ('5', '2', '0')
   [smoker] 5.1.0  [2015-04-14]: ('5', '1', '0')
   [smoker] 5.0.0  [2015-02-20]: ('5', '0', '0')
   [smoker] 4.10.4  [2015-03-03]: ('4', '10', '4')
   [smoker] 4.10.3  [2014-12-29]: ('4', '10', '3')
   [smoker] 4.10.2  [2014-10-31]: ('4', '10', '2')
   [smoker] 4.10.1  [2014-09-29]: ('4', '10', '1')
   [smoker] 4.10.0  [2014-09-03]: ('4', '10', '0')
   [smoker] 4.9.1  [2014-09-22]: ('4', '9', '1')
   [smoker] 4.9.0  [2014-06-25]: ('4', '9', '0')
   [smoker] 4.8.1  [2014-05-20]: ('4', '8', '1')
   [smoker] 4.8.0  [2014-04-28]: ('4', '8', '0')
   [smoker] 4.7.2  [2014-04-15]: ('4', '7', '2')
   [smoker] 4.7.1  [2014-04-02]: ('4', '7', '1')
   [smoker] 4.7.0  [2014-02-26]: ('4', '7', '0')
   [smoker] 4.6.1  [2014-01-28]: ('4', '6', '1')
   [smoker] 4.6.0  [2013-11-22]: ('4', '6', '0')
   [smoker] 4.5.1  [2013-10-24]: ('4', '5', '1')
   [smoker] 4.5.0  [2013-10-05]: ('4', '5', '0')
   [smoker] 4.4.0  [2013-07-23]: ('4', '4', '0')
   [smoker] 4.3.1  [2013-06-18]: ('4', '3', '1')
   [smoker] 4.3.0  [2013-05-06]: ('4', '3', '0')
   [smoker] 4.2.1  [2013-04-03]: ('4', '2', '1')
   [smoker] 4.2.0  [2013-03-11]: ('4', '2', '0')
   [smoker] 4.1.0  [2013-01-22]: ('4', '1', '0')
   [smoker] 4.0.0  [2012-10-12]: ('4', '0', '0')
   [smoker] 4.0.0-BETA  [2012-08-13]: ('4', '0', '0', '1')
   [smoker] 4.0.0-ALPHA  [2012-07-03]: ('4', '0', '0', '0')
   [smoker] 3.6.2  [2012-12-25]: ('3', '6', '2')
   [smoker] 3.6.1  [2012-07-22]: ('3', '6', '1')
   [smoker] 3.6.0  [2012-04-12]: ('3', '6', '0')
   [smoker] 3.5.0  [2011-11-11]: ('3', '5', '0')
   [smoker] 3.4.0  [2011-09-15]: ('3', '4', '0')
   [smoker] 3.3.0  [2011-07-10]: ('3', '3', '0')
   [smoker] 3.2.0  [2011-06-03]: ('3', '2', '0')
   [smoker] 3.1.0  [2011-03-31]: ('3', '1', '0')
   [smoker] 2.9.4 / 3.0.3 [2010-12-03]: ('2', '9', '4')
   [smoker] 2.9.3 / 3.0.2 [2010-06-18]: ('2', '9', '3')
   [smoker] 2.9.2 / 3.0.1 [2010-02-26]: ('2', '9', '2')
   [smoker] 3.0.0  [2009-11-25]: ('3', '0', '0')
   [smoker] 2.9.1  [2009-11-06]: ('2', '9', '1')
   [smoker] 2.9.0  [2009-09-25]: ('2', '9', '0')
   [smoker] 2.4.1  [2009-03-09]: ('2', '4', '1')
   [smoker] 2.4.0  [2008-10-08]: ('2', '4', '0')
   [smoker] 2.3.2  [2008-05-06]: ('2', '3', '2')
   [smoker] 2.3.1  [2008-02-22]: ('2', '3', '1')
   [smoker] 2.3.0  [2008-01-23]: ('2', '3', '0')
   [smoker] 2.2.0  [2007-06-19]: ('2', '2', '0')
   [smoker] 2.1.0  [2007-02-17]: ('2', '1', '0')
   [smoker] 2.0.0  [2006-05-26]: ('2', '0', '0')
   [smoker] 1.9.1  [2006-03-02]: ('1', '9', '1')
   [smoker] 1.9 final  [2006-02-27]: ('1', '9', '100')
   [smoker] 1.9 RC1  [2006-02-21]: ('1', '9', '1')
   [smoker] 1.4.3  [2004-12-07]: ('1', '4', '3')
   [smoker] 1.4.2  [2004-10-01]: ('1', '4', '2')
   [smoker] 1.4.1  [2004-08-02]: ('1', '4', '1')
   [smoker] 1.4 final  [2004-07-01]: ('1', '4', '100')
   [smoker] 1.4 RC3  [2004-05-11]: ('1', '4', '3')
   [smoker] 1.4 RC2  [2004-03-30]: ('1', '4', '2')
   [smoker] 1.4 RC1  [2004-03-29]: ('1', '4', '1')
   [smoker] 1.3 final  [2003-12-26]: ('1', '3', '100')
   [smoker] 1.3 RC3  [2003-11-25]: ('1', '3', '3')
   [smoker] 1.3 RC2  

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_102) - Build # 6258 - Unstable!

2016-11-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6258/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([E461EBFE70E807A1:8CDEDED4A072154D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-11-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709241#comment-15709241
 ] 

Scott Blum commented on SOLR-9811:
--

Didn't try that; I wasn't aware of it!  I wouldn't have thought to look in 
the coreadmin API for collection-state related issues.

Unsure about the root cause; it *could* have been from me having to nuke a 
runaway state update queue, but I've seen it a few times during periods of 
cluster turbulence.

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[jira] [Comment Edited] (SOLR-7282) Cache config or index schema objects by configset and share them across cores

2016-11-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709228#comment-15709228
 ] 

Scott Blum edited comment on SOLR-7282 at 11/30/16 5:43 PM:


I mean, content hashing is a pretty ubiquitous technique.  Git is fundamentally 
built on it.

The reason I personally care is that our cluster is so old I'm not even sure if 
configsets were a thing when we started.  So we have 4000 collections with 4000 
identical configurations.

Is there even a way to change an existing collection to use a configset?


was (Author: dragonsinth):
I mean, content hashing is a pretty ubiquitous technique.  Git is fundamentally 
built on it.

The reason I personally care is that our cluster is so old I'm not even sure if 
configsets were a thing when we started.  So we have 4000 collections with 4000 
identical configurations.

> Cache config or index schema objects by configset and share them across cores
> -
>
> Key: SOLR-7282
> URL: https://issues.apache.org/jira/browse/SOLR-7282
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7282.patch
>
>
> Sharing schema and config objects has been known to improve startup 
> performance when a large number of cores are on the same box (See 
> http://wiki.apache.org/solr/LotsOfCores). Damien also saw improvements to 
> cluster startup speed upon caching the index schema in SOLR-7191.
> Now that SolrCloud configuration is based on config sets in ZK, we should 
> explore how we can minimize config/schema parsing for each core in a way that 
> is compatible with the recent/planned changes in the config and schema APIs.






[jira] [Commented] (SOLR-7282) Cache config or index schema objects by configset and share them across cores

2016-11-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709228#comment-15709228
 ] 

Scott Blum commented on SOLR-7282:
--

I mean, content hashing is a pretty ubiquitous technique.  Git is fundamentally 
built on it.

The reason I personally care is that our cluster is so old I'm not even sure if 
configsets were a thing when we started.  So we have 4000 collections with 4000 
identical configurations.
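Content hashing, as mentioned here, could reduce N identical configurations to one canonical cache key by hashing each config's serialized bytes. A generic sketch of that idea (an illustration of the technique, not an existing Solr feature; `ConfigHash` and `contentKey` are made-up names):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ConfigHash {
    // Hash a config's serialized content; byte-identical configs collapse
    // to the same key, so 4000 identical configurations would share one
    // cache entry regardless of the collection name they belong to.
    public static String contentKey(String configContent) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(configContent.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available on the JVM
        }
    }
}
```

This is the same principle Git uses: address content by its hash, so equal content is stored (or parsed) exactly once.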

> Cache config or index schema objects by configset and share them across cores
> -
>
> Key: SOLR-7282
> URL: https://issues.apache.org/jira/browse/SOLR-7282
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7282.patch
>
>
> Sharing schema and config objects has been known to improve startup 
> performance when a large number of cores are on the same box (See 
> http://wiki.apache.org/solr/LotsOfCores).Damien also saw improvements to 
> cluster startup speed upon caching the index schema in SOLR-7191.
> Now that SolrCloud configuration is based on config sets in ZK, we should 
> explore how we can minimize config/schema parsing for each core in a way that 
> is compatible with the recent/planned changes in the config and schema APIs.






[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709176#comment-15709176
 ] 

Joel Bernstein edited comment on SOLR-8593 at 11/30/16 5:38 PM:


One of the things not yet specifically handled is the HAVING clause. 

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

*having*(streamExpr, *and*(*eq*(field1, value1), *not*(*eq*(field2, value2))))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.
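A minimal sketch of how the proposed classes could fit together, using the names from this comment. The tuple representation and method names are assumptions for illustration, not Solr's actual streaming API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// A BooleanOperation evaluates one tuple; HavingStream reads tuples from the
// underlying stream expression and emits only those for which it returns true.
interface BooleanOperation { boolean evaluate(Map<String, Object> tuple); }

class EqualsOperation implements BooleanOperation {
    private final String field; private final Object value;
    EqualsOperation(String field, Object value) { this.field = field; this.value = value; }
    public boolean evaluate(Map<String, Object> t) { return Objects.equals(t.get(field), value); }
}

class NotOperation implements BooleanOperation {
    private final BooleanOperation inner;
    NotOperation(BooleanOperation inner) { this.inner = inner; }
    public boolean evaluate(Map<String, Object> t) { return !inner.evaluate(t); }
}

class AndOperation implements BooleanOperation {
    private final List<BooleanOperation> ops;
    AndOperation(BooleanOperation... ops) { this.ops = Arrays.asList(ops); }
    public boolean evaluate(Map<String, Object> t) {
        for (BooleanOperation op : ops) if (!op.evaluate(t)) return false;
        return true;
    }
}

public class HavingStream {
    private final Iterator<Map<String, Object>> streamExpr;
    private final BooleanOperation op;
    HavingStream(Iterator<Map<String, Object>> streamExpr, BooleanOperation op) {
        this.streamExpr = streamExpr; this.op = op;
    }
    // Read each tuple from the stream; emit it only if the operation is true.
    public List<Map<String, Object>> readAll() {
        List<Map<String, Object>> emitted = new ArrayList<>();
        while (streamExpr.hasNext()) {
            Map<String, Object> tuple = streamExpr.next();
            if (op.evaluate(tuple)) emitted.add(tuple);
        }
        return emitted;
    }

    public static void main(String[] args) {
        // Mirrors: having(streamExpr, and(eq(a, 1), not(eq(b, 2))))
        List<Map<String, Object>> tuples = Arrays.asList(
                Map.of("a", 1, "b", 2), Map.of("a", 1, "b", 3), Map.of("a", 9, "b", 3));
        HavingStream having = new HavingStream(tuples.iterator(),
                new AndOperation(new EqualsOperation("a", 1),
                        new NotOperation(new EqualsOperation("b", 2))));
        System.out.println(having.readAll()); // only the tuple with a=1, b=3 passes
    }
}
```

On a worker node this filter would run where the tuples are produced, which is what makes the push-down attractive for high-cardinality results.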





was (Author: joel.bernstein):
One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

*having*(streamExpr, *and*(*eq*(field1, value1), *eq*(field2, value2)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.




> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709176#comment-15709176
 ] 

Joel Bernstein edited comment on SOLR-8593 at 11/30/16 5:37 PM:


One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

*having*(streamExpr, *and*(*eq*(field1, value1), *eq*(field2, value2)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.





was (Author: joel.bernstein):
One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

having(streamExpr, and(eq(field1, value1), eq(field2, value2)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.










[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709176#comment-15709176
 ] 

Joel Bernstein edited comment on SOLR-8593 at 11/30/16 5:36 PM:


One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

having(streamExpr, and(eq(field1, value1), eq(field2, value2)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.





was (Author: joel.bernstein):
One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

having(streamExpr, and(eq(fieldName, value)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.










[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709176#comment-15709176
 ] 

Joel Bernstein edited comment on SOLR-8593 at 11/30/16 5:34 PM:


One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

having(streamExpr, and(eq(fieldName, value)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.





was (Author: joel.bernstein):
One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

having(streamExpr, and(equals(fieldName, value)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.










[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709176#comment-15709176
 ] 

Joel Bernstein edited comment on SOLR-8593 at 11/30/16 5:33 PM:


One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

having(streamExpr, and(equals(fieldName, value)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.





was (Author: joel.bernstein):
One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

having(streamExpr, and(equals(fieldName, value)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.










[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709176#comment-15709176
 ] 

Joel Bernstein edited comment on SOLR-8593 at 11/30/16 5:32 PM:


One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.

My plan is to implement the following classes for the having logic:

HavingStream  
BooleanOperation
AndOperation
OrOperation
NotOperation
EqualsOperation
LessThanOperation
GreaterThanOperation

Syntax:

having(streamExpr, and(equals(fieldName, value)))

The having function will read the Tuples from the streamExpr and apply the 
boolean operation to each Tuple.

If the boolean operation returns true the having stream will emit the Tuple.





was (Author: joel.bernstein):
One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.







[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709176#comment-15709176
 ] 

Joel Bernstein commented on SOLR-8593:
--

One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.







[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709176#comment-15709176
 ] 

Joel Bernstein edited comment on SOLR-8593 at 11/30/16 5:24 PM:


One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.


was (Author: joel.bernstein):
One of the things that is also not specifically handled is the HAVING clause.

I think we should push down this capability to Solr as well so we can perform 
the HAVING logic on the worker nodes. In high cardinality use cases this will 
be a big performance improvement.

We also need to develop a HavingStream to manage the having logic. I'll start 
the work for the HavingStream in this branch as it directly supports the 
Calcite integration.







[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-11-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709129#comment-15709129
 ] 

ASF subversion and git services commented on SOLR-4735:
---

Commit fea0e200a8083ebd86d8e522939e4977d072bbe7 in lucene-solr's branch 
refs/heads/feature/metrics from [~kwong494]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fea0e20 ]

SOLR-4735: SolrMetricsIntegrationTest (Kelvin Wong via Christine Poerschke)

Adds SolrMetricsIntegrationTest, which uses solrconfig-metricreporter.xml to 
configure MockMetricReporter instances.

also:
* JmxUtil and SolrJmxReporter tweaks
* SolrMetricReporterTest.MockReporter turned into MockMetricReporter
* changes in SolrCoreMetricManagerTest and SolrJmxReporterTest:
** moved initCore from BeforeClass to Before(Test) so that After(Test) can do 
deleteCore
** TODO: verify interaction between tests (SolrCoreMetricManagerTest and 
SolrMetricsIntegrationTest and SolrJmxReporterTest)
* SolrCoreMetricManagerTest uses MockMetricReporter instead of SolrJmxReporter


> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.






[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-11-30 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709123#comment-15709123
 ] 

Andrzej Bialecki  commented on SOLR-4735:
-

Ad 1. Indeed, there is some inconsistency here. I think we should deprecate the 
{{}}, and turn on SolrJmxReporter by default in cloud mode (in the example 
non-cloud config {{}} is turned off).
Ad 2. Currently it's independent. Yes, I think we should eventually remove the 
{{}} section.
Ad 3. Good point, I'll create one.
Ad 4. Right, I'll add this.
Ad 5. Not sure how to do that, let's discuss this offline.
Ad 6. This would be easy to add but it brings several additional dependencies 
from {{metrics-graphite}} and {{metrics-ganglia}} artifacts. Are we ok with 
that?







[jira] [Commented] (SOLR-9817) Make Solr server startup directory configurable

2016-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709117#comment-15709117
 ] 

ASF GitHub Bot commented on SOLR-9817:
--

Github user hgadre commented on the issue:

https://github.com/apache/lucene-solr/pull/121
  
@markrmiller can you take a look?


> Make Solr server startup directory configurable
> ---
>
> Key: SOLR-9817
> URL: https://issues.apache.org/jira/browse/SOLR-9817
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Priority: Minor
>
> The solr startup script (bin/solr) is hardcoded to use the 
> /server directory as the working directory during the 
> startup. 
> https://github.com/apache/lucene-solr/blob/9eaea79f5c89094c08f52245b9473ca14f368f57/solr/bin/solr#L1652
> This jira is to make the "current working directory" for Solr configurable.






[GitHub] lucene-solr issue #121: [SOLR-9817] Make "working directory" for Solr server...

2016-11-30 Thread hgadre
Github user hgadre commented on the issue:

https://github.com/apache/lucene-solr/pull/121
  
@markrmiller can you take a look?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-9817) Make Solr server startup directory configurable

2016-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709114#comment-15709114
 ] 

ASF GitHub Bot commented on SOLR-9817:
--

GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/121

[SOLR-9817] Make "working directory" for Solr server during startup configurable

- Added an environment variable "SOLR_CONFIG_DIR" to specify the working 
directory.
  If this env variable is missing, then we use the value of SOLR_SERVER_DIR as 
the default.
  This allows us to maintain backwards compatibility.
- Updated solr-jetty-context.xml to use the jetty.home system property 
(instead of jetty.base).
  This is required since jetty.base would point to SOLR_CONFIG_DIR and we need 
the location specified by the SOLR_SERVER_DIR variable.

Testing: Manual testing with (and without) specifying SOLR_CONFIG_DIR 
parameter. The server
 starts properly in both cases.
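The fallback described above is a standard shell default-expansion pattern. In this sketch the directory values are placeholders (not Solr's actual layout); only the variable names SOLR_CONFIG_DIR and SOLR_SERVER_DIR come from the PR:

```shell
#!/bin/sh
# Placeholder for <install>/server; a temp dir so the sketch is runnable.
SOLR_SERVER_DIR="${SOLR_SERVER_DIR:-$(mktemp -d)}"
# If SOLR_CONFIG_DIR is unset or empty, default it to SOLR_SERVER_DIR,
# preserving the old behavior for existing installs:
SOLR_CONFIG_DIR="${SOLR_CONFIG_DIR:-$SOLR_SERVER_DIR}"
cd "$SOLR_CONFIG_DIR" || exit 1
echo "startup working directory: $(pwd)"
```

Because the default is the old hardcoded directory, installs that never set SOLR_CONFIG_DIR behave exactly as before.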

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr SOLR-9817_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #121


commit 6601b335e68a6d6df4fd724b61764f5d163f49f9
Author: Hrishikesh Gadre 
Date:   2016-11-30T16:50:29Z

[SOLR-9817] Make "working directory" for Solr server during startup 
configurable

- Added an environment variable "SOLR_CONFIG_DIR" to specify the working 
directory.
  If this env variable is missing, then we use value of SOLR_SERVER_DIR as 
the default.
  This allows us to maintain backwards compatibility.
- Updated solr-jetty-context.xml to use the jetty.home system property 
(instead of jetty.base).
  This is required since the jetty.base would point to SOLR_CONFIG_DIR and 
we need the location
  specified by SOLR_SERVER_DIR variable.

Testing: Manual testing with (and without) specifying SOLR_CONFIG_DIR 
parameter. The server
 starts properly in both cases.




> Make Solr server startup directory configurable
> ---
>
> Key: SOLR-9817
> URL: https://issues.apache.org/jira/browse/SOLR-9817
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Priority: Minor
>
> The solr startup script (bin/solr) is hardcoded to use the 
> /server directory as the working directory during the 
> startup. 
> https://github.com/apache/lucene-solr/blob/9eaea79f5c89094c08f52245b9473ca14f368f57/solr/bin/solr#L1652
> This jira is to make the "current working directory" for Solr configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)





[GitHub] lucene-solr pull request #121: [SOLR-9817] Make "working directory" for Solr...

2016-11-30 Thread hgadre
GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/121

[SOLR-9817] Make "working directory" for Solr server during startup c…

…onfigurable

- Added an environment variable "SOLR_CONFIG_DIR" to specify the working 
directory.
  If this env variable is missing, then we use value of SOLR_SERVER_DIR as 
the default.
  This allows us to maintain backwards compatibility.
- Updated solr-jetty-context.xml to use the jetty.home system property 
(instead of jetty.base).
  This is required since the jetty.base would point to SOLR_CONFIG_DIR and 
we need the location
  specified by SOLR_SERVER_DIR variable.

Testing: Manual testing with (and without) specifying SOLR_CONFIG_DIR 
parameter. The server
 starts properly in both cases.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr SOLR-9817_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #121


commit 6601b335e68a6d6df4fd724b61764f5d163f49f9
Author: Hrishikesh Gadre 
Date:   2016-11-30T16:50:29Z

[SOLR-9817] Make "working directory" for Solr server during startup 
configurable

- Added an environment variable "SOLR_CONFIG_DIR" to specify the working 
directory.
  If this env variable is missing, then we use value of SOLR_SERVER_DIR as 
the default.
  This allows us to maintain backwards compatibility.
- Updated solr-jetty-context.xml to use the jetty.home system property 
(instead of jetty.base).
  This is required since the jetty.base would point to SOLR_CONFIG_DIR and 
we need the location
  specified by SOLR_SERVER_DIR variable.

Testing: Manual testing with (and without) specifying SOLR_CONFIG_DIR 
parameter. The server
 starts properly in both cases.







[jira] [Commented] (SOLR-7090) Cross collection join

2016-11-30 Thread Dorian (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709093#comment-15709093
 ] 

Dorian commented on SOLR-7090:
--

Just for reference, distributed join is implemented in this Elasticsearch 
plugin: https://github.com/sirensolutions/siren-join

> Cross collection join
> -
>
> Key: SOLR-7090
> URL: https://issues.apache.org/jira/browse/SOLR-7090
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7090-fulljoin.patch, SOLR-7090.patch
>
>
> Although SOLR-4905 supports joins across collections in Cloud mode, there are 
> limitations, (i) the secondary collection must be replicated at each node 
> where the primary collection has a replica, (ii) the secondary collection 
> must be singly sharded.
> This issue explores ideas/possibilities of cross collection joins, even 
> across nodes. This will be helpful for users who wish to maintain boosts or 
> signals in a secondary, more frequently updated collection, and perform query 
> time join of these boosts/signals with results from the primary collection.






[jira] [Created] (SOLR-9817) Make Solr server startup directory configurable

2016-11-30 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-9817:
--

 Summary: Make Solr server startup directory configurable
 Key: SOLR-9817
 URL: https://issues.apache.org/jira/browse/SOLR-9817
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.0
Reporter: Hrishikesh Gadre
Priority: Minor


The solr startup script (bin/solr) is hardcoded to use the 
/server directory as the working directory during the 
startup. 

https://github.com/apache/lucene-solr/blob/9eaea79f5c89094c08f52245b9473ca14f368f57/solr/bin/solr#L1652

This jira is to make the "current working directory" for Solr configurable.






[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708937#comment-15708937
 ] 

Joel Bernstein edited comment on SOLR-8593 at 11/30/16 3:58 PM:


I've started to work on this ticket. As a first step I'm doing some refactoring 
on the SolrTable class to create methods for handling the different types of 
queries. After that I'll get the aggregationModes hooked up.


was (Author: joel.bernstein):
I've started to work on this ticket. As a first step I'm doing some refactoring 
to create methods for handling the different types of queries. After that I'll 
get the aggregationModes hooked up.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708937#comment-15708937
 ] 

Joel Bernstein commented on SOLR-8593:
--

I've started to work on this ticket. As a first step I'm doing some refactoring 
to create methods for handling the different types of queries. After that I'll 
get the aggregationModes hooked up.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (LUCENE-7570) Tragic events during merges can lead to deadlock

2016-11-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708924#comment-15708924
 ] 

Michael McCandless commented on LUCENE-7570:


And thank you [~fwiffo]!

> Tragic events during merges can lead to deadlock
> 
>
> Key: LUCENE-7570
> URL: https://issues.apache.org/jira/browse/LUCENE-7570
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 5.5, master (7.0)
>Reporter: Joey Echeverria
> Attachments: thread_dump.txt
>
>
> When an {{IndexWriter#commit()}} is stalled due to too many pending merges, 
> you can get a deadlock if the currently active merge thread hits a tragic 
> event.
> # The thread performing the commit synchronizes on the {{commitLock}} in 
> {{commitInternal}}.
> # The thread goes on to call {{ConcurrentMergeScheduler#doStall()}}, which 
> {{waits()}} on the {{ConcurrentMergeScheduler}} object. This releases the 
> merge scheduler's monitor lock, but not the {{commitLock}} in {{IndexWriter}}.
> # Sometime after this wait begins, the merge thread gets a tragic exception 
> and calls {{IndexWriter#tragicEvent()}}, which in turn calls 
> {{IndexWriter#rollbackInternal()}}.
> # {{IndexWriter#rollbackInternal()}} synchronizes on the {{commitLock}}, 
> which is still held by the committing thread from (1) above, which is waiting 
> on the merge(s) to complete. Hence, deadlock.
> We hit this bug with Lucene 5.5, but I looked at the code in the master 
> branch and it looks like the deadlock still exists there as well.
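The lock ordering in the steps above can be sketched with stand-in locks. This is a simplified illustration, not Lucene's actual classes: `commitLock` and the scheduler monitor are stand-ins, and a timed `tryLock` stands in for the merge thread blocking forever.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class MergeDeadlockSketch {
    // Simplified stand-ins for IndexWriter's commitLock and the
    // ConcurrentMergeScheduler monitor -- not Lucene's real classes.
    static final ReentrantLock commitLock = new ReentrantLock();
    static final Object schedulerMonitor = new Object();

    static boolean mergeThreadCanAcquireCommitLock() {
        CountDownLatch stalled = new CountDownLatch(1);
        Thread committer = new Thread(() -> {
            commitLock.lock();                    // (1) commitInternal takes commitLock
            try {
                synchronized (schedulerMonitor) {
                    stalled.countDown();
                    schedulerMonitor.wait(2000);  // (2) doStall() waits: releases the
                }                                 //     scheduler monitor, NOT commitLock
            } catch (InterruptedException ignored) {
            } finally {
                commitLock.unlock();
            }
        });
        committer.start();
        try {
            stalled.await();
            // (3)-(4) the merge thread's rollbackInternal() needs commitLock;
            // the timed tryLock stands in for blocking on it forever.
            boolean acquired = commitLock.tryLock(500, TimeUnit.MILLISECONDS);
            if (acquired) commitLock.unlock();
            committer.join();
            return acquired;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("merge thread got commitLock: " + mergeThreadCanAcquireCommitLock());
        // prints: merge thread got commitLock: false
    }
}
```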






[jira] [Commented] (LUCENE-7570) Tragic events during merges can lead to deadlock

2016-11-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708923#comment-15708923
 ] 

Michael McCandless commented on LUCENE-7570:


Thanks for reporting this [~marumarutan], I'll have a look.

> Tragic events during merges can lead to deadlock
> 
>
> Key: LUCENE-7570
> URL: https://issues.apache.org/jira/browse/LUCENE-7570
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 5.5, master (7.0)
>Reporter: Joey Echeverria
> Attachments: thread_dump.txt
>
>
> When an {{IndexWriter#commit()}} is stalled due to too many pending merges, 
> you can get a deadlock if the currently active merge thread hits a tragic 
> event.
> # The thread performing the commit synchronizes on the {{commitLock}} in 
> {{commitInternal}}.
> # The thread goes on to call {{ConcurrentMergeScheduler#doStall()}}, which 
> {{waits()}} on the {{ConcurrentMergeScheduler}} object. This releases the 
> merge scheduler's monitor lock, but not the {{commitLock}} in {{IndexWriter}}.
> # Sometime after this wait begins, the merge thread gets a tragic exception 
> and calls {{IndexWriter#tragicEvent()}}, which in turn calls 
> {{IndexWriter#rollbackInternal()}}.
> # {{IndexWriter#rollbackInternal()}} synchronizes on the {{commitLock}}, 
> which is still held by the committing thread from (1) above, which is waiting 
> on the merge(s) to complete. Hence, deadlock.
> We hit this bug with Lucene 5.5, but I looked at the code in the master 
> branch and it looks like the deadlock still exists there as well.






[jira] [Updated] (SOLR-9816) Improvements to text logistic regression

2016-11-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9816:
-
Description: There were a few improvements to SOLR-9252 that hadn't yet 
been committed. This ticket will add those improvements.  (was: There were a 
few improvements to SOLR-9252 that hadn't yet been committed. This ticket will 
add those improves.)

> Improvements to text logistic regression
> 
>
> Key: SOLR-9816
> URL: https://issues.apache.org/jira/browse/SOLR-9816
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> There were a few improvements to SOLR-9252 that hadn't yet been committed. 
> This ticket will add those improvements.






[jira] [Commented] (SOLR-9252) Feature selection and logistic regression on text

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708861#comment-15708861
 ] 

Joel Bernstein commented on SOLR-9252:
--

SOLR-9816 has been opened. We can add the latest patches from this ticket when 
we're ready to work on it.

> Feature selection and logistic regression on text
> -
>
> Key: SOLR-9252
> URL: https://issues.apache.org/jira/browse/SOLR-9252
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud, SolrJ
>Reporter: Cao Manh Dat
>Assignee: Joel Bernstein
>  Labels: Streaming
> Fix For: 6.2
>
> Attachments: SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9299-1.patch
>
>
> This ticket adds two new streaming expressions: *features* and *train*
> These two functions work together to train a logistic regression model on 
> text, from a training set stored in a SolrCloud collection.
> The syntax is as follows:
> {code}
> train(collection1, q="*:*",
>   features(collection1, 
>q="*:*",  
>field="body", 
>outcome="out_i", 
>positiveLabel=1, 
>numTerms=100),
>   field="body",
>   outcome="out_i",
>   maxIterations=100)
> {code}
> The *features* function extracts the feature terms from a training set using 
> *information gain* to score the terms. 
> http://www.jiliang.xyz/publication/feature_selection_for_classification.pdf
> The *train* function uses the extracted features to train a logistic 
> regression model on a text field in the training set.
> For both *features* and *train* the training set is defined by a query. The 
> doc vectors in the *train* function use tf-idf to represent the terms in the 
> document. The idf is calculated for the specific training set, allowing 
> multiple training sets to be stored in the same collection without polluting 
> the idf. 
> In the *train* function a batch gradient descent approach is used to 
> iteratively train the model.
> Both the *features* and the *train* function are embedded in Solr using the 
> AnalyticsQuery framework. So only the model is transported across the network 
> with each iteration.
> Both the features and the models can be stored in a SolrCloud collection. 
> Using this approach Solr can hold millions of models which can be selectively 
> deployed. For example a model could be trained for each user, to personalize 
> ranking and recommendations.
> Below is the final iteration of a model trained on the Enron Ham/Spam 
> dataset. The model includes the terms and their idfs and weights as well as a 
> classification evaluation describing the accuracy of model on the training 
> set. 
> {code}
> {
>   "idfs_ds": [1.2627703388716238, 1.2043595767152093, 
> 1.3886172425360304, 1.5488587854881268, 1.6127302558747882, 
> 2.1359177807201526, 1.514866246141212, 1.7375701403808523, 
> 1.6166175299631897, 1.756428159015249, 1.7929202354640175, 
> 1.2834893120635762, 1.899442866302021, 1.8639061320252337, 
> 1.7631697575821685, 1.6820002892260415, 1.4411352768194767, 
> 2.103708877350535, 1.2225773869965861, 2.208893321170597, 1.878981794430681, 
> 2.043737027506736, 2.2819184561854864, 2.3264563106163885, 
> 1.9336117619172708, 2.0467265663551024, 1.7386696457142692, 
> 2.468795829515302, 2.069437610615317, 2.6294363202479327, 3.7388303845193307, 
> 2.5446615802900157, 1.7430797961918219, 3.0787440662202736, 
> 1.9579702057493114, 2.289523055570706, 1.5362003886162032, 
> 2.7549569891263763, 3.955894889757158, 2.587435396273302, 3.945844553903657, 
> 1.003513057076781, 3.0416264032637708, 2.248395764146843, 4.018415246738492, 
> 2.2876164773001246, 3.3636289340509933, 1.2438124251270097, 
> 2.733903579928544, 3.439026951535205, 0.6709665389201712, 0.9546224358275518, 
> 2.8080115520822657, 2.477970205791343, 2.2631561797299637, 
> 3.2378087608499606, 0.36177021415584676, 4.1083634834014315, 
> 4.120197941048435, 2.471081544796158, 2.424147775633, 2.92339362620, 
> 2.9269972337044097, 3.2987413118451183, 2.383498249003407, 4.168988105217867, 
> 2.877691472720256, 4.233526626355437, 3.8505343740993316, 2.3264563106163885, 
> 2.6429318017228174, 4.260555298743357, 3.0058372954121855, 
> 3.8688835127675283, 3.021585652380325, 3.0295538220295017, 
> 1.9620882623582288, 3.469610374907285, 3.945844553903657, 3.4821105376715167, 
> 4.3169082352944885, 2.520329479630485, 3.609372317282444, 3.070375816549757, 
> 4.220281399605417, 3.985484239117, 3.6165408067610563, 3.788840805093992, 
> 4.392131656532076, 4.392131656532076, 
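The batch gradient descent loop described for *train* can be sketched as follows. This is a generic illustration in the spirit of the description, not Solr's implementation: the toy "doc vectors", learning rate, and method names are invented, and real doc vectors would carry the tf-idf weights discussed above.

```java
public class LogisticTrainSketch {
    // Batch gradient descent for logistic regression: every iteration scores
    // all doc vectors, then makes a single weight update.
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static double[] train(double[][] docs, int[] outcome, int maxIterations, double alpha) {
        double[] w = new double[docs[0].length];
        for (int it = 0; it < maxIterations; it++) {
            double[] grad = new double[w.length];
            for (int i = 0; i < docs.length; i++) {
                // gradient of the log loss for one document
                double err = sigmoid(dot(w, docs[i])) - outcome[i];
                for (int j = 0; j < w.length; j++) grad[j] += err * docs[i][j];
            }
            for (int j = 0; j < w.length; j++) w[j] -= alpha * grad[j] / docs.length;
        }
        return w;
    }

    static int predict(double[] w, double[] doc) {
        return sigmoid(dot(w, doc)) >= 0.5 ? 1 : 0;
    }

    public static void main(String[] args) {
        // [bias, term1 weight, term2 weight] -- invented stand-ins for tf-idf vectors
        double[][] docs = {{1, 3.2, 0.1}, {1, 2.9, 0.3}, {1, 0.2, 2.8}, {1, 0.4, 3.1}};
        int[] outcome = {1, 1, 0, 0};
        double[] w = train(docs, outcome, 100, 0.5);
        int correct = 0;
        for (int i = 0; i < docs.length; i++)
            if (predict(w, docs[i]) == outcome[i]) correct++;
        System.out.println("training accuracy: " + (double) correct / docs.length);
        // prints: training accuracy: 1.0
    }
}
```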

[jira] [Created] (SOLR-9816) Improvements to text logistic regression

2016-11-30 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-9816:


 Summary: Improvements to text logistic regression
 Key: SOLR-9816
 URL: https://issues.apache.org/jira/browse/SOLR-9816
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


There were a few improvements to SOLR-9252 that hadn't yet been committed. This 
ticket will add those improvements.






[jira] [Resolved] (SOLR-9252) Feature selection and logistic regression on text

2016-11-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9252.
--
Resolution: Resolved

> Feature selection and logistic regression on text
> -
>
> Key: SOLR-9252
> URL: https://issues.apache.org/jira/browse/SOLR-9252
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud, SolrJ
>Reporter: Cao Manh Dat
>Assignee: Joel Bernstein
>  Labels: Streaming
> Fix For: 6.2
>
> Attachments: SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9299-1.patch
>
>
> This ticket adds two new streaming expressions: *features* and *train*
> These two functions work together to train a logistic regression model on 
> text, from a training set stored in a SolrCloud collection.
> The syntax is as follows:
> {code}
> train(collection1, q="*:*",
>   features(collection1, 
>q="*:*",  
>field="body", 
>outcome="out_i", 
>positiveLabel=1, 
>numTerms=100),
>   field="body",
>   outcome="out_i",
>   maxIterations=100)
> {code}
> The *features* function extracts the feature terms from a training set using 
> *information gain* to score the terms. 
> http://www.jiliang.xyz/publication/feature_selection_for_classification.pdf
> The *train* function uses the extracted features to train a logistic 
> regression model on a text field in the training set.
> For both *features* and *train* the training set is defined by a query. The 
> doc vectors in the *train* function use tf-idf to represent the terms in the 
> document. The idf is calculated for the specific training set, allowing 
> multiple training sets to be stored in the same collection without polluting 
> the idf. 
> In the *train* function a batch gradient descent approach is used to 
> iteratively train the model.
> Both the *features* and the *train* function are embedded in Solr using the 
> AnalyticsQuery framework. So only the model is transported across the network 
> with each iteration.
> Both the features and the models can be stored in a SolrCloud collection. 
> Using this approach Solr can hold millions of models which can be selectively 
> deployed. For example a model could be trained for each user, to personalize 
> ranking and recommendations.
> Below is the final iteration of a model trained on the Enron Ham/Spam 
> dataset. The model includes the terms and their idfs and weights as well as a 
> classification evaluation describing the accuracy of model on the training 
> set. 
> {code}
> {
>   "idfs_ds": [1.2627703388716238, 1.2043595767152093, 
> 1.3886172425360304, 1.5488587854881268, 1.6127302558747882, 
> 2.1359177807201526, 1.514866246141212, 1.7375701403808523, 
> 1.6166175299631897, 1.756428159015249, 1.7929202354640175, 
> 1.2834893120635762, 1.899442866302021, 1.8639061320252337, 
> 1.7631697575821685, 1.6820002892260415, 1.4411352768194767, 
> 2.103708877350535, 1.2225773869965861, 2.208893321170597, 1.878981794430681, 
> 2.043737027506736, 2.2819184561854864, 2.3264563106163885, 
> 1.9336117619172708, 2.0467265663551024, 1.7386696457142692, 
> 2.468795829515302, 2.069437610615317, 2.6294363202479327, 3.7388303845193307, 
> 2.5446615802900157, 1.7430797961918219, 3.0787440662202736, 
> 1.9579702057493114, 2.289523055570706, 1.5362003886162032, 
> 2.7549569891263763, 3.955894889757158, 2.587435396273302, 3.945844553903657, 
> 1.003513057076781, 3.0416264032637708, 2.248395764146843, 4.018415246738492, 
> 2.2876164773001246, 3.3636289340509933, 1.2438124251270097, 
> 2.733903579928544, 3.439026951535205, 0.6709665389201712, 0.9546224358275518, 
> 2.8080115520822657, 2.477970205791343, 2.2631561797299637, 
> 3.2378087608499606, 0.36177021415584676, 4.1083634834014315, 
> 4.120197941048435, 2.471081544796158, 2.424147775633, 2.92339362620, 
> 2.9269972337044097, 3.2987413118451183, 2.383498249003407, 4.168988105217867, 
> 2.877691472720256, 4.233526626355437, 3.8505343740993316, 2.3264563106163885, 
> 2.6429318017228174, 4.260555298743357, 3.0058372954121855, 
> 3.8688835127675283, 3.021585652380325, 3.0295538220295017, 
> 1.9620882623582288, 3.469610374907285, 3.945844553903657, 3.4821105376715167, 
> 4.3169082352944885, 2.520329479630485, 3.609372317282444, 3.070375816549757, 
> 4.220281399605417, 3.985484239117, 3.6165408067610563, 3.788840805093992, 
> 4.392131656532076, 4.392131656532076, 2.837281934382379, 3.698984475972131, 
> 4.331507034715641, 2.360699334038601, 2.7368842080666815, 3.730733174286711, 
> 
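The information-gain scoring that *features* uses to select terms can be sketched from document counts: IG(term) = H(outcome) - [p(term) H(outcome|term) + p(no term) H(outcome|no term)]. This is the standard textbook formula computed over toy counts, not Solr's code; the class and method names are invented.

```java
public class InfoGainSketch {
    static double log2(double x) { return Math.log(x) / Math.log(2); }

    // binary entropy in bits
    static double h(double p) {
        if (p <= 0 || p >= 1) return 0;
        return -p * log2(p) - (1 - p) * log2(1 - p);
    }

    // n: docs in the training set, nPos: positive docs,
    // nTerm: docs containing the term, nTermPos: positive docs containing it
    static double infoGain(int n, int nPos, int nTerm, int nTermPos) {
        double base = h((double) nPos / n);
        double pTerm = (double) nTerm / n;
        double condTerm = nTerm == 0 ? 0 : h((double) nTermPos / nTerm);
        int nNoTerm = n - nTerm;
        double condNoTerm = nNoTerm == 0 ? 0 : h((double) (nPos - nTermPos) / nNoTerm);
        return base - (pTerm * condTerm + (1 - pTerm) * condNoTerm);
    }

    public static void main(String[] args) {
        // perfectly predictive term: in all 5 positive docs, no negatives
        System.out.println(infoGain(10, 5, 5, 5));   // prints 1.0
        // uninformative term: appears in every document
        System.out.println(infoGain(10, 5, 10, 5));  // prints 0.0
    }
}
```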

[jira] [Comment Edited] (SOLR-9252) Feature selection and logistic regression on text

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708850#comment-15708850
 ] 

Joel Bernstein edited comment on SOLR-9252 at 11/30/16 3:17 PM:


I think the latest patches on this ticket have fallen through the cracks.

Let's close out this ticket and open a new one for [~caomanhdat]'s latest work.


was (Author: joel.bernstein):
I think the latest patches on this ticket have fallen through the cracks.

Let's close out this ticket and open a new one with for [~caomanhdat]'s latest 
work.

> Feature selection and logistic regression on text
> -
>
> Key: SOLR-9252
> URL: https://issues.apache.org/jira/browse/SOLR-9252
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud, SolrJ
>Reporter: Cao Manh Dat
>Assignee: Joel Bernstein
>  Labels: Streaming
> Fix For: 6.2
>
> Attachments: SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9299-1.patch
>
>
> This ticket adds two new streaming expressions: *features* and *train*
> These two functions work together to train a logistic regression model on 
> text, from a training set stored in a SolrCloud collection.
> The syntax is as follows:
> {code}
> train(collection1, q="*:*",
>   features(collection1, 
>q="*:*",  
>field="body", 
>outcome="out_i", 
>positiveLabel=1, 
>numTerms=100),
>   field="body",
>   outcome="out_i",
>   maxIterations=100)
> {code}
> The *features* function extracts the feature terms from a training set using 
> *information gain* to score the terms. 
> http://www.jiliang.xyz/publication/feature_selection_for_classification.pdf
> The *train* function uses the extracted features to train a logistic 
> regression model on a text field in the training set.
> For both *features* and *train* the training set is defined by a query. The 
> doc vectors in the *train* function use tf-idf to represent the terms in the 
> document. The idf is calculated for the specific training set, allowing 
> multiple training sets to be stored in the same collection without polluting 
> the idf. 
> In the *train* function a batch gradient descent approach is used to 
> iteratively train the model.
> Both the *features* and the *train* function are embedded in Solr using the 
> AnalyticsQuery framework. So only the model is transported across the network 
> with each iteration.
> Both the features and the models can be stored in a SolrCloud collection. 
> Using this approach Solr can hold millions of models which can be selectively 
> deployed. For example a model could be trained for each user, to personalize 
> ranking and recommendations.
> Below is the final iteration of a model trained on the Enron Ham/Spam 
> dataset. The model includes the terms and their idfs and weights as well as a 
> classification evaluation describing the accuracy of model on the training 
> set. 
> {code}
> {
>   "idfs_ds": [1.2627703388716238, 1.2043595767152093, 
> 1.3886172425360304, 1.5488587854881268, 1.6127302558747882, 
> 2.1359177807201526, 1.514866246141212, 1.7375701403808523, 
> 1.6166175299631897, 1.756428159015249, 1.7929202354640175, 
> 1.2834893120635762, 1.899442866302021, 1.8639061320252337, 
> 1.7631697575821685, 1.6820002892260415, 1.4411352768194767, 
> 2.103708877350535, 1.2225773869965861, 2.208893321170597, 1.878981794430681, 
> 2.043737027506736, 2.2819184561854864, 2.3264563106163885, 
> 1.9336117619172708, 2.0467265663551024, 1.7386696457142692, 
> 2.468795829515302, 2.069437610615317, 2.6294363202479327, 3.7388303845193307, 
> 2.5446615802900157, 1.7430797961918219, 3.0787440662202736, 
> 1.9579702057493114, 2.289523055570706, 1.5362003886162032, 
> 2.7549569891263763, 3.955894889757158, 2.587435396273302, 3.945844553903657, 
> 1.003513057076781, 3.0416264032637708, 2.248395764146843, 4.018415246738492, 
> 2.2876164773001246, 3.3636289340509933, 1.2438124251270097, 
> 2.733903579928544, 3.439026951535205, 0.6709665389201712, 0.9546224358275518, 
> 2.8080115520822657, 2.477970205791343, 2.2631561797299637, 
> 3.2378087608499606, 0.36177021415584676, 4.1083634834014315, 
> 4.120197941048435, 2.471081544796158, 2.424147775633, 2.92339362620, 
> 2.9269972337044097, 3.2987413118451183, 2.383498249003407, 4.168988105217867, 
> 2.877691472720256, 4.233526626355437, 3.8505343740993316, 2.3264563106163885, 
> 2.6429318017228174, 4.260555298743357, 3.0058372954121855, 
> 3.8688835127675283, 3.021585652380325, 

[jira] [Commented] (SOLR-9252) Feature selection and logistic regression on text

2016-11-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708850#comment-15708850
 ] 

Joel Bernstein commented on SOLR-9252:
--

I think the latest patches on this ticket have fallen through the cracks.

Let's close out this ticket and open a new one for [~caomanhdat]'s latest 
work.

> Feature selection and logistic regression on text
> -
>
> Key: SOLR-9252
> URL: https://issues.apache.org/jira/browse/SOLR-9252
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud, SolrJ
>Reporter: Cao Manh Dat
>Assignee: Joel Bernstein
>  Labels: Streaming
> Fix For: 6.2
>
> Attachments: SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9299-1.patch
>
>
> This ticket adds two new streaming expressions: *features* and *train*
> These two functions work together to train a logistic regression model on 
> text, from a training set stored in a SolrCloud collection.
> The syntax is as follows:
> {code}
> train(collection1, q="*:*",
>   features(collection1, 
>q="*:*",  
>field="body", 
>outcome="out_i", 
>positiveLabel=1, 
>numTerms=100),
>   field="body",
>   outcome="out_i",
>   maxIterations=100)
> {code}
> The *features* function extracts the feature terms from a training set using 
> *information gain* to score the terms. 
> http://www.jiliang.xyz/publication/feature_selection_for_classification.pdf
> The *train* function uses the extracted features to train a logistic 
> regression model on a text field in the training set.
> For both *features* and *train* the training set is defined by a query. The 
> doc vectors in the *train* function use tf-idf to represent the terms in the 
> document. The idf is calculated for the specific training set, allowing 
> multiple training sets to be stored in the same collection without polluting 
> the idf. 
> In the *train* function a batch gradient descent approach is used to 
> iteratively train the model.
> Both the *features* and the *train* function are embedded in Solr using the 
> AnalyticsQuery framework. So only the model is transported across the network 
> with each iteration.
> Both the features and the models can be stored in a SolrCloud collection. 
> Using this approach Solr can hold millions of models which can be selectively 
> deployed. For example a model could be trained for each user, to personalize 
> ranking and recommendations.
> Below is the final iteration of a model trained on the Enron Ham/Spam 
> dataset. The model includes the terms and their idfs and weights as well as a 
> classification evaluation describing the accuracy of model on the training 
> set. 
> {code}
> {
>   "idfs_ds": [1.2627703388716238, 1.2043595767152093, 
> 1.3886172425360304, 1.5488587854881268, 1.6127302558747882, 
> 2.1359177807201526, 1.514866246141212, 1.7375701403808523, 
> 1.6166175299631897, 1.756428159015249, 1.7929202354640175, 
> 1.2834893120635762, 1.899442866302021, 1.8639061320252337, 
> 1.7631697575821685, 1.6820002892260415, 1.4411352768194767, 
> 2.103708877350535, 1.2225773869965861, 2.208893321170597, 1.878981794430681, 
> 2.043737027506736, 2.2819184561854864, 2.3264563106163885, 
> 1.9336117619172708, 2.0467265663551024, 1.7386696457142692, 
> 2.468795829515302, 2.069437610615317, 2.6294363202479327, 3.7388303845193307, 
> 2.5446615802900157, 1.7430797961918219, 3.0787440662202736, 
> 1.9579702057493114, 2.289523055570706, 1.5362003886162032, 
> 2.7549569891263763, 3.955894889757158, 2.587435396273302, 3.945844553903657, 
> 1.003513057076781, 3.0416264032637708, 2.248395764146843, 4.018415246738492, 
> 2.2876164773001246, 3.3636289340509933, 1.2438124251270097, 
> 2.733903579928544, 3.439026951535205, 0.6709665389201712, 0.9546224358275518, 
> 2.8080115520822657, 2.477970205791343, 2.2631561797299637, 
> 3.2378087608499606, 0.36177021415584676, 4.1083634834014315, 
> 4.120197941048435, 2.471081544796158, 2.424147775633, 2.92339362620, 
> 2.9269972337044097, 3.2987413118451183, 2.383498249003407, 4.168988105217867, 
> 2.877691472720256, 4.233526626355437, 3.8505343740993316, 2.3264563106163885, 
> 2.6429318017228174, 4.260555298743357, 3.0058372954121855, 
> 3.8688835127675283, 3.021585652380325, 3.0295538220295017, 
> 1.9620882623582288, 3.469610374907285, 3.945844553903657, 3.4821105376715167, 
> 4.3169082352944885, 2.520329479630485, 3.609372317282444, 3.070375816549757, 
> 4.220281399605417, 3.985484239117, 3.6165408067610563, 
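The *train* function's batch gradient descent, as described above, can be sketched as a standalone toy. This is not Solr's implementation (which runs inside the AnalyticsQuery framework over tf-idf doc vectors); the class name, toy data, and learning rate here are all illustrative.

```java
import java.util.Arrays;

public class BatchGdLogit {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // "Batch" gradient descent: each iteration makes one full pass over the
    // whole training set, then applies w <- w - rate * X^T (sigmoid(Xw) - y) / n
    static double[] train(double[][] x, double[] y, int iterations, double rate) {
        int n = x.length, d = x[0].length;
        double[] w = new double[d];
        for (int it = 0; it < iterations; it++) {
            double[] grad = new double[d];
            for (int i = 0; i < n; i++) {
                double dot = 0;
                for (int j = 0; j < d; j++) dot += w[j] * x[i][j];
                double err = sigmoid(dot) - y[i];   // prediction error for doc i
                for (int j = 0; j < d; j++) grad[j] += err * x[i][j];
            }
            for (int j = 0; j < d; j++) w[j] -= rate * grad[j] / n;
        }
        return w;
    }

    public static void main(String[] args) {
        // Toy "term weight" vectors; column 0 is a bias term.
        double[][] x = {{1, 0}, {1, 1}, {1, 2}, {1, 3}};
        double[] y = {0, 0, 1, 1};
        double[] w = train(x, y, 2000, 0.5);
        System.out.println("weights = " + Arrays.toString(w));
    }
}
```

In the Solr design, only the weight vector (not the data) crosses the network on each iteration, which is why the per-iteration payload stays small even for large training sets.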

[jira] [Commented] (SOLR-9252) Feature selection and logistic regression on text

2016-11-30 Thread Jeroen Steggink (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708840#comment-15708840
 ] 

Jeroen Steggink commented on SOLR-9252:
---

This would be great, as the regularization makes this the training way more 
useful.

> Feature selection and logistic regression on text
> -
>
> Key: SOLR-9252
> URL: https://issues.apache.org/jira/browse/SOLR-9252
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud, SolrJ
>Reporter: Cao Manh Dat
>Assignee: Joel Bernstein
>  Labels: Streaming
> Fix For: 6.2
>
> Attachments: SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9299-1.patch
>
>

[jira] [Comment Edited] (SOLR-9252) Feature selection and logistic regression on text

2016-11-30 Thread Jeroen Steggink (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708840#comment-15708840
 ] 

Jeroen Steggink edited comment on SOLR-9252 at 11/30/16 3:11 PM:
-

This would be great, as the regularization makes the training way more useful.


was (Author: jeroens):
This would be great, as the regularization makes this the training way more 
useful.

> Feature selection and logistic regression on text
> -
>
> Key: SOLR-9252
> URL: https://issues.apache.org/jira/browse/SOLR-9252
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud, SolrJ
>Reporter: Cao Manh Dat
>Assignee: Joel Bernstein
>  Labels: Streaming
> Fix For: 6.2
>
> Attachments: SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, 
> SOLR-9252.patch, SOLR-9252.patch, SOLR-9252.patch, SOLR-9299-1.patch
>
>

[jira] [Updated] (SOLR-9815) Verbose Garbage Collection logging is on by default

2016-11-30 Thread Gethin James (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gethin James updated SOLR-9815:
---
Priority: Minor  (was: Major)

> Verbose Garbage Collection logging is on by default
> ---
>
> Key: SOLR-9815
> URL: https://issues.apache.org/jira/browse/SOLR-9815
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 6.3
>Reporter: Gethin James
>Priority: Minor
>
> There have been some excellent logging fixes in 6.3 
> (http://www.cominvent.com/2016/11/07/solr-logging-just-got-better/).  However 
> now, by default, Solr is logging a great deal of garbage collection 
> information.
> This logging seems excessive; can we make the default logging non-verbose?
> For linux/mac setting GC_LOG_OPTS="" in solr.in.sh seems to work around the 
> issue, but looking at solr.cmd I don't think that will work for windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9815) Verbose Garbage Collection logging is on by default

2016-11-30 Thread Gethin James (JIRA)
Gethin James created SOLR-9815:
--

 Summary: Verbose Garbage Collection logging is on by default
 Key: SOLR-9815
 URL: https://issues.apache.org/jira/browse/SOLR-9815
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Affects Versions: 6.3
Reporter: Gethin James


There have been some excellent logging fixes in 6.3 
(http://www.cominvent.com/2016/11/07/solr-logging-just-got-better/).  However 
now, by default, Solr is logging a great deal of garbage collection information.

This logging seems excessive; can we make the default logging non-verbose?

For linux/mac setting GC_LOG_OPTS="" in solr.in.sh seems to work around the 
issue, but looking at solr.cmd I don't think that will work for windows.
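For reference, the Linux/macOS workaround mentioned above is a one-line change to solr.in.sh (the solr.cmd/Windows equivalent remains the open question):

```shell
# solr.in.sh -- disable verbose GC logging (workaround from this thread)
GC_LOG_OPTS=""
```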






[jira] [Commented] (LUCENE-7577) PrefixCodedTerms should cache its hash code

2016-11-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708807#comment-15708807
 ] 

Robert Muir commented on LUCENE-7577:
-

Well, I'm not so strongly opinionated on it that I'd block the change; I just 
think it's important to look at the tradeoffs. This class is a part of 
IndexWriter, and IndexWriter is complicated.

I don't think it's good to let some esoteric queries make the index package even 
more complicated than it needs to be.

I already didn't like that TermsQuery & co. used it from the beginning; that 
change is really unfortunate, since it means PrefixCodedTerms and RAMFile both 
have to have hashCode/equals at all: just to support these queries!

As for adding stuff like caching the hash code: it's not that I'm against that 
one little change, especially since the class is immutable, but it's just 
continuing in the same direction.

It's also the case that Lucene queries have historically had a ton of hashCode 
and equals bugs, and adding optimizations on top of that, man, I honestly think 
that isn't a good idea and shouldn't be done at all, anywhere. Lucene isn't tall 
enough to ride; it shouldn't have optimizations like this unless something is 
changed to show it can have correctness first. But adding those optimizations to 
a piece that IndexWriter uses for low-level stuff? IMO that's even more 
dangerous, especially around equals/hashCode, which could easily "slip in" to 
IndexWriter without much notice due to how they work in Java.


> PrefixCodedTerms should cache its hash code
> ---
>
> Key: LUCENE-7577
> URL: https://issues.apache.org/jira/browse/LUCENE-7577
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7577.patch
>
>
> We have several queries that cache the hashcode of a PrefixCodedTerms 
> instance on top of it, so we could simplify by moving the caching to 
> PrefixCodedTerms directly.






[jira] [Commented] (SOLR-9811) Make it easier to manually execute overseer commands

2016-11-30 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708803#comment-15708803
 ] 

Shalin Shekhar Mangar commented on SOLR-9811:
-

Scott, have you tried issuing a REQUESTRECOVERY core admin API call in such a 
case? Also, any idea about the root cause?

> Make it easier to manually execute overseer commands
> 
>
> Key: SOLR-9811
> URL: https://issues.apache.org/jira/browse/SOLR-9811
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>
> Sometimes solrcloud will get into a bad state w.r.t. election or recovery and 
> it would be useful to have the ability to manually publish a node as active 
> or leader. This would be an alternative to some current ops practices of 
> restarting services, which may take a while to complete given many cores 
> hosted on a single server.
> This is an expert operator technique and readers should be made aware of 
> this, a.k.a. the "I don't care, just get it running" approach.






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 215 - Still unstable

2016-11-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/215/

6 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard2
at 
__randomizedtesting.SeedInfo.seed([C646D558AEA7863D:38298DFB6C87A52C]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation(CdcrReplicationDistributedZkTest.java:377)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 981 - Unstable!

2016-11-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/981/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.test

Error Message:
QUERY FAILED: 
xpath=/response/arr[@name='fields']/lst/str[@name='name'][.='newTestFieldInt441']
  request=/schema/fields?wt=xml  response=  

[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting

2016-11-30 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708742#comment-15708742
 ] 

Shalin Shekhar Mangar commented on SOLR-4735:
-

I built Solr from the feature/metric branch and tried it out. I have a few 
questions/comments:
# I expected that the SolrJmxReporter would be enabled by default but it is 
not. Should it be? Eventually we should get rid of our current JMX integration 
(maybe in 7.0?) so it makes sense to have the alternative enabled by default.
# How, if at all, does the SolrJmxReporter work with the {{}} tag in 
solrconfig.xml? Does that get deprecated eventually?
# There is no test solrconfig.xml that has a reporter section in it. There 
should be at least one with the jmx reporter configured that we test, instead of 
just relying on code to create a new metric manager and add a reporter to it.
# If the default jmx reporter is not enabled by default, the example 
solrconfig.xml should have a sample reporter section, even if it is commented out.
# The metric reporter should be configurable via the Config API
# Do we want to support Graphite or Ganglia reporters as well?

The last two can be worked upon in separate issues.

> Improve Solr metrics reporting
> --
>
> Key: SOLR-4735
> URL: https://issues.apache.org/jira/browse/SOLR-4735
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, 
> SOLR-4735.patch
>
>
> Following on from a discussion on the mailing list:
> http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+
> It would be good to make Solr play more nicely with existing devops 
> monitoring systems, such as Graphite or Ganglia.  Stats monitoring at the 
> moment is poll-only, either via JMX or through the admin stats page.  I'd 
> like to refactor things a bit to make this more pluggable.
> This patch is a start.  It adds a new interface, InstrumentedBean, which 
> extends SolrInfoMBean to return a 
> [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a 
> couple of MetricReporters (which basically just duplicate the JMX and admin 
> page reporting that's there at the moment, but which should be more 
> extensible).  The patch includes a change to RequestHandlerBase showing how 
> this could work.  The idea would be to eventually replace the getStatistics() 
> call on SolrInfoMBean with this instead.
> The next step would be to allow more MetricReporters to be defined in 
> solrconfig.xml.  The Metrics library comes with ganglia and graphite 
> reporting modules, and we can add contrib plugins for both of those.
> There's some more general cleanup that could be done around SolrInfoMBean 
> (we've got two plugin handlers at /mbeans and /plugins that basically do the 
> same thing, and the beans themselves have some weirdly inconsistent data on 
> them - getVersion() returns different things for different impls, and 
> getSource() seems pretty useless), but maybe that's for another issue.






[jira] [Commented] (LUCENE-7577) PrefixCodedTerms should cache its hash code

2016-11-30 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708718#comment-15708718
 ] 

Adrien Grand commented on LUCENE-7577:
--

Centralizing the hashcode caching in one single place sounded like a good way 
to avoid caching bugs in the various places that do that. But I also see how 
IndexWriter should remain the main use-case for PrefixCodedTerms so I don't 
mind leaving things as-is if you don't like this change.
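The idiom under discussion, an immutable object lazily caching its own hash code (the same trick java.lang.String uses), can be sketched as below. The class name and fields are illustrative, not PrefixCodedTerms' actual layout.

```java
import java.util.Arrays;

// Immutable value class that computes its hash code once, on first use.
final class TermBlock {
    private final byte[] encoded;  // never mutated after construction
    private int hash;              // 0 means "not computed yet"

    TermBlock(byte[] encoded) {
        this.encoded = encoded.clone();  // defensive copy keeps the class immutable
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof TermBlock && Arrays.equals(encoded, ((TermBlock) o).encoded);
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {  // benign race: any thread recomputing gets the same value
            h = Arrays.hashCode(encoded);
            if (h == 0) h = 1;  // avoid recomputing forever when the real hash is 0
            hash = h;
        }
        return h;
    }
}
```

Because the backing data is immutable, the unsynchronized cache is safe: at worst two threads compute the same value and both store it.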

> PrefixCodedTerms should cache its hash code
> ---
>
> Key: LUCENE-7577
> URL: https://issues.apache.org/jira/browse/LUCENE-7577
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7577.patch
>
>
> We have several queries that cache the hashcode of a PrefixCodedTerms 
> instance on top of it, so we could simplify by moving the caching to 
> PrefixCodedTerms directly.






Re: Welcome Ishan Chattopadhyaya as Lucene/Solr committer

2016-11-30 Thread Ramkumar R. Aiyengar
Welcome Ishan :)

On Tue, Nov 29, 2016 at 6:17 PM, Mark Miller  wrote:

> I'm pleased to announce that Ishan Chattopadhyaya has accepted the PMC's
> invitation to become a committer.
>
> Ishan, it's tradition that you introduce yourself with a brief bio /
> origin story, explaining how you arrived here.
>
> Your handle "ishan" has already been added to the "lucene" LDAP group, so
> you now have commit privileges.
>
> Please celebrate this rite of passage, and confirm that the right
> karma has in fact been enabled, by embarking on the challenge of adding
> yourself to the committers section of the Who We Are page on the
> website: http://lucene.apache.org/whoweare.html (use the ASF CMS
> bookmarklet
> at the bottom of the page here: https://cms.apache.org/#bookmark -
> more info here http://www.apache.org/dev/cms.html).
>
> Congratulations and welcome!
> --
> - Mark
> about.me/markrmiller
>



-- 
Not sent from my iPhone or my Blackberry or anyone else's


[jira] [Resolved] (SOLR-9814) Solr 6.2.1 is starting very slow

2016-11-30 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9814.
-
Resolution: Invalid

Please ask usage questions on the solr-user mailing list. Jira issues are 
reserved for actual Solr bugs. You can re-open this issue once an actual bug is 
identified in Solr after discussions on the user mailing list.

> Solr 6.2.1 is starting very slow
> 
>
> Key: SOLR-9814
> URL: https://issues.apache.org/jira/browse/SOLR-9814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging, Server
>Affects Versions: 6.2.1
> Environment: Linux version 3.10.0-327.el7.x86_64 
> (buil...@kbuilder.dev.centos.org) (gcc version 4.8.3 20140911 (Red Hat 
> 4.8.3-9) (GCC) )
>Reporter: Monti Chandra
>Priority: Blocker
>
> Hello team,
> I am working with Solr 6.2.1. It worked fine for the first 20 days, and now 
> the server restarts very slowly (15-20 min).
> The hardware specs of my system are below:
> Linux version 3.10.0-327.el7.x86_64 (buil...@kbuilder.dev.centos.org) (gcc 
> version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) )
> kernel-3.10.0-327.el7.x86_64
> It works fine when I move the Solr directory to another server with the same 
> configuration. Is there any hardware, OS, or kernel-level issue caused by 
> running Solr?
> Please help; I am stuck.





