[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+155) - Build # 18959 - Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18959/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([5E0D79DEB3B371E5:497BB3F9B5679DD8]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Updated] (SOLR-9835) Create another replication mode for SolrCloud

2017-02-13 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9835:
---
Attachment: SOLR-9835.patch

Updated patch, resolving the potential problem with SOLR-5944. 
In this patch, updates are sorted before being applied when a replica becomes 
the new leader.
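
A minimal, illustrative sketch (not from the patch) of the ordering described above: buffered updates are sorted by their version before replay, so a replica that becomes leader applies them in version order. The {{UpdateEntry}} class and its fields are hypothetical.
{code}
import java.util.Comparator;
import java.util.List;

// Hypothetical illustration: order buffered updates by version before replay.
final class UpdateEntry {
  final long version;   // version assigned by the previous leader
  final Object command; // the buffered add/delete command (opaque here)
  UpdateEntry(long version, Object command) {
    this.version = version;
    this.command = command;
  }
}

final class ReplayOrdering {
  /** Sort buffered updates so a new leader replays them in version order. */
  static void sortForReplay(List<UpdateEntry> buffered) {
    buffered.sort(Comparator.comparingLong(e -> e.version));
  }
}
{code}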

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas 
> start in the same initial state, and each input is distributed across all 
> replicas so that they all end up in the same next state. 
> But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its downtime, 
> it has to download the entire index from its leader.
> So we propose another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to its IndexWriter; the other replicas just store 
> the update in their UpdateLog (like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs commits and applies 
> updates.
> - Very fast recovery: replicas only have to download the missing segments.
> From a CAP point of view, this ticket tries to promise end users a 
> distributed system with:
> - Partition tolerance
> - Weak consistency for normal queries: the cluster can serve stale data. This 
> happens between the leader finishing a commit and a slave fetching the latest 
> segments; that window is at most {{pollInterval + time to fetch the latest segments}}.
> - Consistency for RTG: just like the original SolrCloud mode.
> - Weak availability: just like the original SolrCloud mode. If a leader goes 
> down, clients must wait until a new leader is elected.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}:
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE=newCollection=2=1=1
> {code}
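
A hedged sketch of issuing such a CREATE call from plain Java. Apart from {{liveReplicas=1}}, the parameter names used below ({{name}}, {{numShards}}, {{replicationFactor}}) are the standard Collections API ones, assumed here rather than taken from this message.
{code}
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class CreateCollectionExample {
  public static void main(String[] args) throws Exception {
    // Standard Collections API CREATE call plus the proposed liveReplicas flag.
    String url = "http://localhost:8983/solr/admin/collections"
        + "?action=CREATE&name=newCollection&numShards=2"
        + "&replicationFactor=1&liveReplicas=1";
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    conn.setRequestMethod("GET");
    try (Scanner sc = new Scanner(conn.getInputStream(), "UTF-8")) {
      sc.useDelimiter("\\A");
      System.out.println(sc.hasNext() ? sc.next() : ""); // print the API response
    }
  }
}
{code}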



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+155) - Build # 2850 - Still Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2850/
Java: 32bit/jdk-9-ea+155 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([C6A57A2819619522:D1D3B00F1FB5791F]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Issue Comment Deleted] (SOLR-9530) Add an Atomic Update Processor

2017-02-13 Thread AMRIT SARKAR (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AMRIT SARKAR updated SOLR-9530:
---
Comment: was deleted

(was: Files in the patch:
1. AtomicUpdateProcessorFactory.java
2. AtomicUpdateProcessorFactoryTest.java
3. solrconfig-atomic-update-processor.xml)

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.
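
For readers unfamiliar with the target format, a hedged SolrJ sketch of what an "atomic update document" looks like, i.e. the shape the proposed processor would produce from a plain key-value document (the field names here are made up for illustration):
{code}
import java.util.Collections;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateDocExample {
  public static SolrInputDocument toAtomic() {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");                                        // key field stays as-is
    doc.addField("skills", Collections.singletonMap("add", "java"));    // atomic "add"
    doc.addField("title", Collections.singletonMap("set", "engineer")); // atomic "set"
    return doc;
  }
}
{code}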



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9530) Add an Atomic Update Processor

2017-02-13 Thread AMRIT SARKAR (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865096#comment-15865096
 ] 

AMRIT SARKAR edited comment on SOLR-9530 at 2/14/17 5:36 AM:
-

Hi Varun, Alexandre, Noble,

SOLR-9530.patch uploaded for a new update processor, AtomicUpdateProcessor, 
which accepts a conventional key-value update document and converts it into an 
atomic update document for the fields specified in the processor definition. 
Fields not specified in the processor parameters are updated in the 
conventional manner.

Files specified in the patch:
1. AtomicUpdateProcessorFactory.java
2. AtomicUpdateProcessorFactoryTest.java (test class for 
AtomicUpdateProcessorFactory)
3. solrconfig-atomic-update-processor.xml (sample solrconfig for 
AtomicUpdateProcessorFactoryTest test cases)

As Alexandre mentioned, it works as a standalone processor that does the 
conversion; the updated document is then passed on to the next processor 
defined.
Noble, this patch currently doesn't support accepting request params, as it is 
difficult to assign the atomic operation to the respective field.

I would request you to review the patch; your feedback will be deeply 
appreciated.

Thanks
Amrit Sarkar
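
A minimal sketch of the conversion described above, assuming an UpdateRequestProcessor subclass; the configured field-to-operation map and its handling here are illustrative, not the actual patch:
{code}
import java.io.IOException;
import java.util.Collections;
import java.util.Map;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// Illustrative only: wraps the configured fields' values in atomic-update maps.
class AtomicConversionProcessor extends UpdateRequestProcessor {
  private final Map<String, String> fieldToOp; // e.g. {"skills" -> "add"}

  AtomicConversionProcessor(Map<String, String> fieldToOp, UpdateRequestProcessor next) {
    super(next);
    this.fieldToOp = fieldToOp;
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    SolrInputDocument doc = cmd.getSolrInputDocument();
    for (Map.Entry<String, String> e : fieldToOp.entrySet()) {
      Object value = doc.getFieldValue(e.getKey());
      if (value != null) {
        // Replace the plain value with {op: value}, e.g. {"add": "java"}.
        doc.setField(e.getKey(), Collections.singletonMap(e.getValue(), value));
      }
    }
    super.processAdd(cmd); // pass the converted document to the next processor
  }
}
{code}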


was (Author: sarkaramr...@gmail.com):
Hi Varun, Alexandre, Noble,

SOLR-9530.patch uploaded for a new update processor, AtomicUpdateProcessor, 
which accepts a conventional key-value update document and converts it into an 
atomic update document for the fields specified in the processor definition. 
Fields not specified in the processor parameters are updated in the 
conventional manner.

Files specified in the patch:
1. AtomicUpdateProcessorFactory
2. AtomicUpdateProcessorFactoryTest (test class for 
AtomicUpdateProcessorFactory)
3. solrconfig-atomic-update-processor.xml (sample solrconfig for 
AtomicUpdateProcessorFactoryTest test cases)

As Alexandre mentioned, it works as a standalone processor that does the 
conversion; the updated document is then passed on to the next processor 
defined.
Noble, this patch currently doesn't support accepting request params, as it is 
difficult to assign the atomic operation to the respective field.

I would request you to review the patch; your feedback will be deeply 
appreciated.

Thanks
Amrit Sarkar

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9530) Add an Atomic Update Processor

2017-02-13 Thread AMRIT SARKAR (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AMRIT SARKAR updated SOLR-9530:
---
Attachment: SOLR-9530.patch

Files in the patch:
1. AtomicUpdateProcessorFactory.java
2. AtomicUpdateProcessorFactoryTest.java
3. solrconfig-atomic-update-processor.xml

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9530) Add an Atomic Update Processor

2017-02-13 Thread AMRIT SARKAR (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865096#comment-15865096
 ] 

AMRIT SARKAR commented on SOLR-9530:


Hi Varun, Alexandre, Noble,

SOLR-9530.patch uploaded for a new update processor, AtomicUpdateProcessor, 
which accepts a conventional key-value update document and converts it into an 
atomic update document for the fields specified in the processor definition. 
Fields not specified in the processor parameters are updated in the 
conventional manner.

Files specified in the patch:
1. AtomicUpdateProcessorFactory
2. AtomicUpdateProcessorFactoryTest (test class for 
AtomicUpdateProcessorFactory)
3. solrconfig-atomic-update-processor.xml (sample solrconfig for 
AtomicUpdateProcessorFactoryTest test cases)

As Alexandre mentioned, it works as a standalone processor that does the 
conversion; the updated document is then passed on to the next processor 
defined.
Noble, this patch currently doesn't support accepting request params, as it is 
difficult to assign the atomic operation to the respective field.

I would request you to review the patch; your feedback will be deeply 
appreciated.

Thanks
Amrit Sarkar

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7692) PatternReplaceCharFilterFactory should implement MultiTermAware

2017-02-13 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865082#comment-15865082
 ] 

Erick Erickson commented on LUCENE-7692:


The basic rule is that as long as the filter doesn't output more than one 
token per input token, making it MultiTermAware is appropriate.

There was never an attempt to do an exhaustive analysis of _all_ the filters 
that qualified. Frankly, my motivation was that explaining over and over again 
that "wildcard searches are case sensitive because" got really tiring, so 
just fixing that use-case was enough to get us going; the rest was an added 
benefit ;)

Since then more have been added, but mostly only when someone was motivated 
to add another, so please feel free.
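
For reference, in 6.x making an analysis factory multi-term aware is just a matter of implementing MultiTermAwareComponent. A hedged sketch of the shape of such a change (not the actual LUCENE-7692 patch; the factory name and its no-op create() are placeholders):
{code}
import java.io.Reader;
import java.util.Map;
import org.apache.lucene.analysis.util.AbstractAnalysisFactory;
import org.apache.lucene.analysis.util.CharFilterFactory;
import org.apache.lucene.analysis.util.MultiTermAwareComponent;

// Sketch: a char filter factory that opts into multi-term (wildcard/prefix) analysis.
public class MyPatternCharFilterFactory extends CharFilterFactory
    implements MultiTermAwareComponent {

  public MyPatternCharFilterFactory(Map<String, String> args) {
    super(args);
  }

  @Override
  public Reader create(Reader input) {
    return input; // a real factory would wrap the reader with its char filter here
  }

  @Override
  public AbstractAnalysisFactory getMultiTermComponent() {
    // Safe because this filter emits at most one output per input token.
    return this;
  }
}
{code}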

> PatternReplaceCharFilterFactory should implement MultiTermAware
> ---
>
> Key: LUCENE-7692
> URL: https://issues.apache.org/jira/browse/LUCENE-7692
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
>
> The multi-term aware marker API is useful to know which analysis components 
> to apply when analyzing prefix or wildcard queries. I think 
> PatternReplaceCharFilterFactory qualifies?
> For the record, we have MappingCharFilterFactory that does a similar job 
> (except that it takes an explicit map of replacements  rather than regular 
> expressions) and implements MultiTermAware.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1130 - Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1130/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testErrorChecks

Error Message:
Error from server at https://127.0.0.1:45187/solr: deletealias the collection 
time out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:45187/solr: deletealias the collection time 
out:180s
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1358)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:177)
at 
org.apache.solr.cloud.AliasIntegrationTest.testErrorChecks(AliasIntegrationTest.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9759) Admin UI should post streaming expressions

2017-02-13 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864964#comment-15864964
 ] 

Gus Heck commented on SOLR-9759:


Good point Joel, though this is not really supported in the Admin UI.

> Admin UI should post streaming expressions
> --
>
> Key: SOLR-9759
> URL: https://issues.apache.org/jira/browse/SOLR-9759
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2.1
>Reporter: Gus Heck
>
> Haven't had the chance to test this in 6.3, but in 6.2.1 I just ran into a 
> "request entity too large" error when I pasted an expression into the admin 
> UI to begin debugging it... 
> Furthermore, the UI gives no indication of any error at all, leaving one to 
> sit waiting for the response. The Firefox JavaScript console shows a 413 
> response and this:
> {code}
> 11:01:11.095 Error: JSON.parse: unexpected character at line 1 column 1 of 
> the JSON data
> $scope.doStream/<@http://localhost:8984/solr/js/angular/controllers/stream.js:48:24
> v/http://localhost:8984/solr/libs/angular-resource.min.js:33:133
> processQueue@http://localhost:8984/solr/libs/angular.js:13193:27
> scheduleProcessQueue/<@http://localhost:8984/solr/libs/angular.js:13209:27
> $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14406:16
> $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14222:15
> $RootScopeProvider/this.$gethttp://localhost:8984/solr/libs/angular.js:14511:13
> done@http://localhost:8984/solr/libs/angular.js:9669:36
> completeRequest@http://localhost:8984/solr/libs/angular.js:9859:7
> requestLoaded@http://localhost:8984/solr/libs/angular.js:9800:9
> 1angular.js:11617:18
> consoleLog/<()angular.js:11617
> $ExceptionHandlerProvider/this.$get processQueue()angular.js:13201
> scheduleProcessQueue/<()angular.js:13209
> $RootScopeProvider/this.$get $RootScopeProvider/this.$get $RootScopeProvider/this.$get done()angular.js:9669
> completeRequest()angular.js:9859
> requestLoaded()angular.js:9800
> {code}
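
For anyone hitting the same 413, sending the expression as a POST body (rather than on the query string) avoids the size limit. A hedged plain-Java sketch against the /stream handler; the collection name and expression below are placeholders:
{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Scanner;

public class PostStreamExpression {
  public static void main(String[] args) throws Exception {
    String expr = "search(myCollection, q=\"*:*\", fl=\"id\", sort=\"id asc\")";
    String body = "expr=" + URLEncoder.encode(expr, "UTF-8");

    HttpURLConnection conn = (HttpURLConnection)
        new URL("http://localhost:8984/solr/myCollection/stream").openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes("UTF-8")); // expression travels in the body, not the URL
    }
    try (Scanner sc = new Scanner(conn.getInputStream(), "UTF-8")) {
      sc.useDelimiter("\\A");
      System.out.println(sc.hasNext() ? sc.next() : "");
    }
  }
}
{code}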



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 723 - Unstable

2017-02-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/723/

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([8B146D5EBA420611:C36119EABC712984]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:522)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11480 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 

[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+155) - Build # 2849 - Still Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2849/
Java: 64bit/jdk-9-ea+155 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([93D4B65B57503374:84A27C7C5184DF49]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Updated] (SOLR-10132) Support facet.matches to cull facets returned with a regex

2017-02-13 Thread Gus Heck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-10132:

Attachment: SOLR-10132.patch

Initial patch with some tests, but it's not ready yet since I still need to figure 
out what to do with the check for numeric facets (see the question in a comment in 
the patch).

> Support facet.matches to cull facets returned with a regex
> --
>
> Key: SOLR-10132
> URL: https://issues.apache.org/jira/browse/SOLR-10132
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 6.4.1
>Reporter: Gus Heck
> Attachments: SOLR-10132.patch
>
>
> I recently ran into a case where I really wanted to return only the next 
> level of a hierarchical facet. While I was able to do that with a 
> coordinated set of dynamic fields, it occurred to me that this would have 
> been much, much easier if I could have simply used PathHierarchyTokenizer and 
> written
> facet.matches="/my/current/prefix/[^/]+$"
> thereby limiting the returned facets to the next level down, without returning 
> the additional N levels I didn't (yet) want to display (numbering in the 
> thousands near the top of the tree). I suspect there are other good use 
> cases, and the patch seemed relatively tractable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10132) Support facet.matches to cull facets returned with a regex

2017-02-13 Thread Gus Heck (JIRA)
Gus Heck created SOLR-10132:
---

 Summary: Support facet.matches to cull facets returned with a regex
 Key: SOLR-10132
 URL: https://issues.apache.org/jira/browse/SOLR-10132
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: faceting
Affects Versions: 6.4.1
Reporter: Gus Heck


I recently ran into a case where I really wanted to return only the next level 
of a hierarchical facet. While I was able to do that with a coordinated set 
of dynamic fields, it occurred to me that this would have been much, much easier 
if I could have simply used PathHierarchyTokenizer and written

facet.matches="/my/current/prefix/[^/]+$"

thereby limiting the returned facets to the next level down, without returning the 
additional N levels I didn't (yet) want to display (numbering in the thousands 
near the top of the tree). I suspect there are other good use cases, and the 
patch seemed relatively tractable.
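
A hedged SolrJ sketch of how the proposed parameter would be used. The {{facet.matches}} name comes from this issue's summary; it is not in a released version, and the collection and field names below are placeholders.
{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FacetMatchesExample {
  public static void main(String[] args) throws Exception {
    SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/myCollection").build();
    SolrQuery q = new SolrQuery("*:*");
    q.setFacet(true);
    q.addFacetField("path");
    // Proposed in this issue: only return facet values matching the regex,
    // i.e. the next level under /my/current/prefix/.
    q.set("facet.matches", "/my/current/prefix/[^/]+$");
    QueryResponse rsp = client.query(q);
    System.out.println(rsp.getFacetField("path").getValues());
    client.close();
  }
}
{code}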



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9963) Add Calcite Avatica handler to Solr

2017-02-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864911#comment-15864911
 ] 

Joel Bernstein commented on SOLR-9963:
--

I was reading through the patch. I'm not sure I fully understand it yet. It 
appears that the response is a String. I suspect there is a way we can make the 
response stream.

> Add Calcite Avatica handler to Solr
> ---
>
> Key: SOLR-9963
> URL: https://issues.apache.org/jira/browse/SOLR-9963
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Reporter: Kevin Risden
>Assignee: Kevin Risden
> Attachments: SOLR-9963.patch, SOLR-9963.patch, SOLR-9963.patch
>
>
> Calcite Avatica has an http endpoint which allows Avatica drivers to connect 
> to the server. This can be wired in as a handler to Solr. This would allow 
> Solr to be used by any Avatica JDBC/ODBC driver. This depends on the Calcite 
> work from SOLR-8593.
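
For a sense of what this enables, a hedged sketch of connecting with the Avatica thin JDBC driver once such a handler exists. The Solr endpoint path below is an assumption; the {{jdbc:avatica:remote}} URL form and the {{serialization}} option are standard Avatica.
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AvaticaJdbcExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical Solr-hosted Avatica endpoint; the /solr/avatica path is assumed.
    String url = "jdbc:avatica:remote:url=http://localhost:8983/solr/avatica;serialization=json";
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT fielda, fieldb FROM myCollection LIMIT 10")) {
      while (rs.next()) {
        System.out.println(rs.getString("fielda") + " " + rs.getString("fieldb"));
      }
    }
  }
}
{code}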



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-5.5 - Build # 26 - Still Failing

2017-02-13 Thread Steve Rowe
This failure reproduces for me on branch_5_5, but not on master or branch_6x.

--
Steve
www.lucidworks.com

> On Feb 13, 2017, at 9:03 PM, Apache Jenkins Server 
>  wrote:
> 
> Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5/26/
> 
> 1 tests failed.
> FAILED:  org.apache.lucene.index.TestAllFilesCheckIndexHeader.test
> 
> Error Message:
> file "_h_Lucene50_0.doc" was already written to
> 
> Stack Trace:
> java.io.IOException: file "_h_Lucene50_0.doc" was already written to
>   at 
> __randomizedtesting.SeedInfo.seed([4BC507D87C083D63:C3913802D2F4509B]:0)
>   at 
> org.apache.lucene.store.MockDirectoryWrapper.createOutput(MockDirectoryWrapper.java:558)
>   at 
> org.apache.lucene.index.TestAllFilesCheckIndexHeader.checkOneFile(TestAllFilesCheckIndexHeader.java:111)
>   at 
> org.apache.lucene.index.TestAllFilesCheckIndexHeader.checkIndexHeader(TestAllFilesCheckIndexHeader.java:87)
>   at 
> org.apache.lucene.index.TestAllFilesCheckIndexHeader.test(TestAllFilesCheckIndexHeader.java:80)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>   at java.lang.Thread.run(Thread.java:745)
> 
> 
> 
> 
> Build Log:
> [...truncated 723 lines...]
>   [junit4] Suite: org.apache.lucene.index.TestAllFilesCheckIndexHeader
>   [junit4]   2> NOTE: reproduce 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+155) - Build # 18957 - Still Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18957/
Java: 32bit/jdk-9-ea+155 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([D90B4900F23D75A9:CE7D8327F4E4]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[JENKINS] Lucene-Solr-Tests-5.5 - Build # 26 - Still Failing

2017-02-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5/26/

1 tests failed.
FAILED:  org.apache.lucene.index.TestAllFilesCheckIndexHeader.test

Error Message:
file "_h_Lucene50_0.doc" was already written to

Stack Trace:
java.io.IOException: file "_h_Lucene50_0.doc" was already written to
at 
__randomizedtesting.SeedInfo.seed([4BC507D87C083D63:C3913802D2F4509B]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.createOutput(MockDirectoryWrapper.java:558)
at 
org.apache.lucene.index.TestAllFilesCheckIndexHeader.checkOneFile(TestAllFilesCheckIndexHeader.java:111)
at 
org.apache.lucene.index.TestAllFilesCheckIndexHeader.checkIndexHeader(TestAllFilesCheckIndexHeader.java:87)
at 
org.apache.lucene.index.TestAllFilesCheckIndexHeader.test(TestAllFilesCheckIndexHeader.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 723 lines...]
   [junit4] Suite: org.apache.lucene.index.TestAllFilesCheckIndexHeader
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestAllFilesCheckIndexHeader -Dtests.method=test 
-Dtests.seed=4BC507D87C083D63 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=fi -Dtests.timezone=Etc/GMT+4 -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.82s J2 | TestAllFilesCheckIndexHeader.test <<<
  

[JENKINS] Lucene-Solr-Tests-master - Build # 1663 - Unstable

2017-02-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1663/

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([43D04B7CD7002ED9:CB8474A679FC4321]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3830 - Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3830/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Collection not found: withShardField

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: withShardField
at 
__randomizedtesting.SeedInfo.seed([3332E6EB014EF6B0:66620E79ADB73940]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1376)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1072)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:232)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

[jira] [Updated] (SOLR-10130) Serious performance degradation in Solr 6.4.1 due to the new metrics collection

2017-02-13 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-10130:
-
Attachment: SOLR-10130.patch

This patch turns off Directory and Index metrics by default, and adds config 
knobs to selectively turn them on in {{solrconfig.xml}} (the defaults are all false 
now, so this section is optional):
{code}

...
  
...
...

  false
  false
  false
  false
  
{code}

> Serious performance degradation in Solr 6.4.1 due to the new metrics 
> collection
> ---
>
> Key: SOLR-10130
> URL: https://issues.apache.org/jira/browse/SOLR-10130
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
> Environment: Centos 7, OpenJDK 1.8.0 update 111
>Reporter: Ere Maijala
>Assignee: Andrzej Bialecki 
>Priority: Blocker
>  Labels: perfomance
> Attachments: SOLR-10130.patch, solr-8983-console-f1.log
>
>
> We've stumbled on serious performance issues after upgrading to Solr 6.4.1. 
> Looks like the new metrics collection system in MetricsDirectoryFactory is 
> causing a major slowdown. This happens with an index configuration that, as 
> far as I can see, has no metrics specific configuration and uses 
> luceneMatchVersion 5.5.0. In practice a moderate load will completely bog 
> down the server, with Solr threads constantly using up all CPU capacity (600% 
> on a 6-core machine) under a load where we normally see an average load 
> of < 50%.
> I took stack traces (I'll attach them) and noticed that the threads are 
> spending time in com.codahale.metrics.Meter.mark. I tested building Solr 
> 6.4.1 with the metrics collection disabled in MetricsDirectoryFactory getByte 
> and getBytes methods and was unable to reproduce the issue.
> As far as I can see there are several issues:
> 1. Collecting metrics on every single byte read is slow.
> 2. Having it enabled by default is not a good idea.
> 3. The comment "enable coarse-grained metrics by default" at 
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L104
>  implies that only coarse-grained metrics should be enabled by default, which 
> contradicts collecting metrics on every single byte read.
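
To make the cost concrete, below is a minimal standalone sketch (plain Java, not 
Solr or MetricsDirectoryFactory code; it assumes only the Dropwizard 
metrics-core dependency, and the class name is made up for illustration) that 
contrasts one {{Meter.mark()}} per byte with one {{Meter.mark(n)}} per block, 
i.e. the fine-grained vs. coarse-grained metering discussed above:

{code}
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

// Illustration only: not a rigorous benchmark (no JMH, no warmup), just the shape
// of the overhead difference between per-byte and per-block metering.
public class PerByteMeterCost {
  public static void main(String[] args) {
    MetricRegistry registry = new MetricRegistry();
    Meter perByte = registry.meter("bytes.perByte");
    Meter perBlock = registry.meter("bytes.perBlock");

    final int blockSize = 8192;
    final int blocks = 10_000;

    // Fine-grained: one mark() per byte read, as described in the issue.
    long t0 = System.nanoTime();
    for (int i = 0; i < blocks; i++) {
      for (int j = 0; j < blockSize; j++) {
        perByte.mark();
      }
    }
    long fineNanos = System.nanoTime() - t0;

    // Coarse-grained: one mark(n) per block read.
    long t1 = System.nanoTime();
    for (int i = 0; i < blocks; i++) {
      perBlock.mark(blockSize);
    }
    long coarseNanos = System.nanoTime() - t1;

    System.out.printf("per-byte marks: %d ms, per-block marks: %d ms%n",
        fineNanos / 1_000_000, coarseNanos / 1_000_000);
  }
}
{code}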



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_121) - Build # 2848 - Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2848/
Java: 32bit/jdk1.8.0_121 -client -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=9503, name=searcherExecutor-4667-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=9503, name=searcherExecutor-4667-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([A5A729BFC5AC5F2A]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=9503, name=searcherExecutor-4667-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=9503, name=searcherExecutor-4667-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([A5A729BFC5AC5F2A]:0)


FAILED:  org.apache.solr.core.TestLazyCores.testNoCommit

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([A5A729BFC5AC5F2A:7AC7886E0E8B3C8F]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:918)
at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:794)
at 
org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:776)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 

[jira] [Commented] (SOLR-10121) BlockCache corruption with high concurrency

2017-02-13 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864590#comment-15864590
 ] 

Yonik Seeley commented on SOLR-10121:
-

Thanks for the extra info - running the eviction listener in a separate thread 
shouldn't matter for correctness, but may work better the way this BlockCache 
code is written anyway.

I went back and re-tested right before the Caffeine switch (SOLR-7355) and was 
able to reproduce some fails by bumping up the concurrency.
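
For reference, here is a tiny standalone sketch (plain Java using the Caffeine 
library; this is not the Solr BlockCache code, and the class name is invented) 
of the knob that decides where Caffeine runs its removal/eviction listeners, 
which is the "separate thread" question above:

{code}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;

// Illustration of choosing which thread runs the eviction/removal listener.
public class EvictionListenerThread {
  public static void main(String[] args) {
    Cache<Long, byte[]> cache = Caffeine.newBuilder()
        .maximumSize(1_000)
        // The default executor is ForkJoinPool.commonPool(), so listeners run
        // asynchronously on a separate thread; Runnable::run keeps them on the
        // thread that performs the cache maintenance instead.
        .executor(Runnable::run)
        .removalListener((Long key, byte[] value, RemovalCause cause) ->
            System.out.println("evicted block " + key + " (" + cause + ") on "
                + Thread.currentThread().getName()))
        .build();

    for (long i = 0; i < 10_000; i++) {
      cache.put(i, new byte[64]);
    }
    cache.cleanUp(); // flush pending eviction work so the listener fires
  }
}
{code}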

> BlockCache corruption with high concurrency
> ---
>
> Key: SOLR-10121
> URL: https://issues.apache.org/jira/browse/SOLR-10121
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>
> Improving the tests of the BlockCache in SOLR-10116 uncovered a corruption 
> bug (either that or the test is flawed... TBD).
> The failing test is TestBlockCache.testBlockCacheConcurrent()



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7686) NRT suggester should have option to filter out duplicates

2017-02-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864589#comment-15864589
 ] 

Michael McCandless commented on LUCENE-7686:


[~thetaphi] had a good suggestion on the ES issue, to use the FST earlier to 
dedup, instead of doing it at collection time ... I'll explore this.  It should 
make dedup very fast.
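
For context, a minimal plain-Java sketch (not the Lucene suggest API; the names 
are invented for illustration) of the collection-time dedup being compared 
against: hits arrive in descending-score order and only the first occurrence of 
each suggest key is kept. Doing the dedup earlier, inside the FST traversal, 
avoids surfacing the duplicates in the first place:

{code}
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Dedup at collection time: keep the first (highest-scoring) hit per suggest key.
public class CollectTimeDedup {
  static List<String> topUnique(List<String> keysInScoreOrder, int n) {
    Set<String> seen = new LinkedHashSet<>();
    for (String key : keysInScoreOrder) {
      if (seen.add(key) && seen.size() == n) {
        break; // collected n distinct keys
      }
    }
    return new ArrayList<>(seen);
  }

  public static void main(String[] args) {
    List<String> hits = List.of("foo", "foo", "bar", "foo", "baz", "qux");
    System.out.println(topUnique(hits, 3)); // prints [foo, bar, baz]
  }
}
{code}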

> NRT suggester should have option to filter out duplicates
> -
>
> Key: LUCENE-7686
> URL: https://issues.apache.org/jira/browse/LUCENE-7686
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7686.patch, LUCENE-7686.patch
>
>
> Some of the other suggesters have this ability, and it's quite simple to add 
> it to the NRT suggester as long as the thing we are filtering on is the 
> suggest key itself, not e.g. another stored field from the document.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10130) Serious performance degradation in Solr 6.4.1 due to the new metrics collection

2017-02-13 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864585#comment-15864585
 ] 

Andrzej Bialecki  commented on SOLR-10130:
--

bq. Does disabling metrics fix it or do we need to go back to 6.4.0?
Unfortunately no, these metrics are always turned on both in 6.4.0 and in 
6.4.1. I'll upload a patch that disables this by default and allows turning it 
on via a solrconfig.xml knob.

> Serious performance degradation in Solr 6.4.1 due to the new metrics 
> collection
> ---
>
> Key: SOLR-10130
> URL: https://issues.apache.org/jira/browse/SOLR-10130
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
> Environment: Centos 7, OpenJDK 1.8.0 update 111
>Reporter: Ere Maijala
>Assignee: Andrzej Bialecki 
>Priority: Blocker
>  Labels: perfomance
> Attachments: solr-8983-console-f1.log
>
>
> We've stumbled on serious performance issues after upgrading to Solr 6.4.1. 
> Looks like the new metrics collection system in MetricsDirectoryFactory is 
> causing a major slowdown. This happens with an index configuration that, as 
> far as I can see, has no metrics specific configuration and uses 
> luceneMatchVersion 5.5.0. In practice a moderate load will completely bog 
> down the server, with Solr threads constantly using up all CPU capacity (600% 
> on a 6-core machine) under a load where we normally see an average load 
> of < 50%.
> I took stack traces (I'll attach them) and noticed that the threads are 
> spending time in com.codahale.metrics.Meter.mark. I tested building Solr 
> 6.4.1 with the metrics collection disabled in MetricsDirectoryFactory getByte 
> and getBytes methods and was unable to reproduce the issue.
> As far as I can see there are several issues:
> 1. Collecting metrics on every single byte read is slow.
> 2. Having it enabled by default is not a good idea.
> 3. The comment "enable coarse-grained metrics by default" at 
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L104
>  implies that only coarse-grained metrics should be enabled by default, which 
> contradicts collecting metrics on every single byte read.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10130) Serious performance degradation in Solr 6.4.1 due to the new metrics collection

2017-02-13 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  reassigned SOLR-10130:


Assignee: Andrzej Bialecki 

> Serious performance degradation in Solr 6.4.1 due to the new metrics 
> collection
> ---
>
> Key: SOLR-10130
> URL: https://issues.apache.org/jira/browse/SOLR-10130
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
> Environment: Centos 7, OpenJDK 1.8.0 update 111
>Reporter: Ere Maijala
>Assignee: Andrzej Bialecki 
>Priority: Blocker
>  Labels: perfomance
> Attachments: solr-8983-console-f1.log
>
>
> We've stumbled on serious performance issues after upgrading to Solr 6.4.1. 
> Looks like the new metrics collection system in MetricsDirectoryFactory is 
> causing a major slowdown. This happens with an index configuration that, as 
> far as I can see, has no metrics specific configuration and uses 
> luceneMatchVersion 5.5.0. In practice a moderate load will completely bog 
> down the server, with Solr threads constantly using up all CPU capacity (600% 
> on a 6-core machine) under a load where we normally see an average load 
> of < 50%.
> I took stack traces (I'll attach them) and noticed that the threads are 
> spending time in com.codahale.metrics.Meter.mark. I tested building Solr 
> 6.4.1 with the metrics collection disabled in MetricsDirectoryFactory getByte 
> and getBytes methods and was unable to reproduce the issue.
> As far as I can see there are several issues:
> 1. Collecting metrics on every single byte read is slow.
> 2. Having it enabled by default is not a good idea.
> 3. The comment "enable coarse-grained metrics by default" at 
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L104
>  implies that only coarse-grained metrics should be enabled by default, which 
> contradicts collecting metrics on every single byte read.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7694) Update forbiddenapis to 2.3

2017-02-13 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-7694.
---
Resolution: Fixed

> Update forbiddenapis to 2.3
> ---
>
> Key: LUCENE-7694
> URL: https://issues.apache.org/jira/browse/LUCENE-7694
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 6.x, master (7.0), 6.5
>
>
> Forbiddenapis 2.3 was released an hour ago. This is just a maintenance 
> update, the full release notes are here: 
> [https://github.com/policeman-tools/forbidden-apis/wiki/Changes#version-23-released-2017-02-13]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7694) Update forbiddenapis to 2.3

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864538#comment-15864538
 ] 

ASF subversion and git services commented on LUCENE-7694:
-

Commit 48901b2e50d36afac4d355178dcd7bd777e19ad3 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=48901b2 ]

LUCENE-7694: Update forbiddenapis to version 2.3


> Update forbiddenapis to 2.3
> ---
>
> Key: LUCENE-7694
> URL: https://issues.apache.org/jira/browse/LUCENE-7694
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 6.x, master (7.0), 6.5
>
>
> Forbiddenapis 2.3 was released an hour ago. This is just a maintenance 
> update, the full release notes are here: 
> [https://github.com/policeman-tools/forbidden-apis/wiki/Changes#version-23-released-2017-02-13]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7694) Update forbiddenapis to 2.3

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864536#comment-15864536
 ] 

ASF subversion and git services commented on LUCENE-7694:
-

Commit 88d2658e4191d1bd172ebfa5dd93e38ccdbed15e in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=88d2658 ]

LUCENE-7694: Update forbiddenapis to version 2.3


> Update forbiddenapis to 2.3
> ---
>
> Key: LUCENE-7694
> URL: https://issues.apache.org/jira/browse/LUCENE-7694
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 6.x, master (7.0), 6.5
>
>
> Forbiddenapis 2.3 was released an hour ago. This is just a maintenance 
> update, the full release notes are here: 
> [https://github.com/policeman-tools/forbidden-apis/wiki/Changes#version-23-released-2017-02-13]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7694) Update forbiddenapis to 2.3

2017-02-13 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-7694:
-

 Summary: Update forbiddenapis to 2.3
 Key: LUCENE-7694
 URL: https://issues.apache.org/jira/browse/LUCENE-7694
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 6.x, master (7.0), 6.5


Forbiddenapis 2.3 was released an hour ago. This is just a maintenance update, 
the full release notes are here: 
[https://github.com/policeman-tools/forbidden-apis/wiki/Changes#version-23-released-2017-02-13]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_121) - Build # 6393 - Still unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6393/
Java: 32bit/jdk1.8.0_121 -client -XX:+UseParallelGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard2_replica2\data\tlog\tlog.004:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard2_replica2\data\tlog\tlog.004:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard2_replica2\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard2_replica2\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard2_replica2\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard2_replica2\data

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard2_replica2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard2_replica2

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard1_replica2\data\tlog\tlog.004:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard1_replica2\data\tlog\tlog.004:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard1_replica2\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard1_replica2\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard1_replica2\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard1_replica2\data

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard1_replica2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2\collection1_shard1_replica2

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node2

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node1\collection1_shard2_replica1\data\tlog\tlog.004:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_D27AB8772F76557A-001\tempDir-001\node1\collection1_shard2_replica1\data\tlog\tlog.004:
 The process cannot access the file because it is being used by another 
process. 

Re: [VOTE] Release PyLucene 6.4.1 (rc1)

2017-02-13 Thread Andi Vajda


This vote has now passed.
Thank you all for voting.

Andi..

On Mon, 13 Feb 2017, Jan Høydahl wrote:


Hi,

I found the reason: it is a Java bug which is fixed in Java 9: 
https://bugs.openjdk.java.net/browse/JDK-7131356


The workaround was to install Apple's Java 6; then make and make install 
succeed.

I then tested python IndexFiles.py  and python SearchFiles.py and it 
all works :-)

+1 to release

PS: The page http://lucene.apache.org/pylucene/install.html is outdated wrt Mac, 
versions etc. and should probably mention the Java 6 bug as well

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


On 13 Feb 2017 at 12:12, Jan Høydahl wrote:

Here is a GIST with complete install log and Makefile. I did not modify 
setup.py, it looked good to go

https://gist.github.com/janhoy/c996529dc492ec3ad9cb3b81e80719f2#file-pylucene-install-log-txt
 


In Makefile I customized only these vars


PREFIX_PYTHON=/usr/local/Cellar/python/2.7.13/
ANT=/usr/local/Cellar/ant/1.10.0/bin/ant
PYTHON=$(PREFIX_PYTHON)/bin/python
JCC=$(PYTHON) -m jcc
NUM_FILES=8



JCC finds Java Home, and python version is 2.7.13
My version of 'make' is the macOS default gmake 3.81

I also tried with (g)make 4.2.1 but same problem.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com 


On 13 Feb 2017 at 00:47, Andi Vajda wrote:


On Mon, 13 Feb 2017, Jan Høydahl wrote:


Tried to build on my Mac again, same problem as last time when running 'make': 
the command 'python -m jcc.__main__ --shared --arch …' requests old Apple-Java 
6:


No Java runtime present, requesting install.


When building JCC (before building PyLucene), you need to ensure that the 
proper version of Java is found. The setup.py program tries to figure it out 
for you and tells what it's about to build with on stdout.

Then you need to install JCC.

Then, when building PyLucene, you need to make sure that the same python 
install you used to build JCC is also going to be used by the PyLucene 
Makefile, since that's where the current JCC you just built got installed.
You need to edit that Makefile and uncomment/edit one of the configuration
examples to match your setup.

I'm sure it also helps if at the command line, you see something like this
 $ java -version
 Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
 Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

If not, fix this before trying anything else.

Andi..



--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com 


On 11 Feb 2017 at 23:23, Andi Vajda wrote:


Ping ?
Two more PMC votes are needed before this release can happen.
Thanks !

Andi..


On Feb 6, 2017, at 13:38, Andi Vajda wrote:


The PyLucene 6.4.1 (rc1) release tracking today's release of
Apache Lucene 6.4.1 is ready.

A release candidate is available from:
https://dist.apache.org/repos/dist/dev/lucene/pylucene/6.4.1-rc1/ 


PyLucene 6.4.1 is built with JCC 2.23 included in these release artifacts.

Please vote to release these artifacts as PyLucene 6.4.1.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS

pps: here is my +1








Re: [VOTE] Release PyLucene 6.4.1 (rc1)

2017-02-13 Thread Andi Vajda


On Mon, 13 Feb 2017, Jan Høydahl wrote:


I did some website fixes wrt versions and Mac OS X -> macOS renaming.


LGTM !

Andi..



--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


On 13 Feb 2017 at 13:14, Jan Høydahl wrote:

Hi,

I found the reason: it is a Java bug which is fixed in Java 9: 
https://bugs.openjdk.java.net/browse/JDK-7131356


The workaround was to install Apple's Java 6; then make and make install 
succeed.

I then tested python IndexFiles.py  and python SearchFiles.py and it 
all works :-)

+1 to release

PS: The page http://lucene.apache.org/pylucene/install.html is outdated wrt Mac, 
versions etc. and should probably mention the Java 6 bug as well

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com 


On 13 Feb 2017 at 12:12, Jan Høydahl wrote:

Here is a GIST with complete install log and Makefile. I did not modify 
setup.py, it looked good to go

https://gist.github.com/janhoy/c996529dc492ec3ad9cb3b81e80719f2#file-pylucene-install-log-txt
 


In Makefile I customized only these vars


PREFIX_PYTHON=/usr/local/Cellar/python/2.7.13/
ANT=/usr/local/Cellar/ant/1.10.0/bin/ant
PYTHON=$(PREFIX_PYTHON)/bin/python
JCC=$(PYTHON) -m jcc
NUM_FILES=8



JCC finds Java Home, and python version is 2.7.13
My version of 'make' is the macOS default gmake 3.81

I also tried with (g)make 4.2.1 but same problem.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com 


On 13 Feb 2017 at 00:47, Andi Vajda wrote:


On Mon, 13 Feb 2017, Jan Høydahl wrote:


Tried to build on my Mac again, same problem as last time when running 'make': 
the command 'python -m jcc.__main__ --shared --arch …' requests old Apple-Java 
6:


No Java runtime present, requesting install.


When building JCC (before building PyLucene), you need to ensure that the 
proper version of Java is found. The setup.py program tries to figure it out 
for you and tells what it's about to build with on stdout.

Then you need to install JCC.

Then, when building PyLucene, you need to make sure that the same python 
install you used to build JCC is also going to be used by the PyLucene 
Makefile, since that's where the current JCC you just built got installed.
You need to edit that Makefile and uncomment/edit one of the configuration
examples to match your setup.

I'm sure it also helps if at the command line, you see something like this
 $ java -version
 Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
 Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

If not, fix this before trying anything else.

Andi..



--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com 


On 11 Feb 2017 at 23:23, Andi Vajda wrote:


Ping ?
Two more PMC votes are needed before this release can happen.
Thanks !

Andi..


On Feb 6, 2017, at 13:38, Andi Vajda wrote:


The PyLucene 6.4.1 (rc1) release tracking today's release of
Apache Lucene 6.4.1 is ready.

A release candidate is available from:
https://dist.apache.org/repos/dist/dev/lucene/pylucene/6.4.1-rc1/ 


PyLucene 6.4.1 is built with JCC 2.23 included in these release artifacts.

Please vote to release these artifacts as PyLucene 6.4.1.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS 

https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS

pps: here is my +1










Re: [VOTE] Release PyLucene 6.4.1 (rc1)

2017-02-13 Thread Andi Vajda


On Mon, 13 Feb 2017, Jan Høydahl wrote:


Hi,

I found the reason: it is a Java bug which is fixed in Java 9: 
https://bugs.openjdk.java.net/browse/JDK-7131356



The workaround was to install Apple's Java 6; then make and make install 
succeed.


I then tested python IndexFiles.py  and python SearchFiles.py and 
it all works :-)


Wow. Thank you for elucidating this !!
I always have the old Java 6 installed because Photoshop requires it.
Sigh. Phew.

Thanks !

Andi..



+1 to release

PS: The page http://lucene.apache.org/pylucene/install.html is outdated wrt Mac, 
versions etc. and should probably mention the Java 6 bug as well

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


On 13 Feb 2017 at 12:12, Jan Høydahl wrote:

Here is a GIST with complete install log and Makefile. I did not modify 
setup.py, it looked good to go

https://gist.github.com/janhoy/c996529dc492ec3ad9cb3b81e80719f2#file-pylucene-install-log-txt
 


In Makefile I customized only these vars


PREFIX_PYTHON=/usr/local/Cellar/python/2.7.13/
ANT=/usr/local/Cellar/ant/1.10.0/bin/ant
PYTHON=$(PREFIX_PYTHON)/bin/python
JCC=$(PYTHON) -m jcc
NUM_FILES=8



JCC finds Java Home, and python version is 2.7.13
My version of 'make' is the macOS default gmake 3.81

I also tried with (g)make 4.2.1 but same problem.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com 


On 13 Feb 2017 at 00:47, Andi Vajda wrote:


On Mon, 13 Feb 2017, Jan Høydahl wrote:


Tried to build on my Mac again, same problem as last time when running 'make': 
the command 'python -m jcc.__main__ --shared --arch …' requests old Apple-Java 
6:


No Java runtime present, requesting install.


When building JCC (before building PyLucene), you need to ensure that the 
proper version of Java is found. The setup.py program tries to figure it out 
for you and tells what it's about to build with on stdout.

Then you need to install JCC.

Then, when building PyLucene, you need to make sure that the same python 
install you used to build JCC is also going to be used by the PyLucene 
Makefile, since that's where the current JCC you just built got installed.
You need to edit that Makefile and uncomment/edit one of the configuration
examples to match your setup.

I'm sure it also helps if at the command line, you see something like this
 $ java -version
 Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
 Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

If not, fix this before trying anything else.

Andi..



--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com 


On 11 Feb 2017 at 23:23, Andi Vajda wrote:


Ping ?
Two more PMC votes are needed before this release can happen.
Thanks !

Andi..


On Feb 6, 2017, at 13:38, Andi Vajda wrote:


The PyLucene 6.4.1 (rc1) release tracking today's release of
Apache Lucene 6.4.1 is ready.

A release candidate is available from:
https://dist.apache.org/repos/dist/dev/lucene/pylucene/6.4.1-rc1/ 


PyLucene 6.4.1 is built with JCC 2.23 included in these release artifacts.

Please vote to release these artifacts as PyLucene 6.4.1.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS

pps: here is my +1








Re: [VOTE] Release PyLucene 6.4.1 (rc1)

2017-02-13 Thread Andi Vajda


On Mon, 13 Feb 2017, Jan Høydahl wrote:


Here is a GIST with complete install log and Makefile. I did not modify 
setup.py, it looked good to go

https://gist.github.com/janhoy/c996529dc492ec3ad9cb3b81e80719f2#file-pylucene-install-log-txt
 


In Makefile I customized only these vars


PREFIX_PYTHON=/usr/local/Cellar/python/2.7.13/
ANT=/usr/local/Cellar/ant/1.10.0/bin/ant
PYTHON=$(PREFIX_PYTHON)/bin/python
JCC=$(PYTHON) -m jcc
NUM_FILES=8



JCC finds Java Home, and python version is 2.7.13
My version of 'make' is the macOS default gmake 3.81

I also tried with (g)make 4.2.1 but same problem.


Apart from the trailing slash in PREFIX_PYTHON which leads to a // later on 
(which I don't think is the cause of the problem), I can't see anything 
wrong.


Andi..



--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


On 13 Feb 2017 at 00:47, Andi Vajda wrote:


On Mon, 13 Feb 2017, Jan Høydahl wrote:


Tried to build on my Mac again, same problem as last time when running 'make': 
the command 'python -m jcc.__main__ --shared --arch …' requests old Apple-Java 
6:


No Java runtime present, requesting install.


When building JCC (before building PyLucene), you need to ensure that the 
proper version of Java is found. The setup.py program tries to figure it out 
for you and tells what it's about to build with on stdout.

Then you need to install JCC.

Then, when building PyLucene, you need to make sure that the same python 
install you used to build JCC is also going to be used by the PyLucene 
Makefile, since that's where the current JCC you just built got installed.
You need to edit that Makefile and uncomment/edit one of the configuration
examples to match your setup.

I'm sure it also helps if at the command line, you see something like this
 $ java -version
 Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
 Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

If not, fix this before trying anything else.

Andi..



--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


On 11 Feb 2017 at 23:23, Andi Vajda wrote:


Ping ?
Two more PMC votes are needed before this release can happen.
Thanks !

Andi..


On Feb 6, 2017, at 13:38, Andi Vajda  wrote:


The PyLucene 6.4.1 (rc1) release tracking today's release of
Apache Lucene 6.4.1 is ready.

A release candidate is available from:
https://dist.apache.org/repos/dist/dev/lucene/pylucene/6.4.1-rc1/

PyLucene 6.4.1 is built with JCC 2.23 included in these release artifacts.

Please vote to release these artifacts as PyLucene 6.4.1.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS

pps: here is my +1






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+155) - Build # 18956 - Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18956/
Java: 64bit/jdk-9-ea+155 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([96178041E172643D:81614A66E7A68800]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)

[jira] [Commented] (SOLR-9987) Implement support for multi-valued DocValues in PointFields

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864458#comment-15864458
 ] 

ASF subversion and git services commented on SOLR-9987:
---

Commit 81b4288a2133dce87e0ac92da5f6e37dc28176f6 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=81b4288 ]

SOLR-8396, SOLR-9987, SOLR-10011: Move CHANGES entries from 7.0 to 6.5


> Implement support for multi-valued DocValues in PointFields
> ---
>
> Key: SOLR-9987
> URL: https://issues.apache.org/jira/browse/SOLR-9987
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-9987.patch, SOLR-9987.patch
>
>
> This is not currently supported, and since PointFields can't use FieldCache, 
> faceting, stats, etc is not supported on multi-valued point fields. Followup 
> task of SOLR-8396



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864456#comment-15864456
 ] 

ASF subversion and git services commented on SOLR-8396:
---

Commit bc10fa67b641d0cfb9bd1954378019d4fc343ae8 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bc10fa6 ]

SOLR-9987: Implement support for multi-valued DocValues in PointFields
CC SOLR-8396


> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864457#comment-15864457
 ] 

ASF subversion and git services commented on SOLR-8396:
---

Commit 81b4288a2133dce87e0ac92da5f6e37dc28176f6 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=81b4288 ]

SOLR-8396, SOLR-9987, SOLR-10011: Move CHANGES entries from 7.0 to 6.5


> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> them in Solr and, if appropriate, switch over to them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9996) Unstored PointFields return types are wrong

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864461#comment-15864461
 ] 

ASF subversion and git services commented on SOLR-9996:
---

Commit 8a7594d180d8f3d23c7ccff5864e59ef961d137a in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8a7594d ]

SOLR-9996: Unstored IntPointField returns Long type


> Unstored PointFields return types are wrong
> ---
>
> Key: SOLR-9996
> URL: https://issues.apache.org/jira/browse/SOLR-9996
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-9996.patch, SOLR-9996.patch, SOLR-9996.patch
>
>
> It seems that unstored PointFields return Long values, ignoring the actual field type.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864460#comment-15864460
 ] 

ASF subversion and git services commented on SOLR-8396:
---

Commit 796da187d28c8426cbc60b13808e775bf95a93d2 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=796da18 ]

SOLR-8396: Fix compile issues after merge


> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> them in Solr and, if appropriate, switch over to them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10011) Refactor PointField & TrieField to share common code

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864459#comment-15864459
 ] 

ASF subversion and git services commented on SOLR-10011:


Commit 81b4288a2133dce87e0ac92da5f6e37dc28176f6 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=81b4288 ]

SOLR-8396, SOLR-9987, SOLR-10011: Move CHANGES entries from 7.0 to 6.5


> Refactor PointField & TrieField to share common code
> 
>
> Key: SOLR-10011
> URL: https://issues.apache.org/jira/browse/SOLR-10011
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-10011.patch, SOLR-10011.patch, SOLR-10011.patch, 
> SOLR-10011.patch, SOLR-10011.patch
>
>
> We should eliminate the PointTypes and TrieTypes enums in favor of a common enum 
> for both. That would enable us to share a lot of code between the two field types.
> In the process, fix this bug:
> PointFields with indexed=false, docValues=true seem to be using 
> (Int|Double|Float|Long)Point.newRangeQuery() for performing exact matches and 
> range queries. However, they should instead be using a DocValues-based range 
> query.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10011) Refactor PointField & TrieField to share common code

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864451#comment-15864451
 ] 

ASF subversion and git services commented on SOLR-10011:


Commit 5a7cdd89756baed3a7d49d923fa9f66cb2baff98 in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5a7cdd8 ]

SOLR-10011: Fix exception log message


> Refactor PointField & TrieField to share common code
> 
>
> Key: SOLR-10011
> URL: https://issues.apache.org/jira/browse/SOLR-10011
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-10011.patch, SOLR-10011.patch, SOLR-10011.patch, 
> SOLR-10011.patch, SOLR-10011.patch
>
>
> We should eliminate the PointTypes and TrieTypes enums in favor of a common enum 
> for both. That would enable us to share a lot of code between the two field types.
> In the process, fix this bug:
> PointFields with indexed=false, docValues=true seem to be using 
> (Int|Double|Float|Long)Point.newRangeQuery() for performing exact matches and 
> range queries. However, they should instead be using a DocValues-based range 
> query.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9987) Implement support for multi-valued DocValues in PointFields

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864454#comment-15864454
 ] 

ASF subversion and git services commented on SOLR-9987:
---

Commit bc10fa67b641d0cfb9bd1954378019d4fc343ae8 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bc10fa6 ]

SOLR-9987: Implement support for multi-valued DocValues in PointFields
CC SOLR-8396


> Implement support for multi-valued DocValues in PointFields
> ---
>
> Key: SOLR-9987
> URL: https://issues.apache.org/jira/browse/SOLR-9987
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-9987.patch, SOLR-9987.patch
>
>
> This is not currently supported, and since PointFields can't use FieldCache, 
> faceting, stats, etc. are not supported on multi-valued point fields. Follow-up 
> task of SOLR-8396.
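
As background for the description above: at the Lucene level, faceting and stats need 
per-document values, and point fields cannot be un-inverted through FieldCache, so a 
multi-valued point field has to carry docValues alongside the points. A minimal, 
hypothetical sketch in plain Lucene (field name and values are invented; this is not 
the actual Solr patch):

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;

public class MultiValuedPointSketch {

  // Adds each value of a multi-valued int field both as an IntPoint (for
  // indexed exact/range queries) and as a SortedNumericDocValuesField (so
  // faceting and stats have per-document values without FieldCache).
  static Document buildDoc(String field, int... values) {
    Document doc = new Document();
    for (int value : values) {
      doc.add(new IntPoint(field, value));
      doc.add(new SortedNumericDocValuesField(field, value));
    }
    return doc;
  }

  public static void main(String[] args) {
    Document doc = buildDoc("popularity", 3, 17, 42);
    System.out.println(doc.getFields().size() + " index fields added");
  }
}
{code}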



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9996) Unstored PointFields return types are wrong

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864462#comment-15864462
 ] 

ASF subversion and git services commented on SOLR-9996:
---

Commit a5ccebc838f6b8cb4524b6fe92cfd00aa12e89ce in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a5ccebc ]

SOLR-9996: Ignore the RTG calls for tests where UpdateLog is disabled


> Unstored PointFields return types are wrong
> ---
>
> Key: SOLR-9996
> URL: https://issues.apache.org/jira/browse/SOLR-9996
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-9996.patch, SOLR-9996.patch, SOLR-9996.patch
>
>
> It seems that unstored PointFields return Long values, ignoring the actual field type.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10011) Refactor PointField & TrieField to share common code

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864452#comment-15864452
 ] 

ASF subversion and git services commented on SOLR-10011:


Commit c27880e332722e992294e05749b63300d3eaab44 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c27880e ]

SOLR-10011: Add NumberType getNumberType() to FieldType and deprecate 
LegacyNumericType getNumericType()

Modify references to getNumericType() to use the new getNumberType(). 
NumberType is shared for the different numeric implementations supported in 
Solr (TrieFields and PointFields).
CC SOLR-8396
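
A rough sketch of what this migration looks like for a caller. The NumberType 
constants used in the switch (INTEGER, LONG, FLOAT, DOUBLE, DATE) are assumed here, 
so treat this as illustrative rather than the actual Solr code:

{code}
import org.apache.solr.schema.FieldType;
import org.apache.solr.schema.NumberType;

public class NumberTypeSketch {

  // Instead of branching on the deprecated LegacyNumericType from
  // getNumericType(), callers branch on the new NumberType, which is shared
  // by TrieFields and PointFields.
  static String describe(FieldType type) {
    NumberType numberType = type.getNumberType();
    if (numberType == null) {
      return "not numeric";
    }
    switch (numberType) {
      case INTEGER: return "32-bit int";
      case LONG:    return "64-bit long";
      case FLOAT:   return "32-bit float";
      case DOUBLE:  return "64-bit double";
      case DATE:    return "date (epoch millis)";
      default:      return numberType.toString();
    }
  }
}
{code}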


> Refactor PointField & TrieField to share common code
> 
>
> Key: SOLR-10011
> URL: https://issues.apache.org/jira/browse/SOLR-10011
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-10011.patch, SOLR-10011.patch, SOLR-10011.patch, 
> SOLR-10011.patch, SOLR-10011.patch
>
>
> We should eliminate the PointTypes and TrieTypes enums in favor of a common enum 
> for both. That would enable us to share a lot of code between the two field types.
> In the process, fix this bug:
> PointFields with indexed=false, docValues=true seem to be using 
> (Int|Double|Float|Long)Point.newRangeQuery() for performing exact matches and 
> range queries. However, they should instead be using a DocValues-based range 
> query.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864449#comment-15864449
 ] 

ASF subversion and git services commented on SOLR-8396:
---

Commit b92e318dc929defc5d100d82704898e834510265 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b92e318 ]

SOLR-8396: Add support for PointFields in Solr


> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> them in Solr and, if appropriate, switch over to them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10011) Refactor PointField & TrieField to share common code

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864450#comment-15864450
 ] 

ASF subversion and git services commented on SOLR-10011:


Commit 6a97952a6173298f457aebe869a53ba130512f6f in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6a97952 ]

SOLR-10011: Refactor PointField & TrieField to now have a common base class, 
NumericFieldType.

The TrieField.TrieTypes and PointField.PointTypes are now consolidated to 
NumericFieldType.NumberType. This refactoring also fixes a bug whereby 
PointFields were not using DocValues for range queries for indexed=false, 
docValues=true fields.
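
To make the fixed bug concrete, here is a hand-wavy sketch of the intended choice 
between a point-index range query and a doc-values one. This is not the actual Solr 
code, and the SortedNumericDocValuesField.newSlowRangeQuery factory is assumed from 
later Lucene versions:

{code}
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.search.Query;

public class RangeQuerySketch {

  // When the field is indexed, use the points index; when it only has
  // docValues, fall back to a doc-values based range query instead of
  // (incorrectly) asking a points index that does not exist.
  static Query intRange(String field, boolean indexed, boolean docValues,
                        int lower, int upper) {
    if (indexed) {
      return IntPoint.newRangeQuery(field, lower, upper);
    }
    if (docValues) {
      // ASSUMPTION: the exact doc-values range query factory differs
      // across Lucene versions.
      return SortedNumericDocValuesField.newSlowRangeQuery(field, lower, upper);
    }
    throw new IllegalStateException("field is neither indexed nor has docValues");
  }
}
{code}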


> Refactor PointField & TrieField to share common code
> 
>
> Key: SOLR-10011
> URL: https://issues.apache.org/jira/browse/SOLR-10011
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-10011.patch, SOLR-10011.patch, SOLR-10011.patch, 
> SOLR-10011.patch, SOLR-10011.patch
>
>
> We should eliminate the PointTypes and TrieTypes enums in favor of a common enum 
> for both. That would enable us to share a lot of code between the two field types.
> In the process, fix this bug:
> PointFields with indexed=false, docValues=true seem to be using 
> (Int|Double|Float|Long)Point.newRangeQuery() for performing exact matches and 
> range queries. However, they should instead be using a DocValues-based range 
> query.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864453#comment-15864453
 ] 

ASF subversion and git services commented on SOLR-8396:
---

Commit c27880e332722e992294e05749b63300d3eaab44 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c27880e ]

SOLR-10011: Add NumberType getNumberType() to FieldType and deprecate 
LegacyNumericType getNumericType()

Modify references to getNumericType() to use the new getNumberType(). 
NumberType is shared for the different numeric implementations supported in 
Solr (TrieFields and PointFields).
CC SOLR-8396


> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> them in Solr and, if appropriate, switch over to them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10121) BlockCache corruption with high concurrency

2017-02-13 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864352#comment-15864352
 ] 

Ben Manes commented on SOLR-10121:
--

Can you try a local hack of changing Caffeine versions and, if it fails, try 
reverting back to CLHM? Both should be easy changes that could help us isolate 
it.

Also note that CLHM ran the eviction listener on the same thread, whereas 
Caffeine delegates that to the executor. If there is a race due to that, you 
could use `executor(Runnable::run)` in the builder.
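
For anyone following along, the suggested builder tweak looks roughly like the sketch 
below; the key/value types, size, and listener body are placeholders rather than the 
real BlockCache wiring:

{code}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;

public class SameThreadEvictionSketch {

  public static void main(String[] args) {
    // Run removal/eviction notifications on the calling thread (as CLHM did)
    // instead of handing them to Caffeine's asynchronous default executor.
    Cache<Long, byte[]> cache = Caffeine.newBuilder()
        .maximumSize(10_000)
        .executor(Runnable::run)
        .removalListener((Long key, byte[] value, RemovalCause cause) ->
            System.out.println("removed key=" + key + " cause=" + cause))
        .build();

    cache.put(1L, new byte[16]);
    cache.invalidate(1L); // listener runs on this thread
  }
}
{code}

Running the notifications on the caller's thread takes the asynchronous hand-off out 
of the picture while hunting the race.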

> BlockCache corruption with high concurrency
> ---
>
> Key: SOLR-10121
> URL: https://issues.apache.org/jira/browse/SOLR-10121
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>
> Improving the tests of the BlockCache in SOLR-10116 uncovered a corruption 
> bug (either that or the test is flawed... TBD).
> The failing test is TestBlockCache.testBlockCacheConcurrent()



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10121) BlockCache corruption with high concurrency

2017-02-13 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864343#comment-15864343
 ] 

Yonik Seeley commented on SOLR-10121:
-

Hmmm, so on further review of BlockCache.java, I think I've found 2 concurrency 
issues.
Unfortunately, fixing those issues does not get my test to pass.
Another "issue" is that my test did pass pre-Caffeine, which means the test is 
not good enough at sussing out issues (since the BlockCache bugs I identified 
should not depend on the underlying map implementation).

> BlockCache corruption with high concurrency
> ---
>
> Key: SOLR-10121
> URL: https://issues.apache.org/jira/browse/SOLR-10121
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>
> Improving the tests of the BlockCache in SOLR-10116 uncovered a corruption 
> bug (either that or the test is flawed... TBD).
> The failing test is TestBlockCache.testBlockCacheConcurrent()



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.5 - Build # 25 - Failure

2017-02-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5/25/

1 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([F3AE569F025DE351:8C30E11A6B3FCEDB]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:129)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay(ZkStateReaderTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11096 lines...]
   [junit4] Suite: org.apache.solr.cloud.overseer.ZkStateReaderTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (LUCENE-7693) revisit "org.apache." logic in GetMavenDependenciesTask.java

2017-02-13 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864285#comment-15864285
 ] 

Steve Rowe commented on LUCENE-7693:


The idea seems okay (as long as it doesn't cause trouble for the native 
modules; I assume we're on the same page here).

I skimmed the patches, and they look reasonable.

I'll take another look once the patches are complete - should be simple enough 
to compare POM output before and after.

> revisit "org.apache." logic in GetMavenDependenciesTask.java
> 
>
> Key: LUCENE-7693
> URL: https://issues.apache.org/jira/browse/LUCENE-7693
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7693-step1.patch, LUCENE-7693-step2.patch
>
>
> Objective:
> * replace the {{... "org.apache." + ...}} logic in 
> GetMavenDependenciesTask.java at 
> [L399|https://github.com/apache/lucene-solr/blob/master/lucene/tools/src/java/org/apache/lucene/dependencies/GetMavenDependenciesTask.java#L399]
>  and 
> [L584|https://github.com/apache/lucene-solr/blob/master/lucene/tools/src/java/org/apache/lucene/dependencies/GetMavenDependenciesTask.java#L584]
> Motivation:
> * support for custom {{solr/contrib/...-myteam}} modules where the custom 
> modules have dependencies between them and the package structure is 
> _com.mycompany.myteam_ rather than _org.apache.solr_
> Approach:
> * step 1:
> ** in GetMavenDependenciesTask.java build a map out of all the ivy.xml files' 
> info elements e.g.
> {code}
> 
>   
> 
> {code}
> ** temporarily instrument GetMavenDependenciesTask.java to help determine how 
> the info element mappings differ from the current in-code logic
> * step 2:
> ** adjust selected ivy.xml files to minimise differences
> * step 3:
> ** switch over to 'new way' logic where this matches current in-code logic
> ** remove the temporary instrumentation



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10006) Cannot do a full sync (fetchindex) if the replica can't open a searcher

2017-02-13 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864195#comment-15864195
 ] 

Erick Erickson edited comment on SOLR-10006 at 2/13/17 7:13 PM:


Still fails; see the attached log for everything after I restarted the Solr 
node from which I had removed some index files for one of the cores. This is on a 
fresh 6x pull from within the last hour.

The takeaway here is that the Solr core must be restarted so that there is never an 
open searcher on that core; perhaps your stress test isn't doing that? In this 
state, commands appear to succeed.

So I poked a little more, here are a couple of observations:

> for this scenario to fail you must restart Solr. I suspect the pre-condition 
> here is that the searcher has never been successfully opened.

> reloading the core from the admin UI silently fails with a .doc file removed. 
> By that I mean the UI doesn't show any problems even though the log file has 
> exceptions.

> The core admin API correctly reports an error  for action=RELOAD though (curl 
> or the like)

> the admin UI still thinks the replica is active.

> a search on the replica with distrib=false also succeeds, even when I set a 
> very large start parameter, but I suspect this is a function of there still 
> being an open file handle on the file I deleted, so it's "kinda there" until 
> restart.

> At this point (the searcher is working even though the doc file is missing), 
> a fetchindex doesn't think there's any work to do, so it "succeeds", i.e. it 
> doesn't fetch from the masterUrl. Here are the log messages:

INFO  - 2017-02-13 18:50:57.434; [c:eoe s:shard1 r:core_node2 
x:eoe_shard1_replica2] org.apache.solr.core.SolrCore; [eoe_shard1_replica2]  
webapp=/solr path=/replication 
params={masterUrl=http://localhost:8982/solr/eoe_shard1_replica1&command=fetchindex}
 status=0 QTime=0
INFO  - 2017-02-13 18:50:57.439; [c:eoe s:shard1 r:core_node2 
x:eoe_shard1_replica2] org.apache.solr.handler.IndexFetcher; Master's 
generation: 4
INFO  - 2017-02-13 18:50:57.439; [c:eoe s:shard1 r:core_node2 
x:eoe_shard1_replica2] org.apache.solr.handler.IndexFetcher; Master's version: 
1487010762766
INFO  - 2017-02-13 18:50:57.439; [c:eoe s:shard1 r:core_node2 
x:eoe_shard1_replica2] org.apache.solr.handler.IndexFetcher; Slave's 
generation: 4
INFO  - 2017-02-13 18:50:57.439; [c:eoe s:shard1 r:core_node2 
x:eoe_shard1_replica2] org.apache.solr.handler.IndexFetcher; Slave's version: 
1487010762766
INFO  - 2017-02-13 18:50:57.439; [c:eoe s:shard1 r:core_node2 
x:eoe_shard1_replica2] org.apache.solr.handler.IndexFetcher; Slave in sync with 
master.



was (Author: erickerickson):
Still fails, see the attached log for everything after I restarted the solr 
node that I had removed some index files from one of the cores on. This is on a 
fresh 6x pull in the last hour.

> Cannot do a full sync (fetchindex) if the replica can't open a searcher
> ---
>
> Key: SOLR-10006
> URL: https://issues.apache.org/jira/browse/SOLR-10006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.1, 6.4
>Reporter: Erick Erickson
> Attachments: SOLR-10006.patch, SOLR-10006.patch, solr.log, solr.log
>
>
> Doing a full sync or fetchindex requires an open searcher and if you can't 
> open the searcher those operations fail.
> For discussion. I've seen a situation in the field where a replica's index 
> became corrupt. When the node was restarted, the replica tried to do a full 
> sync but failed because the core couldn't open a searcher. The replica went into 
> an endless sync/fail/sync cycle.
> I couldn't reproduce that exact scenario, but it's easy enough to get into a 
> similar situation. Create a 2x2 collection and index some docs. Then stop one 
> of the instances and go in and remove a couple of segments files and restart.
> The replica stays in the "down" state, fine so far.
> Manually issue a fetchindex. That fails because the replica can't open a 
> searcher. Sure, issuing a fetchindex is abusive but I think it's the same 
> underlying issue: why should we care about the state of a replica's current 
> index when we're going to completely replace it anyway?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864235#comment-15864235
 ] 

ASF subversion and git services commented on LUCENE-7465:
-

Commit c24e03e6bf4d09e6f31eee8192bb6c0c4b2b6d27 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c24e03e ]

LUCENE-7465: add SimplePatternTokenizer and SimpleSplitPatternTokenizer, for 
tokenization using Lucene's regexp/automaton implementation


> Add a PatternTokenizer that uses Lucene's RegExp implementation
> ---
>
> Key: LUCENE-7465
> URL: https://issues.apache.org/jira/browse/LUCENE-7465
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7465.patch, LUCENE-7465.patch
>
>
> I think there are some nice benefits to a version of PatternTokenizer that 
> uses Lucene's RegExp impl instead of the JDK's:
>   * Lucene's RegExp is compiled to a DFA up front, so if a "too hard" RegExp 
> is attempted the user discovers it up front instead of later on when a 
> "lucky" document arrives
>   * It processes the incoming characters as a stream, only pulling 128 
> characters at a time, vs the existing {{PatternTokenizer}} which currently 
> reads the entire string up front (this has caused heap problems in the past)
>   * It should be fast.
> I named it {{SimplePatternTokenizer}}, and it still needs a factory and 
> improved tests, but I think it's otherwise close.
> It currently does not take a {{group}} parameter because Lucene's RegExps 
> don't yet implement sub group capture.  I think we could add that at some 
> point, but it's a bit tricky.
> This doesn't even have group=-1 support (like String.split) ... I think if we 
> did that we should maybe name it differently 
> ({{SimplePatternSplitTokenizer}}?).
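
A quick usage sketch of the committed tokenizer; the pattern and input are invented, 
and the single-String constructor is assumed. Tokens are the matches of the Lucene 
RegExp itself, not the pieces between matches:

{code}
import java.io.StringReader;
import org.apache.lucene.analysis.pattern.SimplePatternTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class SimplePatternTokenizerSketch {

  public static void main(String[] args) throws Exception {
    try (SimplePatternTokenizer tokenizer = new SimplePatternTokenizer("[a-zA-Z]+")) {
      tokenizer.setReader(new StringReader("foo 1234 bar-baz"));
      CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
      tokenizer.reset();
      while (tokenizer.incrementToken()) {
        System.out.println(term.toString()); // prints foo, bar, baz
      }
      tokenizer.end();
    }
  }
}
{code}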



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2017-02-13 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7465.

   Resolution: Fixed
Fix Version/s: (was: 6.4)
   6.5

> Add a PatternTokenizer that uses Lucene's RegExp implementation
> ---
>
> Key: LUCENE-7465
> URL: https://issues.apache.org/jira/browse/LUCENE-7465
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7465.patch, LUCENE-7465.patch
>
>
> I think there are some nice benefits to a version of PatternTokenizer that 
> uses Lucene's RegExp impl instead of the JDK's:
>   * Lucene's RegExp is compiled to a DFA up front, so if a "too hard" RegExp 
> is attempted the user discovers it up front instead of later on when a 
> "lucky" document arrives
>   * It processes the incoming characters as a stream, only pulling 128 
> characters at a time, vs the existing {{PatternTokenizer}} which currently 
> reads the entire string up front (this has caused heap problems in the past)
>   * It should be fast.
> I named it {{SimplePatternTokenizer}}, and it still needs a factory and 
> improved tests, but I think it's otherwise close.
> It currently does not take a {{group}} parameter because Lucene's RegExps 
> don't yet implement sub group capture.  I think we could add that at some 
> point, but it's a bit tricky.
> This doesn't even have group=-1 support (like String.split) ... I think if we 
> did that we should maybe name it differently 
> ({{SimplePatternSplitTokenizer}}?).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7693) revisit "org.apache." logic in GetMavenDependenciesTask.java

2017-02-13 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7693:

Attachment: LUCENE-7693-step2.patch
LUCENE-7693-step1.patch

[~dancollins] and I are collaborating on this; attached are patches for 
steps 1 and 2 described above.

[~steve_rowe] - would you have any thoughts on the approach and/or the 
work-in-progress patches? Thanks.

> revisit "org.apache." logic in GetMavenDependenciesTask.java
> 
>
> Key: LUCENE-7693
> URL: https://issues.apache.org/jira/browse/LUCENE-7693
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7693-step1.patch, LUCENE-7693-step2.patch
>
>
> Objective:
> * replace the {{... "org.apache." + ...}} logic in 
> GetMavenDependenciesTask.java at 
> [L399|https://github.com/apache/lucene-solr/blob/master/lucene/tools/src/java/org/apache/lucene/dependencies/GetMavenDependenciesTask.java#L399]
>  and 
> [L584|https://github.com/apache/lucene-solr/blob/master/lucene/tools/src/java/org/apache/lucene/dependencies/GetMavenDependenciesTask.java#L584]
> Motivation:
> * support for custom {{solr/contrib/...-myteam}} modules where the custom 
> modules have dependencies between them and the package structure is 
> _com.mycompany.myteam_ rather than _org.apache.solr_
> Approach:
> * step 1:
> ** in GetMavenDependenciesTask.java build a map out of all the ivy.xml files' 
> info elements e.g.
> {code}
> 
>   
> 
> {code}
> ** temporarily instrument GetMavenDependenciesTask.java to help determine how 
> the info element mappings differ from the current in-code logic
> * step 2:
> ** adjust selected ivy.xml files to minimise differences
> * step 3:
> ** switch over to 'new way' logic where this matches current in-code logic
> ** remove the temporary instrumentation



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7693) revisit "org.apache." logic in GetMavenDependenciesTask.java

2017-02-13 Thread Christine Poerschke (JIRA)
Christine Poerschke created LUCENE-7693:
---

 Summary: revisit "org.apache." logic in 
GetMavenDependenciesTask.java
 Key: LUCENE-7693
 URL: https://issues.apache.org/jira/browse/LUCENE-7693
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


Objective:
* replace the {{... "org.apache." + ...}} logic in 
GetMavenDependenciesTask.java at 
[L399|https://github.com/apache/lucene-solr/blob/master/lucene/tools/src/java/org/apache/lucene/dependencies/GetMavenDependenciesTask.java#L399]
 and 
[L584|https://github.com/apache/lucene-solr/blob/master/lucene/tools/src/java/org/apache/lucene/dependencies/GetMavenDependenciesTask.java#L584]

Motivation:
* support for custom {{solr/contrib/...-myteam}} modules where the custom 
modules have dependencies between them and the package structure is 
_com.mycompany.myteam_ rather than _org.apache.solr_

Approach:
* step 1:
** in GetMavenDependenciesTask.java build a map out of all the ivy.xml files' 
info elements, e.g. (a rough sketch follows this list)
{code}

  

{code}
** temporarily instrument GetMavenDependenciesTask.java to help determine how 
the info element mappings differ from the current in-code logic
* step 2:
** adjust selected ivy.xml files to minimise differences
* step 3:
** switch over to 'new way' logic where this matches current in-code logic
** remove the temporary instrumentation
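
A rough, hypothetical sketch of the step-1 map mentioned above, assuming the standard 
Ivy organisation/module attributes on the info element (this is not the actual 
GetMavenDependenciesTask code):

{code}
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class IvyInfoMapSketch {

  // Maps each module directory to the "organisation:module" coordinate declared
  // in its ivy.xml info element, so POM generation no longer has to assume the
  // "org.apache." prefix.
  static Map<String, String> buildInfoMap(File... ivyFiles) throws Exception {
    DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
    Map<String, String> coordinates = new HashMap<>();
    for (File ivyFile : ivyFiles) {
      // Assumes exactly one info element per ivy.xml, as in Ivy's schema.
      Element info = (Element) builder.parse(ivyFile)
          .getElementsByTagName("info").item(0);
      coordinates.put(ivyFile.getParentFile().getPath(),
          info.getAttribute("organisation") + ":" + info.getAttribute("module"));
    }
    return coordinates;
  }
}
{code}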



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10006) Cannot do a full sync (fetchindex) if the replica can't open a searcher

2017-02-13 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10006:
--
Attachment: solr.log

Still fails; see the attached log for everything after I restarted the Solr 
node from which I had removed some index files for one of the cores. This is on a 
fresh 6x pull from within the last hour.

> Cannot do a full sync (fetchindex) if the replica can't open a searcher
> ---
>
> Key: SOLR-10006
> URL: https://issues.apache.org/jira/browse/SOLR-10006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.1, 6.4
>Reporter: Erick Erickson
> Attachments: SOLR-10006.patch, SOLR-10006.patch, solr.log, solr.log
>
>
> Doing a full sync or fetchindex requires an open searcher and if you can't 
> open the searcher those operations fail.
> For discussion. I've seen a situation in the field where a replica's index 
> became corrupt. When the node was restarted, the replica tried to do a full 
> sync but failed because the core couldn't open a searcher. The replica went into 
> an endless sync/fail/sync cycle.
> I couldn't reproduce that exact scenario, but it's easy enough to get into a 
> similar situation. Create a 2x2 collection and index some docs. Then stop one 
> of the instances and go in and remove a couple of segments files and restart.
> The replica stays in the "down" state, fine so far.
> Manually issue a fetchindex. That fails because the replica can't open a 
> searcher. Sure, issuing a fetchindex is abusive but I think it's the same 
> underlying issue: why should we care about the state of a replica's current 
> index when we're going to completely replace it anyway?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10131) Solr returns 500 instead of 400 from update with bad value for UUID

2017-02-13 Thread Walter Underwood (JIRA)
Walter Underwood created SOLR-10131:
---

 Summary: Solr returns 500 instead of 400 from update with bad 
value for UUID
 Key: SOLR-10131
 URL: https://issues.apache.org/jira/browse/SOLR-10131
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.4.1
 Environment: Linux new-solr-c15.test3.cloud.cheggnet.com 
3.10.0-229.20.1.el7.x86_64 #1 SMP Tue Nov 3 19:10:07 UTC 2015 x86_64 x86_64 
x86_64 GNU/Linux
Reporter: Walter Underwood


This error should return a 400 with a message about an illegal value for the 
UUID field.

null:org.apache.solr.common.SolrException: Error while creating field 
'chegg_uuid{type=uuid,properties=indexed,stored,omitNorms,omitTermFreqAndPositions,sortMissingLast}'
 from value '1249948'
at org.apache.solr.schema.FieldType.createField(FieldType.java:273)
at org.apache.solr.schema.StrField.createFields(StrField.java:44)
at 
org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:47)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:122)
at 
org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:82)
at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:277)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:211)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:166)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:736)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:97)




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10006) Cannot do a full sync (fetchindex) if the replica can't open a searcher

2017-02-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864134#comment-15864134
 ] 

Mike Drob commented on SOLR-10006:
--

Erick - can you check this again with whatever test you ran before? I think 
LUCENE-7662 takes care of this with no Solr changes necessary, and my local 
tests pass, but I want to get your confirmation before closing this out.

> Cannot do a full sync (fetchindex) if the replica can't open a searcher
> ---
>
> Key: SOLR-10006
> URL: https://issues.apache.org/jira/browse/SOLR-10006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3.1, 6.4
>Reporter: Erick Erickson
> Attachments: SOLR-10006.patch, SOLR-10006.patch, solr.log
>
>
> Doing a full sync or fetchindex requires an open searcher and if you can't 
> open the searcher those operations fail.
> For discussion. I've seen a situation in the field where a replica's index 
> became corrupt. When the node was restarted, the replica tried to do a full 
> sync but failed because the core couldn't open a searcher. The replica went into 
> an endless sync/fail/sync cycle.
> I couldn't reproduce that exact scenario, but it's easy enough to get into a 
> similar situation. Create a 2x2 collection and index some docs. Then stop one 
> of the instances and go in and remove a couple of segments files and restart.
> The replica stays in the "down" state, fine so far.
> Manually issue a fetchindex. That fails because the replica can't open a 
> searcher. Sure, issuing a fetchindex is abusive but I think it's the same 
> underlying issue: why should we care about the state of a replica's current 
> index when we're going to completely replace it anyway?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2017-02-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864098#comment-15864098
 ] 

Tomás Fernández Löbbe commented on SOLR-8396:
-

Just committed the changes for SOLR-9987.
I’ll now backport the following commits from master:

57934ba4480d71218c7f60d0417dbae9d26188d0 SOLR-8396: Add support for PointFields 
in Solr
285a1013ad04dd1cd5e5e41ffa93a87fe862c152 SOLR-10011: Refactor PointField & 
TrieField to now have a common base class, NumericFieldType
0f7990b2c8590d169add59354cc2678260f94e03 SOLR-10011: Fix exception log message
59c41e2a6c685dd9ac943c69d12e9bfe2a7d380e SOLR-10011: Add NumberType 
getNumberType() to FieldType and deprecate LegacyNumericType getNumericType()
7dcf9de41f6435a741910a6367ef9fece11a588b SOLR-9987: Implement support for 
multi-valued DocValues in PointFields

I’ll also move the CHANGES entries from 7.0 to 6.5.

> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> them in Solr and, if appropriate, switch over to them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.8.0_121) - Build # 471 - Still Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/471/
Java: 32bit/jdk1.8.0_121 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
Exactly one shard should have changed, instead: [shard2, shard1] 
nodes=([core_node3(shard2), core_node2(shard1), core_node4(shard1)]) 
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: Exactly one shard should have changed, instead: 
[shard2, shard1] nodes=([core_node3(shard2), core_node2(shard1), 
core_node4(shard1)]) expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([87480164DC29BA70:F1C3EBE72D5D788]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-10114) child documents lack _version_, susceptible to reordered delete-by-query

2017-02-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864086#comment-15864086
 ] 

Mike Drob commented on SOLR-10114:
--

I think it makes sense to split the fix into two parts: one patch to take care 
of future indices and a separate fix to look at existing indices, especially if 
one half of that is much easier and can be done significantly faster.

> child documents lack _version_, susceptible to reordered delete-by-query 
> -
>
> Key: SOLR-10114
> URL: https://issues.apache.org/jira/browse/SOLR-10114
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>
> It looks like when a block of documents is indexed, child documents get no 
> \_version\_ field.  This means (among other potential issues) that a 
> delete-by-query that is reordered will cause matching child documents to be 
> deleted.  DBQ normally prevents deleting newer docs by including a 
> restriction on \_version\_, which doesn't work for anything lacking that 
> field.  Re-ordered delete-by-term of any child docs would also be affected 
> (although it should be a much rarer issue.)
> The leading candidate for a fix is to use the exact same \_version\_ for all 
> child docs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864078#comment-15864078
 ] 

ASF subversion and git services commented on LUCENE-7465:
-

Commit 93fa72f77bd024aa09eef043c65c64a6524613dc in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=93fa72f ]

LUCENE-7465: add SimplePatternTokenizer and SimpleSplitPatternTokenizer, for 
tokenization using Lucene's regexp/automaton implementation


> Add a PatternTokenizer that uses Lucene's RegExp implementation
> ---
>
> Key: LUCENE-7465
> URL: https://issues.apache.org/jira/browse/LUCENE-7465
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7465.patch, LUCENE-7465.patch
>
>
> I think there are some nice benefits to a version of PatternTokenizer that 
> uses Lucene's RegExp impl instead of the JDK's:
>   * Lucene's RegExp is compiled to a DFA up front, so if a "too hard" RegExp 
> is attempted the user discovers it up front instead of later on when a 
> "lucky" document arrives
>   * It processes the incoming characters as a stream, only pulling 128 
> characters at a time, vs the existing {{PatternTokenizer}} which currently 
> reads the entire string up front (this has caused heap problems in the past)
>   * It should be fast.
> I named it {{SimplePatternTokenizer}}, and it still needs a factory and 
> improved tests, but I think it's otherwise close.
> It currently does not take a {{group}} parameter because Lucene's RegExps 
> don't yet implement sub group capture.  I think we could add that at some 
> point, but it's a bit tricky.
> This doesn't even have group=-1 support (like String.split) ... I think if we 
> did that we should maybe name it differently 
> ({{SimplePatternSplitTokenizer}}?).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864074#comment-15864074
 ] 

ASF subversion and git services commented on SOLR-8396:
---

Commit 7dcf9de41f6435a741910a6367ef9fece11a588b in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7dcf9de ]

SOLR-9987: Implement support for multi-valued DocValues in PointFields
CC SOLR-8396


> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> them in Solr and, if appropriate, switch over to them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9987) Implement support for multi-valued DocValues in PointFields

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864073#comment-15864073
 ] 

ASF subversion and git services commented on SOLR-9987:
---

Commit 7dcf9de41f6435a741910a6367ef9fece11a588b in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7dcf9de ]

SOLR-9987: Implement support for multi-valued DocValues in PointFields
CC SOLR-8396


> Implement support for multi-valued DocValues in PointFields
> ---
>
> Key: SOLR-9987
> URL: https://issues.apache.org/jira/browse/SOLR-9987
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-9987.patch, SOLR-9987.patch
>
>
> This is not currently supported, and since PointFields can't use FieldCache, 
> faceting, stats, etc. are not supported on multi-valued point fields. Follow-up 
> task of SOLR-8396.
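At the Lucene level, the usual way to get both queries and docvalues for a multi-valued numeric field looks roughly like the following (field name made up; an illustration, not the patch):

{code:java}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;

public class MultiValuedPointSketch {
  public static Document build() {
    Document doc = new Document();
    for (long value : new long[] {3L, 17L, 42L}) {
      doc.add(new LongPoint("speeds", value));                    // for range/exact queries
      doc.add(new SortedNumericDocValuesField("speeds", value));  // for faceting/stats without FieldCache
    }
    return doc;
  }
}
{code}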



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7692) PatternReplaceCharFilterFactory should implement MultiTermAware

2017-02-13 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7692:


 Summary: PatternReplaceCharFilterFactory should implement 
MultiTermAware
 Key: LUCENE-7692
 URL: https://issues.apache.org/jira/browse/LUCENE-7692
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


The multi-term aware marker API is useful for knowing which analysis components 
to apply when analyzing prefix or wildcard queries. I think 
PatternReplaceCharFilterFactory qualifies?

For the record, we have MappingCharFilterFactory, which does a similar job 
(except that it takes an explicit map of replacements rather than regular 
expressions) and implements MultiTermAware.
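A hypothetical sketch of what opting in looks like (the class and its behavior are made up; the Lucene 6.x/7.x MultiTermAwareComponent contract is assumed):

{code:java}
import java.io.Reader;
import java.util.Map;
import org.apache.lucene.analysis.util.AbstractAnalysisFactory;
import org.apache.lucene.analysis.util.CharFilterFactory;
import org.apache.lucene.analysis.util.MultiTermAwareComponent;

// Hypothetical factory: shows only the MultiTermAware opt-in, not real pattern replacement.
public class ExamplePatternReplaceCharFilterFactory extends CharFilterFactory
    implements MultiTermAwareComponent {

  public ExamplePatternReplaceCharFilterFactory(Map<String, String> args) {
    super(args);
  }

  @Override
  public Reader create(Reader input) {
    return input;  // a real factory would wrap the reader with a pattern-replacing char filter
  }

  @Override
  public AbstractAnalysisFactory getMultiTermComponent() {
    // Character-level replacements are safe to apply to prefix/wildcard terms as well.
    return this;
  }
}
{code}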





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10114) child documents lack _version_, susceptible to reordered delete-by-query

2017-02-13 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864038#comment-15864038
 ] 

Yonik Seeley commented on SOLR-10114:
-

bq. if the fix is to store the version with the child docs, it requires 
reindexing to resolve the issue.

Right, this won't fix old indexes.

bq. I was thinking of adding another iteration to fetch parent version for 
childdocs without version. 

That seems difficult, unless we just assume that any doc w/o a version is a 
child doc.
Also, another thing to watch out for is that the version field is technically 
not mandatory for non-solrcloud.  The presence of a \_root\_ field could be 
used to further determine if a doc is a child doc, but that may be expensive 
too. 
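For context, a minimal SolrJ sketch (URL, collection and field names made up) of the block/child indexing the issue is about; per the report, only the parent ends up with a \_version\_ today:

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ChildDocIndexSketch {
  public static void main(String[] args) throws Exception {
    SolrInputDocument parent = new SolrInputDocument();
    parent.addField("id", "parent-1");

    SolrInputDocument child = new SolrInputDocument();
    child.addField("id", "child-1");

    // Indexed as one block; the child currently gets no _version_ field,
    // which is what makes a reordered delete-by-query dangerous.
    parent.addChildDocument(child);

    try (SolrClient solr =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
      solr.add(parent);
      solr.commit();
    }
  }
}
{code}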

> child documents lack _version_, susceptible to reordered delete-by-query 
> -
>
> Key: SOLR-10114
> URL: https://issues.apache.org/jira/browse/SOLR-10114
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>
> It looks like when a block of documents is indexed, child documents get no 
> \_version\_ field.  This means (among other potential issues) that a 
> delete-by-query that is reordered will cause matching child documents to be 
> deleted.  DBQ normally prevents deleting newer docs by including a 
> restriction on \_version\_, which doesn't work for anything lacking that 
> field.  Re-ordered delete-by-term of any child docs would also be affected 
> (although it should be a much rarer issue.)
> The leading candidate for a fix is to use the exact same \_version\_ for all 
> child docs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10127) OverseerRolesTest needs to be hardened.

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864008#comment-15864008
 ] 

ASF subversion and git services commented on SOLR-10127:


Commit c19dff9d03aa8ef15013e25a2009ce91c189392d in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c19dff9 ]

SOLR-10127: OverseerRolesTest needs to be hardened.


> OverseerRolesTest needs to be hardened.
> ---
>
> Key: SOLR-10127
> URL: https://issues.apache.org/jira/browse/SOLR-10127
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9997) Enable configuring SolrHttpClientBuilder via java system property

2017-02-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-9997.
---
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.5

> Enable configuring SolrHttpClientBuilder via java system property
> -
>
> Key: SOLR-9997
> URL: https://issues.apache.org/jira/browse/SOLR-9997
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9997_6x.patch
>
>
> Currently SolrHttpClientBuilder needs to be configured by invoking the 
> HttpClientUtil#setHttpClientBuilder(...) API. On the other hand, SolrCLI 
> attempts to support configuring SolrHttpClientBuilder via a Java system 
> property:
> https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L265
> But after the changes for SOLR-4509, this no longer works, because we now 
> need to configure an HttpClientBuilderFactory that can provide the appropriate 
> SolrHttpClientBuilder instance (e.g. Krb5HttpClientBuilder). I verified that 
> SolrCLI does not work in a Kerberos-enabled cluster. During testing I also 
> found that SolrCLI is hardcoded to use basic authentication:
> https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L156
> This jira is to add support for configuring HttpClientBuilderFactory via a 
> Java system property so that SolrCLI as well as other Solr clients can 
> benefit from it. We should also provide an HttpClientBuilderFactory that 
> supports preemptive basic authentication, so that we can remove the 
> hardcoded basic auth usage in SolrCLI (and enable it to work with Kerberos).
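To make the proposal concrete, a rough sketch of the bootstrap logic; the property name and the factory method shown here are assumptions for illustration, not the final API:

{code:java}
import org.apache.solr.client.solrj.impl.HttpClientUtil;
import org.apache.solr.client.solrj.impl.SolrHttpClientBuilder;

public class HttpClientBuilderBootstrap {
  // Assumed property name, for illustration only.
  private static final String FACTORY_PROP = "solr.httpclient.builder.factory";

  public static void applyFromSystemProperty() throws Exception {
    String factoryClassName = System.getProperty(FACTORY_PROP);
    if (factoryClassName == null) {
      return;  // nothing configured; keep the default builder
    }
    // The factory is assumed to expose a no-arg getHttpClientBuilder() returning a
    // fully configured SolrHttpClientBuilder (e.g. Krb5HttpClientBuilder for Kerberos).
    Object factory = Class.forName(factoryClassName).getDeclaredConstructor().newInstance();
    SolrHttpClientBuilder builder = (SolrHttpClientBuilder)
        factory.getClass().getMethod("getHttpClientBuilder").invoke(factory);
    HttpClientUtil.setHttpClientBuilder(builder);
  }
}
{code}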



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9997) Enable configuring SolrHttpClientBuilder via java system property

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864009#comment-15864009
 ] 

ASF subversion and git services commented on SOLR-9997:
---

Commit a986368fd0670840177a8c19fb15dcd1f0e69797 in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a986368 ]

SOLR-9997: Enable configuring SolrHttpClientBuilder via java system property.


> Enable configuring SolrHttpClientBuilder via java system property
> -
>
> Key: SOLR-9997
> URL: https://issues.apache.org/jira/browse/SOLR-9997
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9997_6x.patch
>
>
> Currently SolrHttpClientBuilder needs to be configured by invoking the 
> HttpClientUtil#setHttpClientBuilder(...) API. On the other hand, SolrCLI 
> attempts to support configuring SolrHttpClientBuilder via a Java system 
> property:
> https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L265
> But after the changes for SOLR-4509, this no longer works, because we now 
> need to configure an HttpClientBuilderFactory that can provide the appropriate 
> SolrHttpClientBuilder instance (e.g. Krb5HttpClientBuilder). I verified that 
> SolrCLI does not work in a Kerberos-enabled cluster. During testing I also 
> found that SolrCLI is hardcoded to use basic authentication:
> https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L156
> This jira is to add support for configuring HttpClientBuilderFactory via a 
> Java system property so that SolrCLI as well as other Solr clients can 
> benefit from it. We should also provide an HttpClientBuilderFactory that 
> supports preemptive basic authentication, so that we can remove the 
> hardcoded basic auth usage in SolrCLI (and enable it to work with Kerberos).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10114) child documents lack _version_, susceptible to reordered delete-by-query

2017-02-13 Thread Mano Kovacs (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863988#comment-15863988
 ] 

Mano Kovacs commented on SOLR-10114:


[~yo...@apache.org], thanks for the hints, that makes it much easier to test. I 
am preparing the tests first, then will make them pass with a fix. One thing I 
was wondering about: if the fix is to store the version with the child docs, it 
requires reindexing to resolve the issue. I was thinking of adding another 
iteration to fetch the parent version for child docs without a version. It 
might have a significant performance impact on DBQ, though.

> child documents lack _version_, susceptible to reordered delete-by-query 
> -
>
> Key: SOLR-10114
> URL: https://issues.apache.org/jira/browse/SOLR-10114
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>
> It looks like when a block of documents is indexed, child documents get no 
> \_version\_ field.  This means (among other potential issues) that a 
> delete-by-query that is reordered will cause matching child documents to be 
> deleted.  DBQ normally prevents deleting newer docs by including a 
> restriction on \_version\_, which doesn't work for anything lacking that 
> field.  Re-ordered delete-by-term of any child docs would also be affected 
> (although it should be a much rarer issue.)
> The leading candidate for a fix is to use the exact same \_version\_ for all 
> child docs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-02-13 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863969#comment-15863969
 ] 

Joel Bernstein commented on SOLR-8593:
--

Ok, I've pushed what I think are the final changes out to 
https://github.com/apache/lucene-solr/tree/jira/solr-8593.

I believe we are ready to merge to master. The one complication with this is 
that when the branch is merged into master we're going to get compilation 
errors due to changes in SOLR-9916. These should hopefully be easy to fix.

[~risdenk], what do you think is the best way to go about the merge?

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>  Components: Parallel SQL
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10108) bin/solr script recursive copy broken

2017-02-13 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863953#comment-15863953
 ] 

Erick Erickson commented on SOLR-10108:
---

Jan:

Thanks for looking. I'm trying to go the other way (which I should have 
specified, just tried on 6x since I have it handy):

bin/solr zk cp -r zk:/ ~/eoezk -z localhost:2181

Fails with: ERROR: Invalid path string "//configs" caused by empty node name 
specified [~1...@c07.de]

where 
bin/solr zk cp -r zk:/whatever ~/eoezk -z localhost:2181
works.

The context here was a client who'd accidentally removed the zoo_data directory 
on all ZKs but still had them running. We hit on the bright idea "hey, we can 
just dump all of the ZK data since the data is still available until we restart 
the ZK nodes" but ran into this when trying to copy stuff down.

So I worked out the issues with the path and got the two-way copy to work, but 
also noticed another issue. Since ZK nodes can have data whether they are leaf 
nodes or not, the current process is lossy: non-leaf nodes don't get their data 
restored.

This makes it impossible to back up the collection node and restore it, since the 
collection can have a configset name as data. My take is that copying back and 
forth _should_ restore intermediate nodes' data; do you (and others) concur?

My first-attempt PoC is to create a _very special file name_, something like 
node_zookeeper_solr.data, to hold any data associated with non-leaf nodes when 
that data is not empty. That feels like a hack though, as there's the possibility 
of collisions. Hmmm, maybe {generated_guid}.znode.solr.data? Still possibly a 
collision if someone somehow managed to have a znode with a GUID followed by 
.znode.solr.data, I suppose, but that seems unlikely enough that I'm not willing 
to worry about it. How about "erick.erickson.was.here.data"? Maybe not.

WDYT
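To make the direction concrete, a rough sketch (not the actual bin/solr code; the marker file name is just a placeholder) of a recursive dump that also keeps non-leaf znode data:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class ZkRecursiveDump {
  // Placeholder marker file for a znode's own data; the real naming scheme is the open question.
  private static final String DATA_FILE = "__znode_data__";

  public static void dump(ZooKeeper zk, String zkPath, Path target)
      throws KeeperException, InterruptedException, IOException {
    Files.createDirectories(target);

    byte[] data = zk.getData(zkPath, false, null);
    if (data != null && data.length > 0) {
      // Non-leaf znodes can carry data too (e.g. a collection znode naming its configset);
      // dropping it is what makes the current copy lossy.
      Files.write(target.resolve(DATA_FILE), data);
    }
    for (String child : zk.getChildren(zkPath, false)) {
      // Avoid the "//child" problem when starting from the root path.
      String childPath = ("/".equals(zkPath) ? "" : zkPath) + "/" + child;
      dump(zk, childPath, target.resolve(child));
    }
  }
}
{code}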


> bin/solr script recursive copy broken
> -
>
> Key: SOLR-10108
> URL: https://issues.apache.org/jira/browse/SOLR-10108
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> cp -r zk:/ fails with "cannot create //whatever".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10130) Serious performance degradation in Solr 6.4.1 due to the new metrics collection

2017-02-13 Thread Walter Underwood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863948#comment-15863948
 ] 

Walter Underwood commented on SOLR-10130:
-

I’m seeing similar problems here. With 6.4.0, we were handling 6000 
requests/minute. With 6.4.1 it is 1000 rpm with median response times around 
2.5 seconds. I also switched to the G1 collector. I’m going to back that out 
and retest today to see if the performance comes back.

Does disabling metrics fix it, or do we need to go back to 6.4.0?

wunder

> Serious performance degradation in Solr 6.4.1 due to the new metrics 
> collection
> ---
>
> Key: SOLR-10130
> URL: https://issues.apache.org/jira/browse/SOLR-10130
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
> Environment: Centos 7, OpenJDK 1.8.0 update 111
>Reporter: Ere Maijala
>Priority: Blocker
>  Labels: perfomance
> Attachments: solr-8983-console-f1.log
>
>
> We've stumbled on serious performance issues after upgrading to Solr 6.4.1. 
> Looks like the new metrics collection system in MetricsDirectoryFactory is 
> causing a major slowdown. This happens with an index configuration that, as 
> far as I can see, has no metrics-specific configuration and uses 
> luceneMatchVersion 5.5.0. In practice a moderate load will completely bog 
> down the server, with Solr threads constantly using up all CPU capacity (600% 
> on a 6-core machine) under a load where we would normally see an average 
> load of < 50%.
> I took stack traces (I'll attach them) and noticed that the threads are 
> spending time in com.codahale.metrics.Meter.mark. I tested building Solr 
> 6.4.1 with the metrics collection disabled in MetricsDirectoryFactory getByte 
> and getBytes methods and was unable to reproduce the issue.
> As far as I can see there are several issues:
> 1. Collecting metrics on every single byte read is slow.
> 2. Having it enabled by default is not a good idea.
> 3. The comment "enable coarse-grained metrics by default" at 
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L104
>  implies that only coarse-grained metrics should be enabled by default, and 
> this contradicts collecting metrics on every single byte read.
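For illustration of the cost difference being described (not Solr code): marking a Dropwizard Meter once per byte versus once per block of bytes.

{code:java}
import com.codahale.metrics.Meter;

public class MeterGranularitySketch {
  public static void main(String[] args) {
    Meter bytesRead = new Meter();
    byte[] block = new byte[8192];

    // Fine-grained: one mark() call per byte read (what the report describes).
    for (int i = 0; i < block.length; i++) {
      bytesRead.mark();
    }

    // Coarse-grained alternative: count locally and record once per block.
    bytesRead.mark(block.length);
  }
}
{code}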



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10130) Serious performance degradation in Solr 6.4.1 due to the new metrics collection

2017-02-13 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10130:

Priority: Blocker  (was: Major)

> Serious performance degradation in Solr 6.4.1 due to the new metrics 
> collection
> ---
>
> Key: SOLR-10130
> URL: https://issues.apache.org/jira/browse/SOLR-10130
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
> Environment: Centos 7, OpenJDK 1.8.0 update 111
>Reporter: Ere Maijala
>Priority: Blocker
>  Labels: perfomance
> Attachments: solr-8983-console-f1.log
>
>
> We've stumbled on serious performance issues after upgrading to Solr 6.4.1. 
> Looks like the new metrics collection system in MetricsDirectoryFactory is 
> causing a major slowdown. This happens with an index configuration that, as 
> far as I can see, has no metrics-specific configuration and uses 
> luceneMatchVersion 5.5.0. In practice a moderate load will completely bog 
> down the server, with Solr threads constantly using up all CPU capacity (600% 
> on a 6-core machine) under a load where we would normally see an average 
> load of < 50%.
> I took stack traces (I'll attach them) and noticed that the threads are 
> spending time in com.codahale.metrics.Meter.mark. I tested building Solr 
> 6.4.1 with the metrics collection disabled in MetricsDirectoryFactory getByte 
> and getBytes methods and was unable to reproduce the issue.
> As far as I can see there are several issues:
> 1. Collecting metrics on every single byte read is slow.
> 2. Having it enabled by default is not a good idea.
> 3. The comment "enable coarse-grained metrics by default" at 
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L104
>  implies that only coarse-grained metrics should be enabled by default, and 
> this contradicts collecting metrics on every single byte read.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 282 - Still Failing

2017-02-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/282/

5 tests failed.
FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([977CE2E4FCF62D27:35A4AC3E98040421]:0)
at org.apache.lucene.util.packed.Packed64.(Packed64.java:73)
at 
org.apache.lucene.util.packed.PackedInts.getMutable(PackedInts.java:972)
at 
org.apache.lucene.util.packed.PackedInts.getMutable(PackedInts.java:939)
at 
org.apache.lucene.util.packed.GrowableWriter.ensureCapacity(GrowableWriter.java:80)
at 
org.apache.lucene.util.packed.GrowableWriter.set(GrowableWriter.java:88)
at 
org.apache.lucene.util.packed.AbstractPagedMutable.set(AbstractPagedMutable.java:98)
at org.apache.lucene.util.fst.NodeHash.addNew(NodeHash.java:152)
at org.apache.lucene.util.fst.NodeHash.rehash(NodeHash.java:169)
at org.apache.lucene.util.fst.NodeHash.add(NodeHash.java:133)
at org.apache.lucene.util.fst.Builder.compileNode(Builder.java:214)
at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:310)
at org.apache.lucene.util.fst.Builder.add(Builder.java:414)
at 
org.apache.lucene.codecs.memory.MemoryDocValuesConsumer.writeFST(MemoryDocValuesConsumer.java:367)
at 
org.apache.lucene.codecs.memory.MemoryDocValuesConsumer.addSortedField(MemoryDocValuesConsumer.java:404)
at 
org.apache.lucene.codecs.DocValuesConsumer.mergeSortedField(DocValuesConsumer.java:653)
at 
org.apache.lucene.codecs.DocValuesConsumer.merge(DocValuesConsumer.java:204)
at 
org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.merge(PerFieldDocValuesFormat.java:153)
at 
org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:167)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:111)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4363)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3940)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2091)
at 
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:5004)
at 
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:731)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5042)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5033)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1582)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1324)
at 
org.apache.lucene.index.TestIndexSorting.testRandom3(TestIndexSorting.java:2229)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)


FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchAddsWithDelete

Error Message:
Timeout while trying to assert number of documents @ target_collection

Stack Trace:
java.lang.AssertionError: Timeout while trying to assert number of documents @ 
target_collection
at 
__randomizedtesting.SeedInfo.seed([8D622B05D79CDF:77971B8FADCE9DF3]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertNumDocs(BaseCdcrDistributedZkTest.java:271)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchAddsWithDelete(CdcrReplicationDistributedZkTest.java:532)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 

[jira] [Commented] (SOLR-10121) BlockCache corruption with high concurrency

2017-02-13 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863924#comment-15863924
 ] 

Ben Manes commented on SOLR-10121:
--

Yes, a write should constitute a publication. Caffeine decorates a 
ConcurrentHashMap but does bypass it at times. By default, eviction is 
asynchronous, delegating to the ForkJoinPool commonPool, but it can be configured 
to use the calling thread instead. That might be useful for testing.

Solr uses an old version of Caffeine. A patch was reviewed and approved, but 
needs someone to merge it in SOLR-8241. I'm not aware of a visibility bug in 
any release, but staying current would be helpful as I have fixed bugs since 
that version.
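For reference, a minimal sketch of the same-thread configuration mentioned above (cache key/value types made up), which can make eviction deterministic in tests:

{code:java}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class SameThreadEvictionSketch {
  public static void main(String[] args) {
    // Run maintenance/eviction on the calling thread instead of the ForkJoinPool commonPool.
    Cache<Long, byte[]> cache = Caffeine.newBuilder()
        .maximumSize(1_000)
        .executor(Runnable::run)
        .build();

    cache.put(42L, new byte[128]);
    byte[] block = cache.getIfPresent(42L);
    System.out.println(block != null);
  }
}
{code}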

> BlockCache corruption with high concurrency
> ---
>
> Key: SOLR-10121
> URL: https://issues.apache.org/jira/browse/SOLR-10121
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>
> Improving the tests of the BlockCache in SOLR-10116 uncovered a corruption 
> bug (either that or the test is flawed... TBD).
> The failing test is TestBlockCache.testBlockCacheConcurrent()



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863865#comment-15863865
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit 8b5dec52f5d331ad3febd599016f8dd85480e628 in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8b5dec5 ]

SOLR-8029: disabled easymock for java9


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch, SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to modernize them is like 
> applying a band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]
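For illustration only (host, port and path are made up; the exact endpoints are whatever the spec settles on): the new API is addressed under the v2-style prefix alongside the legacy /solr endpoints, e.g.:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class V2ApiProbe {
  public static void main(String[] args) throws Exception {
    // Hypothetical cluster-level call against the new API prefix.
    URL url = new URL("http://localhost:8983/v2/cluster");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      in.lines().forEach(System.out::println);
    }
  }
}
{code}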



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7690) TestSimpleTextPointsFormat.testWithExceptions() failure

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863830#comment-15863830
 ] 

ASF subversion and git services commented on LUCENE-7690:
-

Commit 00449959d61aa33dd879a987dd1379e6496ca7b1 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0044995 ]

LUCENE-7690: also handle expected CorruptIndexException in this test


> TestSimpleTextPointsFormat.testWithExceptions() failure
> ---
>
> Key: LUCENE-7690
> URL: https://issues.apache.org/jira/browse/LUCENE-7690
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
>
> Reproducing branch_6x seed from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/690/]:
> {noformat}
>[junit4] Suite: 
> org.apache.lucene.codecs.simpletext.TestSimpleTextPointsFormat
>[junit4] IGNOR/A 0.02s J0 | TestSimpleTextPointsFormat.testRandomBinaryBig
>[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSimpleTextPointsFormat -Dtests.method=testWithExceptions 
> -Dtests.seed=CCE1E867577CFFF6 -Dtests.slow=true -Dtests.locale=uk-UA 
> -Dtests.timezone=Asia/Qatar -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.93s J0 | TestSimpleTextPointsFormat.testWithExceptions 
> <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: this writer 
> hit an unrecoverable error; cannot complete forceMerge
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([CCE1E867577CFFF6:6EB2741BD8F2B00C]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1931)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1881)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.forceMerge(RandomIndexWriter.java:429)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.verify(BasePointsFormatTestCase.java:701)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.testWithExceptions(BasePointsFormatTestCase.java:224)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: org.apache.lucene.index.CorruptIndexException: 
> Problem reading index from 
> MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658) 
> (resource=MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658))
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:140)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.(SegmentReader.java:74)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:617)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4293)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3940)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]> Caused by: java.io.FileNotFoundException: a random 
> IOException (_0.inf)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:575)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:744)
>[junit4]>  at 
> org.apache.lucene.store.Directory.openChecksumInput(Directory.java:137)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openChecksumInput(MockDirectoryWrapper.java:1072)
>[junit4]>  at 
> org.apache.lucene.codecs.simpletext.SimpleTextFieldInfosFormat.read(SimpleTextFieldInfosFormat.java:73)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:107)
>[junit4]>  ... 7 more
>[junit4] IGNOR/A 0.01s J0 | 

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+155) - Build # 2846 - Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2846/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([826BE8C91B43F0C8:951D22EE1D971CF5]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:76)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (LUCENE-7690) TestSimpleTextPointsFormat.testWithExceptions() failure

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863826#comment-15863826
 ] 

ASF subversion and git services commented on LUCENE-7690:
-

Commit f1c5cd5784dd50a030c2923d2ad25d5178f60e6a in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f1c5cd5 ]

LUCENE-7690: also handle expected CorruptIndexException in this test


> TestSimpleTextPointsFormat.testWithExceptions() failure
> ---
>
> Key: LUCENE-7690
> URL: https://issues.apache.org/jira/browse/LUCENE-7690
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
>
> Reproducing branch_6x seed from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/690/]:
> {noformat}
>[junit4] Suite: 
> org.apache.lucene.codecs.simpletext.TestSimpleTextPointsFormat
>[junit4] IGNOR/A 0.02s J0 | TestSimpleTextPointsFormat.testRandomBinaryBig
>[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSimpleTextPointsFormat -Dtests.method=testWithExceptions 
> -Dtests.seed=CCE1E867577CFFF6 -Dtests.slow=true -Dtests.locale=uk-UA 
> -Dtests.timezone=Asia/Qatar -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.93s J0 | TestSimpleTextPointsFormat.testWithExceptions 
> <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: this writer 
> hit an unrecoverable error; cannot complete forceMerge
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([CCE1E867577CFFF6:6EB2741BD8F2B00C]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1931)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1881)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.forceMerge(RandomIndexWriter.java:429)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.verify(BasePointsFormatTestCase.java:701)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.testWithExceptions(BasePointsFormatTestCase.java:224)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: org.apache.lucene.index.CorruptIndexException: 
> Problem reading index from 
> MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658) 
> (resource=MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658))
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:140)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.(SegmentReader.java:74)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:617)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4293)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3940)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]> Caused by: java.io.FileNotFoundException: a random 
> IOException (_0.inf)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:575)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:744)
>[junit4]>  at 
> org.apache.lucene.store.Directory.openChecksumInput(Directory.java:137)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openChecksumInput(MockDirectoryWrapper.java:1072)
>[junit4]>  at 
> org.apache.lucene.codecs.simpletext.SimpleTextFieldInfosFormat.read(SimpleTextFieldInfosFormat.java:73)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:107)
>[junit4]>  ... 7 more
>[junit4] IGNOR/A 0.01s J0 | 

[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2017-02-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863825#comment-15863825
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit 563f522643b5460e5b3bde1815f3f0b08c248eef in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=563f522 ]

SOLR-8029: disabled easymock for java9


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch, SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to modernize them is like 
> applying a band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7315) SSL options don't seem to be working on trunk

2017-02-13 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863824#comment-15863824
 ] 

Kevin Risden commented on SOLR-7315:


I'm not sure that PKCS12 type would have worked prior to SOLR-9728? 

> SSL options don't seem to be working on trunk
> -
>
> Key: SOLR-7315
> URL: https://issues.apache.org/jira/browse/SOLR-7315
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Hoss Man
>Assignee: Steve Rowe
>
> While trying to review another patch affecting bin/solr, I attempted to 
> verify that things were working with SSL, and then realized that even with an 
> unmodified trunk, the documented steps for enabling SSL don't seem to work -- 
> *THEY DO WORK ON 5X, JUST NOT TRUNK*
> I'll post full details in a comment



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10072) The test TestSelectiveWeightCreation appears to be unreliable.

2017-02-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-10072.

   Resolution: Fixed
Fix Version/s: master (7.0)
   6.5

> The test TestSelectiveWeightCreation appears to be unreliable.
> --
>
> Key: SOLR-10072
> URL: https://issues.apache.org/jira/browse/SOLR-10072
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
> Attachments: stdout, stdout
>
>
> TestSelectiveWeightCreation 17.00% unreliable 30.00 24.66



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10072) The test TestSelectiveWeightCreation appears to be unreliable.

2017-02-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-10072:
--

Assignee: Mark Miller  (was: Christine Poerschke)

> The test TestSelectiveWeightCreation appears to be unreliable.
> --
>
> Key: SOLR-10072
> URL: https://issues.apache.org/jira/browse/SOLR-10072
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: stdout, stdout
>
>
> TestSelectiveWeightCreation 17.00% unreliable 30.00 24.66



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr Search - Issue

2017-02-13 Thread Jan Høydahl
Hi,

http://people.apache.org/~hossman/#solr-user

Your question is better suited for the solr-user@lucene mailing list ...
not the dev@lucene list.  The dev list is for discussing development of
the internals of Solr and the Lucene Java library ... it is *not* the 
appropriate place to ask questions about how to use Solr or the Lucene 
Java library when developing your own applications.  Please resend your 
message to the solr-user mailing list, where you are likely to get 
more/better responses since that list also has a larger number of subscribers.

http://lucene.apache.org/solr/community.html#mailing-lists-irc

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 13. feb. 2017 kl. 15.16 skrev Neeraj Kumar :
> 
> Hi Team,
> Any update on this.
>  
> Mit freundlichen Grüßen / Best Regards
> TCS Deutschland GmbH
> i.V. Neeraj Kumar
> 
> TCS Deutschland GmbH
> Contractpartner of Bayer Business Services GmbH
> BBS-ITS-R/BCS
>  
> On behalf of
> Bayer Business Services GmbH
> 51368 Leverkusen, Deutschland
>  
> Tel: +91-120-6163122
> E-Mail: neeraj.kumar@bayer.com
> Web: http://www.business-services.bayer.com
>  
> Geschäftsführung: Daniel Hartert, Vorsitzender   |   Wilhelm Oehlschläger, 
> Arbeitsdirektor
> Vorsitzender des Aufsichtsrats: Johannes Dietsch
> Sitz der Gesellschaft: Leverkusen   |   Amtsgericht Köln, HRB 49895
>  
> From: Neeraj Kumar 
> Sent: Tuesday, February 07, 2017 12:42 PM
> To: 'dev@lucene.apache.org'
> Subject: Solr Search - Issue
>  
> Hi Team,
> 
> I am new to Solr and need your help. My problem statement is as below.
> 
> I have uploaded a document in Solr as below. #sb# represents the sentence 
> beginning and #se# represents the sentence ending. Now I want to search for 
> terms which occur in the same sentence. If I search for q=text:"Federer 
> Wimbledon", the document below should come up in the search results, as both 
> terms occur in the same sentence. If I search for q=text:"Federer Rafa", the 
> document below should not come up in the search results, as the terms occur 
> in different sentences.
> 
> #sb#Federer, who looked ahead at Sunday's final, calling it an 'epic battle', 
> said, "Maybe I lost the Wimbledon final in 2008 because of too many clay 
> court matches #se##sb# He crushed me at the French Open final #se##sb# I 
> think it affected my first two sets at Wimbledon #se##sb# Maybe that's why I 
> ended up losing#se##sb# The Swiss continued, "I know Rafa played great in 
> that final #se##sb# I actually ended up playing great too, but I wasn't 
> fighting the right way #se##sb# I think that was the effect of that French 
> Open loss. It was more mental #se#
> 
> Could you please tell me how to fire a query to achieve this.
>  
>  
> Mit freundlichen Grüßen / Best Regards
> TCS Deutschland GmbH
> i.V. Neeraj Kumar
> 
> TCS Deutschland GmbH
> Contractpartner of Bayer Business Services GmbH
> BBS-ITS-R/BCS
>  
> On behalf of
> Bayer Business Services GmbH
> 51368 Leverkusen, Deutschland
>  
> Tel: +91-120-6163122
> E-Mail: neeraj.kumar@bayer.com
> Web: http://www.business-services.bayer.com
>  
> Geschäftsführung: Daniel Hartert, Vorsitzender   |   Wilhelm Oehlschläger, 
> Arbeitsdirektor
> Vorsitzender des Aufsichtsrats: Johannes Dietsch
> Sitz der Gesellschaft: Leverkusen   |   Amtsgericht Köln, HRB 49895


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10121) BlockCache corruption with high concurrency

2017-02-13 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863699#comment-15863699
 ] 

Yonik Seeley edited comment on SOLR-10121 at 2/13/17 2:17 PM:
--

I reviewed the pertinent BlockCache and haven't seen any thread safety issues 
yet. Looking at the history of BlockCache, I reverted to right before SOLR-7355 
was applied, and the issues went away.  So it looks like it could be a thread 
safety or usage issue with Caffeine?
 [~ben.manes], does putting a key/value in Caffeine constitute safe publication 
to a different thread (as is the case with ConcurrentHashMap for example)?

Note that this doesn't necessarily mean something is wrong with Caffeine... it 
may be that the increased concurrency or other allowable differences in 
behavior uncover a bug in BlockCache as well.


was (Author: ysee...@gmail.com):
I reviewed the pertinent BlockCache and couldn't see any thread safety issues.
Looking at the history of BlockCache, I reverted to right before SOLR-7355 was 
applied, and the issues went away.  So it looks like a thread safety or usage 
issue with Caffeine?
 [~ben.manes], does putting a key/value in Caffeine constitute safe publication 
to a different thread (as is the case with ConcurrentHashMap for example)?

> BlockCache corruption with high concurrency
> ---
>
> Key: SOLR-10121
> URL: https://issues.apache.org/jira/browse/SOLR-10121
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>
> Improving the tests of the BlockCache in SOLR-10116 uncovered a corruption 
> bug (either that or the test is flawed... TBD).
> The failing test is TestBlockCache.testBlockCacheConcurrent()



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Solr Search - Issue

2017-02-13 Thread Neeraj Kumar
Hi Team,
Any update on this.

Mit freundlichen Grüßen / Best Regards
TCS Deutschland GmbH
i.V. Neeraj Kumar

TCS Deutschland GmbH
Contractpartner of Bayer Business Services GmbH
BBS-ITS-R/BCS

On behalf of
Bayer Business Services GmbH
51368 Leverkusen, Deutschland

Tel: +91-120-6163122
E-Mail: neeraj.kumar@bayer.com
Web: 
http://www.business-services.bayer.com

Geschäftsführung: Daniel Hartert, Vorsitzender   |   Wilhelm Oehlschläger, 
Arbeitsdirektor
Vorsitzender des Aufsichtsrats: Johannes Dietsch
Sitz der Gesellschaft: Leverkusen   |   Amtsgericht Köln, HRB 49895

From: Neeraj Kumar
Sent: Tuesday, February 07, 2017 12:42 PM
To: 'dev@lucene.apache.org'
Subject: Solr Search - Issue


Hi Team,

I am new to Solr and need your help. My problem statement is as below.

I have uploaded a document in Solr as below. #sb# represents the sentence 
beginning and #se# represents the sentence ending. Now I want to search for terms 
which occur in the same sentence. If I search for q=text:"Federer Wimbledon", the 
document below should come up in the search results, as both terms occur in the 
same sentence. If I search for q=text:"Federer Rafa", the document below should 
not come up in the search results, as the terms occur in different sentences.

#sb#Federer, who looked ahead at Sunday's final, calling it an 'epic battle', 
said, "Maybe I lost the Wimbledon final in 2008 because of too many clay court 
matches #se##sb# He crushed me at the French Open final #se##sb# I think it 
affected my first two sets at Wimbledon #se##sb# Maybe that's why I ended up 
losing#se##sb# The Swiss continued, "I know Rafa played great in that final 
#se##sb# I actually ended up playing great too, but I wasn't fighting the right 
way #se##sb# I think that was the effect of that French Open loss. It was more 
mental #se#
Could you please tell me how to fire a query to achieve this.


Mit freundlichen Grüßen / Best Regards
TCS Deutschland GmbH
i.V. Neeraj Kumar

TCS Deutschland GmbH
Contractpartner of Bayer Business Services GmbH
BBS-ITS-R/BCS

On behalf of
Bayer Business Services GmbH
51368 Leverkusen, Deutschland

Tel: +91-120-6163122
E-Mail: neeraj.kumar@bayer.com
Web: 
http://www.business-services.bayer.com

Geschäftsführung: Daniel Hartert, Vorsitzender   |   Wilhelm Oehlschläger, 
Arbeitsdirektor
Vorsitzender des Aufsichtsrats: Johannes Dietsch
Sitz der Gesellschaft: Leverkusen   |   Amtsgericht Köln, HRB 49895



[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 470 - Unstable!

2017-02-13 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/470/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:42198/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:42198/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([CAD1305F8E79CBD2:42850F852085A62A]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:653)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1002)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:891)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:827)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

Re: [VOTE] Release Lucene/Solr 5.5.4 RC1

2017-02-13 Thread Adrien Grand
Awesome, thanks Steve for looking into it.

This vote has passed, so I will start working on releasing this candidate.
Thanks to all who voted.

On Mon, Feb 13, 2017 at 12:59, Steve Rowe wrote:

Hi Adrien,

This failure was addressed by SOLR-9088 and SOLR-9832, included in 6.2 &
6.4, respectively:
-
   [smoker][junit4] ERROR   13.9s J0 | TestManagedSchemaAPI.test <<<
   [smoker][junit4]> Throwable #1:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error
from server at http://127.0.0.1:44072/solr/testschemaapi_shard1_replica1:
ERROR: [doc=2] unknown field 'myNewField1'
-

This failure was AFAICT addressed by SOLR-9181 (since failures stopped
happening on master/branch_6x after it was committed), included in 6.2:
-
   [smoker][junit4] ERROR   0.24s J1 |
ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh <<<
   [smoker][junit4]> Throwable #1:
org.apache.solr.common.SolrException: Could not find collection : c1
-

+1 to release 5.5.4 without backporting these fixes.

--
Steve
www.lucidworks.com

> On Feb 13, 2017, at 4:10 AM, Adrien Grand  wrote:
>
> I would appreciate if someone could confirm that failures from
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.5/13/  are not
release blockers.
>
> On Sat, Feb 11, 2017 at 06:31, Steve Rowe wrote:
> +1
>
> Changes, docs and javadocs look good, and the smoke tester was happy
(with --test-java8): SUCCESS! [0:41:56.360623]
>
> --
> Steve
> www.lucidworks.com
>
> > On Feb 8, 2017, at 3:28 PM, Adrien Grand  wrote:
> >
> > On Thu, Feb 9, 2017 at 00:26, Adrien Grand wrote:
> > Please vote for release candidate 1 for Lucene/Solr 6.4.1.
> >
> > I meant 5.5.4.
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


[jira] [Commented] (SOLR-7315) SSL options don't seem to be working on trunk

2017-02-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863700#comment-15863700
 ] 

Jan Høydahl commented on SOLR-7315:
---

When this issue talks about "trunk", it means 6.x, so is it safe to assume 
this has since been fixed?

> SSL options don't seem to be working on trunk
> -
>
> Key: SOLR-7315
> URL: https://issues.apache.org/jira/browse/SOLR-7315
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Hoss Man
>Assignee: Steve Rowe
>
> while trying to review another patch affecting bin/solr, i attempted to 
> verify that things were working with SSL, and then realized that even with an 
> unmodified trunk, the documented steps for enabling SSL don't seem to work -- 
> *THEY DO WORK ON 5X, JUST NOT TRUNK*
> i'll post full details in a comment



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release PyLucene 6.4.1 (rc1)

2017-02-13 Thread Jan Høydahl
I did some website fixes wrt versions and Mac OS X -> macOS renaming.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On Feb 13, 2017, at 13:14, Jan Høydahl wrote:
> 
> Hi,
> 
> I found the reason: it is a Java bug which is fixed in Java 9: 
> https://bugs.openjdk.java.net/browse/JDK-7131356 
> 
> 
> The workaround was to install Apple's Java 6, then make and make install 
> succeed.
> 
> I then tested python IndexFiles.py  and python SearchFiles.py and it 
> all works :-)
> 
> +1 to release
> 
> PS: The page http://lucene.apache.org/pylucene/install.html 
> is outdated wrt Mac, versions, etc. and should probably mention the Java 6 
> bug as well
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com 
> 
>> On Feb 13, 2017, at 12:12, Jan Høydahl wrote:
>> 
>> Here is a GIST with complete install log and Makefile. I did not modify 
>> setup.py, it looked good to go
>> 
>> https://gist.github.com/janhoy/c996529dc492ec3ad9cb3b81e80719f2#file-pylucene-install-log-txt
>>  
>> 
>> 
>> In Makefile I customized only these vars
>> 
>>> PREFIX_PYTHON=/usr/local/Cellar/python/2.7.13/
>>> ANT=/usr/local/Cellar/ant/1.10.0/bin/ant
>>> PYTHON=$(PREFIX_PYTHON)/bin/python
>>> JCC=$(PYTHON) -m jcc
>>> NUM_FILES=8
>> 
>> 
>> JCC finds Java Home, and python version is 2.7.13
>> My version of ‘make’ is macOS default gmake 3.81
>> 
>> I also tried with (g)make 4.2.1 but same problem.
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com 
>> 
>>> On Feb 13, 2017, at 00:47, Andi Vajda wrote:
>>> 
>>> 
>>> On Mon, 13 Feb 2017, Jan Høydahl wrote:
>>> 
 Tried to build on my Mac again, same problem as last time when running 
 'make', the command 'python -m jcc.__main__ --shared --arch ?.? requests 
 old Apple-Java 6:
 
> No Java runtime present, requesting install.
>>> 
>>> When building JCC (before building PyLucene), you need to ensure that the 
>>> proper version of Java is found. The setup.py program tries to figure it 
>>> out for you and tells you what it's about to build with on stdout.
>>> 
>>> Then you need to install JCC.
>>> 
>>> Then, when building PyLucene, you need to make sure that the same python 
>>> install you used to build JCC is also going to be used by the PyLucene 
>>> Makefile, since that's where the current JCC you just built got installed.
>>> You need to edit that Makefile and uncomment/edit one of the configuration
>>> examples to match your setup.
>>> 
>>> I'm sure it also helps if at the command line, you see something like this
>>>  $ java -version
>>>  Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
>>>  Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
>>> 
>>> If not, fix this before trying anything else.
>>> 
>>> Andi..
>>> 
 
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com 
 
> On Feb 11, 2017, at 23:23, Andi Vajda wrote:
> 
> 
> Ping ?
> Two more PMC votes are needed before this release can happen.
> Thanks !
> 
> Andi..
> 
>> On Feb 6, 2017, at 13:38, Andi Vajda wrote:
>> 
>> 
>> The PyLucene 6.4.1 (rc1) release tracking today's release of
>> Apache Lucene 6.4.1 is ready.
>> 
>> A release candidate is available from:
>> https://dist.apache.org/repos/dist/dev/lucene/pylucene/6.4.1-rc1/ 
>> 
>> 
>> PyLucene 6.4.1 is built with JCC 2.23 included in these release 
>> artifacts.
>> 
>> Please vote to release these artifacts as PyLucene 6.4.1.
>> Anyone interested in this release can and should vote !
>> 
>> Thanks !
>> 
>> Andi..
>> 
>> ps: the KEYS file for PyLucene release signing is at:
>> https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS 
>> 
>> https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS
>> 
>> pps: here is my +1
 
>> 
> 



[jira] [Resolved] (LUCENE-7550) QueryParser parses query differently depending on the default operator

2017-02-13 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-7550.
-
Resolution: Won't Fix
  Assignee: Dawid Weiss

Sorry to return to this so late. This is a known behavior of how QueryParser 
works, Paweł.

Quoting Hoss: "If the default operator is set to 'And' then the behavior is 
just plain weird." You can read about the Boolean logic and query parser 
behavior at [1]. Also, check out PrecedenceQueryParser, which should return the 
result you expect.

[1] https://lucidworks.com/2011/12/28/why-not-and-or-and-not/
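
For anyone hitting the same question, a small sketch of the PrecedenceQueryParser alternative mentioned above (it lives in the lucene-queryparser module; the printed form is indicative only and may differ slightly between versions):

{code}
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.queryparser.flexible.precedence.PrecedenceQueryParser;
import org.apache.lucene.queryparser.flexible.standard.config.StandardQueryConfigHandler.Operator;
import org.apache.lucene.search.Query;

public class PrecedenceParserSketch {
    public static void main(String[] args) throws Exception {
        PrecedenceQueryParser parser = new PrecedenceQueryParser(new WhitespaceAnalyzer());

        // PrecedenceQueryParser gives AND higher precedence than OR, so the explicit
        // operators in "foo AND bar OR baz" should be grouped the same way
        // regardless of the default operator.
        parser.setDefaultOperator(Operator.AND);
        Query q1 = parser.parse("foo AND bar OR baz", "test");

        parser.setDefaultOperator(Operator.OR);
        Query q2 = parser.parse("foo AND bar OR baz", "test");

        System.out.println(q1);  // expected: something like (+test:foo +test:bar) test:baz
        System.out.println(q2);  // expected: the same grouping
    }
}
{code}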

> QueryParser parses query differently depending on the default operator
> --
>
> Key: LUCENE-7550
> URL: https://issues.apache.org/jira/browse/LUCENE-7550
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>
> As explained by Paweł Róg on java-user [1], the output of parsing the queries 
> below is different depending on the default operator. This looks odd and 
> should be investigated.
> {code}
> QueryParser parser = new QueryParser("test", new WhitespaceAnalyzer());
> parser.setDefaultOperator(QueryParser.Operator.AND);
> Query query = parser.parse("foo AND bar OR baz ");
> System.out.println(query.toString());
> parser.setDefaultOperator(QueryParser.Operator.OR);
> query = parser.parse("foo AND bar OR baz ");
> System.out.println(query.toString());
> {code}
> Results in :
> {code}
> +test:foo test:bar test:baz
> +test:foo +test:bar test:baz
> {code}
> [1] 
> http://mail-archives.apache.org/mod_mbox/lucene-java-user/201611.mbox/browser



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10130) Serious performance degradation in Solr 6.4.1 due to the new metrics collection

2017-02-13 Thread Ere Maijala (JIRA)
Ere Maijala created SOLR-10130:
--

 Summary: Serious performance degradation in Solr 6.4.1 due to the 
new metrics collection
 Key: SOLR-10130
 URL: https://issues.apache.org/jira/browse/SOLR-10130
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Affects Versions: 6.4.1
 Environment: Centos 7, OpenJDK 1.8.0 update 111
Reporter: Ere Maijala
 Attachments: solr-8983-console-f1.log

We've stumbled on serious performance issues after upgrading to Solr 6.4.1. 
It looks like the new metrics collection system in MetricsDirectoryFactory is 
causing a major slowdown. This happens with an index configuration that, as far 
as I can see, has no metrics-specific configuration and uses luceneMatchVersion 
5.5.0. In practice a moderate load will completely bog down the server, with 
Solr threads constantly using all CPU capacity (600% on a 6-core machine) under 
a load where we normally see an average load of < 50%.

I took stack traces (I'll attach them) and noticed that the threads are 
spending their time in com.codahale.metrics.Meter.mark. I tested building Solr 
6.4.1 with the metrics collection disabled in the MetricsDirectoryFactory 
getByte and getBytes methods and was unable to reproduce the issue.

As far as I can see there are several issues:
1. Collecting metrics on every single byte read is slow (see the rough sketch below).
2. Having it enabled by default is not a good idea.
3. The comment "enable coarse-grained metrics by default" at 
https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L104
 implies that only coarse-grained metrics should be enabled by default, which 
contradicts collecting metrics on every single byte read.
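
To make point 1 concrete, here is a rough, self-contained micro-sketch using only the Dropwizard Metrics API (not Solr code; the buffer size and metric names are invented) that contrasts marking a Meter once per byte with marking it once per read call. Both end up with the same aggregate count, but the per-byte variant performs thousands of meter updates for a single buffer:

{code}
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

public class MeterOverheadSketch {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        Meter perByte = registry.meter("bytesRead.perByte");
        Meter perCall = registry.meter("bytesRead.perCall");

        byte[] buffer = new byte[16 * 1024];  // pretend this is one readBytes() call

        long t0 = System.nanoTime();
        for (int i = 0; i < buffer.length; i++) {
            perByte.mark();                   // what metering every single byte amounts to
        }
        long t1 = System.nanoTime();
        perCall.mark(buffer.length);          // coarse-grained alternative: one update per call
        long t2 = System.nanoTime();

        System.out.printf("per-byte: %d ns, per-call: %d ns, counts: %d / %d%n",
                t1 - t0, t2 - t1, perByte.getCount(), perCall.getCount());
    }
}
{code}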




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10129) Expose lucene PointRange fields in Solr

2017-02-13 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-10129:


 Summary: Expose lucene PointRange fields in Solr
 Key: SOLR-10129
 URL: https://issues.apache.org/jira/browse/SOLR-10129
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Alan Woodward
Assignee: Alan Woodward


Follow up to SOLR-8396, it would be nice to expose the sandbox PointRange 
fields in Solr.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10130) Serious performance degradation in Solr 6.4.1 due to the new metrics collection

2017-02-13 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated SOLR-10130:
---
Attachment: solr-8983-console-f1.log

> Serious performance degradation in Solr 6.4.1 due to the new metrics 
> collection
> ---
>
> Key: SOLR-10130
> URL: https://issues.apache.org/jira/browse/SOLR-10130
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: metrics
>Affects Versions: 6.4.1
> Environment: Centos 7, OpenJDK 1.8.0 update 111
>Reporter: Ere Maijala
>  Labels: perfomance
> Attachments: solr-8983-console-f1.log
>
>
> We've stumbled on serious performance issues after upgrading to Solr 6.4.1. 
> It looks like the new metrics collection system in MetricsDirectoryFactory is 
> causing a major slowdown. This happens with an index configuration that, as 
> far as I can see, has no metrics-specific configuration and uses 
> luceneMatchVersion 5.5.0. In practice a moderate load will completely bog 
> down the server, with Solr threads constantly using all CPU capacity (600% on 
> a 6-core machine) under a load where we normally see an average load of < 50%.
> I took stack traces (I'll attach them) and noticed that the threads are 
> spending their time in com.codahale.metrics.Meter.mark. I tested building 
> Solr 6.4.1 with the metrics collection disabled in the MetricsDirectoryFactory 
> getByte and getBytes methods and was unable to reproduce the issue.
> As far as I can see there are several issues:
> 1. Collecting metrics on every single byte read is slow.
> 2. Having it enabled by default is not a good idea.
> 3. The comment "enable coarse-grained metrics by default" at 
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L104
>  implies that only coarse-grained metrics should be enabled by default, which 
> contradicts collecting metrics on every single byte read.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7579) Sorting on flushed segment

2017-02-13 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi resolved LUCENE-7579.
--
Resolution: Fixed

> Sorting on flushed segment
> --
>
> Key: LUCENE-7579
> URL: https://issues.apache.org/jira/browse/LUCENE-7579
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>
> Today flushed segments built by an index writer with an index sort specified 
> are not sorted. The merge is responsible for sorting these segments, 
> potentially with others that are already sorted (resulting from another 
> merge). 
> I'd like to investigate the cost of sorting the segment directly during the 
> flush. This could make the merge faster since there are some cheap 
> optimizations that can be done only if all segments to be merged are sorted.
> For instance the merge of the points could use the bulk merge instead of 
> rebuilding the points from scratch.
> I made a small prototype which sorts the segment on flush here:
> https://github.com/apache/lucene-solr/compare/master...jimczi:flush_sort
> The idea is simple: for points, norms, docvalues and terms I use the 
> SortingLeafReader implementation to translate the values that we have in RAM 
> into a sorted enumeration for the writers.
> For stored fields I use a two-pass scheme where the documents are first 
> written to disk unsorted and then copied to another file with the correct 
> sorting. I use the same stored field format for the two steps and just remove 
> the file produced by the first pass at the end of the process.
> This prototype has no implementation yet for index sorting that uses term 
> vectors. I'll add this later if the tests are good enough.
> Speaking of testing, I tried this branch with [~mikemccand]'s benchmark 
> scripts and compared master with index sorting against my branch with index 
> sorting on flush. I tried with sparsetaxis and wikipedia and the first results 
> are weird. When I use the SerialScheduler and only one thread to write the 
> docs, index sorting on flush is slower. But when I use two threads the sorting 
> on flush is much faster, even with the SerialScheduler. I'll continue to run 
> the tests in order to be able to share something more meaningful.
> The tests are passing except one about concurrent DV updates. I don't know 
> this part at all so I did not fix that test yet. I don't even know if we can 
> make it work with index sorting ;).
> [~mikemccand] I would love to have your feedback about the prototype. Could 
> you please take a look? I am sure there are plenty of bugs, ... but I think 
> it's a good start to evaluate the feasibility of this feature.
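
For context, a minimal sketch of the index-sort configuration this issue is about (Lucene 6.2+ API; the field names and index path are illustrative). The change discussed above means segments produced by flush would already be in this sort order, instead of only becoming sorted during merges:

{code}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.store.FSDirectory;

public class IndexSortSketch {
    public static void main(String[] args) throws Exception {
        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
        // Declare the index sort; the sort field must be indexed with doc values.
        config.setIndexSort(new Sort(new SortField("timestamp", SortField.Type.LONG)));

        try (IndexWriter writer =
                 new IndexWriter(FSDirectory.open(Paths.get("/tmp/sorted-index")), config)) {
            Document doc = new Document();
            doc.add(new StringField("id", "1", Field.Store.YES));
            doc.add(new NumericDocValuesField("timestamp", 42L));
            writer.addDocument(doc);
        }
    }
}
{code}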



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


