[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_121) - Build # 6403 - Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6403/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([1885730F7A9FED5E:90D14CD5D46380A6]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Updated] (SOLR-10152) PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-17 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10152:

Attachment: SOLR-10152.patch

> PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> --
>
> Key: SOLR-10152
> URL: https://issues.apache.org/jira/browse/SOLR-10152
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
> Attachments: SOLR-10152.patch
>
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> SOLR-10152.patch uploaded which incorporates CustomSeparatorBreakIterator in 
> PostingsSolrHighlighter.
> - added a new request param option to specify which separator char to use. 
> *customSeparatorChar*.
> - changed PostingsSolrHighlighter.getBreakIterator to check 
> HighlightParams.BS_TYPE first.
> - if type=='CUSTOM', look for the new separator param, in getBreakIterator, 
> validate it's a single char, & skip locale parsing.
> - 'WHOLE' option moved from parseBreakIterator to getBreakIterator, as it 
> doesn't depend on locale.
> Changes made in:
> * HighlightParams.java
> * PostingsSolrHighlighter.java
> * test cases added in TestPostingsSolrHighlighter
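
For illustration, a minimal sketch of the kind of break-iterator selection the description outlines. The method shape and parameter handling are assumptions for illustration only (the actual patch works through HighlightParams and PostingsSolrHighlighter.getBreakIterator); CustomSeparatorBreakIterator and WholeBreakIterator are the Lucene postingshighlight classes referenced above.

{code}
import java.text.BreakIterator;
import java.util.Locale;

import org.apache.lucene.search.postingshighlight.CustomSeparatorBreakIterator;
import org.apache.lucene.search.postingshighlight.WholeBreakIterator;

public class BreakIteratorSelectionSketch {

  // Resolve the break iterator type first; CUSTOM and WHOLE never need locale parsing.
  static BreakIterator select(String type, String customSeparatorChar, Locale locale) {
    if ("CUSTOM".equalsIgnoreCase(type)) {
      // validate that the separator is exactly one char before using it
      if (customSeparatorChar == null || customSeparatorChar.length() != 1) {
        throw new IllegalArgumentException("customSeparatorChar must be a single character");
      }
      return new CustomSeparatorBreakIterator(customSeparatorChar.charAt(0));
    }
    if ("WHOLE".equalsIgnoreCase(type)) {
      return new WholeBreakIterator();               // whole field; locale-independent
    }
    if ("WORD".equalsIgnoreCase(type)) {
      return BreakIterator.getWordInstance(locale);
    }
    // default: sentence boundaries, as the highlighter normally uses
    return BreakIterator.getSentenceInstance(locale);
  }
}
{code}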






[jira] [Updated] (SOLR-10152) PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-17 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10152:

Description: 
Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)

SOLR-10152.patch uploaded which incorporates CustomSeparatorBreakIterator in 
PostingsSolrHighlighter.

- added a new request param option to specify which separator char to use. 
*customSeparatorChar*.
- changed PostingsSolrHighlighter.getBreakIterator to check 
HighlightParams.BS_TYPE first.
- if type=='CUSTOM', look for the new separator param, in getBreakIterator, 
validate it's a single char, & skip locale parsing.
- 'WHOLE' option moved from parseBreakIterator to getBreakIterator, as it 
doesn't depend on locale.

Changes made in:
* HighlightParams.java
* PostingsSolrHighlighter.java
* test cases added in TestPostingsSolrHighlighter

  was:
Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)

SOLR-10152.patch uploaded which incorporates CustomSeparatorBreakIterator in 
PostingsSolrHighlighter.

- added a new request param option to specify which separator char to use. 
*customSeparatorChar*.
- changed PostingsSolrHighlighter.getBreakIterator to check 
HighlightParams.BS_TYPE first.
- if type=='CUSTOM', look for the new separator param, in getBreakIterator, 
validate it's a single char, & skip locale parsing.
- 'WHOLE' option moved from parseBreakIterator to getBreakIterator, as it 
doesn't depend on locale.

Changes made in:
* HighlightParams.java
* PostingsSolrHighlighter.java


> PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> --
>
> Key: SOLR-10152
> URL: https://issues.apache.org/jira/browse/SOLR-10152
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> SOLR-10152.patch uploaded which incorporates CustomSeparatorBreakIterator in 
> PostingsSolrHighlighter.
> - added a new request param option to specify which separator char to use. 
> *customSeparatorChar*.
> - changed PostingsSolrHighlighter.getBreakIterator to check 
> HighlightParams.BS_TYPE first.
> - if type=='CUSTOM', look for the new separator param, in getBreakIterator, 
> validate it's a single char, & skip locale parsing.
> - 'WHOLE' option moved from parseBreakIterator to getBreakIterator, as it 
> doesn't depend on locale.
> Changes made in:
> * HighlightParams.java
> * PostingsSolrHighlighter.java
> * test cases added in TestPostingsSolrHighlighter






[jira] [Updated] (SOLR-10152) PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-17 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10152:

Attachment: (was: SOLR-10152.patch)

> PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> --
>
> Key: SOLR-10152
> URL: https://issues.apache.org/jira/browse/SOLR-10152
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> SOLR-10152.patch uploaded which incorporates CustomSeparatorBreakIterator in 
> PostingsSolrHighlighter.
> - added a new request param option to specify which separator char to use. 
> *customSeparatorChar*.
> - changed PostingsSolrHighlighter.getBreakIterator to check 
> HighlightParams.BS_TYPE first.
> - if type=='CUSTOM', look for the new separator param, in getBreakIterator, 
> validate it's a single char, & skip locale parsing.
> - 'WHOLE' option moved from parseBreakIterator to getBreakIterator, as it 
> doesn't depend on locale.
> Changes made in:
> * HighlightParams.java
> * PostingsSolrHighlighter.java
> * test cases added in TestPostingsSolrHighlighter






[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15873011#comment-15873011
 ] 

Ben Manes commented on SOLR-10141:
--

[Pull Request|https://github.com/ben-manes/caffeine/pull/144] with the fix and 
your test case.

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+155) - Build # 18990 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18990/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([8D68045E9108CF6D:9A1ECE7997DC2350]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Comment Edited] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872983#comment-15872983
 ] 

Ben Manes edited comment on SOLR-10141 at 2/18/17 5:08 AM:
---

Thanks!!! I think I found the bug. It now passes your test case.

The problem was due to put() stampeding over the value during the eviction. The 
[eviction 
routine|https://github.com/ben-manes/caffeine/blob/65e3efd4b50613c27567ff594877d0f63acfbce2/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java#L725]
 performed the following:
# Read the key, value, etc
# Conditionally removed in a computeIfPresent() block
#* resurrected if a race occurred (e.g. was thought expired, but newly accessed)
# Mark the entry as "dead" (using a synchronized (entry) block)
# Notify the listener

This failed because 
[putFast|https://github.com/ben-manes/caffeine/blob/65e3efd4b50613c27567ff594877d0f63acfbce2/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java#L1521]
 can perform its update outside of a hash table lock (e.g. a computation). It 
synchronizes on the entry to update, checking first if it was still alive. This 
resulted in a race where the entry was removed from the hash table, the value 
updated, and the entry marked as dead. When the listener was notified, it received 
the wrong value.

The solution I have now is to expand the synchronized block on eviction. This 
passes your test and should be cheap. I'd like to review it a little more and 
incorporate your test into my suite.

This is an excellent find. I've stared at the code many times and the race 
seems obvious in hindsight.
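
For illustration, a minimal compilable sketch of the interleaving described above (hypothetical names and structure, not the actual Caffeine internals):

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

// Hypothetical structure: put() can swap the value under the per-entry lock after
// eviction has unlinked the entry but before the listener is notified, so the
// listener can be handed the new value instead of the evicted one.
final class EvictionRaceSketch<K, V> {
  static final class Entry<V> {
    volatile V value;
    volatile boolean alive = true;
    Entry(V value) { this.value = value; }
  }

  final ConcurrentHashMap<K, Entry<V>> table = new ConcurrentHashMap<>();
  final BiConsumer<K, V> removalListener;

  EvictionRaceSketch(BiConsumer<K, V> removalListener) {
    this.removalListener = removalListener;
  }

  void evict(K key, Entry<V> entry) {
    table.computeIfPresent(key, (k, e) -> null);   // 1. unlink from the hash table
    synchronized (entry) { entry.alive = false; }  // 2. mark the entry dead
    removalListener.accept(key, entry.value);      // 3. may now see a value written by putFast()
  }

  void putFast(K key, V newValue) {
    Entry<V> entry = table.get(key);
    if (entry == null) {
      table.put(key, new Entry<>(newValue));
      return;
    }
    synchronized (entry) {            // update happens outside any table-wide lock
      if (entry.alive) {              // racing between steps 1 and 2 above
        entry.value = newValue;
      }
    }
  }
  // The fix described above widens evict()'s synchronized block so the unlink,
  // the dead-marking, and the value handed to the listener are atomic with
  // respect to putFast()'s entry lock.
}
{code}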


was (Author: ben.manes):
Thanks!!! I think I found the bug. It now passes your test case.

The problem was due to put() stampeding over the value during the eviction. The 
[eviction 
routine|https://github.com/ben-manes/caffeine/blob/65e3efd4b50613c27567ff594877d0f63acfbce2/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java#L725]
 performed the following:
# Read the key, value, etc
# Conditionally removed in a computeIfPresent() block
   - resurrected if a race occurred (e.g. was thought expired, but newly 
accessed)
# Mark the entry as "dead" (using a synchronized (entry) block)
# Notify the listener

This failed because 
[putFast|https://github.com/ben-manes/caffeine/blob/65e3efd4b50613c27567ff594877d0f63acfbce2/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java#L1521]
 can perform its update outside of a hash table lock (e.g. a computation). It 
synchronizes on the entry to update, checking first if it was still alive. This 
resulted in a race where the entry was removed from the hash table, the value 
updated, and the entry marked as dead. When the listener was notified, it received 
the wrong value.

The solution I have now is to expand the synchronized block on eviction. This 
passes your test and should be cheap. I'd like to review it a little more and 
incorporate your test into my suite.

This is an excellent find. I've stared at the code many times and the race 
seems obvious in hindsight.

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872983#comment-15872983
 ] 

Ben Manes commented on SOLR-10141:
--

Thanks!!! I think I found the bug. It now passes your test case.

The problem was due to put() stampeding over the value during the eviction. The 
[eviction 
routine|https://github.com/ben-manes/caffeine/blob/65e3efd4b50613c27567ff594877d0f63acfbce2/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java#L725]
 performed the following:
# Read the key, value, etc
# Conditionally removed in a computeIfPresent() block
   - resurrected if a race occurred (e.g. was thought expired, but newly 
accessed)
# Mark the entry as "dead" (using a synchronized (entry) block)
# Notify the listener

This failed because 
[putFast|https://github.com/ben-manes/caffeine/blob/65e3efd4b50613c27567ff594877d0f63acfbce2/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java#L1521]
 can perform its update outside of a hash table lock (e.g. a computation). It 
synchronizes on the entry to update, checking first if it was still alive. This 
resulted in a race where the entry was removed from the hash table, the value 
updated, and the entry marked as dead. When the listener was notified, it received 
the wrong value.

The solution I have now is to expand the synchronized block on eviction. This 
passes your test and should be cheap. I'd like to review it a little more and 
incorporate your test into my suite.

This is an excellent find. I've stared at the code many times and the race 
seems obvious in hindsight.

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+155) - Build # 2881 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2881/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([7C446C55BD468BA5:6B32A672BB926798]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872969#comment-15872969
 ] 

Ben Manes commented on SOLR-10141:
--

Thanks! I'm resolving some issues with the latest error-prone (static analyzer) 
and will then dig into it.

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872965#comment-15872965
 ] 

Yonik Seeley commented on SOLR-10141:
-

I checked in the test (test method testCacheConcurrent) : 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=solr/core/src/test/org/apache/solr/store/blockcache/BlockCacheTest.java


> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[JENKINS] Lucene-Solr-Tests-master - Build # 1672 - Unstable

2017-02-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1672/

1 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas null Last available 
state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
   "replicationFactor":"2",   "shards":{"shard1":{   "range":"8000-7fff",   "state":"active",
   "replicas":{ "core_node1":{   "core":"MissingSegmentRecoveryTest_shard1_replica1",
   "base_url":"https://127.0.0.1:55597/solr",   "node_name":"127.0.0.1:55597_solr",
   "state":"active",   "leader":"true"}, "core_node2":{
   "core":"MissingSegmentRecoveryTest_shard1_replica2",
   "base_url":"https://127.0.0.1:37299/solr",   "node_name":"127.0.0.1:37299_solr",
   "state":"down"}}}},   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",
   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"MissingSegmentRecoveryTest_shard1_replica1",
          "base_url":"https://127.0.0.1:55597/solr",
          "node_name":"127.0.0.1:55597_solr",
          "state":"active",
          "leader":"true"},
        "core_node2":{
          "core":"MissingSegmentRecoveryTest_shard1_replica2",
          "base_url":"https://127.0.0.1:37299/solr",
          "node_name":"127.0.0.1:37299_solr",
          "state":"down"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([3ED021C92F6AD0E6:6E85B9CA764B66FB]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872943#comment-15872943
 ] 

Ben Manes commented on SOLR-10141:
--

Can you provide me with the latest version of a self-contained test? If I can 
reproduce and debug it, I'll have a fix over the weekend.

v2 introduced a new eviction policy that takes frequency into account. Eviction 
should be rapid, so it is surprising that these issues remain. I've tried to be 
diligent about testing, so I will investigate.

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872937#comment-15872937
 ] 

Yonik Seeley commented on SOLR-10141:
-

Well darn... it looked like things were fixed by the upgrade to 2.3.5, but then 
I looked a little closer.
I happened to notice that the hit rate was super high, even though I designed the 
test to be closer to 50% (maxEntries = maxBlocks/2).

When I set these parameters in the test:
{code}
// odds (1 in N) of the next block operation being on the same block as the
// previous operation... helps flush concurrency issues
final int readLastBlockOdds = 0;
// sometimes insert a new entry for the key even if one was found
final boolean updateAnyway = false;
{code}

Results in something like this:
{code}
Done! # of Elements = 200 inserts=17234 removals=17034 hits=9982766 
maxObservedSize=401
{code}

So for 10M multi-threaded reads, our hit rate was 99.8%, which artificially 
lowers the rate at which we insert new entries, and hence doesn't exercise the 
concurrency as well, leading to a passing test most of the time.

When I modified the test to increase the write concurrency again, accounting 
for a cache that is apparently too big:
{code}
// odds (1 in N) of the next block operation being on the same block as the
// previous operation... helps flush concurrency issues
final int readLastBlockOdds = 10;
// sometimes insert a new entry for the key even if one was found
final boolean updateAnyway = true;
{code}
The removal listener issues reappear:
{code}
WARNING: Exception thrown by removal listener
java.lang.RuntimeException: listener called more than once! k=103 
v=org.apache.solr.store.blockcache.BlockCacheTest$Val@49dbc210 removalCause=SIZE
at 
org.apache.solr.store.blockcache.BlockCacheTest.lambda$testCacheConcurrent$0(BlockCacheTest.java:250)
at 
org.apache.solr.store.blockcache.BlockCacheTest$$Lambda$5/498475569.onRemoval(Unknown
 Source)
at 
com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$notifyRemoval$1(BoundedLocalCache.java:286)
at 
com.github.benmanes.caffeine.cache.BoundedLocalCache$$Lambda$12/1297599052.run(Unknown
 Source)
at 
org.apache.solr.store.blockcache.BlockCacheTest$$Lambda$7/957914685.execute(Unknown
 Source)
{code}
Guarding against the removal listener being called more than once with the same 
entry also doesn't seem to work (same as before) since it then becomes apparent 
that some entries never get passed to the removal listener.

Even if the removal listener issues are fixed, the fact that the cache can be 
bigger than the configured size is a problem for us.  The map itself is not 
storing the data, only controlling access to direct memory, so timely removal 
(and a timely call to the removal listener) under heavy concurrency is 
critical.  Without that, the cache will cease to function as an LRU cache under 
load because we won't be able to find a free block in the direct memory to 
actually use.

Even with only 2 threads, I see the cache going to at least double the 
configured maxEntries.  Is there a way to configure the size checking to be 
more strict?
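
For reference, a minimal sketch of a size-bounded Caffeine cache with a removal listener, using a same-thread executor so eviction and listener notification are not deferred to a background pool. This is an assumed setup for illustration, not the actual BlockCache wiring, and the maximum size is still only enforced approximately under concurrent writes:

{code}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;

public class CaffeineBoundSketch {
  public static void main(String[] args) {
    final int maxEntries = 200;
    Cache<Long, byte[]> cache = Caffeine.newBuilder()
        .maximumSize(maxEntries)
        // Run maintenance and removal notification on the calling thread instead
        // of a background pool, which keeps eviction (and the listener) more prompt.
        .executor(Runnable::run)
        .removalListener((Long key, byte[] value, RemovalCause cause) ->
            System.out.println("evicted block " + key + " cause=" + cause))
        .build();

    for (long i = 0; i < 1_000; i++) {
      cache.put(i, new byte[4]);
    }
    cache.cleanUp();  // flush any pending maintenance before sampling the size
    System.out.println("estimated size = " + cache.estimatedSize());
  }
}
{code}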

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+155) - Build # 18989 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18989/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([36130A941980:2165C0B31F7E46BD]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-10120) A SolrCore reload can remove the index from the previous SolrCore during replication index rollover.

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872910#comment-15872910
 ] 

ASF subversion and git services commented on SOLR-10120:


Commit 19c8ec2bf1882bed1bb34d0b55198d03f2018838 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=19c8ec2 ]

SOLR-10120: Clean up earlier so we don't hit closed resources.


> A SolrCore reload can remove the index from the previous SolrCore during 
> replication index rollover.
> 
>
> Key: SOLR-10120
> URL: https://issues.apache.org/jira/browse/SOLR-10120
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Attachments: SOLR-10120.patch
>
>







[jira] [Commented] (SOLR-9846) Overseer is not always closed after being started.

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872909#comment-15872909
 ] 

ASF subversion and git services commented on SOLR-9846:
---

Commit ed05debb4e223e07aeeccdc0a802b8c2a514ba23 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ed05deb ]

SOLR-9846: Overseer is not always closed after being started.


> Overseer is not always closed after being started.
> --
>
> Key: SOLR-9846
> URL: https://issues.apache.org/jira/browse/SOLR-9846
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.4, master (7.0)
>
>
> We should interrupt it on close.






[jira] [Commented] (SOLR-10064) The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too fragile.

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872908#comment-15872908
 ] 

ASF subversion and git services commented on SOLR-10064:


Commit 6b169d2051ce892cab25c08adf08956423fbe048 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6b169d2 ]

SOLR-10064: The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be 
too fragile.


> The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too 
> fragile.
> ---
>
> Key: SOLR-10064
> URL: https://issues.apache.org/jira/browse/SOLR-10064
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> HdfsCollectionsAPIDistributedZkTest 73.00% half–cracked 30.00 282.56 @Nightly






[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872906#comment-15872906
 ] 

ASF subversion and git services commented on SOLR-10141:


Commit d810edf5e900bef32b10928d275a02c093d359b6 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d810edf ]

SOLR-10141: add test for underlying cache


> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872905#comment-15872905
 ] 

ASF subversion and git services commented on SOLR-10141:


Commit 33e398c02115c57ea54bda5f6f612f1b06c1e771 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=33e398c ]

SOLR-10141: add test for underlying cache


> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+155) - Build # 2880 - Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2880/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([3205351A035A8EDD:7A7041AE0569A148]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:522)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:

[jira] [Updated] (LUCENE-7698) CommonGramsQueryFilter in the query analyzer chain breaks phrase queries

2017-02-17 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7698:
---
Fix Version/s: 6.4.2
   master (7.0)

> CommonGramsQueryFilter in the query analyzer chain breaks phrase queries
> 
>
> Key: LUCENE-7698
> URL: https://issues.apache.org/jira/browse/LUCENE-7698
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4, 6.4.1
>Reporter: Ere Maijala
>  Labels: regression
> Fix For: master (7.0), 6.4.2
>
> Attachments: LUCENE-7698.patch
>
>
> (Please pardon me if the project or component are wrong!)
> CommonGramsQueryFilter breaks phrase queries. The behavior also seems to 
> change with addition or removal of adjacent terms.
> Steps to reproduce:
> 1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
> 2.) Edit 
> server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
> modify the text_general fieldType by adding CommonGrams(Query)Filter before 
> the StopFilter:
> <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
>   <analyzer type="index">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.CommonGramsFilterFactory" words="stopwords.txt" />
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
>   <analyzer type="query">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.CommonGramsQueryFilterFactory" words="stopwords.txt"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> 3.) Add "with" to 
> server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and 
> make sure the file has correct line endings (extracted from the Solr zip it seems 
> to contain DOS/Windows line endings, which may break things).
> 4.) Run the techproducts example with "bin/solr -e techproducts"
> 5.) Browse to 
> 
> 6.) Observe that parsedquery in the debug output is empty
> 7.) Browse to 
> 
> 8.) Observe that parsedquery contains ipod_with as expected but not 
> with_video.
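
For reference, a minimal SolrJ sketch of the kind of debug request steps 5-7 describe 
(the original URLs did not survive the archive). The core name, field, and phrase are 
illustrative assumptions based on the techproducts example, not the reporter's exact requests:

{code}
import java.util.Map;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CommonGramsPhraseDebug {
  public static void main(String[] args) throws Exception {
    // Assumes the techproducts example from step 4 is running on the default port.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      // Phrase query containing the stop word "with"; debugQuery exposes the parsed query.
      SolrQuery query = new SolrQuery("\"ipod with video\"");
      query.set("df", "text");
      query.set("debugQuery", "true");
      QueryResponse response = client.query(query);
      Map<String, Object> debug = response.getDebugMap();
      // With the filter chain above, parsedquery is reported empty or missing the with_video gram.
      System.out.println(debug.get("parsedquery"));
    }
  }
}
{code}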



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9966) Convert/migrate tests using EasyMock to Mockito

2017-02-17 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9966:
---
Attachment: SOLR-9966.patch

A patch for this ticket. If no one has a problem with this patch, I will commit 
it shortly.

> Convert/migrate tests using EasyMock to Mockito
> ---
>
> Key: SOLR-9966
> URL: https://issues.apache.org/jira/browse/SOLR-9966
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Uwe Schindler
> Attachments: SOLR-9966.patch
>
>
> In SOLR-9893 I disabled all tests on Java 9 that use EasyMock, because 
> EasyMock is not compatible with Java 9 (it uses an outdated cglib version that 
> does not work with the Jigsaw module system). To me the project seems dead (no 
> releases in more than 2 years).
> Mockito's latest version is compatible with Java 9 because it no longer uses 
> cglib but rather the more modern and powerful Byte Buddy lib; SOLR-9893 updated 
> to it. 
> I found these links about a more or less "automated rewrite" of EasyMock tests 
> to Mockito:
> - 
> https://wiki.magnolia-cms.com/display/DEV/Converting+Easymock-Tests+to+Mockito
> - A script doing this: 
> https://gist.github.com/stefanbirkner/1095194/904909cc229b6acb55c18f529e396089129e20e9
> There are not many tests, so this would be a great cleanup:
> - core/src/test/org/apache/solr/cloud/ClusterStateTest.java
> - 
> core/src/test/org/apache/solr/cloud/OverseerCollectionConfigSetProcessorTest.java
> - core/src/test/org/apache/solr/core/BlobRepositoryMockingTest.java
> - core/src/test/org/apache/solr/core/CoreSorterTest.java
> - core/src/test/org/apache/solr/security/TestPKIAuthenticationPlugin.java
> - core/src/test/org/apache/solr/servlet/SolrRequestParserTest.java
> - 
> solrj/src/test/org/apache/solr/client/solrj/impl/CloudSolrClientCacheTest.java
> There is one special case:
> - 
> contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestJdbcDataSource.java
> I am not sure how to convert this one, because it uses some strange system 
> properties and a handler that intercepts some EasyMock stuff. I may need help 
> to convert that one!
> After this is resolved we can remove the following dependencies from Solr:
> - cglib-nodep
> - easymock
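
For anyone picking this up, a minimal before/after sketch of the mechanical rewrite the 
linked script automates; the {{Foo}} interface and the values are hypothetical and not 
taken from the tests listed above:

{code}
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class MockMigrationSketch {

  // Hypothetical collaborator, only here to illustrate the rewrite pattern.
  interface Foo {
    String bar(int id);
  }

  static String withEasyMock() {
    Foo foo = createMock(Foo.class);           // record phase
    expect(foo.bar(42)).andReturn("answer");
    replay(foo);                               // switch to replay phase
    return foo.bar(42);
  }

  static String withMockito() {
    Foo foo = mock(Foo.class);                 // no record/replay phases
    when(foo.bar(42)).thenReturn("answer");
    String result = foo.bar(42);
    verify(foo).bar(42);                       // interactions verified explicitly
    return result;
  }
}
{code}

The same when/thenReturn plus verify shape covers most of the expect/replay/verify calls 
in the classes above.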



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3840 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3840/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

7 tests failed.
FAILED:  
org.apache.solr.cloud.TestSegmentSorting.testAtomicUpdateOfSegmentSortField

Error Message:
Error from server at http://127.0.0.1:62871/solr: Expected mime type 
application/octet-stream but got text/html. Error 404 
HTTP ERROR: 404. Problem accessing /solr/admin/collections. Reason: 
Can not find: /solr/admin/collections. 
Powered by Jetty:// 9.3.14.v20161028 (http://eclipse.org/jetty)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:62871/solr: Expected mime type 
application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/admin/collections. Reason:
Can not find: /solr/admin/collections
Powered by Jetty:// 9.3.14.v20161028 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([5B01E11038708C60:BA13C615EE3517D8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:595)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1358)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:177)
at 
org.apache.solr.cloud.TestSegmentSorting.createCollection(TestSegmentSorting.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:941)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:47)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[JENKINS] Lucene-Solr-Tests-6.x - Build # 731 - Unstable

2017-02-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/731/

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([B62CBD36366C7450:FE59C982305F5BC5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:522)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11902 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 

[jira] [Resolved] (LUCENE-6837) Add N-best output capability to JapaneseTokenizer

2017-02-17 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6837.

   Resolution: Fixed
Fix Version/s: 6.0

> Add N-best output capability to JapaneseTokenizer
> -
>
> Key: LUCENE-6837
> URL: https://issues.apache.org/jira/browse/LUCENE-6837
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.3
>Reporter: KONNO, Hiroharu
>Assignee: Christian Moen
>Priority: Minor
> Fix For: 6.0
>
> Attachments: LUCENE-6837 for 5.4.zip, LUCENE-6837.patch, 
> LUCENE-6837.patch, LUCENE-6837.patch, LUCENE-6837.patch, LUCENE-6837.patch
>
>
> Japanese morphological analyzers often generate mis-segmented tokens. N-best 
> output reduces the impact of mis-segmentation on search result. N-best output 
> is more meaningful than character N-gram, and it increases hit count too.
> If you use N-best output, you can get decompounded tokens (ex: 
> "シニアソフトウェアエンジニア" => {"シニア", "シニアソフトウェアエンジニア", "ソフトウェア", "エンジニア"}) and 
> overwrapped tokens (ex: "数学部長谷川" => {"数学", "部", "部長", "長谷川", "谷川"}), 
> depending on the dictionary and N-best parameter settings.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+155) - Build # 18988 - Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18988/
Java: 64bit/jdk-9-ea+155 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([5A1E11220C196181:4D68DB050ACD8DBC]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Updated] (SOLR-9555) Leader incorrectly publishes state for replica when it puts replica into LIR.

2017-02-17 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9555:

Attachment: SOLR-9555-WIP.patch

Here's a work in progress patch. The rough outline of the changes is:
- When publishing an active state, also publish an Active into the LIR znode 
and put a watch on that. If the leader overwrites this as down, start recovery.
-- I tried to have a check here to ensure that nodes can only publish 
themselves as active, but I got messed up on the logic. Not sure if it's 
necessary for correctness, but it felt like a good safeguard.
- Leader no longer needs to send a request recovery command directly to the 
replica. The ZK watch should handle this.
- Leader no longer publishes the node's state. The node will update this itself 
when it starts the recovery process.
-- This means that there is a period of time after the leader has encountered 
the first error and before the node puts itself into recovery during which the leader 
may try to send additional updates and get additional errors. Might need a flag 
to mark the node as dead locally or something like that.


I've got about 5 test failures here, and I put an {{@Ignore}} on the 
TestLeaderInitiatedRecoveryThread class because the whole internals of that are 
changing. I think I somehow broke leader election with this change set, so any 
help would be appreciated.
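
Not part of the patch, but a rough sketch of the watch idea using the plain ZooKeeper 
client, in case it helps reviewers follow the outline; the znode path, the "down" state 
check, and the startRecovery() hook are assumptions, not the actual Solr/LIR plumbing:

{code}
import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

/** Sketch: a replica watches its own LIR znode and starts recovery when the leader marks it down. */
class LirWatchSketch implements Watcher {
  private final ZooKeeper zk;
  private final String lirPath; // assumed layout, e.g. an LIR znode per shard/replica

  LirWatchSketch(ZooKeeper zk, String lirPath) {
    this.zk = zk;
    this.lirPath = lirPath;
  }

  void checkAndWatch() throws KeeperException, InterruptedException {
    // Read the current LIR state and re-arm the watch in the same call.
    byte[] data = zk.getData(lirPath, this, null);
    String state = new String(data, StandardCharsets.UTF_8);
    if (state.contains("\"down\"")) {   // assumed JSON payload with a state field
      startRecovery();
    }
  }

  @Override
  public void process(WatchedEvent event) {
    if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
      try {
        checkAndWatch();  // the leader overwrote the znode; see whether it marked us down
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      } catch (KeeperException e) {
        // sketch only: a real implementation would retry or surface this
      }
    }
  }

  private void startRecovery() {
    // placeholder for kicking off the core's normal recovery process
  }
}
{code}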

> Leader incorrectly publishes state for replica when it puts replica into LIR.
> -
>
> Key: SOLR-9555
> URL: https://issues.apache.org/jira/browse/SOLR-9555
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9555-WIP.patch
>
>
> See 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17888/consoleFull 
> for an example



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10157) JSON Facets should give more helpful error msg when users attempt to use an unknown aggregation

2017-02-17 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10157:
---

 Summary: JSON Facets should give more helpful error msg when users 
attempt to use an unknown aggregation
 Key: SOLR-10157
 URL: https://issues.apache.org/jira/browse/SOLR-10157
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


Sample question from a confused solr-user email...

{noformat}
> I'm getting this error when I tried to do a division in JSON Facet.
>
>   "error":{
> "msg":"org.apache.solr.search.SyntaxError: Unknown aggregation agg_div in 
> ('div(4,2)', pos=4)",
> "code":400}}
>
>
> Is this division function supported in JSON Facet?
{noformat}

And the subsequent followup from the same user...

bq. I found that we can't put div(4,2) directly, as it wouldn't work.

bq. It will work if I put something like max(div(4,2)).



It seems like a better error handling code path for 
{{FunctionQParser.parseAgg}} (once we've confirmed no such aggregation exists) 
would be:

* attempt to parse the original string as a regular (non-Agg) ValueSource 
function
** if that succeeds, give the user an error indicating that this ValueSource 
must be wrapped in an aggregation
** if it fails, continue to throw the original error
* either way, any error thrown should refer to the _original_ {{id}} before the {{agg_}} prefix was added

For example: 
* {{div(price,popularity)}} should throw an error with a msg along the lines 
of: {{'div' is a per-document function, not a multi-document aggregation 
function, input: div(price,popularity)}}
*  {{HOSS(price,popularity)}} on the other hand should throw an error such as: 
{{Unknown aggregation HOSS in ('HOSS(price,popularity)' ...}}
** note the message cites {{HOSS}} not {{agg_HOSS}}
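
A toy sketch of the proposed fallback, just to make the control flow concrete; the two 
registries and the method shape are illustrative stand-ins, not the real 
{{FunctionQParser}} internals:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/**
 * Toy sketch of the suggested error path: if an unknown "aggregation" parses as a plain
 * per-document function, say so; either way cite the name the user typed, not agg_<name>.
 */
public class AggErrorMessageSketch {

  // Illustrative stand-ins for the real aggregation and ValueSource lookups.
  private static final Set<String> AGGREGATIONS =
      new HashSet<>(Arrays.asList("sum", "avg", "min", "max", "unique"));
  private static final Set<String> VALUE_SOURCE_FUNCTIONS =
      new HashSet<>(Arrays.asList("div", "sum", "min", "max", "field"));

  static String parseAgg(String input) {
    String name = input.substring(0, input.indexOf('(')); // e.g. "div" from "div(price,popularity)"
    if (AGGREGATIONS.contains(name)) {
      return "parsed aggregation: " + name;
    }
    if (VALUE_SOURCE_FUNCTIONS.contains(name)) {
      // Valid per-document function, just not an aggregation: give the wrap-it hint.
      throw new IllegalArgumentException("'" + name + "' is a per-document function, not a "
          + "multi-document aggregation function, input: " + input);
    }
    // Truly unknown: keep the original error, but cite the original name, not agg_<name>.
    throw new IllegalArgumentException("Unknown aggregation " + name + " in ('" + input + "')");
  }

  public static void main(String[] args) {
    System.out.println(parseAgg("max(price)"));
    for (String bad : new String[] {"div(price,popularity)", "HOSS(price,popularity)"}) {
      try {
        parseAgg(bad);
      } catch (IllegalArgumentException e) {
        System.out.println(e.getMessage());
      }
    }
  }
}
{code}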






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5143) rm or formalize dealing with "general" KEYS files in our dist dir

2017-02-17 Thread Jan Høydahl (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872622#comment-15872622
 ] 

Jan Høydahl commented on LUCENE-5143:
-

Question: How does the auto-generated KEYS file end up in the version-specific 
release dir? I cannot see anything about it in the ReleaseTodo.

> rm or formalize dealing with "general" KEYS files in our dist dir
> -
>
> Key: LUCENE-5143
> URL: https://issues.apache.org/jira/browse/LUCENE-5143
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>
> At some point in the past, we started creating a snapshot of KEYS (taken 
> from the auto-generated data from id.apache.org) in the release dir of each 
> release...
> http://www.apache.org/dist/lucene/solr/4.4.0/KEYS
> http://www.apache.org/dist/lucene/java/4.4.0/KEYS
> http://archive.apache.org/dist/lucene/java/4.3.0/KEYS
> http://archive.apache.org/dist/lucene/solr/4.3.0/KEYS
> etc...
> But we also still have some "general" KEYS files...
> https://www.apache.org/dist/lucene/KEYS
> https://www.apache.org/dist/lucene/java/KEYS
> https://www.apache.org/dist/lucene/solr/KEYS
> ...which (as i discovered when i went to add my key to them today) are stale 
> and don't seem to be getting updated.
> I vaguely remember someone (rmuir?) explaining to me at one point the reason 
> we started creating a fresh copy of KEYS in each release dir, but i no longer 
> remember what they said, and i can't find any mention of a reason in any of 
> the release docs, or in any sort of comment in buildAndPushRelease.py
> we should probably do one of the following:
>  * remove these "general" KEYS files
>  * add a disclaimer to the top of these files that they are legacy files for 
> verifying old releases and are no longer used for new releases
>  * ensure these files are up to date and stop generating per-release KEYS file 
> copies
>  * update our release process to ensure that the general files get updated on 
> each release as well



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5143) rm or formalize dealing with "general" KEYS files in our dist dir

2017-02-17 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872497#comment-15872497
 ] 

Cassandra Targett commented on LUCENE-5143:
---

Maybe or maybe not coincidentally, I got a similar request from Sebb to update 
my key to the fingerprint. I've done it now.

> rm or formalize dealing with "general" KEYS files in our dist dir
> -
>
> Key: LUCENE-5143
> URL: https://issues.apache.org/jira/browse/LUCENE-5143
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>
> At some point in the past, we started creating a snapshot of KEYS (taken 
> from the auto-generated data from id.apache.org) in the release dir of each 
> release...
> http://www.apache.org/dist/lucene/solr/4.4.0/KEYS
> http://www.apache.org/dist/lucene/java/4.4.0/KEYS
> http://archive.apache.org/dist/lucene/java/4.3.0/KEYS
> http://archive.apache.org/dist/lucene/solr/4.3.0/KEYS
> etc...
> But we also still have some "general" KEYS files...
> https://www.apache.org/dist/lucene/KEYS
> https://www.apache.org/dist/lucene/java/KEYS
> https://www.apache.org/dist/lucene/solr/KEYS
> ...which (as i discovered when i went to add my key to them today) are stale 
> and don't seem to be getting updated.
> I vaguely remember someone (rmuir?) explaining to me at one point the reason 
> we started creating a fresh copy of KEYS in each release dir, but i no longer 
> remember what they said, and i can't find any mention of a reason in any of 
> the release docs, or in any sort of comment in buildAndPushRelease.py
> we should probably do one of the following:
>  * remove these "general" KEYS files
>  * add a disclaimer to the top of these files that they are legacy files for 
> verifying old releases and are no longer used for new releases
>  * ensure these files are up to date and stop generating per-release KEYS file 
> copies
>  * update our release process to ensure that the general files get updated on 
> each release as well



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 286 - Failure

2017-02-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/286/

7 tests failed.
FAILED:  org.apache.lucene.index.TestIndexSorting.testRandom3

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([EC890CB51E00F414:4E51426F7AF2DD12]:0)
at org.apache.lucene.util.packed.Packed64.(Packed64.java:73)
at 
org.apache.lucene.util.packed.PackedInts.getMutable(PackedInts.java:972)
at 
org.apache.lucene.util.packed.PackedInts.getMutable(PackedInts.java:939)
at 
org.apache.lucene.util.packed.GrowableWriter.ensureCapacity(GrowableWriter.java:80)
at 
org.apache.lucene.util.packed.GrowableWriter.set(GrowableWriter.java:88)
at 
org.apache.lucene.util.packed.AbstractPagedMutable.set(AbstractPagedMutable.java:98)
at org.apache.lucene.util.fst.NodeHash.addNew(NodeHash.java:152)
at org.apache.lucene.util.fst.NodeHash.rehash(NodeHash.java:169)
at org.apache.lucene.util.fst.NodeHash.add(NodeHash.java:133)
at org.apache.lucene.util.fst.Builder.compileNode(Builder.java:214)
at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:310)
at org.apache.lucene.util.fst.Builder.add(Builder.java:414)
at 
org.apache.lucene.codecs.memory.MemoryDocValuesConsumer.writeFST(MemoryDocValuesConsumer.java:367)
at 
org.apache.lucene.codecs.memory.MemoryDocValuesConsumer.addSortedField(MemoryDocValuesConsumer.java:404)
at 
org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addSortedField(PerFieldDocValuesFormat.java:121)
at 
org.apache.lucene.index.SortedDocValuesWriter.flush(SortedDocValuesWriter.java:130)
at 
org.apache.lucene.index.DefaultIndexingChain.writeDocValues(DefaultIndexingChain.java:258)
at 
org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:142)
at 
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:444)
at 
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:539)
at 
org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:396)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:499)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1579)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1324)
at 
org.apache.lucene.index.TestIndexSorting.testRandom3(TestIndexSorting.java:2230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)


FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([EC890CB51E00F414]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([EC890CB51E00F414]:0)


FAILED:  org.apache.solr.cloud.hdfs.StressHdfsTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:45628

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:45628
at 
__randomizedtesting.SeedInfo.seed([C98063D373201AE6:41D45C09DDDC771E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:621)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1358)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 

[jira] [Updated] (SOLR-10132) Support facet.matches to cull facets returned with a regex

2017-02-17 Thread Gus Heck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-10132:

Attachment: SOLR-10132.patch

revised patch with inheritance from SubstringBytesRefFilter

> Support facet.matches to cull facets returned with a regex
> --
>
> Key: SOLR-10132
> URL: https://issues.apache.org/jira/browse/SOLR-10132
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 6.4.1
>Reporter: Gus Heck
> Attachments: SOLR-10132.patch, SOLR-10132.patch
>
>
> I recently ran into a case where I really wanted to only return the next 
> level of a hierarchical facet, and while I was able to do that with a 
> coordinated set of dynamic fields, it occurred to me that this would have 
> been much much easier if I could have simply used PathHierarchyTokenizer and 
> written
> ="/my/current/prefix/[^/]+$"
> thereby limiting the returned facets to the next level down and not return 
> the  additional  N levels I didn't (yet) want to display (numbering in the 
> thousands near the top of the tree). I suspect there are other good use 
> cases, and the patch seemed relatively tractable.
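
For illustration, a SolrJ sketch of the kind of request this would enable; the collection, 
the {{path}} field, and the {{facet.matches}} parameter name are assumptions based on the 
issue summary, so the final syntax may differ from the patch:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FacetMatchesSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
      SolrQuery query = new SolrQuery("*:*");
      query.setRows(0);
      query.addFacetField("path");  // field analyzed with PathHierarchyTokenizer
      // Keep only facet values one level below the current prefix.
      query.set("facet.matches", "/my/current/prefix/[^/]+$");
      QueryResponse response = client.query(query);
      response.getFacetField("path").getValues()
          .forEach(c -> System.out.println(c.getName() + " -> " + c.getCount()));
    }
  }
}
{code}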



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872344#comment-15872344
 ] 

ASF subversion and git services commented on SOLR-10141:


Commit be61c6634872435614ea4d59fd14df3426398116 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be61c66 ]

SOLR-10141: Upgrade to Caffeine 2.3.5 to fix issues with removal listener


> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+155) - Build # 2878 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2878/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([3BF80788F0E02A9:14C94A5F89DAEE94]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2017-02-17 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872244#comment-15872244
 ] 

Ben Manes commented on SOLR-8241:
-

[~Timothy055], Solr master is now on Caffeine 2.3.5 (to upgrade its usage in the block 
cache).

> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Wish
>  Components: search
>Reporter: Ben Manes
>Priority: Minor
> Attachments: proposal.patch, SOLR-8241.patch, SOLR-8241.patch, 
> SOLR-8241.patch
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operates in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency
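
For anyone wanting to experiment, a minimal sketch of a bounded Caffeine cache with the 
policy in play; the sizes and key/value types are arbitrary, and this is not how Solr's 
caches would actually wire it up:

{code}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CaffeineCacheSketch {
  public static void main(String[] args) {
    // Size-bounded cache; eviction decisions come from Caffeine's Window TinyLFU policy.
    Cache<String, Object> cache = Caffeine.newBuilder()
        .maximumSize(10_000)
        .recordStats()            // hit/miss counters for comparing against the existing caches
        .build();

    cache.put("q:foo", new Object());
    Object hit = cache.getIfPresent("q:foo");
    System.out.println(hit != null ? "hit" : "miss");
    System.out.println(cache.stats()); // CacheStats{hitCount=..., missCount=..., ...}
  }
}
{code}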



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872221#comment-15872221
 ] 

Ben Manes commented on SOLR-10141:
--

Thanks [~ysee...@gmail.com]. Sorry about any frustrations this caused.

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+155) - Build # 18986 - Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18986/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([D9C7E6FFAAF33A5C:CEB12CD8AC27D661]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872211#comment-15872211
 ] 

ASF subversion and git services commented on SOLR-10141:


Commit 6804f3694210ac34728dd6f1a74736681dae2837 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6804f36 ]

SOLR-10141: Upgrade to Caffeine 2.3.5 to fix issues with removal listener


> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10156) Add significantTerms Streaming Expression

2017-02-17 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10156:
--
Description: 
The significantTerms Streaming Expression will emit a set of terms from a *text 
field* within a doc frequency range for a specific query. It will also score 
the terms based on how many times the terms appear in the result set, and how 
many times the terms appear in the corpus, and return the top N terms based on 
this significance score.

Syntax:

{code}
significantTerms(collection, 
   q="abc", 
   field="some_text_field", 
   minDocFreq="x", 
   maxDocFreq="y",
   limit="50")
{code}

  was:
The significantTerms Streaming Expression will emit a set of terms from a *text 
field* within a doc frequency range for a specific query. It will also score 
the terms based on how many times the terms appear in the result set, and how 
many times the terms appear in the corpus.

Syntax:

{code}
significantTerms(collection, 
   q="abc", 
   field="some_text_field", 
   minDocFreq="x", 
   maxDocFreq="y",
   limit="50")
{code}


> Add significantTerms Streaming Expression
> -
>
> Key: SOLR-10156
> URL: https://issues.apache.org/jira/browse/SOLR-10156
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5
>
>
> The significantTerms Streaming Expression will emit a set of terms from a 
> *text field* within a doc frequency range for a specific query. It will also 
> score the terms based on how many times the terms appear in the result set, 
> and how many times the terms appear in the corpus, and return the top N terms 
> based on this significance score.
> Syntax:
> {code}
> significantTerms(collection, 
>q="abc", 
>field="some_text_field", 
>minDocFreq="x", 
>maxDocFreq="y",
>limit="50")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10156) Add significantTerms Streaming Expression

2017-02-17 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10156:
--
Description: 
The significantTerms Streaming Expression will emit a set of terms from a *text 
field* within a doc frequency range for a specific query. It will also score 
the terms based on how many times the terms appear in the result set, and how 
many times the terms appear in the corpus.

Syntax:

{code}
significantTerms(collection, 
   q="abc", 
   field="some_text_field", 
   minDocFreq="x", 
   maxDocFreq="y",
   limit="50")
{code}

  was:
The significantTerms Streaming Expression will emit a set of terms from a *text 
field* within a doc frequency range for a specific query. It will then score 
the terms based on how many times the terms appear in the result set, and how 
many times the terms appear in the corpus.

Syntax:

{code}
significantTerms(collection, 
   q="abc", 
   field="some_text_field", 
   minDocFreq="x", 
   maxDocFreq="y",
   limit="50")
{code}


> Add significantTerms Streaming Expression
> -
>
> Key: SOLR-10156
> URL: https://issues.apache.org/jira/browse/SOLR-10156
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5
>
>
> The significantTerms Streaming Expression will emit a set of terms from a 
> *text field* within a doc frequency range for a specific query. It will also 
> score the terms based on how many times the terms appear in the result set, 
> and how many times the terms appear in the corpus.
> Syntax:
> {code}
> significantTerms(collection, 
>q="abc", 
>field="some_text_field", 
>minDocFreq="x", 
>maxDocFreq="y",
>limit="50")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10156) Add significantTerms Streaming Expression

2017-02-17 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10156:
--
Description: 
The significantTerms Streaming Expression will emit a set of terms from a *text 
field* within a doc frequency range for a specific query. It will then score 
the terms based on how many times the terms appear in the result set, and how 
many times the terms appear in the corpus.

Syntax:

{code}
significantTerms(collection, 
   q="abc", 
   field="some_text_field", 
   minDocFreq="x", 
   maxDocFreq="y",
   limit="50")
{code}

  was:
The significantTerms Streaming Expression will emit a set of terms from a *text 
field* within a doc frequency range for a specific query. It will then score 
the terms based on how many times the terms appear in the result set, and how 
many times the terms appear in the corpus.



> Add significantTerms Streaming Expression
> -
>
> Key: SOLR-10156
> URL: https://issues.apache.org/jira/browse/SOLR-10156
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5
>
>
> The significantTerms Streaming Expression will emit a set of terms from a 
> *text field* within a doc frequency range for a specific query. It will then 
> score the terms based on how many times the terms appear in the result set, 
> and how many times the terms appear in the corpus.
> Syntax:
> {code}
> significantTerms(collection, 
>q="abc", 
>field="some_text_field", 
>minDocFreq="x", 
>maxDocFreq="y",
>limit="50")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10156) Add significantTerms Streaming Expression

2017-02-17 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10156:
--
Fix Version/s: 6.5

> Add significantTerms Streaming Expression
> -
>
> Key: SOLR-10156
> URL: https://issues.apache.org/jira/browse/SOLR-10156
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5
>
>
> The significantTerms Streaming Expression will emit a set of terms from a 
> *text field* within a doc frequency range for a specific query. It will then 
> score the terms based on how many times the terms appear in the result set, 
> and how many times the terms appear in the corpus.
> Syntax:
> {code}
> significantTerms(collection, 
>q="abc", 
>field="some_text_field", 
>minDocFreq="x", 
>maxDocFreq="y",
>limit="50")
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10156) Add significantTerms Streaming Expression

2017-02-17 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10156:
-

Assignee: Joel Bernstein

> Add significantTerms Streaming Expression
> -
>
> Key: SOLR-10156
> URL: https://issues.apache.org/jira/browse/SOLR-10156
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The significantTerms Streaming Expression will emit a set of terms from a 
> *text field* within a doc frequency range for a specific query. It will then 
> score the terms based on how many times the terms appear in the result set, 
> and how many times the terms appear in the corpus.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10156) Add significantTerms Streaming Expression

2017-02-17 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10156:
-

 Summary: Add significantTerms Streaming Expression
 Key: SOLR-10156
 URL: https://issues.apache.org/jira/browse/SOLR-10156
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


The significantTerms Streaming Expression will emit a set of terms from a *text 
field* within a doc frequency range for a specific query. It will then score 
the terms based on how many times the terms appear in the result set, and how 
many times the terms appear in the corpus.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Forbidden API's checker throwing exceptions

2017-02-17 Thread Gus Heck
Kevin's suggestion solved it for me. Thanks :)

On Fri, Feb 17, 2017 at 12:15 PM, Kevin Risden 
wrote:

> May have to run something like:
>
> ant clean clean-jars jar-checksums compile
>
> That would make sure the jars are up to date.
>
> Kevin Risden
>
> On Fri, Feb 17, 2017 at 11:08 AM, Uwe Schindler  wrote:
>
>> Could it be that your Ivy Cache is broken? The forbidden precommit checks
>> work on Jenkins, so it must be something on your setup.
>>
>> Uwe
>>
>>
>> On 17 February 2017 at 17:57:01 CET, Gus Heck wrote:
>>>
>>> Is anyone else getting this running precommit? Worked a week or so ago,
>>> now it doesn't.
>>>
>>> BUILD FAILED
>>> /Users/gus/projects/solr/lucene-solr/build.xml:117: The following error
>>> occurred while executing this line:
>>> /Users/gus/projects/solr/lucene-solr/solr/build.xml:369: The following
>>> error occurred while executing this line:
>>> /Users/gus/projects/solr/lucene-solr/solr/common-build.xml:514: Check
>>> for forbidden API calls failed: java.lang.ClassNotFoundException:
>>> com.facebook.presto.sql.tree.AstVisitor
>>> at de.thetaphi.forbiddenapis.Checker.getClassFromClassLoader(Ch
>>> ecker.java:314)
>>> at de.thetaphi.forbiddenapis.Checker.lookupRelatedClass(Checker
>>> .java:326)
>>> at de.thetaphi.forbiddenapis.ClassScanner.checkClassUse(ClassSc
>>> anner.java:120)
>>> at de.thetaphi.forbiddenapis.ClassScanner.checkClassUse(ClassSc
>>> anner.java:132)
>>> at de.thetaphi.forbiddenapis.ClassScanner.checkClassDefinition(
>>> ClassScanner.java:137)
>>> at de.thetaphi.forbiddenapis.ClassScanner.checkType(ClassScanne
>>> r.java:172)
>>> at de.thetaphi.forbiddenapis.ClassScanner.checkDescriptor(Class
>>> Scanner.java:210)
>>> at de.thetaphi.forbiddenapis.ClassScanner$1.(ClassScanner
>>> .java:276)
>>> at de.thetaphi.forbiddenapis.ClassScanner.visitField(ClassScann
>>> er.java:271)
>>> at de.thetaphi.forbiddenapis.asm.ClassReader.a(Unknown Source)
>>> at de.thetaphi.forbiddenapis.asm.ClassReader.accept(Unknown
>>> Source)
>>> at de.thetaphi.forbiddenapis.asm.ClassReader.accept(Unknown
>>> Source)
>>> at de.thetaphi.forbiddenapis.Checker.checkClass(Checker.java:
>>> 602)
>>> at de.thetaphi.forbiddenapis.Checker.run(Checker.java:619)
>>> at de.thetaphi.forbiddenapis.ant.AntTask.execute(AntTask.java:2
>>> 08)
>>> at org.apache.tools.ant.UnknownElement.execute(UnknownElement.
>>> java:293)
>>> at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
>>> thodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:498)
>>> at org.apache.tools.ant.dispatch.DispatchUtils.execute(Dispatch
>>> Utils.java:106)
>>> at org.apache.tools.ant.Task.perform(Task.java:348)
>>>
>>>
>>> --
>>> http://www.the111shift.com
>>>
>>
>> --
>> Uwe Schindler
>> Achterdiek 19, 28357 Bremen
>> https://www.thetaphi.de
>>
>
>


-- 
http://www.the111shift.com


Re: Forbidden API's checker throwing exceptions

2017-02-17 Thread Kevin Risden
May have to run something like:

ant clean clean-jars jar-checksums compile

That would make sure the jars are up to date.

Kevin Risden

On Fri, Feb 17, 2017 at 11:08 AM, Uwe Schindler  wrote:

> Could it be that your Ivy Cache is broken? The forbidden precommit checks
> work on Jenkins, so it must be something on your setup.
>
> Uwe
>
>
> On 17 February 2017 at 17:57:01 CET, Gus Heck wrote:
>>
>> Is anyone else getting this running precommit? Worked a week or so ago,
>> now it doesn't.
>>
>> BUILD FAILED
>> /Users/gus/projects/solr/lucene-solr/build.xml:117: The following error
>> occurred while executing this line:
>> /Users/gus/projects/solr/lucene-solr/solr/build.xml:369: The following
>> error occurred while executing this line:
>> /Users/gus/projects/solr/lucene-solr/solr/common-build.xml:514: Check
>> for forbidden API calls failed: java.lang.ClassNotFoundException:
>> com.facebook.presto.sql.tree.AstVisitor
>> at de.thetaphi.forbiddenapis.Checker.getClassFromClassLoader(
>> Checker.java:314)
>> at de.thetaphi.forbiddenapis.Checker.lookupRelatedClass(
>> Checker.java:326)
>> at de.thetaphi.forbiddenapis.ClassScanner.checkClassUse(
>> ClassScanner.java:120)
>> at de.thetaphi.forbiddenapis.ClassScanner.checkClassUse(
>> ClassScanner.java:132)
>> at de.thetaphi.forbiddenapis.ClassScanner.checkClassDefinition(
>> ClassScanner.java:137)
>> at de.thetaphi.forbiddenapis.ClassScanner.checkType(
>> ClassScanner.java:172)
>> at de.thetaphi.forbiddenapis.ClassScanner.checkDescriptor(
>> ClassScanner.java:210)
>> at de.thetaphi.forbiddenapis.ClassScanner$1.(
>> ClassScanner.java:276)
>> at de.thetaphi.forbiddenapis.ClassScanner.visitField(
>> ClassScanner.java:271)
>> at de.thetaphi.forbiddenapis.asm.ClassReader.a(Unknown Source)
>> at de.thetaphi.forbiddenapis.asm.ClassReader.accept(Unknown
>> Source)
>> at de.thetaphi.forbiddenapis.asm.ClassReader.accept(Unknown
>> Source)
>> at de.thetaphi.forbiddenapis.Checker.checkClass(Checker.java:602)
>> at de.thetaphi.forbiddenapis.Checker.run(Checker.java:619)
>> at de.thetaphi.forbiddenapis.ant.AntTask.execute(AntTask.java:
>> 208)
>> at org.apache.tools.ant.UnknownElement.execute(
>> UnknownElement.java:293)
>> at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
>> DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at org.apache.tools.ant.dispatch.DispatchUtils.execute(
>> DispatchUtils.java:106)
>> at org.apache.tools.ant.Task.perform(Task.java:348)
>>
>>
>> --
>> http://www.the111shift.com
>>
>
> --
> Uwe Schindler
> Achterdiek 19, 28357 Bremen
> https://www.thetaphi.de
>


Re: Forbidden API's checker throwing exceptions

2017-02-17 Thread Uwe Schindler
Could it be that your Ivy Cache is broken? The forbidden precommit checks work 
on Jenkins, so it must be something on your setup.

Uwe

On 17 February 2017 at 17:57:01 CET, Gus Heck wrote:
>Is anyone else getting this running precommit? Worked a week or so ago,
>now
>it doesn't.
>
>BUILD FAILED
>/Users/gus/projects/solr/lucene-solr/build.xml:117: The following error
>occurred while executing this line:
>/Users/gus/projects/solr/lucene-solr/solr/build.xml:369: The following
>error occurred while executing this line:
>/Users/gus/projects/solr/lucene-solr/solr/common-build.xml:514: Check
>for
>forbidden API calls failed: java.lang.ClassNotFoundException:
>com.facebook.presto.sql.tree.AstVisitor
>at
>de.thetaphi.forbiddenapis.Checker.getClassFromClassLoader(Checker.java:314)
>at
>de.thetaphi.forbiddenapis.Checker.lookupRelatedClass(Checker.java:326)
>at
>de.thetaphi.forbiddenapis.ClassScanner.checkClassUse(ClassScanner.java:120)
>at
>de.thetaphi.forbiddenapis.ClassScanner.checkClassUse(ClassScanner.java:132)
>at
>de.thetaphi.forbiddenapis.ClassScanner.checkClassDefinition(ClassScanner.java:137)
>at
>de.thetaphi.forbiddenapis.ClassScanner.checkType(ClassScanner.java:172)
>at
>de.thetaphi.forbiddenapis.ClassScanner.checkDescriptor(ClassScanner.java:210)
>at
>de.thetaphi.forbiddenapis.ClassScanner$1.(ClassScanner.java:276)
>at
>de.thetaphi.forbiddenapis.ClassScanner.visitField(ClassScanner.java:271)
>at de.thetaphi.forbiddenapis.asm.ClassReader.a(Unknown Source)
>at de.thetaphi.forbiddenapis.asm.ClassReader.accept(Unknown Source)
>at de.thetaphi.forbiddenapis.asm.ClassReader.accept(Unknown Source)
>  at de.thetaphi.forbiddenapis.Checker.checkClass(Checker.java:602)
>at de.thetaphi.forbiddenapis.Checker.run(Checker.java:619)
> at de.thetaphi.forbiddenapis.ant.AntTask.execute(AntTask.java:208)
>at
>org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293)
>at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>at
>sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:498)
>at
>org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>at org.apache.tools.ant.Task.perform(Task.java:348)
>
>
>-- 
>http://www.the111shift.com

--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de

Re: Welcome Toke Eskildsen as a Lucene/Solr committer

2017-02-17 Thread Mark Miller
Welcome!

On Thu, Feb 16, 2017 at 4:59 PM jim ferenczi  wrote:

> Welcome Toke!
>
> Le 16 févr. 2017 5:08 PM, "Erick Erickson"  a
> écrit :
>
> Congrats Toke!
>
> On Thu, Feb 16, 2017 at 6:37 AM, Dmitry Kan 
> wrote:
> > Hi Toke, congrats! Glad for you and well deserved!
> >
> > P.S. Was awesome to test faceting module speed ups you did for high
> > cardinality fields. Your skill to explain complex things very
> efficiently is
> > unmatched.
> >
> > Dmitry
> > --
> > Dmitry Kan
> > Luke Toolbox: http://github.com/DmitryKey/luke
> > Blog: http://dmitrykan.blogspot.com
> > Twitter: http://twitter.com/dmitrykan
> >
> > On 14 February 2017 at 20:11, Toke Eskildsen  wrote:
> >>
> >> Thank you for the invitation and the warm welcome.
> >>
> >>
> >> I am a 43 year old Danish man, with a family and a job at the Royal
> Danish
> >> Library, where I have been working mostly with search-related
> technology for
> >> 10 years.
> >>
> >> I have done a fair bit of Lucene/Solr hacking during the years, with
> focus
> >> on speed- and memory-optimizations. Implementing bit-packing structures,
> >> eliminating steps in calculations and in general making more things
> possible
> >> on less hardware is a bit of an obsession. I hope to continue in that
> >> direction as a committer and am looking forward to a more controlled and
> >> community-oriented way of writing code: The one-man-show is a lot of
> fun and
> >> can work well for specific use cases, but it tends to get a bit out of
> >> control and the result might not be that usable elsewhere.
> >>
> >> Happy to be here,
> >> Toke
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
- Mark
about.me/markrmiller


[jira] [Commented] (LUCENE-7449) Add CROSSES query support to RangeField

2017-02-17 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872113#comment-15872113
 ] 

Nicholas Knize commented on LUCENE-7449:


Thanks [~mikemccand]! Bone head logic issue on my part. I pushed a fix!

> Add CROSSES query support to RangeField
> ---
>
> Key: LUCENE-7449
> URL: https://issues.apache.org/jira/browse/LUCENE-7449
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7449.patch, LUCENE-7449.patch, LUCENE-7449.patch
>
>
> {{RangeField}} currently supports {{INTERSECTS}}, {{WITHIN}}, and 
> {{CONTAINS}} query behavior. This feature adds support for an explicit 
> {{CROSSES}} query. Unlike {{INTERSECT}} and {{OVERLAP}} queries the 
> {{CROSSES}} query finds any indexed ranges whose interior (within range) 
> intersect the interior AND exterior (outside range) of the query range.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Forbidden API's checker throwing exceptions

2017-02-17 Thread Gus Heck
Is anyone else getting this running precommit? Worked a week or so ago, now
it doesn't.

BUILD FAILED
/Users/gus/projects/solr/lucene-solr/build.xml:117: The following error
occurred while executing this line:
/Users/gus/projects/solr/lucene-solr/solr/build.xml:369: The following
error occurred while executing this line:
/Users/gus/projects/solr/lucene-solr/solr/common-build.xml:514: Check for
forbidden API calls failed: java.lang.ClassNotFoundException:
com.facebook.presto.sql.tree.AstVisitor
at
de.thetaphi.forbiddenapis.Checker.getClassFromClassLoader(Checker.java:314)
at
de.thetaphi.forbiddenapis.Checker.lookupRelatedClass(Checker.java:326)
at
de.thetaphi.forbiddenapis.ClassScanner.checkClassUse(ClassScanner.java:120)
at
de.thetaphi.forbiddenapis.ClassScanner.checkClassUse(ClassScanner.java:132)
at
de.thetaphi.forbiddenapis.ClassScanner.checkClassDefinition(ClassScanner.java:137)
at
de.thetaphi.forbiddenapis.ClassScanner.checkType(ClassScanner.java:172)
at
de.thetaphi.forbiddenapis.ClassScanner.checkDescriptor(ClassScanner.java:210)
at
de.thetaphi.forbiddenapis.ClassScanner$1.(ClassScanner.java:276)
at
de.thetaphi.forbiddenapis.ClassScanner.visitField(ClassScanner.java:271)
at de.thetaphi.forbiddenapis.asm.ClassReader.a(Unknown Source)
at de.thetaphi.forbiddenapis.asm.ClassReader.accept(Unknown Source)
at de.thetaphi.forbiddenapis.asm.ClassReader.accept(Unknown Source)
at de.thetaphi.forbiddenapis.Checker.checkClass(Checker.java:602)
at de.thetaphi.forbiddenapis.Checker.run(Checker.java:619)
at de.thetaphi.forbiddenapis.ant.AntTask.execute(AntTask.java:208)
at
org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)


-- 
http://www.the111shift.com


[jira] [Commented] (LUCENE-7449) Add CROSSES query support to RangeField

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872107#comment-15872107
 ] 

ASF subversion and git services commented on LUCENE-7449:
-

Commit e426ec4f7a254d532b75d5663f71fb97dcc386ac in lucene-solr's branch 
refs/heads/branch_6x from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e426ec4 ]

LUCENE-7449: Fix bug in RangeFieldQuery.scorer


> Add CROSSES query support to RangeField
> ---
>
> Key: LUCENE-7449
> URL: https://issues.apache.org/jira/browse/LUCENE-7449
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7449.patch, LUCENE-7449.patch, LUCENE-7449.patch
>
>
> {{RangeField}} currently supports {{INTERSECTS}}, {{WITHIN}}, and 
> {{CONTAINS}} query behavior. This feature adds support for an explicit 
> {{CROSSES}} query. Unlike {{INTERSECT}} and {{OVERLAP}} queries the 
> {{CROSSES}} query finds any indexed ranges whose interior (within range) 
> intersect the interior AND exterior (outside range) of the query range.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7449) Add CROSSES query support to RangeField

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872104#comment-15872104
 ] 

ASF subversion and git services commented on LUCENE-7449:
-

Commit 907c43ce7af389c42ef200e5c2ecefbc5eee8a7a in lucene-solr's branch 
refs/heads/master from [~nknize]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=907c43c ]

LUCENE-7449: Fix bug in RangeFieldQuery.scorer


> Add CROSSES query support to RangeField
> ---
>
> Key: LUCENE-7449
> URL: https://issues.apache.org/jira/browse/LUCENE-7449
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7449.patch, LUCENE-7449.patch, LUCENE-7449.patch
>
>
> {{RangeField}} currently supports {{INTERSECTS}}, {{WITHIN}}, and 
> {{CONTAINS}} query behavior. This feature adds support for an explicit 
> {{CROSSES}} query. Unlike {{INTERSECT}} and {{OVERLAP}} queries the 
> {{CROSSES}} query finds any indexed ranges whose interior (within range) 
> intersect the interior AND exterior (outside range) of the query range.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8589) Add aliases to the LIST action results in the Collections API

2017-02-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872085#comment-15872085
 ] 

Mark Miller commented on SOLR-8589:
---

I don't know if the patch in the other issue offers anything or not, but it looks 
like the same issue. I'd just finish the issue, add a test, etc.

> Add aliases to the LIST action results in the Collections API
> -
>
> Key: SOLR-8589
> URL: https://issues.apache.org/jira/browse/SOLR-8589
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Attachments: solr-8589-new-list-details-aliases.png, SOLR-8589.patch, 
> SOLR-8589.patch, SOLR-8589.patch, SOLR-8589.patch
>
>
> Although it is possible to get a list of SolrCloud aliases via an HTTP API, it 
> is not available as a typical query response; I believe it is only available 
> via the HTTP API for ZooKeeper.
> The results from the LIST action in the Collections API are well-situated to 
> handle this. The current results are contained in a "collections" node; we 
> can simply add an "aliases" node if there are any aliases defined.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10132) Support facet.matches to cull facets returned with a regex

2017-02-17 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872078#comment-15872078
 ] 

Gus Heck commented on SOLR-10132:
-

OK, I'll use the suggested, somewhat hackish workaround... I opened 
https://issues.apache.org/jira/browse/SOLR-10155 for the review of this check.

> Support facet.matches to cull facets returned with a regex
> --
>
> Key: SOLR-10132
> URL: https://issues.apache.org/jira/browse/SOLR-10132
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 6.4.1
>Reporter: Gus Heck
> Attachments: SOLR-10132.patch
>
>
> I recently ran into a case where I really wanted to return only the next 
> level of a hierarchical facet, and while I was able to do that with a 
> coordinated set of dynamic fields, it occurred to me that this would have 
> been much, much easier if I could have simply used PathHierarchyTokenizer and 
> written
> facet.matches="/my/current/prefix/[^/]+$"
> thereby limiting the returned facets to the next level down and not returning 
> the additional N levels I didn't (yet) want to display (numbering in the 
> thousands near the top of the tree). I suspect there are other good use 
> cases, and the patch seemed relatively tractable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10155) Clarify logic for term filters on numeric types

2017-02-17 Thread Gus Heck (JIRA)
Gus Heck created SOLR-10155:
---

 Summary: Clarify logic for term filters on numeric types
 Key: SOLR-10155
 URL: https://issues.apache.org/jira/browse/SOLR-10155
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: faceting
Affects Versions: 6.4.1
Reporter: Gus Heck
Priority: Minor


The following code has been found to be confusing to multiple folks working in 
SimpleFacets.java (see SOLR-10132)

{code}
if (termFilter != null) {
  // TODO: understand this logic... what is the case for supporting an empty string
  // for contains on numeric facets? What does that achieve?
  // The exception message is misleading in the case of an excludeTerms filter in any case...
  // Also maybe vulnerable to NPE on isEmpty test?
  final boolean supportedOperation = (termFilter instanceof SubstringBytesRefFilter)
      && ((SubstringBytesRefFilter) termFilter).substring().isEmpty();
  if (!supportedOperation) {
    throw new SolrException(ErrorCode.BAD_REQUEST,
        FacetParams.FACET_CONTAINS + " is not supported on numeric types");
  }
}
{code}

This is found around line 482 or so. The comment in the code above is mine and 
won't be found in the codebase. This ticket can be resolved either by eliminating 
the complex check and simply denying all termFilters with a better exception 
message that is not specific to contains filters (and perhaps consolidating it 
with the preceding check for prefix filters), or by adding a comment to the 
codebase explaining why we need to allow a term filter with an empty, non-null 
string to be processed, and why this isn't an NPE waiting to happen. A sketch of 
the first option follows.
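
For reference, a minimal sketch of that first option (deny every term filter on
numeric facets with a message that is not specific to contains), reusing only
names already present in the snippet above; this is a suggestion, not a
committed fix:

{code}
// Sketch of the simpler alternative: reject all term filters on numeric facet fields.
if (termFilter != null) {
  throw new SolrException(ErrorCode.BAD_REQUEST,
      "Term filters (e.g. " + FacetParams.FACET_CONTAINS + ") are not supported on numeric types");
}
{code}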



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7628) Add a getMatchingChildren() method to DisjunctionScorer

2017-02-17 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872061#comment-15872061
 ] 

Alan Woodward commented on LUCENE-7628:
---

Now that ToParentBlockJoinCollector is gone, I think I can re-apply this patch 
for 6.5?  Running tests now.

> Add a getMatchingChildren() method to DisjunctionScorer
> ---
>
> Key: LUCENE-7628
> URL: https://issues.apache.org/jira/browse/LUCENE-7628
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.5
>
> Attachments: LUCENE-7628.patch, LUCENE-7628.patch
>
>
> This one is a bit convoluted, so bear with me...
> The luwak highlighter works by rewriting queries into their Span-equivalents, 
> and then running them with a special Collector.  At each matching doc, the 
> highlighter gathers all the Spans objects positioned on the current doc and 
> collects their positions using the SpanCollection API.
> Some queries can't be translated into Spans.  For those queries that generate 
> Scorers with ChildScorers, like BooleanQuery, we can call .getChildren() on 
> the Scorer and see if any of them are SpanScorers, and for those that aren't 
> we can call .getChildren() again and recurse down.  For each child scorer, we 
> check that it's positioned on the current document, so non-matching 
> subscorers can be skipped.
> This all works correctly *except* in the case of a DisjunctionScorer where 
> one of the children is a two-phase iterator that has matched its 
> approximation, but not its refinement query.  A SpanScorer in this situation 
> will be correctly positioned on the current document, but its Spans will be 
> in an undefined state, meaning the highlighter will either collect incorrect 
> hits, or it will throw an Exception and prevent hits being collected from 
> other subspans.
> We've tried various ways around this (including forking SpanNearQuery and 
> adding a bunch of slow position checks to it that are used only by the 
> highlighting code), but it turns out that the simplest fix is to add a new 
> method to DisjunctionScorer that only returns the currently matching child 
> Scorers.  It's a bit of a hack, and it won't be used anywhere else, but it's 
> a fairly small and contained hack.
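
For context, the recursion described above could look like the following minimal
Java sketch (assuming the Lucene 6.x Scorer/SpanScorer APIs); the DisjunctionScorer
case in the last paragraph is exactly where this breaks down and why
getMatchingChildren() is proposed:

{code}
// Hedged sketch of the recursion described in the issue (not the luwak code itself):
// collect the Spans of every SpanScorer that is positioned on the current doc.
static void collectSpans(Scorer scorer, int doc, List<Spans> out) throws IOException {
  if (scorer instanceof SpanScorer) {
    if (scorer.docID() == doc) {
      out.add(((SpanScorer) scorer).getSpans());
    }
    return;
  }
  for (Scorer.ChildScorer child : scorer.getChildren()) {
    collectSpans(child.child, doc, out); // recurse until we reach SpanScorers
  }
}
{code}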



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #:

2017-02-17 Thread nguyenhoan
Github user nguyenhoan commented on the pull request:


https://github.com/apache/lucene-solr/commit/e327efb676e04f72c39e902f08c0d11497b4c57d#commitcomment-20933965
  
In lucene/join/src/test/org/apache/lucene/search/join/TestBlockJoin.java:
In lucene/join/src/test/org/apache/lucene/search/join/TestBlockJoin.java on 
line 1082:
Hi @mvgsoftware 
We are a team of researchers from Iowa State University, The University of Texas 
at Dallas, and Oregon State University, USA. We are investigating common/repeated 
code changes.
We have four short questions regarding the change in the image below which 
is part of this commit.

![image](https://cloud.githubusercontent.com/assets/2257582/23073136/2159b232-f4fa-11e6-8af9-d71c1192dec9.png)

Questions:

Q1- Is the change at these lines similar to another change from before? 
(yes, no, not sure)

Q2- Can you briefly describe the change and why you made it? (for example, 
checking parameter before calling the method to avoid a Null Pointer Exception)

Q3- Can you give it a name? (for example, Null Check)

Q4- Would you like to have this change automated by a tool? (Yes, No, 
Already automated)

The data collected from the answers will never be associated with you or 
your project. Our questions are about recurring code changes from the developer 
community, not about personal information. All the data is merged across 
recurring changes from GitHub repositories. We will publish aggregated data 
from the trends of the whole community. 
We have a long tradition of developing refactoring tools and contributing 
them freely to Eclipse, NetBeans, and Android Studio under their respective 
FLOSS licenses. For example, look at some of our recently released refactoring 
tools: http://refactoring.info/tools/ 

Thank you,
Hoan Nguyen https://sites.google.com/site/nguyenanhhoan/
Michael Hilton http://web.engr.oregonstate.edu/~hiltonm/
Tien Nguyen http://www.utdallas.edu/~tien.n.nguyen/
Danny Dig http://eecs.oregonstate.edu/people/dig-danny



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-02-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872034#comment-15872034
 ] 

Mark Miller commented on SOLR-10032:


bq. Would it be convenient for you to publicly share the folder these reports 
go into so I could bookmark that?

https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing

> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Lucene-Solr Master Test Beast Results 
> 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 
> iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults 
> 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 
> iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults 
> 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30 
> iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults 
> 02-14-2017- Level Medium+-a1f114f70f3800292c25be08213edf39b3e37f6a Running 30 
> iterations, 10 at a time, 8 cores.pdf
>
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions:
> what tests are flakey right now? which test fails actually affect devs most? 
> did I break it? was that test already flakey? is that test still flakey? what 
> are our worst tests right now? is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because 
> of OS or environmental issues, but more basic test quality issues. Which 
> tests are flakey and how flakey are they at any point in time.
> Reports:
> https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing
> 01/24/2017 - 
> https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing
> 02/01/2017 - 
> https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing
> 02/08/2017 - 
> https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing
> 02/14/2017 - 
> https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10032) Create report to assess Solr test quality at a commit point.

2017-02-17 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-10032:
---
Description: 
We have many Jenkins instances blasting tests, some official, some policeman, I 
and others have or had their own, and the email trail proves the power of the 
Jenkins cluster to find test fails.

However, I still have a very hard time with some basic questions:

what tests are flakey right now? which test fails actually affect devs most? 
did I break it? was that test already flakey? is that test still flakey? what 
are our worst tests right now? is that test getting better or worse?

We really need a way to see exactly what tests are the problem, not because of 
OS or environmental issues, but more basic test quality issues. Which tests are 
flakey and how flakey are they at any point in time.


Reports:
https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing

01/24/2017 - 
https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing
02/01/2017 - 
https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing
02/08/2017 - 
https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing
02/14/2017 - 
https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing


  was:
We have many Jenkins instances blasting tests, some official, some policeman, I 
and others have or had their own, and the email trail proves the power of the 
Jenkins cluster to find test fails.

However, I still have a very hard time with some basic questions:

what tests are flakey right now? which test fails actually affect devs most? 
did I break it? was that test already flakey? is that test still flakey? what 
are our worst tests right now? is that test getting better or worse?

We really need a way to see exactly what tests are the problem, not because of 
OS or environmental issues, but more basic test quality issues. Which tests are 
flakey and how flakey are they at any point in time.


Reports:
01/24/2017 - 
https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing
02/01/2017 - 
https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing
02/08/2017 - 
https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing
02/14/2017 - 
https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing



> Create report to assess Solr test quality at a commit point.
> 
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Lucene-Solr Master Test Beast Results 
> 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 
> iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults 
> 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 
> iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults 
> 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30 
> iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults 
> 02-14-2017- Level Medium+-a1f114f70f3800292c25be08213edf39b3e37f6a Running 30 
> iterations, 10 at a time, 8 cores.pdf
>
>
> We have many Jenkins instances blasting tests, some official, some policeman, 
> I and others have or had their own, and the email trail proves the power of 
> the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions:
> what tests are flakey right now? which test fails actually affect devs most? 
> did I break it? was that test already flakey? is that test still flakey? what 
> are our worst tests right now? is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because 
> of OS or environmental issues, but more basic test quality issues. Which 
> tests are flakey and how flakey are they at any point in time.
> Reports:
> https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing
> 01/24/2017 - 
> https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing
> 02/01/2017 - 
> https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing
> 02/08/2017 - 
> https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing
> 02/14/2017 - 
> 

[jira] [Commented] (LUCENE-7688) add a OneMergeWrappingMergePolicy class

2017-02-17 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872030#comment-15872030
 ] 

Keith Laban commented on LUCENE-7688:
-

Hi Michael, the main use case is to easily overload the {{wrapForMerge}} function 
in {{OneMerge}} without having to write a whole merge policy. The related ticket 
SOLR-10046 uses it to wrap the CodecReader with one that has access to the 
FieldCache and will add docvalues when merging segments if required. But 
generally you can do anything you can do by wrapping a CodecReader: add/remove 
fields, etc.

> add a OneMergeWrappingMergePolicy class
> ---
>
> Key: LUCENE-7688
> URL: https://issues.apache.org/jira/browse/LUCENE-7688
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7688.patch
>
>
> This ticket splits out the lucene part of the changes proposed in SOLR-10046 
> for a conversation on whether or not the {{OneMergeWrappingMergePolicy}} 
> class would best be located in Lucene or in Solr.
> (As an aside, the proposed use of 
> [java.util.function.UnaryOperator|https://docs.oracle.com/javase/8/docs/api/java/util/function/UnaryOperator.html]
>  causes {{ant documentation-lint}} to fail, I have created LUCENE-7689 
> separately for that.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7698) CommonGramsQueryFilter in the query analyzer chain breaks phrase queries

2017-02-17 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7698:
---
Attachment: LUCENE-7698.patch

OK, here's a patch fixing {{CommonGramsQueryFilter}} to not create a 
disconnected graph.  [~emaijala], could you please try this and see if it fixes 
your use case?  Thanks.

I also added an experimental option to {{QueryBuilder}} (base class for query 
parsers) to disable graph handling, as a safety for other tokenizer components 
that may create disconnected graphs.
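
If the experimental option surfaces as a simple setter, using it from the classic
query parser might look like the sketch below; the name setEnableGraphQueries is
an assumption here (it is what later Lucene releases call it), not a confirmed
part of this patch:

{code}
// Hedged sketch: disable graph handling in a QueryParser (which extends QueryBuilder)
// as a safety valve when a token filter produces a disconnected graph.
QueryParser parser = new QueryParser("text", analyzer); // analyzer: the field's query analyzer
parser.setEnableGraphQueries(false);
Query phrase = parser.parse("\"ipod with video\"");      // may throw ParseException
{code}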

> CommonGramsQueryFilter in the query analyzer chain breaks phrase queries
> 
>
> Key: LUCENE-7698
> URL: https://issues.apache.org/jira/browse/LUCENE-7698
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4, 6.4.1
>Reporter: Ere Maijala
>  Labels: regression
> Attachments: LUCENE-7698.patch
>
>
> (Please pardon me if the project or component are wrong!)
> CommonGramsQueryFilter breaks phrase queries. The behavior also seems to 
> change with addition or removal of adjacent terms.
> Steps to reproduce:
> 1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
> 2.) Modify 
> server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
> modify text_general fieldType by adding CommonGrams(Query)Filter before 
> stopWordFilter:
> (fieldType XML stripped in transit: the stock text_general definition with 
> positionIncrementGap="100", a StopFilter on stopwords.txt in both the index 
> and query analyzers, a query-time synonym filter with ignoreCase="true" 
> expand="true", and the CommonGrams(Query)FilterFactory inserted before the 
> stop filter as described above)
> 3.) Add "with" to 
> server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and 
> make sure the file has correct line endings (extracted from the Solr zip it 
> seems to contain DOS/Windows line endings, which may break things).
> 4.) Run the techproducts example with "bin/solr -e techproducts"
> 5.) Browse to 
> 
> 6.) Observe that parsedquery in the debug output is empty
> 7.) Browse to 
> 
> 8.) Observe that parsedquery contains ipod_with as expected but not 
> with_video.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+155) - Build # 2877 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2877/
Java: 64bit/jdk-9-ea+155 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([69F7EC32E9531449:7E812615EF87F874]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

on getting back LUCENE-6212 behaviour ;) , Re: On LUCENE-5611 and 6.4.1

2017-02-17 Thread Ľuboš Koščo
D'oh !!!

OK, I see where this happened; we were leveraging this:
https://issues.apache.org/jira/browse/LUCENE-6212

So, changing the subject: can I get the LUCENE-6212 behaviour back in the latest
Lucene somehow?

Specifically, what I am doing now is to use the field's setTokenStream so that
each doc has its own token stream... does the token stream need to be private,
or can one instance be reused?
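
For what it's worth, a minimal sketch of that per-document token stream approach;
pickAnalyzerFor(path), the "full" field name, and the contents/writer variables
are hypothetical stand-ins, not OpenGrok's actual code:

{code}
// Hedged sketch: hand each document its own TokenStream explicitly instead of
// relying on IndexWriter's analyzer. pickAnalyzerFor(path) stands in for the
// per-file analyzer selection.
Document doc = new Document();
Field full = new TextField("full", "", Field.Store.NO);
full.setTokenStream(pickAnalyzerFor(path).tokenStream("full", contents));
doc.add(full);
writer.addDocument(doc);
{code}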

thnx
L

On 17 February 2017 at 09:27, Ľuboš Koščo  wrote:

> One more Q before I can work on tests
>
> how does recent lucene pick appropriate analyzer for the doc?
> Were you doing some changes in that area since 4.7.1 ?
> (if we decide the indexing chain didn't influence this and still uses
> analyzer properly picked)
> (I checked changelogs and didn't find any suspicious change in that area
> ... )
>
> thnx
> L
>
>
> On 11 February 2017 at 00:47, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> Could you make a small standalone test case showing what used to work
>> and what no longer works?
>>
>> I don't think that issue was supposed to alter how IndexWriter
>> interacts with the analysis chain.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Fri, Feb 10, 2017 at 9:48 AM, Ľuboš Koščo  wrote:
>> > Resp. how to make the double inherited analyzer (on the bottom of
>> > inheritance) be used again, instead of hidden by its father direct
>> > descendant of Analyzer?
>> > (father:
>> > https://github.com/OpenGrok/OpenGrok/blob/master/src/org/ope
>> nsolaris/opengrok/analysis/FileAnalyzer.java
>> > child:
>> > https://github.com/OpenGrok/OpenGrok/blob/master/src/org/ope
>> nsolaris/opengrok/analysis/java/JavaAnalyzer.java
>> > - looking at above it's even deeper inheritance, so Analyzer ->
>> FileAnalyzer
>> > -> ... ->JavaAnalyzer as the last child)
>> >
>> > (funny enough the code on our side that creates docs didn't really
>> change
>> > since 4.7.1 , but new lucene now picks FileAnalyzer over any other
>> analyzer
>> > for createComponents anyways)
>> >
>> > tia
>> > L
>> >
>> > On 10 February 2017 at 13:41, Ľuboš Koščo  wrote:
>> >>
>> >> Hi guys, Mike
>> >>
>> >> is there any chance I can somehow get the indexing chain to behave
>> similar
>> >> as before LUCENE-5611 in 6.4.1 ?
>> >>
>> >> We used to have analyzers that inherited multiple times from Analyzer
>> >> (e.g. second child and relaxed and overriden createComponents) and
>> lucene
>> >> used to run them for appropriate docs properly
>> >> but after LUCENE-5611 I can see the chain changed and only the first
>> child
>> >> is always taken into account, even though the document is handled by
>> proper
>> >> analyzer ...
>> >> (basically between 4.7.1 and 6.4.1 something changed that made lucene
>> just
>> >> ignore second child of analyzer and won't use it and always use first
>> one
>> >> (and its father, the direct override of createComponents))
>> >> Some code pointers on what used to work and now isn't :
>> >> https://github.com/OpenGrok/OpenGrok/issues/1376
>> >> (and I tried to dig the changelogs and the only thing I found is really
>> >> around 5611, hence this silly Q)
>> >>
>> >> any clues how to get old behaviour back?
>> >>
>> >> thnx
>> >> L
>> >>
>> >
>>
>
>


[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1240 - Still unstable

2017-02-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1240/

4 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([9F84F0743241659B]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([9F84F0743241659B]:0)


FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:41640/aqe/f/forceleader_test_collection

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:41640/aqe/f/forceleader_test_collection
at 
__randomizedtesting.SeedInfo.seed([EADAED056DE8C26C:C4DD9C5546A3B0D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:638)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.HttpPartitionTest.realTimeGetDocId(HttpPartitionTest.java:635)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:620)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:575)
at 
org.apache.solr.cloud.ForceLeaderTest.bringBackOldLeaderAndSendDoc(ForceLeaderTest.java:402)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3839 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3839/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([ADABBEDA75D38385:25FF8100DB2FEE7D]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+155) - Build # 2876 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2876/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([1CCAFBD1B6E5450F:BBC31F6B031A932]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (LUCENE-7698) CommonGramsQueryFilter in the query analyzer chain breaks phrase queries

2017-02-17 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871777#comment-15871777
 ] 

Michael McCandless commented on LUCENE-7698:


OK I see what's happening: this filter ({{CommonGramsQueryFilter}}) deletes the 
unigram tokens, but keeps {{posLength=2}} on the bigram tokens, which makes a 
disconnected graph, and then the query parser does the wrong thing.

I think the right fix is for it to set {{posLength}} to 1 when it drops unigram 
tokens .. I'll work on a patch.
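For illustration, a small standalone program that dumps posInc/posLength after the query-time chain (the class name, sample text and the single common word "with" follow the issue's reproduction steps, but the WhitespaceTokenizer is a simplification, not the reporter's schema):

{code}
import java.io.StringReader;
import java.util.Arrays;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.commongrams.CommonGramsFilter;
import org.apache.lucene.analysis.commongrams.CommonGramsQueryFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;

public class CommonGramsGraphDump {
  public static void main(String[] args) throws Exception {
    CharArraySet commonWords = new CharArraySet(Arrays.asList("with"), true);

    WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
    tokenizer.setReader(new StringReader("ipod with video"));

    // Query-time chain from the issue: bigrams for common words, with the
    // unigrams of those common words removed by CommonGramsQueryFilter.
    TokenStream ts = new CommonGramsQueryFilter(new CommonGramsFilter(tokenizer, commonWords));

    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posInc = ts.addAttribute(PositionIncrementAttribute.class);
    PositionLengthAttribute posLen = ts.addAttribute(PositionLengthAttribute.class);

    ts.reset();
    while (ts.incrementToken()) {
      // A bigram that still reports posLength=2 while the unigrams it spans
      // are gone is what leaves the disconnected graph described above.
      System.out.println(term + " posInc=" + posInc.getPositionIncrement()
          + " posLen=" + posLen.getPositionLength());
    }
    ts.end();
    ts.close();
  }
}
{code}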

> CommonGramsQueryFilter in the query analyzer chain breaks phrase queries
> 
>
> Key: LUCENE-7698
> URL: https://issues.apache.org/jira/browse/LUCENE-7698
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4, 6.4.1
>Reporter: Ere Maijala
>  Labels: regression
>
> (Please pardon me if the project or component are wrong!)
> CommonGramsQueryFilter breaks phrase queries. The behavior also seems to 
> change with addition or removal of adjacent terms.
> Steps to reproduce:
> 1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
> 2.) Modify 
> server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
> modify text_general fieldType by adding CommonGrams(Query)Filter before 
> stopWordFilter:
>  positionIncrementGap="100">
>   
> 
>  words="stopwords.txt" />
>  words="stopwords.txt" />
> 
> 
>   
>   
> 
>  words="stopwords.txt"/>
>  words="stopwords.txt" />
>  ignoreCase="true" expand="true"/>
> 
>   
> 
> 3.) Add "with" to 
> server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and 
> make sure the file has correct line endings (extracted from Solr zip it seems 
> to contain DOS/Windows line endings which may break things).
> 4.) Run the techproducts example with "bin/solr -e techproducts"
> 5.) Browse to 
> 
> 6.) Observe that parsedquery in the debug output is empty
> 7.) Browse to 
> 
> 8.) Observe that parsedquery contains ipod_with as expected but not 
> with_video.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10130) Serious performance degradation in Solr 6.4.1 due to the new metrics collection

2017-02-17 Thread Ere Maijala (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871772#comment-15871772
 ] 

Ere Maijala commented on SOLR-10130:


I still don't have proper benchmarks, but I've tested enough to say with fair 
confidence that this is fixed for us.

> Serious performance degradation in Solr 6.4.1 due to the new metrics 
> collection
> ---
>
> Key: SOLR-10130
> URL: https://issues.apache.org/jira/browse/SOLR-10130
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
> Environment: Centos 7, OpenJDK 1.8.0 update 111
>Reporter: Ere Maijala
>Assignee: Andrzej Bialecki 
>Priority: Blocker
>  Labels: perfomance
> Fix For: master (7.0), 6.4.2
>
> Attachments: SOLR-10130.patch, SOLR-10130.patch, 
> solr-8983-console-f1.log
>
>
> We've stumbled on serious performance issues after upgrading to Solr 6.4.1. 
> Looks like the new metrics collection system in MetricsDirectoryFactory is 
> causing a major slowdown. This happens with an index configuration that, as 
> far as I can see, has no metrics specific configuration and uses 
> luceneMatchVersion 5.5.0. In practice a moderate load will completely bog 
> down the server, with Solr threads constantly using up all CPU capacity (600% on 
> a 6-core machine) under a load where we would normally see an average 
> load of < 50%.
> I took stack traces (I'll attach them) and noticed that the threads are 
> spending time in com.codahale.metrics.Meter.mark. I tested building Solr 
> 6.4.1 with the metrics collection disabled in MetricsDirectoryFactory getByte 
> and getBytes methods and was unable to reproduce the issue.
> As far as I can see there are several issues:
> 1. Collecting metrics on every single byte read is slow.
> 2. Having it enabled by default is not a good idea.
> 3. The comment "enable coarse-grained metrics by default" at 
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L104
>  implies that only coarse-grained metrics should be enabled by default, which 
> contradicts collecting metrics on every single byte read.
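Point 1 above can be seen in isolation with a small Dropwizard Metrics snippet (the class and metric names are invented for illustration; this is not Solr's actual MetricsDirectoryFactory code): marking a {{Meter}} once per byte versus once per buffer read.

{code}
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

public class MeterOverheadSketch {
  public static void main(String[] args) {
    MetricRegistry registry = new MetricRegistry();
    Meter bytesRead = registry.meter("directory.bytes.read");

    byte[] buffer = new byte[8192];

    // Fine-grained: one mark() per byte read -- the pattern the issue describes.
    for (int i = 0; i < buffer.length; i++) {
      bytesRead.mark();
    }

    // Coarse-grained: a single mark(n) per buffer read -- vastly fewer meter updates.
    bytesRead.mark(buffer.length);

    System.out.println("total bytes metered: " + bytesRead.getCount());
  }
}
{code}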



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8045) Deploy Solr in ROOT (/) path

2017-02-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871750#comment-15871750
 ] 

Jan Høydahl commented on SOLR-8045:
---

So all these {{admin/foo => ../admin/foo}} renames are because the UI now lives 
under {{/ui}}. And once we wish to switch the UI over to using v2 APIs, we'd 
change those relative paths once again to {{../v2/admin/foo}}? Have you double 
checked that there are no other places assuming that a certain image resides 
where it used to? 

One thing to test could be {{bin/solr -e techproducts && open 
http://localhost:8983/solr/techproducts/browse/}} and see whether the Solr logo 
still shows.

> Deploy Solr in ROOT (/) path 
> -
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
> Attachments: SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6671) Introduce a solr.data.home as root dir for all data

2017-02-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871741#comment-15871741
 ] 

Jan Høydahl commented on SOLR-6671:
---

I see many customer examples where there is a wish for separating data home 
from config home, so I'd like to push this forward again.

The default for SOLR_DATA_HOME could still be the same as SOLR_HOME, but in the 
Linux installer script, we could default to using {{/var/solr/data}} for data 
and {{/var/solr/home}} for home, so solr.in.sh would typically look like:
{noformat}
SOLR_PID_DIR="/var/solr"
SOLR_HOME="/var/solr/home"
SOLR_DATA_HOME="/var/solr/data"
LOG4J_PROPS="/var/solr/log4j.properties"
SOLR_LOGS_DIR="/var/solr/logs"
SOLR_PORT="8983"
{noformat}

and produce this tree:

{noformat}
/var/solr
├── data
│   ├── bar
│   │   └── data
│   │   ├── index
│   │   └── tlog
│   └── foo
│   └── data
│   ├── index
│   └── tlog
├── home
│   ├── bar
│   │   ├── conf
│   │   │   ├── managed-schema
│   │   │   └── solrconfig.xml
│   │   └── core.properties
│   ├── foo
│   │   ├── conf
│   │   │   ├── managed-schema
│   │   │   └── solrconfig.xml
│   │   └── core.properties
│   ├── solr.xml
│   └── zoo.cfg
├── log4j.properties
└── logs
└── solr.log.1
{noformat}

The benefit is that it is super easy to move data to a new partition/disk with a 
single {{mv}} command. We just now have a customer who is upgrading from 4.x to 6.x 
using the Linux installer, but still wants to run non-cloud. They need to separate 
data from config, i.e. they are not happy to have configs in /var/solr/data 
together with data, since it makes upgrading only the config harder. Today they solve 
it by hardcoding {{}} in every single solrconfig.xml. In the new install I 
have used symlinks for each conf folder instead, so they can have a partition 
where they replace the {{home//conf}} folders from SCM without disturbing 
data.

This would also help solve SOLR-10095.

> Introduce a solr.data.home as root dir for all data
> ---
>
> Key: SOLR-6671
> URL: https://issues.apache.org/jira/browse/SOLR-6671
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.10.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-6671.patch, SOLR-6671.patch, SOLR-6671.patch, 
> SOLR-6671.patch, SOLR-6671.patch
>
>
> Many users prefer to deploy code, config and data on separate disk locations, 
> so the default of placing the indexes under 
> {{$\{solr.solr.home\}/$\{solr.core.name\}/data}} is not always wanted.
> In a multi-core/collection system, there is not much help in the 
> {{solr.data.dir}} option, as it would set the {{dataDir}} to the same folder 
> for all collections. One workaround, if you don't want to hardcode paths in 
> your {{solrconfig.xml}}, is to specify the {{dataDir}} property in each 
> {{solr.properties}} file.
> A more elegant solution would be to introduce a new Java-option 
> {{solr.data.home}} which would be to data what {{solr.solr.home}} is 
> for config. If set, all collections would default their {{dataDir}} as 
> {{$\{solr.data.home\}/$\{solr.core.name\}/data}}
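A minimal plain-Java sketch of the defaulting proposed above (the property names come from the issue; the helper class and method are invented for illustration and are not Solr code):

{code}
import java.nio.file.Path;
import java.nio.file.Paths;

public class DataHomeSketch {

  /** Proposed resolution: use solr.data.home if set, otherwise fall back to solr.solr.home. */
  static Path dataDirFor(String coreName) {
    String dataHome = System.getProperty("solr.data.home",
                                         System.getProperty("solr.solr.home"));
    return Paths.get(dataHome, coreName, "data");
  }

  public static void main(String[] args) {
    System.setProperty("solr.solr.home", "/var/solr/home");
    System.setProperty("solr.data.home", "/var/solr/data");
    System.out.println(dataDirFor("foo")); // /var/solr/data/foo/data
  }
}
{code}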



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10154) ant run-example fails to start due to missing solr.log.dir

2017-02-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871704#comment-15871704
 ] 

Jan Høydahl commented on SOLR-10154:


Another question is whether we should keep run-example at all, see discussions 
in SOLR-6926.

If the {{bin/solr start -e }} command supported foreground mode 
{{-f}}, it would be nice to be able to do {{ant run-example 
-Dexample=techproducts}} to compile, start in debug mode, index content in one 
go... Could we not support foreground mode for the examples by ending the 
example with starting a small script that tails the log and stops Solr on exit?

> ant run-example fails to start due to missing solr.log.dir
> --
>
> Key: SOLR-10154
> URL: https://issues.apache.org/jira/browse/SOLR-10154
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Affects Versions: 6.4.1
>Reporter: Jan Høydahl
>
> Running {{ant run-example}} fails to start. Problem is that this solr 
> instance is not started using bin/solr and thus does not have proper 
> variables such as {{solr.install.dir}} and {{solr.log.dir}}. Error output in 
> next comment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8045) Deploy Solr in ROOT (/) path

2017-02-17 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871662#comment-15871662
 ] 

Cao Manh Dat commented on SOLR-8045:


[~janhoy] Can you take a look at changes at admin ui?

> Deploy Solr in ROOT (/) path 
> -
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
> Attachments: SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7698) CommonGramsQueryFilter in the query analyzer chain breaks phrase queries

2017-02-17 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871620#comment-15871620
 ] 

Michael McCandless commented on LUCENE-7698:


Hmm, no good, sorry about this ... thank you for reporting this [~emaijala]; 
I'll try to make a Lucene test case showing this.

> CommonGramsQueryFilter in the query analyzer chain breaks phrase queries
> 
>
> Key: LUCENE-7698
> URL: https://issues.apache.org/jira/browse/LUCENE-7698
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4, 6.4.1
>Reporter: Ere Maijala
>  Labels: regression
>
> (Please pardon me if the project or component are wrong!)
> CommonGramsQueryFilter breaks phrase queries. The behavior also seems to 
> change with addition or removal of adjacent terms.
> Steps to reproduce:
> 1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
> 2.) Modify 
> server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
> modify text_general fieldType by adding CommonGrams(Query)Filter before 
> stopWordFilter:
>  positionIncrementGap="100">
>   
> 
>  words="stopwords.txt" />
>  words="stopwords.txt" />
> 
> 
>   
>   
> 
>  words="stopwords.txt"/>
>  words="stopwords.txt" />
>  ignoreCase="true" expand="true"/>
> 
>   
> 
> 3.) Add "with" to 
> server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and 
> make sure the file has correct line endings (extracted from Solr zip it seems 
> to contain DOS/Windows line endings which may break things).
> 4.) Run the techproducts example with "bin/solr -e techproducts"
> 5.) Browse to 
> 
> 6.) Observe that parsedquery in the debug output is empty
> 7.) Browse to 
> 
> 8.) Observe that parsedquery contains ipod_with as expected but not 
> with_video.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9450) Link to online Javadocs instead of distributing with binary download

2017-02-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871615#comment-15871615
 ] 

Jan Høydahl commented on SOLR-9450:
---

Are we still positive about this change? I could push it for 6.5.
One benefit of not distributing javadocs with the binary release is that 
Windows users won't have to wait a year for Unzip to complete :-)

> Link to online Javadocs instead of distributing with binary download
> 
>
> Key: SOLR-9450
> URL: https://issues.apache.org/jira/browse/SOLR-9450
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9450.patch, SOLR-9450.patch, SOLR-9450.patch
>
>
> Spinoff from SOLR-6806. This sub task will replace the contents of {{docs}} 
> in the binary download with a link to the online JavaDocs. The build should 
> make sure to generate a link to the correct version. I believe this is the 
> correct template: http://lucene.apache.org/solr/6_2_0/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9450) Link to online Javadocs instead of distributing with binary download

2017-02-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9450:
--
Fix Version/s: (was: 6.4)
   6.5

> Link to online Javadocs instead of distributing with binary download
> 
>
> Key: SOLR-9450
> URL: https://issues.apache.org/jira/browse/SOLR-9450
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9450.patch, SOLR-9450.patch, SOLR-9450.patch
>
>
> Spinoff from SOLR-6806. This sub task will replace the contents of {{docs}} 
> in the binary download with a link to the online JavaDocs. The build should 
> make sure to generate a link to the correct version. I believe this is the 
> correct template: http://lucene.apache.org/solr/6_2_0/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7686) NRT suggester should have option to filter out duplicates

2017-02-17 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7686:
---
Attachment: LUCENE-7686.patch

Another iteration, this time filtering dups earlier in the top N
search.  I added a new method, {{acceptPartialPath}}, to the
{{Util.TopNSearcher}} class so that subclasses are able to prune a
still in-progress path, not just a completed path.  This should be
quite efficient even when the number of duplicates is very high,
because the top N search will quickly push to the highest scoring document
for the suggestion that is neither deleted nor filtered out, record
that surface form, and then prune subsequent intermediate paths
sharing that same surface form.

I also added another "extreme" dedup test case to test the logic that
computes the necessary queue size, and it's passing, and the new
{{TestSuggestField.testRandom}} seems to survive moderate beasting...

I think it's ready.
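The intent of the pruning is easy to show in plain Java (a conceptual sketch over an already score-sorted candidate list; the real patch prunes partial FST paths inside {{Util.TopNSearcher}} during the search, which this does not reproduce):

{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupTopNSketch {

  /** Keep only the first (highest-scoring) hit per surface form, up to n results. */
  static List<String> topNDeduped(List<String> surfaceFormsByDescendingScore, int n) {
    Set<String> seen = new HashSet<>();
    List<String> out = new ArrayList<>();
    for (String surface : surfaceFormsByDescendingScore) {
      if (!seen.add(surface)) {
        continue; // duplicate surface form: prune it, as acceptPartialPath is meant to
      }
      out.add(surface);
      if (out.size() == n) {
        break;
      }
    }
    return out;
  }
}
{code}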


> NRT suggester should have option to filter out duplicates
> -
>
> Key: LUCENE-7686
> URL: https://issues.apache.org/jira/browse/LUCENE-7686
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7686.patch, LUCENE-7686.patch, LUCENE-7686.patch
>
>
> Some of the other suggesters have this ability, and it's quite simple to add 
> it to the NRT suggester as long as the thing we are filtering on is the 
> suggest key itself, not e.g. another stored field from the document.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10055) Manual bin/solr start causes crash due to resolving wrong solr.in.sh

2017-02-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-10055:
---
Attachment: SOLR-10055.patch

Attaching patch which will rename said files on install. Tested on a clean 
Debian system. Will commit soon

> Manual bin/solr start causes crash due to resolving wrong solr.in.sh
> 
>
> Key: SOLR-10055
> URL: https://issues.apache.org/jira/browse/SOLR-10055
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10055.patch
>
>
> The install script installs {{solr.in.sh}} in {{/etc/defaults/}}. However, if 
> the user manually runs {{solr start}}, the script will use the {{solr.in.sh}} 
> file from {{bin/}} since that is first in the search path. And it will fail 
> since {{/opt/solr}} is write protected. But if the user starts with {{service 
> solr start}}, then the file from the installation is used and all is fine.
> Since the default {{/opt/solr/server/solr}} is not writable by the solr user, 
> this creates a bad user experience and classifies as a bug.
> My proposal is that the installer renames {{bin/solr.in.sh -> 
> bin/solr.in.sh.orig}} and the same with {{solr.in.cmd}}, so that the 
> resolution logic will end up finding the one from the install. The user can still 
> override this by creating a {{$HOME/.solr.in.sh}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



6.4.2 release?

2017-02-17 Thread Ere Maijala

Hi,

I'd like to nominate LUCENE-7698 (CommonGramsQueryFilter doesn't work 
properly, caused by LUCENE-7603 and a regression from 6.3.0 where it 
worked fine). It doesn't have a fix, but at least we got seriously 
bitten by it.


Regards,
Ere

P.S. Sorry, no proper references in this email since I just joined the list.


Hi devs,

These two issues seem serious enough to warrant a new release from branch_6_4:
* SOLR-10130: Serious performance degradation in Solr 6.4.1 due to the new 
metrics collection
* SOLR-10138: Transaction log replay can hit an NPE due to new Metrics code.

What do you think? Anything else that should go there?

---
Best regards,

Andrzej Bialecki




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-02-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-9584.
---
Resolution: Fixed

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: 6.5, master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-02-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871592#comment-15871592
 ] 

Jan Høydahl commented on SOLR-9584:
---

Also moved CHANGES entry on master

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: 6.5, master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871590#comment-15871590
 ] 

ASF subversion and git services commented on SOLR-9584:
---

Commit bd459c12756635450512e95c6d0ee92697c64e5f in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd459c1 ]

SOLR-9584: Use relative URLs also for files and query


> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: 6.5, master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+155) - Build # 2875 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2875/
Java: 64bit/jdk-9-ea+155 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([E4FCF4080402A14F:F38A3E2F02D64D72]:0)
at 
net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
at 
net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)
at 
org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
at 
org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
at org.easymock.EasyMock.createMock(EasyMock.java:104)
at 
org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:83)
at 
org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (LUCENE-7449) Add CROSSES query support to RangeField

2017-02-17 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871585#comment-15871585
 ] 

Michael McCandless commented on LUCENE-7449:


One of the ES Lucene jobs hit this test failure ... I haven't checked if it 
reproduces:

{noformat}
  [junit4] Suite: org.apache.lucene.search.TestDoubleRangeFieldQueries
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestDoubleRangeFieldQueries -Dtests.method=testRandomTiny 
-Dtests.seed=792130AB9E891AA1 -Dtests.slow=true -Dtests.locale=nl 
-Dtests.timezone=Asia/Atyrau -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.04s J0 | TestDoubleRangeFieldQueries.testRandomTiny <<<
   [junit4]> Throwable #1: java.lang.AssertionError: wrong hit (first of 
possibly more):
   [junit4]> FAIL (iter 6): id=0 should not match but did
   [junit4]>  queryRange=Box(-8.542463800517043E307 TO 
8.712902176567828E307)
   [junit4]>  box=Box(-4.889774907987475E307 TO 7.344527517013989E306)
   [junit4]>  queryType=CROSSES
   [junit4]>  deleted?=false
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([792130AB9E891AA1:3066EEEDC0A8220D]:0)
   [junit4]>at 
org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:287)
   [junit4]>at 
org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:158)
   [junit4]>at 
org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomTiny(BaseRangeFieldQueryTestCase.java:64)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{noformat}


> Add CROSSES query support to RangeField
> ---
>
> Key: LUCENE-7449
> URL: https://issues.apache.org/jira/browse/LUCENE-7449
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7449.patch, LUCENE-7449.patch, LUCENE-7449.patch
>
>
> {{RangeField}} currently supports {{INTERSECTS}}, {{WITHIN}}, and 
> {{CONTAINS}} query behavior. This feature adds support for an explicit 
> {{CROSSES}} query. Unlike {{INTERSECT}} and {{OVERLAP}} queries, the 
> {{CROSSES}} query finds any indexed range whose interior (within range) 
> intersects the interior AND exterior (outside range) of the query range.
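For one-dimensional ranges the description above reduces to a simple predicate; the following plain-Java sketch only illustrates that wording (it is not the RangeField API, and the boundary/containment conventions of the real query are not claimed to match):

{code}
public class RangeRelationSketch {

  /** True if the closed interval [min, max] overlaps [qMin, qMax] at all. */
  static boolean intersects(double min, double max, double qMin, double qMax) {
    return min <= qMax && max >= qMin;
  }

  /** Literal reading of the description: the indexed range overlaps the query
   *  range but also extends outside it on at least one side. Whether a range
   *  that fully contains the query range counts is left to the real
   *  RangeFieldQuery implementation, which this sketch does not reproduce. */
  static boolean crosses(double min, double max, double qMin, double qMax) {
    return intersects(min, max, qMin, qMax) && (min < qMin || max > qMax);
  }

  public static void main(String[] args) {
    // The box from the test failure above is fully inside the query range,
    // so it must not be reported as CROSSES.
    System.out.println(crosses(-4.889774907987475E307, 7.344527517013989E306,
                               -8.542463800517043E307, 8.712902176567828E307)); // false
    // A range sticking out on one side does cross.
    System.out.println(crosses(-9.0E307, 0.0,
                               -8.542463800517043E307, 8.712902176567828E307)); // true
  }
}
{code}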



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871582#comment-15871582
 ] 

ASF subversion and git services commented on SOLR-9584:
---

Commit a81b227cd220118db365904535bc30d4d4cbd718 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a81b227 ]

SOLR-9584: Use relative URLs also for files and query

(cherry picked from commit aad9bb7)


> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: 6.5, master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10154) ant run-example fails to start due to missing solr.log.dir

2017-02-17 Thread JIRA
Jan Høydahl created SOLR-10154:
--

 Summary: ant run-example fails to start due to missing solr.log.dir
 Key: SOLR-10154
 URL: https://issues.apache.org/jira/browse/SOLR-10154
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Build
Affects Versions: 6.4.1
Reporter: Jan Høydahl


Running {{ant run-example}} fails to start. Problem is that this solr instance 
is not started using bin/solr and thus does not have proper variables such as 
{{solr.install.dir}} and {{solr.log.dir}}. Error output in next comment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10154) ant run-example fails to start due to missing solr.log.dir

2017-02-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871576#comment-15871576
 ] 

Jan Høydahl commented on SOLR-10154:


{noformat}
run-example:
 [java] log4j:ERROR setFile(null,true) call failed.
 [java] java.io.FileNotFoundException: /solr.log (Permission denied)
 [java] at java.io.FileOutputStream.open0(Native Method)
 [java] at java.io.FileOutputStream.open(FileOutputStream.java:270)
 [java] at java.io.FileOutputStream.(FileOutputStream.java:213)
 [java] at java.io.FileOutputStream.(FileOutputStream.java:133)
 [java] at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
 [java] at 
org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
 [java] at 
org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
 [java] at 
org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
 [java] at 
org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
 [java] at 
org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
 [java] at 
org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
 [java] at 
org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
 [java] at 
org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
 [java] at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
 [java] at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
 [java] at 
org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
 [java] at org.apache.log4j.LogManager.(LogManager.java:127)
 [java] at 
org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
 [java] at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:277)
 [java] at org.eclipse.jetty.util.log.Slf4jLog.(Slf4jLog.java:38)
 [java] at org.eclipse.jetty.util.log.Slf4jLog.(Slf4jLog.java:32)
 [java] at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 [java] at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 [java] at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 [java] at 
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 [java] at java.lang.Class.newInstance(Class.java:442)
 [java] at org.eclipse.jetty.util.log.Log.initialized(Log.java:177)
 [java] at org.eclipse.jetty.util.log.Log.getLogger(Log.java:310)
 [java] at org.eclipse.jetty.util.log.Log.getLogger(Log.java:300)
 [java] at 
org.eclipse.jetty.xml.XmlConfiguration.(XmlConfiguration.java:81)
 [java] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [java] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 [java] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 [java] at java.lang.reflect.Method.invoke(Method.java:498)
 [java] at org.eclipse.jetty.start.Main.invokeMain(Main.java:214)
 [java] at org.eclipse.jetty.start.Main.start(Main.java:457)
 [java] at org.eclipse.jetty.start.Main.main(Main.java:75)
 [java] 2017-02-17 10:54:19.382 INFO  (main) [   ] o.e.j.s.Server 
jetty-9.3.14.v20161028
 [java] 2017-02-17 10:54:19.766 ERROR (main) [   ] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
 [java] 2017-02-17 10:54:19.771 INFO  (main) [   ] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
6.5.0-SNAPSHOT a81b227cd220118db365904535bc30d4d4cbd718 - janhoy - 2017-02-17 
10:54:14
 [java] 2017-02-17 10:54:19.771 INFO  (main) [   ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in standalone mode on 
port 8983
 [java] 2017-02-17 10:54:19.771 INFO  (main) [   ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
 [java] 2017-02-17 10:54:19.801 INFO  (main) [   ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2017-02-17T09:54:19.775Z
 [java] 2017-02-17 10:54:19.819 INFO  (main) [   ] 
o.a.s.c.SolrResourceLoader Using system property solr.solr.home: 
/Users/janhoy/git/lucene-solr/solr/server/solr
 [java] 2017-02-17 10:54:19.825 INFO  (main) [   ] o.a.s.c.SolrXmlConfig 
Loading container configuration from 
/Users/janhoy/git/lucene-solr/solr/server/solr/solr.xml
 [java] 2017-02-17 10:54:20.180 INFO  (main) [   ] 
o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: 
socketTimeout=60=6=true
 [java] 2017-02-17 10:54:20.321 

[jira] [Commented] (LUCENE-7698) CommonGramsQueryFilter in the query analyzer chain breaks phrase queries

2017-02-17 Thread Ere Maijala (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871575#comment-15871575
 ] 

Ere Maijala commented on LUCENE-7698:
-

Looks to me like LUCENE-7603 broke this.

> CommonGramsQueryFilter in the query analyzer chain breaks phrase queries
> 
>
> Key: LUCENE-7698
> URL: https://issues.apache.org/jira/browse/LUCENE-7698
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4, 6.4.1
>Reporter: Ere Maijala
>  Labels: regression
>
> (Please pardon me if the project or component are wrong!)
> CommonGramsQueryFilter breaks phrase queries. The behavior also seems to 
> change with addition or removal of adjacent terms.
> Steps to reproduce:
> 1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
> 2.) Modify 
> server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
> modify text_general fieldType by adding CommonGrams(Query)Filter before 
> stopWordFilter:
>  positionIncrementGap="100">
>   
> 
>  words="stopwords.txt" />
>  words="stopwords.txt" />
> 
> 
>   
>   
> 
>  words="stopwords.txt"/>
>  words="stopwords.txt" />
>  ignoreCase="true" expand="true"/>
> 
>   
> 
> 3.) Add "with" to 
> server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and 
> make sure the file has correct line endings (extracted from Solr zip it seems 
> to contain DOS/Windows line endings which may break things).
> 4.) Run the techproducts example with "bin/solr -e techproducts"
> 5.) Browse to 
> 
> 6.) Observe that parsedquery in the debug output is empty
> 7.) Browse to 
> 
> 8.) Observe that parsedquery contains ipod_with as expected but not 
> with_video.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7603) Support Graph Token Streams in QueryBuilder

2017-02-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871563#comment-15871563
 ] 

ASF GitHub Bot commented on LUCENE-7603:


Github user EreMaijala commented on the issue:

https://github.com/apache/lucene-solr/pull/130
  
I believe this has caused https://issues.apache.org/jira/browse/LUCENE-7698.


> Support Graph Token Streams in QueryBuilder
> ---
>
> Key: LUCENE-7603
> URL: https://issues.apache.org/jira/browse/LUCENE-7603
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser, core/search
>Reporter: Matt Weber
> Fix For: master (7.0), 6.4
>
>
> With [LUCENE-6664|https://issues.apache.org/jira/browse/LUCENE-6664] we can 
> use multi-term synonyms at query time.  A "graph token stream" will be created 
> which is nothing more than using the position length attribute on 
> stacked tokens to indicate how many positions a token should span.  Currently 
> the position length attribute on tokens is ignored during query parsing.  
> This issue will add support for handling these graph token streams inside the 
> QueryBuilder utility class used by query parsers.
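For illustration, a small standalone program that prints the position length the graph-aware {{QueryBuilder}} is meant to honor (the synonym mapping, sample text and class name are invented; the filter shown is the 6.4 {{SynonymGraphFilter}} from LUCENE-6664):

{code}
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;
import org.apache.lucene.util.CharsRef;
import org.apache.lucene.util.CharsRefBuilder;

public class GraphTokenStreamDemo {
  public static void main(String[] args) throws Exception {
    // Map the single term "wifi" to the multi-term synonym "wireless network".
    SynonymMap.Builder builder = new SynonymMap.Builder(true);
    builder.add(new CharsRef("wifi"),
        SynonymMap.Builder.join(new String[] {"wireless", "network"}, new CharsRefBuilder()),
        true);
    SynonymMap synonyms = builder.build();

    WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
    tokenizer.setReader(new StringReader("turn wifi on"));
    TokenStream ts = new SynonymGraphFilter(tokenizer, synonyms, true);

    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    PositionLengthAttribute posLen = ts.addAttribute(PositionLengthAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      // The single-term side is emitted with a position length that spans the
      // stacked multi-term side -- the graph information QueryBuilder consumes.
      System.out.println(term + " posLen=" + posLen.getPositionLength());
    }
    ts.end();
    ts.close();
  }
}
{code}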



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_121) - Build # 6401 - Still Unstable!

2017-02-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6401/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas null Last available 
state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={
   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"MissingSegmentRecoveryTest_shard1_replica1",   
"base_url":"https://127.0.0.1:57160/solr;,   
"node_name":"127.0.0.1:57160_solr",   "state":"down"}, 
"core_node2":{   "core":"MissingSegmentRecoveryTest_shard1_replica2",   
"base_url":"https://127.0.0.1:57165/solr;,   
"node_name":"127.0.0.1:57165_solr",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"MissingSegmentRecoveryTest_shard1_replica1",
  "base_url":"https://127.0.0.1:57160/solr;,
  "node_name":"127.0.0.1:57160_solr",
  "state":"down"},
"core_node2":{
  "core":"MissingSegmentRecoveryTest_shard1_replica2",
  "base_url":"https://127.0.0.1:57165/solr;,
  "node_name":"127.0.0.1:57165_solr",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([A3AC04D7F106F40E:F3F99CD4A8274213]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[GitHub] lucene-solr issue #130: LUCENE-7603: branch_6x Support Graph Token Streams i...

2017-02-17 Thread EreMaijala
Github user EreMaijala commented on the issue:

https://github.com/apache/lucene-solr/pull/130
  
I believe this has caused https://issues.apache.org/jira/browse/LUCENE-7698.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7681) Remove LegacyDocValues implementations from MemoryIndex

2017-02-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871556#comment-15871556
 ] 

ASF subversion and git services commented on LUCENE-7681:
-

Commit 7a8c59dd86ae8788b61047aad7f2bc159733e604 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a8c59d ]

LUCENE-7681: Remove LegacyDocValues implementations from MemoryIndex


> Remove LegacyDocValues implementations from MemoryIndex
> ---
>
> Key: LUCENE-7681
> URL: https://issues.apache.org/jira/browse/LUCENE-7681
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE-7681.patch
>
>
> MemoryIndex in master is using the LegacyDocValue wrappers.  We should 
> replace these with plain 7.0-style iterators instead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7681) Remove LegacyDocValues implementations from MemoryIndex

2017-02-17 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-7681.
---
   Resolution: Fixed
Fix Version/s: master (7.0)

> Remove LegacyDocValues implementations from MemoryIndex
> ---
>
> Key: LUCENE-7681
> URL: https://issues.apache.org/jira/browse/LUCENE-7681
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: master (7.0)
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE-7681.patch
>
>
> MemoryIndex in master is using the LegacyDocValue wrappers.  We should 
> replace these with plain 7.0-style iterators instead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-10037) (non-original) Solr Admin UI > query tab > unexpected url above results

2017-02-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-10037.
--
   Resolution: Duplicate
Fix Version/s: (was: 6.5)
   (was: master (7.0))

Closing as duplicate; the changes from this patch are committed as a new 
SOLR-9584 commit, since 6.5 is still unreleased.

> (non-original) Solr Admin UI > query tab > unexpected url above results
> ---
>
> Key: SOLR-10037
> URL: https://issues.apache.org/jira/browse/SOLR-10037
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.5, master (7.0)
>Reporter: Christine Poerschke
>Assignee: Jan Høydahl
>Priority: Minor
> Attachments: SOLR-10037.patch, SOLR-10037.patch
>
>
> To reproduce, in a browser run a search from the query tab and then notice 
> the url shown above the results
> {code}
> # actual:   http://localhost:8983techproducts/select?indent=on&q=*:*&wt=json
> # expected: 
> http://localhost:8983/solr/techproducts/select?q=*%3A*&wt=json&indent=true
> {code}
> (We had noticed this when using the (master branch) Admin UI during the 
> [London Lucene Hackday for Full 
> Fact|https://www.meetup.com/Apache-Lucene-Solr-London-User-Group/events/236356241/]
>  on Friday. I just tried to reproduce it both on master (reproducible with the 
> non-original version only) and on branch_6_4 (not reproducible), and a search 
> for an existing open issue found no apparent match.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-02-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reopened SOLR-9584:
---

Reopening to fold in changes from SOLR-10037

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: 6.5, master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10153) UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-17 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10153:

Description: 
Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)

UnifiedSolrHighlighter should support *CustomSeparatorBreakIterator* along with 
existing ones, WholeBreakIterator etc.

  was:
Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)

UnifiedSolrHighlighter should support *CustomSeparatorBreakIterator* along with 
existing one, WholeBreakIterator.


> UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> -
>
> Key: SOLR-10153
> URL: https://issues.apache.org/jira/browse/SOLR-10153
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> UnifiedSolrHighlighter should support *CustomSeparatorBreakIterator* along 
> with existing ones, WholeBreakIterator etc.
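
For illustration, one possible way to plug a custom-separator break iterator into the Lucene-level UnifiedHighlighter, which the requested Solr wiring would sit on top of. The subclass name and separator parameter are hypothetical, and package locations are as of the 6.x highlighter module:

{code:java}
import java.text.BreakIterator;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.postingshighlight.CustomSeparatorBreakIterator;
import org.apache.lucene.search.uhighlight.UnifiedHighlighter;

// Hypothetical sketch: break passages on a custom separator character
// instead of the default sentence BreakIterator.
public class SeparatorUnifiedHighlighter extends UnifiedHighlighter {
  private final char separator;

  public SeparatorUnifiedHighlighter(IndexSearcher searcher, Analyzer analyzer, char separator) {
    super(searcher, analyzer);
    this.separator = separator;
  }

  @Override
  protected BreakIterator getBreakIterator(String field) {
    return new CustomSeparatorBreakIterator(separator);
  }
}
{code}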



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10153) UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-17 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-10153:
---

 Summary: UnifiedSolrHighlighter support for 
CustomSeparatorBreakIterator (LUCENE-6485)
 Key: SOLR-10153
 URL: https://issues.apache.org/jira/browse/SOLR-10153
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: highlighter
Reporter: Amrit Sarkar


Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)

UnifiedSolrHighlighter should support CustomSeparatorBreakIterator along with 
existing one, WholeBreakIterator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10153) UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-17 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10153:

Description: 
Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)

UnifiedSolrHighlighter should support *CustomSeparatorBreakIterator* along with 
existing one, WholeBreakIterator.

  was:
Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)

UnifiedSolrHighlighter should support CustomSeparatorBreakIterator along with 
existing one, WholeBreakIterator.


> UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> -
>
> Key: SOLR-10153
> URL: https://issues.apache.org/jira/browse/SOLR-10153
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> UnifiedSolrHighlighter should support *CustomSeparatorBreakIterator* along 
> with existing one, WholeBreakIterator.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7698) CommonGramsQueryFilter in the query analyzer chain breaks phrase queries

2017-02-17 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated LUCENE-7698:

Description: 
(Please pardon me if the project or component are wrong!)

CommonGramsQueryFilter breaks phrase queries. The behavior also seems to change 
with addition or removal of adjacent terms.

Steps to reproduce:

1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
2.) Modify 
server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
modify text_general fieldType by adding CommonGrams(Query)Filter before 
stopWordFilter:

<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.CommonGramsFilterFactory" words="stopwords.txt"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.CommonGramsQueryFilterFactory" words="stopwords.txt"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

3.) Add "with" to 
server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and make 
sure the file has correct line endings (when extracted from the Solr zip it 
seems to contain DOS/Windows line endings, which may break things).

4.) Run the techproducts example with "bin/solr -e techproducts"

5.) Browse to 


6.) Observe that parsedquery in the debug output is empty

7.) Browse to 


8.) Observe that parsedquery contains ipod_with as expected but not with_video.

  was:
CommonGramsQueryFilter breaks phrase queries. The behavior also seems to change 
with addition or removal of adjacent terms.

Steps to reproduce:

1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
2.) Modify 
server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
modify text_general fieldType by adding CommonGrams(Query)Filter before 
stopWordFilter:

<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.CommonGramsFilterFactory" words="stopwords.txt"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.CommonGramsQueryFilterFactory" words="stopwords.txt"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

3.) Add "with" to 
server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and make 
sure the file has correct line endings (when extracted from the Solr zip it 
seems to contain DOS/Windows line endings, which may break things).

4.) Run the techproducts example with "bin/solr -e techproducts"

5.) Browse to 


6.) Observe that parsedquery in the debug output is empty

7.) Browse to 


8.) Observe that parsedquery contains ipod_with as expected but not with_video.


> CommonGramsQueryFilter in the query analyzer chain breaks phrase queries
> 
>
> Key: LUCENE-7698
> URL: https://issues.apache.org/jira/browse/LUCENE-7698
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4, 6.4.1
>Reporter: Ere Maijala
>  Labels: regression
>
> (Please pardon me if the project or component are wrong!)
> CommonGramsQueryFilter breaks phrase queries. The behavior also seems to 
> change with addition or removal of adjacent terms.
> Steps to reproduce:
> 1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
> 2.) Modify 
> server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
> modify text_general fieldType by adding CommonGrams(Query)Filter before 
> stopWordFilter:
> <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
>   <analyzer type="index">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.CommonGramsFilterFactory" words="stopwords.txt"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
>   <analyzer type="query">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.CommonGramsQueryFilterFactory" words="stopwords.txt"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> 3.) Add "with" to 
> server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and 
> make sure the file has correct line endings (when extracted from the Solr zip it 
> seems to contain DOS/Windows line endings, which may break things).
> 4.) Run the techproducts example with "bin/solr -e techproducts"
> 5.) Browse to 
> 
> 6.) Observe that parsedquery in the debug output is empty
> 7.) Browse to 
> 
> 8.) Observe that parsedquery contains ipod_with as expected but not 
> with_video.
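
For anyone who wants to inspect the token output outside Solr, here is a minimal, self-contained sketch (not part of the report) of the query-side CommonGrams chain described above, run on the phrase "ipod with video". It omits the stop filter for brevity, and package locations follow the Lucene 6.x analysis API.

{code:java}
import java.util.Arrays;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.commongrams.CommonGramsFilter;
import org.apache.lucene.analysis.commongrams.CommonGramsQueryFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.util.CharArraySet;

public class CommonGramsQueryRepro {
  public static void main(String[] args) throws Exception {
    // "with" plays the role of the common/stop word added to stopwords.txt.
    CharArraySet common = new CharArraySet(Arrays.asList("with"), true);
    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new WhitespaceTokenizer();
        TokenStream sink = new CommonGramsQueryFilter(new CommonGramsFilter(source, common));
        return new TokenStreamComponents(source, sink);
      }
    };
    // Print each emitted term with its position increment to see how the
    // common-gram tokens line up for a phrase query.
    try (TokenStream ts = analyzer.tokenStream("text", "ipod with video")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      PositionIncrementAttribute posInc = ts.addAttribute(PositionIncrementAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        System.out.println(term + " (posInc=" + posInc.getPositionIncrement() + ")");
      }
      ts.end();
    }
  }
}
{code}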



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7698) CommonGramsQueryFilter in the query analyzer chain breaks phrase queries

2017-02-17 Thread Ere Maijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ere Maijala updated LUCENE-7698:

Labels: regression  (was: )

> CommonGramsQueryFilter in the query analyzer chain breaks phrase queries
> 
>
> Key: LUCENE-7698
> URL: https://issues.apache.org/jira/browse/LUCENE-7698
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4, 6.4.1
>Reporter: Ere Maijala
>  Labels: regression
>
> CommonGramsQueryFilter breaks phrase queries. The behavior also seems to 
> change with addition or removal of adjacent terms.
> Steps to reproduce:
> 1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
> 2.) Modify 
> server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
> modify text_general fieldType by adding CommonGrams(Query)Filter before 
> stopWordFilter:
> <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
>   <analyzer type="index">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.CommonGramsFilterFactory" words="stopwords.txt"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
>   <analyzer type="query">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.CommonGramsQueryFilterFactory" words="stopwords.txt"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> 3.) Add "with" to 
> server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and 
> make sure the file has correct line endings (when extracted from the Solr zip it 
> seems to contain DOS/Windows line endings, which may break things).
> 4.) Run the techproducts example with "bin/solr -e techproducts"
> 5.) Browse to 
> 
> 6.) Observe that parsedquery in the debug output is empty
> 7.) Browse to 
> 
> 8.) Observe that parsedquery contains ipod_with as expected but not 
> with_video.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


