[jira] [Updated] (LUCENE-7698) CommonGramsQueryFilter in the query analyzer chain breaks phrase queries

2017-02-21 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated LUCENE-7698:
-
Priority: Blocker  (was: Major)

> CommonGramsQueryFilter in the query analyzer chain breaks phrase queries
> 
>
> Key: LUCENE-7698
> URL: https://issues.apache.org/jira/browse/LUCENE-7698
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4, 6.4.1
>Reporter: Ere Maijala
>Priority: Blocker
>  Labels: regression
> Fix For: master (7.0), 6.4.2
>
> Attachments: LUCENE-7698.patch
>
>
> (Please pardon me if the project or component are wrong!)
> CommonGramsQueryFilter breaks phrase queries. The behavior also seems to 
> change with addition or removal of adjacent terms.
> Steps to reproduce:
> 1.) Download and extract Solr (in my test case version 6.4.1) somewhere.
> 2.) Edit 
> server/solr/configsets/sample_techproducts_configs/conf/managed-schema and 
> modify the text_general fieldType by adding CommonGrams(Query)Filter before 
> the stop filter:
> <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
>   <analyzer type="index">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.CommonGramsFilterFactory" ignoreCase="true" words="stopwords.txt" />
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
>   <analyzer type="query">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.CommonGramsQueryFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> 3.) Add "with" to 
> server/solr/configsets/sample_techproducts_configs/conf/stopwords.txt and 
> make sure the file has correct line endings (as extracted from the Solr zip it 
> seems to contain DOS/Windows line endings, which may break things).
> 4.) Run the techproducts example with "bin/solr -e techproducts"
> 5.) Browse to 
> 
> 6.) Observe that parsedquery in the debug output is empty
> 7.) Browse to 
> 
> 8.) Observe that parsedquery contains ipod_with as expected but not 
> with_video.
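
For background, the query-side common-grams transformation can be modeled with a short Python sketch. This is a simplified re-implementation for illustration only, not Lucene's CommonGramsQueryFilter itself; the reporter's expectation is that "ipod with video" yields both ipod_with and with_video, while the affected 6.4.x versions drop with_video:

```python
def common_grams_query(tokens, common_words):
    """Simplified model of query-side common-grams behavior: emit a
    word_word bigram for each adjacent pair containing a common word, and
    drop the unigrams that the bigrams cover. Illustrative only."""
    # Which positions start a bigram (pair containing a common word)?
    bigram_at = [
        i + 1 < len(tokens)
        and (tokens[i] in common_words or tokens[i + 1] in common_words)
        for i in range(len(tokens))
    ]
    out = []
    for i, tok in enumerate(tokens):
        if bigram_at[i]:
            out.append(f"{tok}_{tokens[i + 1]}")
        # Keep the unigram only if it is not a common word and is not
        # covered by a bigram on either side.
        covered = bigram_at[i] or (i > 0 and bigram_at[i - 1])
        if tok not in common_words and not covered:
            out.append(tok)
    return out

print(common_grams_query(["ipod", "with", "video"], {"with"}))
# → ['ipod_with', 'with_video']
print(common_grams_query(["the", "quick", "brown", "fox"], {"the"}))
# → ['the_quick', 'brown', 'fox']
```

The second example mirrors the classic common-grams illustration; the first shows the output the bug report expects for the techproducts query.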



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.4.2 release?

2017-02-21 Thread Ishan Chattopadhyaya
Done.

On Wed, Feb 22, 2017 at 12:04 PM, Ere Maijala 
wrote:

> Please make LUCENE-7698 a blocker if possible. It's a regression that
> makes Solr pretty much useless for anyone with CommonGramsQueryFilter in
> the analysis chain.
>
> --Ere
>
> On 21.2.2017 at 21.46, Ishan Chattopadhyaya wrote:
>
>> Actually, LUCENE-7698 was not a blocker, just marked for a 6.4.2
>> release. Should we make it a blocker?
>> As per an offline discussion with Andrzej, I've added SOLR-10182 as a
>> blocker. Tentatively, I'll cut a RC for 6.4.2 by Tuesday.
>>
>> On Tue, Feb 21, 2017 at 11:35 PM, Ishan Chattopadhyaya
>> > wrote:
>>
>> I would like to volunteer for this 6.4.2 release. Planning to cut a
>> RC as soon as blockers are resolved.
>> One of the unresolved blocker issues seems to be LUCENE-7698 (I'll
>> take a look to see if there are more). If there are more issues that
>> should be part of the release, please let me know or mark as
>> blockers in jira.
>>
>> Thanks,
>> Ishan
>>
>>
>> On Thu, Feb 16, 2017 at 3:48 AM, Adrien Grand > > wrote:
>>
>> I had initially planned on releasing tomorrow but the mirrors
>> replicated faster than I had thought they would so I finished
>> the release today, including the addition of the new 5.5.4
>> indices for backward testing so I am good with proceeding with a
>> new release now.
>>
>> On Wed, Feb 15, 2017 at 16:13, Adrien Grand wrote:
>>
>> +1
>>
>> One ask I have is to wait for the 5.5.4 release process to
>> be complete so that branch_6_4 has the 5.5.4 backward
>> indices when we cut the first RC. I will let you know when I
>> am done.
>>
>> On Wed, Feb 15, 2017 at 15:53, Christine Poerschke
>> (BLOOMBERG/ LONDON) wrote:
>>
>> Hi,
>>
>> These two could be minor candidates for inclusion:
>>
>> * https://issues.apache.org/jira/browse/SOLR-10083
>> 
>> Fix instanceof check in ConstDoubleSource.equals
>>
>> * https://issues.apache.org/jira/browse/LUCENE-7676
>> 
>> FilterCodecReader to override more super-class methods
>>
>> The former had narrowly missed the 6.4.1 release.
>>
>> Regards,
>>
>> Christine
>>
>> From: dev@lucene.apache.org
>>  At: 02/15/17 14:27:52
>> To: dev@lucene.apache.org
>> Subject: Re:6.4.2 release?
>>
>> Hi devs,
>>
>> These two issues seem serious enough to warrant a
>> new release from branch_6_4:
>> * SOLR-10130: Serious performance degradation in
>> Solr 6.4.1 due to the new metrics collection
>> * SOLR-10138: Transaction log replay can hit an NPE
>> due to new Metrics code.
>>
>> What do you think? Anything else that should go there?
>>
>> ---
>> Best regards,
>>
>> Andrzej Bialecki
>>
>>
>>
>>
> --
> Ere Maijala
> Kansalliskirjasto / The National Library of Finland
>
>
>


[jira] [Updated] (SOLR-8045) Deploy Solr in ROOT (/) path

2017-02-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8045:
---
Attachment: SOLR-8045.patch

Update the patch to make it more compact.

> Deploy Solr in ROOT (/) path 
> -
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path used to access Solr will change. All paths 
> will remain as-is and will behave exactly the same.






[jira] [Commented] (SOLR-10187) Solr streaming expression for cluster

2017-02-21 Thread Arun Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877643#comment-15877643
 ] 

Arun Kumar commented on SOLR-10187:
---

Thanks Amrit. Waiting for SOLR-9955.
For faceting, we need to use it along with the highlight component, so I'm 
seeking support for highlighting in streaming expressions as well. The existing 
facet stream does not support facet ranges.

> Solr streaming expression for cluster
> -
>
> Key: SOLR-10187
> URL: https://issues.apache.org/jira/browse/SOLR-10187
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - Clustering
>Affects Versions: 6.3
>Reporter: Arun Kumar
>  Labels: features
>
> The solr streaming expression is fast enough to handle multiple queries, but 
> most of the use cases are not just select queries; rather, they are combined 
> with either a clustering query or a facet query. It would be nice to have the 
> streaming expression support clustering and facet queries so that we can make 
> use of the worker nodes for such queries. There won't be any aggregation here, 
> just inclusion of the clusters and facets in the response.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3848 - Still Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3848/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Collection not found: withShardField

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: withShardField
at 
__randomizedtesting.SeedInfo.seed([30BA71C9F46E3A14:65EA995B5897F5E4]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1376)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1072)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:232)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

[jira] [Commented] (SOLR-10187) Solr streaming expression for cluster

2017-02-21 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877622#comment-15877622
 ] 

Amrit Sarkar commented on SOLR-10187:
-

Arun,

The facet stream is already there: [facet 
source|https://cwiki.apache.org/confluence/display/solr/Streaming+Expressions#StreamingExpressions-facet].
 You can wrap it in a parallel stream to send the queries to the 
worker nodes.

Meanwhile, the cluster stream is a work in progress: SOLR-9955.
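
For illustration, a facet stream of the kind referenced above might look like the following (collection and field names are hypothetical; syntax per the Solr Streaming Expressions documentation):

```
facet(techproducts,
      q="*:*",
      buckets="manu_id_s",
      bucketSorts="count(*) desc",
      bucketSizeLimit=10,
      count(*))
```

This returns the top facet buckets for the hypothetical manu_id_s field as tuples, which downstream streaming decorators can then consume.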

> Solr streaming expression for cluster
> -
>
> Key: SOLR-10187
> URL: https://issues.apache.org/jira/browse/SOLR-10187
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - Clustering
>Affects Versions: 6.3
>Reporter: Arun Kumar
>  Labels: features
>
> The solr streaming expression is fast enough to handle multiple queries, but 
> most of the use cases are not just select queries; rather, they are combined 
> with either a clustering query or a facet query. It would be nice to have the 
> streaming expression support clustering and facet queries so that we can make 
> use of the worker nodes for such queries. There won't be any aggregation here, 
> just inclusion of the clusters and facets in the response.






Re: 6.4.2 release?

2017-02-21 Thread Ere Maijala
Please make LUCENE-7698 a blocker if possible. It's a regression that 
makes Solr pretty much useless for anyone with CommonGramsQueryFilter in 
the analysis chain.


--Ere

On 21.2.2017 at 21.46, Ishan Chattopadhyaya wrote:

Actually, LUCENE-7698 was not a blocker, just marked for a 6.4.2
release. Should we make it a blocker?
As per an offline discussion with Andrzej, I've added SOLR-10182 as a
blocker. Tentatively, I'll cut a RC for 6.4.2 by Tuesday.

On Tue, Feb 21, 2017 at 11:35 PM, Ishan Chattopadhyaya
> wrote:

I would like to volunteer for this 6.4.2 release. Planning to cut a
RC as soon as blockers are resolved.
One of the unresolved blocker issues seems to be LUCENE-7698 (I'll
take a look to see if there are more). If there are more issues that
should be part of the release, please let me know or mark as
blockers in jira.

Thanks,
Ishan


On Thu, Feb 16, 2017 at 3:48 AM, Adrien Grand > wrote:

I had initially planned on releasing tomorrow but the mirrors
replicated faster than I had thought they would so I finished
the release today, including the addition of the new 5.5.4
indices for backward testing so I am good with proceeding with a
new release now.

On Wed, Feb 15, 2017 at 16:13, Adrien Grand wrote:

+1

One ask I have is to wait for the 5.5.4 release process to
be complete so that branch_6_4 has the 5.5.4 backward
indices when we cut the first RC. I will let you know when I
am done.

On Wed, Feb 15, 2017 at 15:53, Christine Poerschke
(BLOOMBERG/ LONDON) wrote:

Hi,

These two could be minor candidates for inclusion:

* https://issues.apache.org/jira/browse/SOLR-10083

Fix instanceof check in ConstDoubleSource.equals

* https://issues.apache.org/jira/browse/LUCENE-7676

FilterCodecReader to override more super-class methods

The former had narrowly missed the 6.4.1 release.

Regards,

Christine

From: dev@lucene.apache.org
 At: 02/15/17 14:27:52
To: dev@lucene.apache.org
Subject: Re:6.4.2 release?

Hi devs,

These two issues seem serious enough to warrant a
new release from branch_6_4:
* SOLR-10130: Serious performance degradation in
Solr 6.4.1 due to the new metrics collection
* SOLR-10138: Transaction log replay can hit an NPE
due to new Metrics code.

What do you think? Anything else that should go there?

---
Best regards,

Andrzej Bialecki





--
Ere Maijala
Kansalliskirjasto / The National Library of Finland




[jira] [Created] (SOLR-10187) Solr streaming expression for cluster

2017-02-21 Thread Arun Kumar (JIRA)
Arun Kumar created SOLR-10187:
-

 Summary: Solr streaming expression for cluster
 Key: SOLR-10187
 URL: https://issues.apache.org/jira/browse/SOLR-10187
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: contrib - Clustering
Affects Versions: 6.3
Reporter: Arun Kumar


The solr streaming expression is fast enough to handle multiple queries, but 
most of the use cases are not just select queries; rather, they are combined 
with either a clustering query or a facet query. It would be nice to have the 
streaming expression support clustering and facet queries so that we can make use 
of the worker nodes for such queries. There won't be any aggregation here, just 
inclusion of the clusters and facets in the response.






[jira] [Resolved] (SOLR-10020) CoreAdminHandler silently swallows some errors

2017-02-21 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-10020.
---
   Resolution: Fixed
Fix Version/s: 6.5
   trunk

[~markrmil...@gmail.com][~yo...@apache.org][~ichattopadhyaya] I had some merge 
issues when I merged CHANGES.txt from trunk to 6x for this JIRA. It seems that 
some entries were moved around in trunk (SOLR-10114) in CHANGES.txt but were not 
merged into the 6x version.

The changes from around SOLR-10114 through "optimizations" for the 6.5 version 
of solr/CHANGES.txt, where I had the unexpected conflict, look OK to me; this 
is just a heads-up in case I messed up the merge and you want to take a look.

> CoreAdminHandler silently swallows some errors
> --
>
> Key: SOLR-10020
> URL: https://issues.apache.org/jira/browse/SOLR-10020
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: trunk, 6.5
>
> Attachments: SOLR-10020.patch, SOLR-10020.patch, SOLR-10020.patch, 
> SOLR-10020.patch
>
>
> With the setup on SOLR-10006, after removing some index files and starting 
> that Solr instance I tried issuing a REQUESTRECOVERY command and it came back 
> as a success even though nothing actually happened. When the core is 
> accessed, a core init exception is returned by subsequent calls to getCore(). 
> There is no catch block after the try so no error is returned.
> Looking through the code I see several other commands that have a similar 
> pattern:
>  FORCEPREPAREFORLEADERSHIP_OP
> LISTSNAPSHOTS_OP
> getCoreStatus
> and perhaps others. getCore() can throw an exception; about the only explicit 
> one it throws is when the core has an initialization error.
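
The failure mode described above can be sketched generically. This is a toy Python model with hypothetical names (Container, recover), not Solr's actual code:

```python
class Container:
    """Toy stand-in for Solr's CoreContainer; broken or missing cores
    are simply absent from the map."""
    def __init__(self):
        self.cores = {}  # name -> core object

def request_recovery_buggy(container, name):
    """The reported pattern: when the core lookup fails, nothing is added
    to the response, so the caller still sees success."""
    response = {"status": "success"}
    core = container.cores.get(name)  # None if the core is missing/broken
    if core is not None:
        core.recover()
    # no error branch: a missing or broken core silently 'succeeds'
    return response

def request_recovery_fixed(container, name):
    """Surface the failure instead of swallowing it."""
    core = container.cores.get(name)
    if core is None:
        return {"status": "error",
                "message": f"Unable to locate core {name}"}
    core.recover()
    return {"status": "success"}
```

With an empty container, the buggy handler reports success for a nonexistent core while the fixed one returns an error, which is the behavioral change the issue asks for.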






[jira] [Commented] (SOLR-10020) CoreAdminHandler silently swallows some errors

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877544#comment-15877544
 ] 

ASF subversion and git services commented on SOLR-10020:


Commit f5ea2022097503df4ed62e59f7d1cb061c8266ee in lucene-solr's branch 
refs/heads/branch_6x from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f5ea202 ]

SOLR-10020: CoreAdminHandler silently swallows some errors

(cherry picked from commit 14b3622608d3312eca32ba749132ce2f8531326a)


> CoreAdminHandler silently swallows some errors
> --
>
> Key: SOLR-10020
> URL: https://issues.apache.org/jira/browse/SOLR-10020
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10020.patch, SOLR-10020.patch, SOLR-10020.patch, 
> SOLR-10020.patch
>
>
> With the setup on SOLR-10006, after removing some index files and starting 
> that Solr instance I tried issuing a REQUESTRECOVERY command and it came back 
> as a success even though nothing actually happened. When the core is 
> accessed, a core init exception is returned by subsequent calls to getCore(). 
> There is no catch block after the try so no error is returned.
> Looking through the code I see several other commands that have a similar 
> pattern:
>  FORCEPREPAREFORLEADERSHIP_OP
> LISTSNAPSHOTS_OP
> getCoreStatus
> and perhaps others. getCore() can throw an exception; about the only explicit 
> one it throws is when the core has an initialization error.






[jira] [Commented] (SOLR-10020) CoreAdminHandler silently swallows some errors

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877538#comment-15877538
 ] 

ASF subversion and git services commented on SOLR-10020:


Commit 14b3622608d3312eca32ba749132ce2f8531326a in lucene-solr's branch 
refs/heads/master from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=14b3622 ]

SOLR-10020: CoreAdminHandler silently swallows some errors


> CoreAdminHandler silently swallows some errors
> --
>
> Key: SOLR-10020
> URL: https://issues.apache.org/jira/browse/SOLR-10020
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10020.patch, SOLR-10020.patch, SOLR-10020.patch, 
> SOLR-10020.patch
>
>
> With the setup on SOLR-10006, after removing some index files and starting 
> that Solr instance I tried issuing a REQUESTRECOVERY command and it came back 
> as a success even though nothing actually happened. When the core is 
> accessed, a core init exception is returned by subsequent calls to getCore(). 
> There is no catch block after the try so no error is returned.
> Looking through the code I see several other commands that have a similar 
> pattern:
>  FORCEPREPAREFORLEADERSHIP_OP
> LISTSNAPSHOTS_OP
> getCoreStatus
> and perhaps others. getCore() can throw an exception; about the only explicit 
> one it throws is when the core has an initialization error.






[jira] [Updated] (SOLR-10020) CoreAdminHandler silently swallows some errors

2017-02-21 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10020:
--
Attachment: SOLR-10020.patch

Same patch with CHANGES attribution.

> CoreAdminHandler silently swallows some errors
> --
>
> Key: SOLR-10020
> URL: https://issues.apache.org/jira/browse/SOLR-10020
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10020.patch, SOLR-10020.patch, SOLR-10020.patch, 
> SOLR-10020.patch
>
>
> With the setup on SOLR-10006, after removing some index files and starting 
> that Solr instance I tried issuing a REQUESTRECOVERY command and it came back 
> as a success even though nothing actually happened. When the core is 
> accessed, a core init exception is returned by subsequent calls to getCore(). 
> There is no catch block after the try so no error is returned.
> Looking through the code I see several other commands that have a similar 
> pattern:
>  FORCEPREPAREFORLEADERSHIP_OP
> LISTSNAPSHOTS_OP
> getCoreStatus
> and perhaps others. getCore() can throw an exception; about the only explicit 
> one it throws is when the core has an initialization error.






[jira] [Commented] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877536#comment-15877536
 ] 

David Smiley commented on SOLR-10186:
-

Why is this filed in Solr?  KeywordTokenizer is in Lucene.

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
>
> Is there a good reason that we hard-code a 256-character limit for 
> CharTokenizer? To change this limit, people have to copy/paste 
> incrementToken into some new class, since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?
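
The kind of configurability being requested can be sketched with a toy tokenizer. This is an illustrative Python model, not Lucene's CharTokenizer; it only shows the effect of making the limit a parameter, where today's fixed 256-char buffer splits overlong runs at the limit:

```python
def whitespace_tokenize(text, max_token_len=256):
    """Toy whitespace tokenizer with a configurable max token length:
    runs longer than the limit are split at the limit, mirroring how a
    fixed-size token buffer splits overlong tokens today."""
    tokens = []
    for word in text.split():
        for i in range(0, len(word), max_token_len):
            tokens.append(word[i:i + max_token_len])
    return tokens

print(whitespace_tokenize("abcdefgh ij", max_token_len=4))
# → ['abcd', 'efgh', 'ij']
```

The proposal amounts to exposing max_token_len in the schema (via the factory) instead of hard-coding it in the tokenizer class.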






[jira] [Commented] (SOLR-10152) PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-21 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877522#comment-15877522
 ] 

Amrit Sarkar commented on SOLR-10152:
-

Mr. Smiley,

The former. It was really straightforward to configure 
CustomSeparatorBreakIterator in PostingsSolrHighlighter. I understand 
UnifiedSolrHighlighter is the most flexible in terms of configuration 
compared to the other three available (including the default). As 
PostingsSolrHighlighter is the ancestor, I thought it would be better if this 
particular configuration were also backported. I will let you and others in the 
community decide if it should be. Thank you for your feedback.

> PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> --
>
> Key: SOLR-10152
> URL: https://issues.apache.org/jira/browse/SOLR-10152
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
> Attachments: SOLR-10152.patch
>
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> SOLR-10152.patch uploaded which incorporates CustomSeparatorBreakIterator in 
> PostingsSolrHighlighter.
> - added a new request param option to specify which separator char to use. 
> *customSeparatorChar*.
> - changed PostingsSolrHighlighter.getBreakIterator to check 
> HighlightParams.BS_TYPE first.
> - if type == 'CUSTOM', look for the new separator param in getBreakIterator, 
> validate that it's a single char, and skip locale parsing.
> - 'WHOLE' option moved from parseBreakIterator to getBreakIterator, as it 
> doesn't depend on locale.
> Changes made in:
> * HighlightParams.java
> * PostingsSolrHighlighter.java
> * test cases added in TestPostingsSolrHighlighter
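
As a sketch, a highlighting request using the proposed options might carry parameters like the following. The param spellings are inferred from the patch notes above and may differ from the final patch; field and query values are hypothetical:

```
q=text:solr&hl=true&hl.fl=text
&hl.bs.type=CUSTOM            # selects the custom-separator break iterator
&hl.customSeparatorChar=|     # proposed param; must be a single character
```

Per the description, WHOLE would keep working without locale parsing, and CUSTOM would fail validation if the separator is not exactly one character.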






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+155) - Build # 2914 - Still Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2914/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'P val' for path 'response/params/y/p' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":2, "params":{   "x":{ "a":"A val", 
"b":"B val", "":{"v":0}},   "y":{ "c":"CY val modified",
 "b":"BY val", "i":20, "d":[   "val 1",   
"val 2"], "e":"EY val", "":{"v":1},  from server:  
https://127.0.0.1:35831/solr/collection1_shard1_replica2

Stack Trace:
java.lang.AssertionError: Could not get expected value  'P val' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":2,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"":{"v":0}},
  "y":{
"c":"CY val modified",
"b":"BY val",
"i":20,
"d":[
  "val 1",
  "val 2"],
"e":"EY val",
"":{"v":1},  from server:  
https://127.0.0.1:35831/solr/collection1_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([AD0568FF75A78283:25515725DB5BEF7B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:245)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:69)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-10152) PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877494#comment-15877494
 ] 

David Smiley commented on SOLR-10152:
-

+1 looks fine.  Did you contribute this simply because, after having done the 
UnifiedHighlighter, this was easy (it's almost the same code, so might as 
well), or do you actually use the PostingsHighlighter over the 
UnifiedHighlighter?  If the latter, I'd like to hear how the UH isn't meeting 
your needs.  The UnifiedHighlighter is essentially an evolved version of the 
PostingsHighlighter.

> PostingsSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> --
>
> Key: SOLR-10152
> URL: https://issues.apache.org/jira/browse/SOLR-10152
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
> Attachments: SOLR-10152.patch
>
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> SOLR-10152.patch uploaded which incorporates CustomSeparatorBreakIterator in 
> PostingsSolrHighlighter.
> - added a new request param option to specify which separator char to use. 
> *customSeparatorChar*.
> - changed PostingsSolrHighlighter.getBreakIterator to check 
> HighlightParams.BS_TYPE first.
> - if type=='CUSTOM', look for the new separator param, in getBreakIterator, 
> validate it's a single char, & skip locale parsing.
> - 'WHOLE' option moved from parseBreakIterator to getBreakIterator, as it 
> doesn't depend on locale.
> Changes made in:
> * HighlightParams.java
> * PostingsSolrHighlighter.java
> * test cases added in TestPostingsSolrHighlighter
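The break-on-a-configured-separator behavior described above can be illustrated with a self-contained sketch. This is not Lucene's CustomSeparatorBreakIterator (which implements java.text.BreakIterator and is driven by the highlighter); the class and method names below are hypothetical, and only the break-after-separator behavior is modeled:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: shows the passage-splitting effect a
// separator-based break iterator gives the highlighter. A break
// boundary falls immediately after each occurrence of the separator,
// so the separator stays attached to the preceding passage.
public class SeparatorPassages {
    public static List<String> passages(String text, char separator) {
        List<String> out = new ArrayList<>();
        int start = 0;
        for (int i = 0; i < text.length(); i++) {
            if (text.charAt(i) == separator) {
                out.add(text.substring(start, i + 1));
                start = i + 1;
            }
        }
        if (start < text.length()) {
            out.add(text.substring(start));
        }
        return out;
    }

    public static void main(String[] args) {
        // Splitting on U+001E (record separator), a common choice for
        // breaking a stored field into highlightable passages.
        System.out.println(passages("first\u001Esecond\u001Ethird", '\u001E'));
    }
}
```

The new request param would simply select this iterator type and supply the separator char; validating that the supplied value is a single char (as the patch does) keeps the boundary semantics unambiguous.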



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_121) - Build # 743 - Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/743/
Java: 64bit/jdk1.8.0_121 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.core.TestLazyCores.testNoCommit

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([7C0CC0B2D7A56A04:A36C61631C8209A1]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:918)
at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:794)
at 
org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:776)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound='10']
xml response was: 



  0
  1
  
*:*
  





request was:q=*:*
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:911)
... 41 more


FAILED:  

[jira] [Created] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-21 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-10186:
-

 Summary: Allow CharTokenizer-derived tokenizers and 
KeywordTokenizer to configure the max token length
 Key: SOLR-10186
 URL: https://issues.apache.org/jira/browse/SOLR-10186
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson
Priority: Minor


Is there a good reason that we hard-code a 256 character limit for the 
CharTokenizer? In order to change this limit it requires that people copy/paste 
the incrementToken into some new class since incrementToken is final.

KeywordTokenizer can easily change the default (which is also 256 bytes), but 
to do so requires code rather than being able to configure it in the schema.

For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
(WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
(Factories) it would take adding a c'tor to the base class in Lucene and using 
it in the factory.

Any objections?
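For illustration, a configurable limit along the lines proposed might look like this minimal, self-contained sketch. This is not Lucene's CharTokenizer (which is a TokenStream with a final incrementToken); the class name is hypothetical and the buffer-flush behavior is simplified, but it shows the effect of making the hard-coded 256 a constructor parameter:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a whitespace CharTokenizer whose maximum
// token length is a constructor parameter instead of a hard-coded 256.
public class MaxLenWhitespaceTokenizer {
    private final int maxTokenLen;

    public MaxLenWhitespaceTokenizer(int maxTokenLen) {
        this.maxTokenLen = maxTokenLen;
    }

    // Tokens longer than maxTokenLen are flushed and continue as a new
    // token, mirroring a full term buffer being emitted.
    public List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (Character.isWhitespace(c)) {
                if (current.length() > 0) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else {
                current.append(c);
                if (current.length() == maxTokenLen) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            }
        }
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }
}
```

A schema-level knob (e.g. a maxTokenLen attribute on the factory) would then just pass this value through the new base-class constructor.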






[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+155) - Build # 19022 - Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19022/
Java: 32bit/jdk-9-ea+155 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv

Error Message:
java.lang.RuntimeException: Error from server at 
http://127.0.0.1:44122/solr/test_col: Async exception during distributed 
update: Error from server at 
http://127.0.0.1:35815/solr/test_col_shard1_replica1: Server Error request: 
http://127.0.0.1:35815/solr/test_col_shard1_replica1/update?update.distrib=TOLEADER=http%3A%2F%2F127.0.0.1%3A44122%2Fsolr%2Ftest_col_shard1_replica2%2F=javabin=2
 Remote error message: Failed synchronous update on shard StdNode: 
http://127.0.0.1:44122/solr/test_col_shard1_replica2/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@e77683

Stack Trace:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error from 
server at http://127.0.0.1:44122/solr/test_col: Async exception during 
distributed update: Error from server at 
http://127.0.0.1:35815/solr/test_col_shard1_replica1: Server Error



request: 
http://127.0.0.1:35815/solr/test_col_shard1_replica1/update?update.distrib=TOLEADER=http%3A%2F%2F127.0.0.1%3A44122%2Fsolr%2Ftest_col_shard1_replica2%2F=javabin=2
Remote error message: Failed synchronous update on shard StdNode: 
http://127.0.0.1:44122/solr/test_col_shard1_replica2/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@e77683
at 
__randomizedtesting.SeedInfo.seed([4DA757C6C6058F29:7BB335804C58B538]:0)
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:281)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv(TestStressCloudBlindAtomicUpdates.java:193)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Comment Edited] (SOLR-10155) Clarify logic for term filters on numeric types

2017-02-21 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877413#comment-15877413
 ] 

Gus Heck edited comment on SOLR-10155 at 2/22/17 3:42 AM:
--

I think the pattern actually started with facet.prefix at the time DocValues 
was added by [~jpountz]  in SOLR-3855 in 2013...

https://github.com/apache/lucene-solr/commit/e61398084d3f1ca0f28c5c35d3318645d7a401ec#diff-5ac9dc7b128b4dd99b764060759222b2R381

The only question I have is whether there's a use case for passing blanks 
through... perhaps some situation in which facet.prefix or facet.contains would 
be robotically added and supplying a blank is the means of "turning it off" 
without blowing up? Maybe some component might do such a thing?




was (Author: gus_heck):
I think the pattern actually started with facet.prefix at the time DocValues 
was added by [~jpountz]  in Solr-3855 in 2013...

https://github.com/apache/lucene-solr/commit/e61398084d3f1ca0f28c5c35d3318645d7a401ec#diff-5ac9dc7b128b4dd99b764060759222b2R381

The only question I have is whether there's a use case for passing blanks 
through... perhaps some situation in which facet.prefix or facet.contains would 
be robotically added and supplying a blank is the means of "turning it off" 
without blowing up? Maybe some component might do such a thing?



> Clarify logic for term filters on numeric types
> ---
>
> Key: SOLR-10155
> URL: https://issues.apache.org/jira/browse/SOLR-10155
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 6.4.1
>Reporter: Gus Heck
>Priority: Minor
> Attachments: SOLR-10155.patch
>
>
> The following code has been found to be confusing to multiple folks working 
> in SimpleFacets.java (see SOLR-10132)
> {code}
> if (termFilter != null) {
>   // TODO: understand this logic... what is the case for 
> supporting an empty string
>   // for contains on numeric facets? What does that achieve?
>   // The exception message is misleading in the case of an 
> excludeTerms filter in any case...
>   // Also maybe vulnerable to NPE on isEmpty test?
>   final boolean supportedOperation = (termFilter instanceof 
> SubstringBytesRefFilter) && ((SubstringBytesRefFilter) 
> termFilter).substring().isEmpty();
>   if (!supportedOperation) {
> throw new SolrException(ErrorCode.BAD_REQUEST, 
> FacetParams.FACET_CONTAINS + " is not supported on numeric types");
>   }
> }
> {code}
> This is found around line 482 or so. The comment in the code above is mine, 
> and won't be found in the codebase. This ticket can be resolved by 
> eliminating the complex check and just denying all termFilters with a better 
> exception message not specific to contains filters (and perhaps consolidated 
> with the preceding check for prefix filters?), or adding a comment to 
> the code base explaining why we need to allow a term filter with an empty, 
> non-null string to be processed, and why this isn't an NPE waiting to happen.






[jira] [Commented] (SOLR-10155) Clarify logic for term filters on numeric types

2017-02-21 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877413#comment-15877413
 ] 

Gus Heck commented on SOLR-10155:
-

I think the pattern actually started with facet.prefix at the time DocValues 
was added by [~jpountz]  in SOLR-3855 in 2013...

https://github.com/apache/lucene-solr/commit/e61398084d3f1ca0f28c5c35d3318645d7a401ec#diff-5ac9dc7b128b4dd99b764060759222b2R381

The only question I have is whether there's a use case for passing blanks 
through... perhaps some situation in which facet.prefix or facet.contains would 
be robotically added and supplying a blank is the means of "turning it off" 
without blowing up? Maybe some component might do such a thing?



> Clarify logic for term filters on numeric types
> ---
>
> Key: SOLR-10155
> URL: https://issues.apache.org/jira/browse/SOLR-10155
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 6.4.1
>Reporter: Gus Heck
>Priority: Minor
> Attachments: SOLR-10155.patch
>
>
> The following code has been found to be confusing to multiple folks working 
> in SimpleFacets.java (see SOLR-10132)
> {code}
> if (termFilter != null) {
>   // TODO: understand this logic... what is the case for 
> supporting an empty string
>   // for contains on numeric facets? What does that achieve?
>   // The exception message is misleading in the case of an 
> excludeTerms filter in any case...
>   // Also maybe vulnerable to NPE on isEmpty test?
>   final boolean supportedOperation = (termFilter instanceof 
> SubstringBytesRefFilter) && ((SubstringBytesRefFilter) 
> termFilter).substring().isEmpty();
>   if (!supportedOperation) {
> throw new SolrException(ErrorCode.BAD_REQUEST, 
> FacetParams.FACET_CONTAINS + " is not supported on numeric types");
>   }
> }
> {code}
> This is found around line 482 or so. The comment in the code above is mine, 
> and won't be found in the codebase. This ticket can be resolved by 
> eliminating the complex check and just denying all termFilters with a better 
> exception message not specific to contains filters (and perhaps consolidated 
> with the preceding check for prefix filters?), or adding a comment to 
> the code base explaining why we need to allow a term filter with an empty, 
> non-null string to be processed, and why this isn't an NPE waiting to happen.
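If the ticket's first option were taken (deny all term filters on numeric types with a clearer message), the check might reduce to something like the following. This is a hypothetical, self-contained sketch, not SimpleFacets itself: Predicate stands in for Solr's BytesRef-based term filter, and the parameter-naming is an assumption about what a better message would carry:

```java
import java.util.function.Predicate;

// Hypothetical simplification sketched from the ticket text: reject any
// term filter on a numeric field, and name whichever parameter was
// actually supplied (facet.contains, facet.prefix, facet.excludeTerms)
// instead of special-casing an empty contains string.
public class NumericFacetFilterCheck {
    public static void validate(Predicate<String> termFilter, String paramName) {
        if (termFilter != null) {
            throw new IllegalArgumentException(
                paramName + " is not supported on numeric types");
        }
    }

    public static void main(String[] args) {
        validate(null, "facet.contains"); // no filter supplied: fine
        try {
            validate(s -> s.contains("x"), "facet.contains");
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

This drops both the instanceof special case and the possible NPE on substring().isEmpty(), at the cost of no longer letting an empty contains string through; whether that pass-through is load-bearing is exactly the open question in the comments above.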






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+155) - Build # 2913 - Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2913/
Java: 64bit/jdk-9-ea+155 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CustomCollectionTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [Overseer] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.cloud.Overseer.start(Overseer.java:523)  at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:747)
  at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) 
 at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)  
at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:56)  at 
org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:348)
  at 
org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$0(SolrZkClient.java:268)
  at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1161)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
  at java.base/java.lang.Thread.run(Thread.java:844)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [Overseer]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at org.apache.solr.cloud.Overseer.start(Overseer.java:523)
at 
org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:747)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:56)
at 
org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:348)
at 
org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$0(SolrZkClient.java:268)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1161)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)


at __randomizedtesting.SeedInfo.seed([A8375D3D6B8FAD64]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:301)
at jdk.internal.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-6325) Expose per-collection and per-shard aggregate statistics

2017-02-21 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877231#comment-15877231
 ] 

Cassandra Targett commented on SOLR-6325:
-

[~shalinmangar] or [~ab], is this issue essentially a duplicate of SOLR-9858 or 
SOLR-9857? Even if not a duplicate, perhaps it is superseded by those issues?

> Expose per-collection and per-shard aggregate statistics
> 
>
> Key: SOLR-6325
> URL: https://issues.apache.org/jira/browse/SOLR-6325
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-6325.patch, SOLR-6325.patch, SOLR-6325.patch, 
> SOLR-6325.patch
>
>
> SolrCloud doesn't provide any aggregate stats about the cluster or a 
> collection. Very common questions such as document counts per shard, index 
> sizes, request rates etc cannot be answered easily without figuring out the 
> cluster state, invoking multiple core admin APIs and aggregating them 
> manually.
> I propose that we expose an API which returns each of the following on a 
> per-collection and per-shard basis:
> # Document counts
> # Index size on disk
> # Query request rate
> # Indexing request rate
> # Real time get request rate
> I am not yet sure if this should be a distributed search component or a 
> collection API.






[jira] [Updated] (SOLR-10143) Create IndexOrDocValuesQuery for PointFields when possible

2017-02-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10143:
-
Attachment: SOLR-10143.patch

Fixed an issue with PolyFieldTest and TestMaxScoreQueryParser. 

> Create IndexOrDocValuesQuery for PointFields when possible
> --
>
> Key: SOLR-10143
> URL: https://issues.apache.org/jira/browse/SOLR-10143
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10143.patch, SOLR-10143.patch
>
>
> IndexOrDocValuesQuery was recently added in Lucene as an optimization for 
> queries on fields that have DV and Points. See LUCENE-7055 and LUCENE-7643






[jira] [Commented] (SOLR-10115) Corruption in read-side of SOLR-HDFS stack

2017-02-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877188#comment-15877188
 ] 

Yonik Seeley commented on SOLR-10115:
-

OK, after the fixes in SOLR-10121 and SOLR-10141, I can no longer reproduce 
fails with the attached test.
I still need to make it into a more proper unit test before committing it 
though.

> Corruption in read-side of SOLR-HDFS stack
> --
>
> Key: SOLR-10115
> URL: https://issues.apache.org/jira/browse/SOLR-10115
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 4.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: YCS_HdfsTest.java
>
>
> I've been trying to track down some random AIOOB exceptions in Lucene for a 
> customer, and I've managed to reproduce the issue with a unit test of 
> sufficient size in conjunction with highly concurrent read requests.
> A typical stack trace looks like:
> {code}
> org.apache.solr.common.SolrException; 
> java.lang.ArrayIndexOutOfBoundsException: 172033655
> at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149)
> at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.nextDoc(Lucene41PostingsReader.java:455)
> at 
> org.apache.lucene.search.MultiTermQueryWrapperFilter.getDocIdSet(MultiTermQueryWrapperFilter.java:111)
> at 
> org.apache.lucene.search.ConstantScoreQuery$ConstantWeight.scorer(ConstantScoreQuery.java:157)
> {code}
> The number of unique stack traces is relatively high, most AIOOB exceptions, 
> but some EOF.  Most exceptions occur in the term index, however I believe 
> this may be just an artifact of where highly concurrent access is most likely 
> to occur.  The queries that triggered this had many wildcards and other 
> multi-term queries.






[jira] [Resolved] (SOLR-10116) BlockCache test and documentation improvement

2017-02-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-10116.
-
   Resolution: Fixed
Fix Version/s: 6.5

> BlockCache test and documentation improvement
> -
>
> Key: SOLR-10116
> URL: https://issues.apache.org/jira/browse/SOLR-10116
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.5
>
> Attachments: SOLR-10116.patch
>
>
> We need better concurrency tests for the BlockCache, to ensure that we're 
> working on something really stable.  This is really part of the effort to 
> diagnose SOLR-10115, but will be useful long after.
> I plan to add missing code comments as I review the code as well.






[jira] [Assigned] (SOLR-10116) BlockCache test and documentation improvement

2017-02-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-10116:
---

Assignee: Yonik Seeley

> BlockCache test and documentation improvement
> -
>
> Key: SOLR-10116
> URL: https://issues.apache.org/jira/browse/SOLR-10116
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.5
>
> Attachments: SOLR-10116.patch
>
>
> We need better concurrency tests for the BlockCache, to ensure that we're 
> working on something really stable.  This is really part of the effort to 
> diagnose SOLR-10115, but will be useful long after.
> I plan to add missing code comments as I review the code as well.






[jira] [Assigned] (SOLR-10115) Corruption in read-side of SOLR-HDFS stack

2017-02-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-10115:
---

Assignee: Yonik Seeley

> Corruption in read-side of SOLR-HDFS stack
> --
>
> Key: SOLR-10115
> URL: https://issues.apache.org/jira/browse/SOLR-10115
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 4.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: YCS_HdfsTest.java
>
>
> I've been trying to track down some random AIOOB exceptions in Lucene for a 
> customer, and I've managed to reproduce the issue with a unit test of 
> sufficient size in conjunction with highly concurrent read requests.
> A typical stack trace looks like:
> {code}
> org.apache.solr.common.SolrException; 
> java.lang.ArrayIndexOutOfBoundsException: 172033655
> at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149)
> at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.nextDoc(Lucene41PostingsReader.java:455)
> at 
> org.apache.lucene.search.MultiTermQueryWrapperFilter.getDocIdSet(MultiTermQueryWrapperFilter.java:111)
> at 
> org.apache.lucene.search.ConstantScoreQuery$ConstantWeight.scorer(ConstantScoreQuery.java:157)
> {code}
> The number of unique stack traces is relatively high: mostly AIOOB exceptions, 
> but some EOF.  Most exceptions occur in the term index; however, I believe 
> this may just be an artifact of where highly concurrent access is most likely 
> to occur.  The queries that triggered this had many wildcards and other 
> multi-term queries.






[jira] [Updated] (SOLR-10115) Corruption in read-side of SOLR-HDFS stack

2017-02-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10115:

Affects Version/s: (was: 4.10)
   4.4

> Corruption in read-side of SOLR-HDFS stack
> --
>
> Key: SOLR-10115
> URL: https://issues.apache.org/jira/browse/SOLR-10115
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 4.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: YCS_HdfsTest.java
>
>
> I've been trying to track down some random AIOOB exceptions in Lucene for a 
> customer, and I've managed to reproduce the issue with a unit test of 
> sufficient size in conjunction with highly concurrent read requests.
> A typical stack trace looks like:
> {code}
> org.apache.solr.common.SolrException; 
> java.lang.ArrayIndexOutOfBoundsException: 172033655
> at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149)
> at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.nextDoc(Lucene41PostingsReader.java:455)
> at 
> org.apache.lucene.search.MultiTermQueryWrapperFilter.getDocIdSet(MultiTermQueryWrapperFilter.java:111)
> at 
> org.apache.lucene.search.ConstantScoreQuery$ConstantWeight.scorer(ConstantScoreQuery.java:157)
> {code}
> The number of unique stack traces is relatively high: mostly AIOOB exceptions, 
> but some EOF.  Most exceptions occur in the term index; however, I believe 
> this may just be an artifact of where highly concurrent access is most likely 
> to occur.  The queries that triggered this had many wildcards and other 
> multi-term queries.






[jira] [Updated] (SOLR-10156) Add significantTerms Streaming Expression

2017-02-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10156:
--
Attachment: SOLR-10156.patch

Added a simple test case. More work still to do, but getting closer.

> Add significantTerms Streaming Expression
> -
>
> Key: SOLR-10156
> URL: https://issues.apache.org/jira/browse/SOLR-10156
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5
>
> Attachments: SOLR-10156.patch, SOLR-10156.patch
>
>
> The significantTerms Streaming Expression will emit a set of terms from a 
> *text field* within a doc frequency range for a specific query. It will also 
> score the terms based on how often they appear in the result set relative to 
> how often they appear in the corpus, and return the top N terms by this 
> significance score.
> Syntax:
> {code}
> significantTerms(collection, 
>q="abc", 
>field="some_text_field", 
>minDocFreq="x", 
>maxDocFreq="y",
>limit="50")
> {code}
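For illustration only, an expression like the one above would typically be posted to Solr's /stream handler. The host, collection name, field, and threshold values below are placeholders, not part of the patch:

```shell
# Hypothetical invocation of the expression via the /stream handler.
# Collection name, field, and doc-frequency thresholds are placeholders.
expr='significantTerms(collection1,
        q="abc",
        field="some_text_field",
        minDocFreq="5",
        maxDocFreq="0.3",
        limit="50")'

# Against a running Solr this would be sent like so (commented out here):
# curl --data-urlencode "expr=$expr" http://localhost:8983/solr/collection1/stream
echo "$expr"
```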






[jira] [Resolved] (SOLR-10121) BlockCache corruption with high concurrency

2017-02-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-10121.
-
   Resolution: Fixed
Fix Version/s: 6.5

> BlockCache corruption with high concurrency
> ---
>
> Key: SOLR-10121
> URL: https://issues.apache.org/jira/browse/SOLR-10121
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 4.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.5
>
> Attachments: SOLR-10121.patch
>
>
> Improving the tests of the BlockCache in SOLR-10116 uncovered a corruption 
> bug (either that or the test is flawed... TBD).
> The failing test is TestBlockCache.testBlockCacheConcurrent()






[jira] [Updated] (SOLR-10121) BlockCache corruption with high concurrency

2017-02-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10121:

Affects Version/s: 4.4

> BlockCache corruption with high concurrency
> ---
>
> Key: SOLR-10121
> URL: https://issues.apache.org/jira/browse/SOLR-10121
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 4.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.5
>
> Attachments: SOLR-10121.patch
>
>
> Improving the tests of the BlockCache in SOLR-10116 uncovered a corruption 
> bug (either that or the test is flawed... TBD).
> The failing test is TestBlockCache.testBlockCacheConcurrent()






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 290 - Failure

2017-02-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/290/

6 tests failed.
FAILED:  
org.apache.lucene.index.TestIndexingSequenceNumbers.testStressConcurrentCommit

Error Message:
this IndexWriter is closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:753)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:767)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3214)
at 
org.apache.lucene.index.TestIndexingSequenceNumbers.testStressConcurrentCommit(TestIndexingSequenceNumbers.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at 
org.apache.lucene.store.GrowableByteArrayDataOutput.<init>(GrowableByteArrayDataOutput.java:47)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:108)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)
at 
org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)
at 

[jira] [Commented] (SOLR-10121) BlockCache corruption with high concurrency

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877051#comment-15877051
 ] 

ASF subversion and git services commented on SOLR-10121:


Commit 8dbb1bb3fb64fea4baa672ce82a1b62af22c3571 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8dbb1bb ]

SOLR-10121: enable BlockCacheTest.testBlockCacheConcurrent that now passes


> BlockCache corruption with high concurrency
> ---
>
> Key: SOLR-10121
> URL: https://issues.apache.org/jira/browse/SOLR-10121
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-10121.patch
>
>
> Improving the tests of the BlockCache in SOLR-10116 uncovered a corruption 
> bug (either that or the test is flawed... TBD).
> The failing test is TestBlockCache.testBlockCacheConcurrent()






[jira] [Commented] (SOLR-10121) BlockCache corruption with high concurrency

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877050#comment-15877050
 ] 

ASF subversion and git services commented on SOLR-10121:


Commit cf1cba66f49c551cddbc6053565c30bf3a8b23bb in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cf1cba6 ]

SOLR-10121: enable BlockCacheTest.testBlockCacheConcurrent that now passes


> BlockCache corruption with high concurrency
> ---
>
> Key: SOLR-10121
> URL: https://issues.apache.org/jira/browse/SOLR-10121
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-10121.patch
>
>
> Improving the tests of the BlockCache in SOLR-10116 uncovered a corruption 
> bug (either that or the test is flawed... TBD).
> The failing test is TestBlockCache.testBlockCacheConcurrent()






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1147 - Still Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1147/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([9B758FA1922B7948:1321B07B3CD714B0]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Created] (SOLR-10185) TestCodecSupport fail

2017-02-21 Thread Mark Miller (JIRA)
Mark Miller created SOLR-10185:
--

 Summary: TestCodecSupport fail
 Key: SOLR-10185
 URL: https://issues.apache.org/jira/browse/SOLR-10185
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller


{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCodecSupport 
-Dtests.method=testDocValuesFormats -Dtests.seed=7FED485D50D6E00C 
-Dtests.slow=true -Dtests.locale=id-ID -Dtests.timezone=Europe/Vatican 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.02s J3  | TestCodecSupport.testDocValuesFormats <<<
   [junit4]> Throwable #1: org.junit.ComparisonFailure: 
expected: but was:
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([7FED485D50D6E00C:73B680EFB8D740F3]:0)
   [junit4]>at 
org.apache.solr.core.TestCodecSupport.testDocValuesFormats(TestCodecSupport.java:65)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 154863 INFO  
(TEST-TestCodecSupport.testDynamicFieldsDocValuesFormats-seed#[7FED485D50D6E00C])
 [x:core_with_default_compression] o.a.s.SolrTestCaseJ4 ###Starting 
testDynamicFieldsDocValuesFormats
   [junit4]   2> 154863 INFO  
(TEST-TestCodecSupport.testDynamicFieldsDocValuesFormats-seed#[7FED485D50D6E00C])
 [x:core_with_default_compression] o.a.s.SolrTestCaseJ4 ###Ending 
testDynamicFieldsDocValuesFormats
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCodecSupport 
-Dtests.method=testDynamicFieldsDocValuesFormats -Dtests.seed=7FED485D50D6E00C 
-Dtests.slow=true -Dtests.locale=id-ID -Dtests.timezone=Europe/Vatican 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.00s J3  | 
TestCodecSupport.testDynamicFieldsDocValuesFormats <<<
   [junit4]> Throwable #1: org.junit.ComparisonFailure: 
expected: but was:
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([7FED485D50D6E00C:9F3E169B46485658]:0)
   [junit4]>at 
org.apache.solr.core.TestCodecSupport.testDynamicFieldsDocValuesFormats(TestCodecSupport.java:88)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{noformat}






[jira] [Commented] (SOLR-10126) PeerSyncReplicationTest is a flakey test.

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876958#comment-15876958
 ] 

ASF subversion and git services commented on SOLR-10126:


Commit 3771e7d2c7df4df0c3771a1c6aaa05ce16d58b43 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3771e7d ]

SOLR-10126: @BadApple this test, fails frequently on Jenkins cluster.


> PeerSyncReplicationTest is a flakey test.
> -
>
> Key: SOLR-10126
> URL: https://issues.apache.org/jira/browse/SOLR-10126
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Attachments: faillogs.tar.gz
>
>
> Could be related to SOLR-9555, but I will see what else pops up under 
> beasting.






[jira] [Commented] (SOLR-10126) PeerSyncReplicationTest is a flakey test.

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876959#comment-15876959
 ] 

ASF subversion and git services commented on SOLR-10126:


Commit 2d69eb3cf0d7f063d1076809731c669178b99cc7 in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2d69eb3 ]

SOLR-10126: @BadApple this test, fails frequently on Jenkins cluster.


> PeerSyncReplicationTest is a flakey test.
> -
>
> Key: SOLR-10126
> URL: https://issues.apache.org/jira/browse/SOLR-10126
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Attachments: faillogs.tar.gz
>
>
> Could be related to SOLR-9555, but I will see what else pops up under 
> beasting.






[jira] [Updated] (SOLR-8689) bin/solr.cmd does not start with recent Verona builds of Java 9 because of version parsing issue

2017-02-21 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8689:
---
Summary: bin/solr.cmd does not start with recent Verona builds of Java 9 
because of version parsing issue  (was: Solr 5/6 does not start with recent 
Verona builds of Java 9 because of version parsing issue)

> bin/solr.cmd does not start with recent Verona builds of Java 9 because of 
> version parsing issue
> 
>
> Key: SOLR-8689
> URL: https://issues.apache.org/jira/browse/SOLR-8689
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.5, 6.0
> Environment: Windows 7
>Reporter: Uwe Schindler
>  Labels: Java9
>
> At least on Windows, Solr 5.5 does not start via the shell script when using a 
> Verona Java 9 JDK:
> {noformat}
> *
> JAVA_HOME = C:\Program Files\Java\jdk-9
> java version "9-ea"
> Java(TM) SE Runtime Environment (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc)
> Java HotSpot(TM) 64-Bit Server VM (build 
> 9-ea+105-2016-02-11-003336.javare.4433.nc, mixed mode)
> *
> C:\Users\Uwe Schindler\Desktop\solr-5.5.0\bin>solr start
> ERROR: Java 1.7 or later is required to run Solr. Current Java version is: 
> 9-ea
> {noformat}
> I don't know whether this is better on Linux, but I assume the version parsing 
> is broken (e.g., String#startsWith, interpreting the version as a floating 
> point number, ...).
> We should fix this before Java 9 gets released! The version numbering scheme 
> changed completely: http://openjdk.java.net/jeps/223
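As a sketch of the pitfall (illustrative shell, not the actual bin/solr code): JEP 223 version strings like "9-ea" no longer start with "1.", so a check keyed to the old scheme must also accept a bare major version. The variable names below are assumptions:

```shell
# Illustrative version check handling both pre-JEP-223 ("1.8.0_121") and
# JEP 223 ("9-ea") version strings; not the real bin/solr logic.
version="9-ea"
major="${version%%[.-]*}"          # text before the first '.' or '-': "9" or "1"
if [ "$major" = "1" ]; then
  major="${version#1.}"            # old scheme: the second component is the major
  major="${major%%[.-]*}"
fi
if [ "$major" -ge 7 ] 2>/dev/null; then
  echo "Java $major OK"
else
  echo "ERROR: Java 1.7 or later is required to run Solr. Current: $version"
fi
```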






[jira] [Created] (SOLR-10184) bin/solr fails to run on java9 due to unrecognized GC options

2017-02-21 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10184:
---

 Summary: bin/solr fails to run on java9 due to unrecognized GC 
options
 Key: SOLR-10184
 URL: https://issues.apache.org/jira/browse/SOLR-10184
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Reporter: Hoss Man


{noformat}
hossman@tray:~/lucene/dev/solr [master] $ bin/solr start -f
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
Java HotSpot(TM) 64-Bit Server VM warning: Option UseParNewGC was deprecated in 
version 9.0 and will likely be removed in a future release.
Unrecognized VM option 'PrintHeapAtGC'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
{noformat}

An (untested) workaround is to override GC_LOG_OPTS in {{solr.in.sh}}.
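A minimal sketch of that (untested) workaround; the Java 9 unified-logging flag below is an assumption, not something verified by the issue:

```shell
# Hypothetical solr.in.sh override: replace the Java-8-only GC logging flags
# (PrintHeapAtGC etc.) with Java 9's unified logging. Flag choice is an assumption.
GC_LOG_OPTS="-Xlog:gc*"
echo "GC_LOG_OPTS=$GC_LOG_OPTS"
```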






[jira] [Updated] (SOLR-10153) UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-21 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10153:

Attachment: SOLR-10153.patch

> UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> -
>
> Key: SOLR-10153
> URL: https://issues.apache.org/jira/browse/SOLR-10153
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
> Attachments: SOLR-10153.patch, SOLR-10153.patch
>
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> UnifiedSolrHighlighter should support *CustomSeparatorBreakIterator* along 
> with existing ones, WholeBreakIterator etc.






[jira] [Commented] (SOLR-10153) UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-21 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876884#comment-15876884
 ] 

Amrit Sarkar commented on SOLR-10153:
-

Mr. Smiley,

Thank you for the feedback and glad you found the patch good enough. 

Regarding _fragsize == 1_: I got that wrong for sure; thank you for correcting 
me. I made some incorrect assertions about the specified fragment size that 
didn't make sense; I was trying to optimize based on the code below in 
getBreakIterator(String field):

{code:java}
if (fragsize <= 1 || baseBI instanceof WholeBreakIterator) { // no real minimum size
  return baseBI;
}
{code}

I put the piece of code back where it belongs. Please find the updated patch 
attached.

I would also appreciate your input on SOLR-10152, CustomSeparatorBreakIterator 
for PostingsSolrHighlighter.

Thanks,
Amrit Sarkar

> UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> -
>
> Key: SOLR-10153
> URL: https://issues.apache.org/jira/browse/SOLR-10153
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
> Attachments: SOLR-10153.patch
>
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> UnifiedSolrHighlighter should support *CustomSeparatorBreakIterator* along 
> with existing ones, WholeBreakIterator etc.






[jira] [Resolved] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-10141.
-
   Resolution: Fixed
Fix Version/s: 6.5

Everything is looking good w/ Caffeine 2.4.0, thanks for the help Ben!

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.0
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.5
>
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[jira] [Updated] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10141:

Affects Version/s: 6.0

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.0
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[jira] [Assigned] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-10141:
---

Assignee: Yonik Seeley

> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.0
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876872#comment-15876872
 ] 

ASF subversion and git services commented on SOLR-10141:


Commit d8799bc475ca5d384ec49ecf2726aec58e37447b in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d8799bc ]

SOLR-10141: Upgrade to Caffeine 2.4.0 to fix issues with removal listener


> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[jira] [Updated] (SOLR-10183) A real scaling normalizer in solr-ltr

2017-02-21 Thread Rahul Babulal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Babulal updated SOLR-10183:
-
Description: 
The current solr-ltr plugin provides two different normalizer implementations, 
minMax and standard. The minMax normalizer doesn't seem to correctly scale the 
values to fall between the given min and max. The Solr scale function [1] 
appropriately scales the values into the given range, but it cannot be used as 
is because it has performance problems and makes the scoring process really 
slow.

For example, given the data set [11,51,16,42,18,21] scaled to the range 1 to 
10, I would expect the maximum value (51) to be normalized to 10 and the 
minimum value (11) to be normalized to 1.
Here is sample output of the minMax normalizer vs Scaling normalizer
||Input||MinMax Normalizer||Scaling Normalizer||
|11.0|1.112|1.0|
|51.0|5.553|10.0|
|16.0|1.666|2.125|
|42.0|4.553|7.975|
|18.0|1.888|2.5749998|
|21.0|2.223|3.25|

[1]https://wiki.apache.org/solr/FunctionQuery#scale
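The rescaling expected above is an ordinary linear min-max mapping into the target range. The following standalone sketch reproduces the "Scaling Normalizer" column; the class and method names are invented for illustration and are not the solr-ltr normalizer API.

```java
// Illustrative sketch only: linear min-max rescaling into [targetMin, targetMax].
// Not the actual solr-ltr code; names are made up for this example.
public class MinMaxScaleSketch {

    // Map value from [min, max] linearly onto [targetMin, targetMax].
    static float scaleTo(float value, float min, float max,
                         float targetMin, float targetMax) {
        return (value - min) / (max - min) * (targetMax - targetMin) + targetMin;
    }

    public static void main(String[] args) {
        float[] data = {11f, 51f, 16f, 42f, 18f, 21f};
        for (float v : data) {
            // min=11, max=51, target range [1, 10]
            System.out.println(v + " -> " + scaleTo(v, 11f, 51f, 1f, 10f));
        }
    }
}
```

Running it reproduces the table: 11 maps to 1, 51 to 10, 16 to 2.125, and 42 to 7.975.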

  was:
The current solr-ltr plugin provides two different normalizer implementations, 
minMax and standard. The minMax normalizer doesn't seem to correctly scale the 
values to fall between the given min and max. The Solr scale function [1] 
appropriately scales the values into the given range, but it cannot be used as 
is because it has performance problems and makes the scoring process really 
slow.

For example, given the data set [11,51,16,42,18,21] scaled to the range 1 to 
10, I would expect the maximum value (51) to be normalized to 10 and the 
minimum value (11) to be normalized to 1.
Here is sample output of the minMax normalizer vs Scaling normalizer
||Input||MinMax Normalizer||Scaling Normalizer||
|11.0|1.112|10.8|
|51.0|5.553|10.0|
|16.0|1.666|10.7|
|42.0|4.553|10.18|
|18.0|1.888|10.66|
|21.0|2.223|10.6|

[1]https://wiki.apache.org/solr/FunctionQuery#scale


> A real scaling normalizer in solr-ltr
> -
>
> Key: SOLR-10183
> URL: https://issues.apache.org/jira/browse/SOLR-10183
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4.1
>Reporter: Rahul Babulal
>  Labels: contrib_ltr, ltr
>
> The current solr-ltr plugin provides two different normalizer 
> implementations, minMax and standard. The minMax normalizer doesn't seem to 
> correctly scale the values to fall between the given min and max. The Solr 
> scale function [1] appropriately scales the values into the given range, but 
> it cannot be used as is because it has performance problems and makes the 
> scoring process really slow.
> For example, given the data set [11,51,16,42,18,21] scaled to the range 1 to 
> 10, I would expect the maximum value (51) to be normalized to 10 and the 
> minimum value (11) to be normalized to 1.
> Here is sample output of the minMax normalizer vs Scaling normalizer
> ||Input||MinMax Normalizer||Scaling Normalizer||
> |11.0|1.112|1.0|
> |51.0|5.553|10.0|
> |16.0|1.666|2.125|
> |42.0|4.553|7.975|
> |18.0|1.888|2.5749998|
> |21.0|2.223|3.25|
> [1]https://wiki.apache.org/solr/FunctionQuery#scale






[jira] [Updated] (SOLR-10183) A real scaling normalizer in solr-ltr

2017-02-21 Thread Rahul Babulal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Babulal updated SOLR-10183:
-
Description: 
The current solr-ltr plugin provides two different normalizer implementations, 
minMax and standard. The minMax normalizer doesn't seem to correctly scale the 
values to fall between the given min and max. The Solr scale function [1] 
appropriately scales the values into the given range, but it cannot be used as 
is because it has performance problems and makes the scoring process really 
slow.

For example, given the data set [11,51,16,42,18,21] scaled to the range 1 to 
10, I would expect the maximum value (51) to be normalized to 10 and the 
minimum value (11) to be normalized to 1.
Here is sample output of the minMax normalizer vs Scaling normalizer
||Input||MinMax Normalizer||Scaling Normalizer||
|11.0|1.112|10.8|
|51.0|5.553|10.0|
|16.0|1.666|10.7|
|42.0|4.553|10.18|
|18.0|1.888|10.66|
|21.0|2.223|10.6|

[1]https://wiki.apache.org/solr/FunctionQuery#scale

  was:
The current solr-ltr plugin provides two different normalizer implementations, 
minMax and standard. The minMax normalizer doesn't seem to correctly scale the 
values to fall between the given min and max. The Solr scale function [1] 
appropriately scales the values into the given range, but it cannot be used as 
is because it has performance problems and makes the scoring process really 
slow.

For example, given the data set [11,51,16,42,18,21] scaled to the range 1 to 
10, I would expect the maximum value (51) to be normalized to 10 and the 
minimum value (11) to be normalized to 1.
||Input||MinMax Normalizer||Scaling Normalizer||
|11.0|1.112|10.8|
|51.0|5.553|10.0|
|16.0|1.666|10.7|
|42.0|4.553|10.18|
|18.0|1.888|10.66|
|21.0|2.223|10.6|

[1]https://wiki.apache.org/solr/FunctionQuery#scale


> A real scaling normalizer in solr-ltr
> -
>
> Key: SOLR-10183
> URL: https://issues.apache.org/jira/browse/SOLR-10183
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4.1
>Reporter: Rahul Babulal
>  Labels: contrib_ltr, ltr
>
> The current solr-ltr plugin provides two different normalizer 
> implementations, minMax and standard. The minMax normalizer doesn't seem to 
> correctly scale the values to fall between the given min and max. The Solr 
> scale function [1] appropriately scales the values into the given range, but 
> it cannot be used as is because it has performance problems and makes the 
> scoring process really slow.
> For example, given the data set [11,51,16,42,18,21] scaled to the range 1 to 
> 10, I would expect the maximum value (51) to be normalized to 10 and the 
> minimum value (11) to be normalized to 1.
> Here is sample output of the minMax normalizer vs Scaling normalizer
> ||Input||MinMax Normalizer||Scaling Normalizer||
> |11.0|1.112|10.8|
> |51.0|5.553|10.0|
> |16.0|1.666|10.7|
> |42.0|4.553|10.18|
> |18.0|1.888|10.66|
> |21.0|2.223|10.6|
> [1]https://wiki.apache.org/solr/FunctionQuery#scale






[jira] [Updated] (SOLR-10183) A real scaling normalizer in solr-ltr

2017-02-21 Thread Rahul Babulal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Babulal updated SOLR-10183:
-
Description: 
The current solr-ltr plugin provides two different normalizer implementations, 
minMax and standard. The minMax normalizer doesn't seem to correctly scale the 
values to fall between the given min and max. The Solr scale function [1] 
appropriately scales the values into the given range, but it cannot be used as 
is because it has performance problems and makes the scoring process really 
slow.

For example, given the data set [11,51,16,42,18,21] scaled to the range 1 to 
10, I would expect the maximum value (51) to be normalized to 10 and the 
minimum value (11) to be normalized to 1.
||Input||MinMax Normalizer||Scaling Normalizer||
|11.0|1.112|10.8|
|51.0|5.553|10.0|
|16.0|1.666|10.7|
|42.0|4.553|10.18|
|18.0|1.888|10.66|
|21.0|2.223|10.6|

[1]https://wiki.apache.org/solr/FunctionQuery#scale

  was:
The current solr-ltr plugin provides two different normalizer implementations, 
minMax and standard. The minMax normalizer doesn't seem to correctly scale the 
values to fall between the given min and max. The Solr scale function [1] 
appropriately scales the values into the given range, but it cannot be used as 
is because it has performance problems and makes the scoring process really 
slow.


[1]https://wiki.apache.org/solr/FunctionQuery#scale


> A real scaling normalizer in solr-ltr
> -
>
> Key: SOLR-10183
> URL: https://issues.apache.org/jira/browse/SOLR-10183
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4.1
>Reporter: Rahul Babulal
>  Labels: contrib_ltr, ltr
>
> The current solr-ltr plugin provides two different normalizer 
> implementations, minMax and standard. The minMax normalizer doesn't seem to 
> correctly scale the values to fall between the given min and max. The Solr 
> scale function [1] appropriately scales the values into the given range, but 
> it cannot be used as is because it has performance problems and makes the 
> scoring process really slow.
> For example, given the data set [11,51,16,42,18,21] scaled to the range 1 to 
> 10, I would expect the maximum value (51) to be normalized to 10 and the 
> minimum value (11) to be normalized to 1.
> ||Input||MinMax Normalizer||Scaling Normalizer||
> |11.0|1.112|10.8|
> |51.0|5.553|10.0|
> |16.0|1.666|10.7|
> |42.0|4.553|10.18|
> |18.0|1.888|10.66|
> |21.0|2.223|10.6|
> [1]https://wiki.apache.org/solr/FunctionQuery#scale






[jira] [Commented] (SOLR-10141) Caffeine cache causes BlockCache corruption

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876847#comment-15876847
 ] 

ASF subversion and git services commented on SOLR-10141:


Commit e9e02a2313518682690ca2933efd0b4db0b54b7c in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e9e02a2 ]

SOLR-10141: Upgrade to Caffeine 2.4.0 to fix issues with removal listener


> Caffeine cache causes BlockCache corruption 
> 
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the 
> concurrency test passes with the previous implementation using 
> ConcurrentLinkedHashMap and fails with Caffeine.






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 712 - Still Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/712/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'A val' for path 'params/a' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{ 
"wt":"json", "useParams":""},   "context":{ "webapp":"/solr", 
"path":"/dump0", "httpMethod":"GET"}},  from server:  
http://127.0.0.1:57991/solr/collection1_shard1_replica2

Stack Trace:
java.lang.AssertionError: Could not get expected value  'A val' for path 
'params/a' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{
"wt":"json",
"useParams":""},
  "context":{
"webapp":"/solr",
"path":"/dump0",
"httpMethod":"GET"}},  from server:  
http://127.0.0.1:57991/solr/collection1_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([BFDAB48710C06277:378E8B5DBE3C0F8F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:127)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-10163) CheckHdfsIndexTest fail.

2017-02-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876731#comment-15876731
 ] 

Mike Drob commented on SOLR-10163:
--

Hadoop's DU changed a bunch in 2.8.0 from HADOOP-12973 - might not be a problem 
once we upgrade. Looking at the 2.7.2 code, I don't even understand what is 
causing that NPE, though.

> CheckHdfsIndexTest fail.
> 
>
> Key: SOLR-10163
> URL: https://issues.apache.org/jira/browse/SOLR-10163
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=CheckHdfsIndexTest -Dtests.method=doTest 
> -Dtests.seed=C045205F24FEF89C -Dtests.slow=true -Dtests.locale=es-NI 
> -Dtests.timezone=Etc/GMT+8 -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   10.2s J8  | CheckHdfsIndexTest.doTest <<<
>[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=1637, name=Thread-391, 
> state=RUNNABLE, group=TGRP-CheckHdfsIndexTest]
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([C045205F24FEF89C]:0)
>[junit4]>  at org.apache.hadoop.fs.DU.<init>(DU.java:74)
>[junit4]>  at org.apache.hadoop.fs.DU.<init>(DU.java:95)
>[junit4]>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.<init>(BlockPoolSlice.java:140)
>[junit4]>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:827)
>[junit4]>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:405)
> Throwable #2: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=1635, name=Thread-389, 
> state=RUNNABLE, group=TGRP-CheckHdfsIndexTest]
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([C045205F24FEF89C]:0)
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([C045205F24FEF89C]:0)
>[junit4]>  at org.apache.hadoop.fs.DU.<init>(DU.java:74)
>[junit4]>  at org.apache.hadoop.fs.DU.<init>(DU.java:95)
>[junit4]>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:827)
>[junit4]>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:405)
> {noformat}






[jira] [Commented] (SOLR-10153) UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)

2017-02-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876724#comment-15876724
 ] 

David Smiley commented on SOLR-10153:
-

Hello Amrit; thanks for contributing this.  Why the change of fragsize == 1 to 
be considered equivalent to WHOLE?

Aside from the above and some minor tweaks I plan to do, this looks pretty 
committable.  Thanks for the test.

> UnifiedSolrHighlighter support for CustomSeparatorBreakIterator (LUCENE-6485)
> -
>
> Key: SOLR-10153
> URL: https://issues.apache.org/jira/browse/SOLR-10153
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Amrit Sarkar
> Attachments: SOLR-10153.patch
>
>
> Lucene 5.3 added a CustomSeparatorBreakIterator (see LUCENE-6485)
> UnifiedSolrHighlighter should support *CustomSeparatorBreakIterator* along 
> with existing ones, WholeBreakIterator etc.






[jira] [Updated] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-21 Thread Hendrik Haddorp (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hendrik Haddorp updated SOLR-10092:
---
Attachment: SOLR-10092.patch

With this patch the automatic replica failover worked for me on Solr 6.3.

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Commented] (SOLR-10020) CoreAdminHandler silently swallows some errors

2017-02-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876667#comment-15876667
 ] 

Mike Drob commented on SOLR-10020:
--

Exactly. And because we don't start a new thread in the handler, we can throw 
an exception which eventually gets back to the caller instead of solely logging 
the problem.

LISTSNAPSHOTS_OP also throws an exception, so it does not have this problem.

FORCEPREPAREFORLEADERSHIP_OP logs but does not throw, so a client will not see 
the problem with a non-existent core. This is easy to fix with something like 
{noformat}

core.getCoreDescriptor().getCloudDescriptor().setLastPublished(Replica.State.ACTIVE);
log().info("Setting the last published state for this core, {}, to {}", 
core.getName(), Replica.State.ACTIVE);
  } else {
-SolrException.log(log(), "Could not find core: " + cname);
+throw new SolrException(ErrorCode.BAD_REQUEST, "Unable to locate core " + cname);
  }
}
  }),
{noformat}
I didn't do that in this patch because I'm not sure who the callers of this API 
are and didn't want to rock too many boats at once.

> CoreAdminHandler silently swallows some errors
> --
>
> Key: SOLR-10020
> URL: https://issues.apache.org/jira/browse/SOLR-10020
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10020.patch, SOLR-10020.patch, SOLR-10020.patch
>
>
> With the setup on SOLR-10006, after removing some index files and starting 
> that Solr instance I tried issuing a REQUESTRECOVERY command and it came back 
> as a success even though nothing actually happened. When the core is 
> accessed, a core init exception is returned by subsequent calls to getCore(). 
> There is no catch block after the try so no error is returned.
> Looking through the code I see several other commands that have a similar 
> pattern:
>  FORCEPREPAREFORLEADERSHIP_OP
> LISTSNAPSHOTS_OP
> getCoreStatus
> and perhaps others. getCore() can throw an exception, about the only explicit 
> one it does throw is if the core has an initialization error.






[jira] [Created] (SOLR-10183) A real scaling normalizer in solr-ltr

2017-02-21 Thread Rahul Babulal (JIRA)
Rahul Babulal created SOLR-10183:


 Summary: A real scaling normalizer in solr-ltr
 Key: SOLR-10183
 URL: https://issues.apache.org/jira/browse/SOLR-10183
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.4.1
Reporter: Rahul Babulal


The current solr-ltr plugin provides two different normalizer implementations, 
minMax and standard. The minMax normalizer doesn't seem to correctly scale the 
values to fall between the given min and max. The Solr scale function [1] 
appropriately scales the values into the given range, but it cannot be used as 
is because it has performance problems and makes the scoring process really 
slow.


[1]https://wiki.apache.org/solr/FunctionQuery#scale






[jira] [Commented] (SOLR-10117) Big docs and the DocumentCache; umbrella issue

2017-02-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876640#comment-15876640
 ] 

David Smiley commented on SOLR-10117:
-

Another technique that I think makes a lot of sense is to cap the stored value 
to a configurable amount -- a cap after which there can be no highlighting of 
course.  This can be achieved even without an explicit Solr feature with a 
copyField with {{maxChars}} set.  Although it may hinder 
{{hl.requireFieldMatch=true}} if one chooses to go that route.
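The copyField-with-maxChars workaround mentioned above might look like the following schema fragment. This is a hedged sketch: the field names `content` and `content_capped` are invented for illustration, while `maxChars` is the copyField attribute that limits how many characters get copied (and hence stored).

{noformat}
<!-- Sketch only: cap the stored copy at roughly 1 MB of characters. -->
<field name="content" type="text_general" indexed="true" stored="false"/>
<field name="content_capped" type="text_general" indexed="true" stored="true"/>
<!-- maxChars truncates what is copied into content_capped. -->
<copyField source="content" dest="content_capped" maxChars="1048576"/>
{noformat}

Highlighting would then run against `content_capped`, with nothing highlightable beyond the cap.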

> Big docs and the DocumentCache; umbrella issue
> --
>
> Key: SOLR-10117
> URL: https://issues.apache.org/jira/browse/SOLR-10117
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_10117_large_fields.patch
>
>
> This is an umbrella issue for improved handling of large documents (large 
> stored fields), generally related to the DocumentCache or SolrIndexSearcher's 
> doc() methods.  Highlighting is affected as it's the primary consumer of this 
> data.  "Large" here is multi-megabyte, especially tens even hundreds of 
> megabytes. We'd like to support such users without forcing them to choose 
> between no DocumentCache (bad performance), or having one but hitting OOM due 
> to massive Strings winding up in there.  I've contemplated this for longer 
> than I'd like to admit and it's a complicated issue with different concerns 
> to balance.






[jira] [Commented] (SOLR-10020) CoreAdminHandler silently swallows some errors

2017-02-21 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876626#comment-15876626
 ] 

Erick Erickson commented on SOLR-10020:
---

Mike:

Just to check my understanding here. Essentially you took out a thread that had 
no other real purpose than to start a thread, right? We haven't changed the 
asynchronous nature of the call at all for RequestRecovery.

Looking more closely at
FORCEPREPAREFORLEADERSHIP_OP
LISTSNAPSHOTS_OP

I don't think the same problem occurs there since they don't spawn threads that 
can't really propagate the error back.

Testing etc. now.

> CoreAdminHandler silently swallows some errors
> --
>
> Key: SOLR-10020
> URL: https://issues.apache.org/jira/browse/SOLR-10020
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10020.patch, SOLR-10020.patch, SOLR-10020.patch
>
>
> With the setup on SOLR-10006, after removing some index files and starting 
> that Solr instance I tried issuing a REQUESTRECOVERY command and it came back 
> as a success even though nothing actually happened. When the core is 
> accessed, a core init exception is returned by subsequent calls to getCore(). 
> There is no catch block after the try so no error is returned.
> Looking through the code I see several other commands that have a similar 
> pattern:
>  FORCEPREPAREFORLEADERSHIP_OP
> LISTSNAPSHOTS_OP
> getCoreStatus
> and perhaps others. getCore() can throw an exception; about the only explicit 
> one it throws is when the core has an initialization error.






Re: Test framework ignoring -Dtestmethod

2017-02-21 Thread Dawid Weiss
I don't see the test, so it's hard for me to tell for sure... but there is
a lot of scaffolding around this class; I inserted an empty test
containing an assumption and it initialized lots of stuff, so the
assumption has to be triggering somewhere in there.

Once your test fails or assume-fails, locate:
TEST-org.apache.solr.client.solrj.io.stream.StreamExpressionTest.xml
and see inside; there should be a stack trace leading to the failed
assumption.

Dawid

On Tue, Feb 21, 2017 at 9:00 PM, Joel Bernstein  wrote:
> I traced the issue to a specific change that was made to the
> StreamExpressionTest. I'm figuring out the best way to address the issue.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Tue, Feb 21, 2017 at 2:52 PM, Joel Bernstein  wrote:
>>
>> Hi Dawid,
>>
>> It does appear to be related to an assumption. I'm researching what the
>> issue is. Also, I was using -Dtestmethod to specify the method, which does not
>> seem to be correct. I have a reproduce line that looks like this:
>>
>> NOTE: reproduce with: ant test  -Dtestcase=StreamExpressionTest
>> -Dtests.method=testSignificantTermsStream -Dtests.seed=144DBB9558834AFE
>> -Dtests.slow=true -Dtests.locale=ro -Dtests.timezone=Africa/Johannesburg
>> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>>
>>
>> Using -Dtests.method still gives me problems with the assumption. I'm
>> starting to think that the StreamExpressionTest testcase isn't structured
>> properly to run individual methods. But this would be a recent issue.
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Tue, Feb 21, 2017 at 2:36 PM, Dawid Weiss 
>> wrote:
>>>
>>> Joel,
>>>
>>> your test is being executed, but there is some kind of assumption that
>>> is thrown within the body of the method (or in setup/ teardown) that
>>> is causing the test to be ignored. Assumptions are regular exceptions
>>> -- try wrapping in try/catch and dumping the stack trace if you can't
>>> locate it easily. Alternatively, I think the test execution dumps may
>>> carry a full stack already?
>>>
>>> Dawid
>>>
>>>
>>> On Tue, Feb 21, 2017 at 7:23 PM, Joel Bernstein 
>>> wrote:
>>> >
>>> > A test I've just added is being ignored when it's being called with the
>>> > -Dtestmethod.
>>> >
>>> > Here is the command line:
>>> >
>>> > ant test -Dtestcase=StreamExpressionTest
>>> > -Dtestmethod=testSignifcantTermsStream
>>> >
>>> >
>>> > Here are some snippets from the output:
>>> > --
>>> > --
>>> >
>>> > [junit4]   2> 6440 INFO
>>> >
>>> > (TEST-StreamExpressionTest.testSignifcantTermsStream-seed#[619661D6DA496076])
>>> > [] o.a.s.SolrTestCaseJ4 ###Ending testSignifcantTermsStream
>>> >
>>> >[junit4] IGNOR/A 0.18s |
>>> > StreamExpressionTest.testSignifcantTermsStream
>>> >
>>> >[junit4]> Assumption #1: got: , expected: is 
>>> >
>>> > 
>>> >
>>> >  [junit4] Tests summary: 1 suite, 1 test, 1 ignored (1 assumption)
>>> >
>>> > -
>>> >
>>> > -
>>> >
>>> > Is anyone else seeing this behavior or have any idea why this might be
>>> > happening?
>>> >
>>> >
>>> >
>>> > Thanks
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>




[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3847 - Still Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3847/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([EC6CD25CB9E2753C:6438ED86171E18C4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+155) - Build # 2911 - Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2911/
Java: 64bit/jdk-9-ea+155 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
PeerSync failed. Had to fail back to replication

Stack Trace:
java.lang.AssertionError: PeerSync failed. Had to fail back to replication
at 
__randomizedtesting.SeedInfo.seed([3BC42A6510CBF24F:B39015BFBE379FB7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:290)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

Re: Test framework ignoring -Dtestmethod

2017-02-21 Thread Joel Bernstein
I traced the issue to a specific change that was made to the
StreamExpressionTest. I'm figuring out the best way to address the issue.

Joel Bernstein
http://joelsolr.blogspot.com/

On Tue, Feb 21, 2017 at 2:52 PM, Joel Bernstein  wrote:

> Hi Dawid,
>
> It does appear to be related to an assumption. I'm researching what the
> issue is. Also, I was using -Dtestmethod to specify the method, which does
> not seem to be correct. I have a reproduce line that looks like this:
>
> NOTE: reproduce with: ant test  -Dtestcase=StreamExpressionTest
> -Dtests.method=testSignificantTermsStream -Dtests.seed=144DBB9558834AFE
> -Dtests.slow=true -Dtests.locale=ro -Dtests.timezone=Africa/Johannesburg
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>
>
> Using -Dtests.method still gives me problems with the assumption. I'm
> starting to think that the StreamExpressionTest testcase isn't structured
> properly to run individual methods. But this would be a recent issue.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Tue, Feb 21, 2017 at 2:36 PM, Dawid Weiss 
> wrote:
>
>> Joel,
>>
>> your test is being executed, but there is some kind of assumption that
>> is thrown within the body of the method (or in setup/ teardown) that
>> is causing the test to be ignored. Assumptions are regular exceptions
>> -- try wrapping in try/catch and dumping the stack trace if you can't
>> locate it easily. Alternatively, I think the test execution dumps may
>> carry a full stack already?
>>
>> Dawid
>>
>>
>> On Tue, Feb 21, 2017 at 7:23 PM, Joel Bernstein 
>> wrote:
>> >
>> > A test I've just added is being ignored when it's being called with the
>> > -Dtestmethod.
>> >
>> > Here is the command line:
>> >
>> > ant test -Dtestcase=StreamExpressionTest
>> > -Dtestmethod=testSignifcantTermsStream
>> >
>> >
>> > Here are some snippets from the output:
>> > --
>> > --
>> >
>> > [junit4]   2> 6440 INFO
>> > (TEST-StreamExpressionTest.testSignifcantTermsStream-seed#[
>> 619661D6DA496076])
>> > [] o.a.s.SolrTestCaseJ4 ###Ending testSignifcantTermsStream
>> >
>> >[junit4] IGNOR/A 0.18s | StreamExpressionTest.testSigni
>> fcantTermsStream
>> >
>> >[junit4]> Assumption #1: got: , expected: is 
>> >
>> > 
>> >
>> >  [junit4] Tests summary: 1 suite, 1 test, 1 ignored (1 assumption)
>> >
>> > -
>> >
>> > -
>> >
>> > Is anyone else seeing this behavior or have any idea why this might be
>> > happening?
>> >
>> >
>> >
>> > Thanks
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


Re: Test framework ignoring -Dtestmethod

2017-02-21 Thread Joel Bernstein
Hi Dawid,

It does appear to be related to an assumption. I'm researching what the issue
is. Also, I was using -Dtestmethod to specify the method, which does not seem
to be correct. I have a reproduce line that looks like this:

NOTE: reproduce with: ant test  -Dtestcase=StreamExpressionTest
-Dtests.method=testSignificantTermsStream -Dtests.seed=144DBB9558834AFE
-Dtests.slow=true -Dtests.locale=ro -Dtests.timezone=Africa/Johannesburg
-Dtests.asserts=true -Dtests.file.encoding=UTF-8


Using -Dtests.method still gives me problems with the assumption. I'm
starting to think that the StreamExpressionTest testcase isn't structured
properly to run individual methods. But this would be a recent issue.

Joel Bernstein
http://joelsolr.blogspot.com/

On Tue, Feb 21, 2017 at 2:36 PM, Dawid Weiss  wrote:

> Joel,
>
> your test is being executed, but there is some kind of assumption that
> is thrown within the body of the method (or in setup/ teardown) that
> is causing the test to be ignored. Assumptions are regular exceptions
> -- try wrapping in try/catch and dumping the stack trace if you can't
> locate it easily. Alternatively, I think the test execution dumps may
> carry a full stack already?
>
> Dawid
>
>
> On Tue, Feb 21, 2017 at 7:23 PM, Joel Bernstein 
> wrote:
> >
> > A test I've just added is being ignored when it's being called with the
> > -Dtestmethod.
> >
> > Here is the command line:
> >
> > ant test -Dtestcase=StreamExpressionTest
> > -Dtestmethod=testSignifcantTermsStream
> >
> >
> > Here are some snippets from the output:
> > --
> > --
> >
> > [junit4]   2> 6440 INFO
> > (TEST-StreamExpressionTest.testSignifcantTermsStream-
> seed#[619661D6DA496076])
> > [] o.a.s.SolrTestCaseJ4 ###Ending testSignifcantTermsStream
> >
> >[junit4] IGNOR/A 0.18s | StreamExpressionTest.
> testSignifcantTermsStream
> >
> >[junit4]> Assumption #1: got: , expected: is 
> >
> > 
> >
> >  [junit4] Tests summary: 1 suite, 1 test, 1 ignored (1 assumption)
> >
> > -
> >
> > -
> >
> > Is anyone else seeing this behavior or have any idea why this might be
> > happening?
> >
> >
> >
> > Thanks
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-10182) Backout directory metrics collection that caused performance degradation

2017-02-21 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10182:

Description: The performance degradation we observed as per SOLR-10130 will 
go away if directory factory level metrics collection is disabled by default. 
However, since they will cause the same degradation when enabled, we should 
back out those changes until we find a performant way of implementing such 
metrics collection.  (was: The performance degradation we observed as per 
SOLR-10130 will do away if directory factory level metrics as disabled by 
default. However, since they will cause the same degradation when enabled, we 
should back out those changes until we find a performant way of implementing 
such metrics collection.)

> Backout directory metrics collection that caused performance degradation
> 
>
> Key: SOLR-10182
> URL: https://issues.apache.org/jira/browse/SOLR-10182
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1, 6.4.0
>Reporter: Ishan Chattopadhyaya
>Assignee: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 6.4.2
>
>
> The performance degradation we observed as per SOLR-10130 will go away if 
> directory factory level metrics collection is disabled by default. However, 
> since they will cause the same degradation when enabled, we should back out 
> those changes until we find a performant way of implementing such metrics 
> collection.






Re: 6.4.2 release?

2017-02-21 Thread Ishan Chattopadhyaya
Actually, LUCENE-7698 was not a blocker, just marked for a 6.4.2 release.
Should we make it a blocker?
As per an offline discussion with Andrzej, I've added SOLR-10182 as a
blocker. Tentatively, I'll cut a RC for 6.4.2 by Tuesday.

On Tue, Feb 21, 2017 at 11:35 PM, Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> I would like to volunteer for this 6.4.2 release. Planning to cut a RC as
> soon as blockers are resolved.
> One of the unresolved blocker issues seems to be LUCENE-7698 (I'll take a
> look to see if there are more). If there are more issues that should be
> part of the release, please let me know or mark as blockers in jira.
>
> Thanks,
> Ishan
>
>
> On Thu, Feb 16, 2017 at 3:48 AM, Adrien Grand  wrote:
>
>> I had initially planned on releasing tomorrow but the mirrors replicated
>> faster than I had thought they would so I finished the release today,
>> including the addition of the new 5.5.4 indices for backward testing so I
>> am good with proceeding with a new release now.
>>
>> On Wed, Feb 15, 2017 at 16:13, Adrien Grand wrote:
>>
>> +1
>>
>> One ask I have is to wait for the 5.5.4 release process to be complete so
>> that branch_6_4 has the 5.5.4 backward indices when we cut the first RC. I
>> will let you know when I am done.
>>
>> On Wed, Feb 15, 2017 at 15:53, Christine Poerschke (BLOOMBERG/ LONDON) <
>> cpoersc...@bloomberg.net> wrote:
>>
>> Hi,
>>
>> These two could be minor candidates for inclusion:
>>
>> * https://issues.apache.org/jira/browse/SOLR-10083
>> Fix instanceof check in ConstDoubleSource.equals
>>
>> * https://issues.apache.org/jira/browse/LUCENE-7676
>> FilterCodecReader to override more super-class methods
>>
>> The former had narrowly missed the 6.4.1 release.
>>
>> Regards,
>>
>> Christine
>>
>> From: dev@lucene.apache.org At: 02/15/17 14:27:52
>> To: dev@lucene.apache.org
>> Subject: Re:6.4.2 release?
>>
>> Hi devs,
>>
>> These two issues seem serious enough to warrant a new release from
>> branch_6_4:
>> * SOLR-10130: Serious performance degradation in Solr 6.4.1 due to the
>> new metrics collection
>> * SOLR-10138: Transaction log replay can hit an NPE due to new Metrics
>> code.
>>
>> What do you think? Anything else that should go there?
>>
>> ---
>> Best regards,
>>
>> Andrzej Bialecki
>>
>>
>


[jira] [Created] (SOLR-10182) Backout directory metrics collection that caused performance degradation

2017-02-21 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-10182:
---

 Summary: Backout directory metrics collection that caused 
performance degradation
 Key: SOLR-10182
 URL: https://issues.apache.org/jira/browse/SOLR-10182
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Affects Versions: 6.4.1, 6.4.0
Reporter: Ishan Chattopadhyaya
Assignee: Andrzej Bialecki 
Priority: Blocker
 Fix For: 6.4.2


The performance degradation we observed as per SOLR-10130 will go away if 
directory factory level metrics collection is disabled by default. However, 
since they will cause the same degradation when enabled, we should back out 
those changes until we find a performant way of implementing such metrics 
collection.






Re: Test framework ignoring -Dtestmethod

2017-02-21 Thread Dawid Weiss
Joel,

your test is being executed, but there is some kind of assumption that
is thrown within the body of the method (or in setup/teardown) that
is causing the test to be ignored. Assumptions are regular exceptions
-- try wrapping in try/catch and dumping the stack trace if you can't
locate it easily. Alternatively, I think the test execution dumps may
carry a full stack already?

Dawid
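Dawid's point — that assumptions are ordinary exceptions you can catch to find where they fire — can be sketched without JUnit on the classpath. The AssumptionViolatedException and assumeTrue below are minimal stand-ins for org.junit's versions, not the real JUnit classes:

```java
// Minimal stand-in for org.junit.internal.AssumptionViolatedException;
// a real test would use JUnit's own class.
class AssumptionViolatedException extends RuntimeException {
    AssumptionViolatedException(String msg) { super(msg); }
}

public class AssumptionTraceDemo {
    // Stand-in for org.junit.Assume.assumeTrue(String, boolean).
    static void assumeTrue(String msg, boolean condition) {
        if (!condition) throw new AssumptionViolatedException(msg);
    }

    // A test body whose assumption fires somewhere in the scaffolding.
    static void testBody() {
        assumeTrue("external cluster required", false);
        System.out.println("test ran"); // never reached
    }

    public static void main(String[] args) {
        try {
            testBody();
        } catch (AssumptionViolatedException e) {
            // A runner would report this as IGNOR/A; the stack trace
            // pinpoints the throwing line, which is what Dawid suggests
            // inspecting.
            StackTraceElement origin = e.getStackTrace()[0];
            System.out.println("assumption fired in " + origin.getMethodName()
                + ": " + e.getMessage());
        }
    }
}
```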


On Tue, Feb 21, 2017 at 7:23 PM, Joel Bernstein  wrote:
>
> A test I've just added is being ignored when it's being called with the
> -Dtestmethod.
>
> Here is the command line:
>
> ant test -Dtestcase=StreamExpressionTest
> -Dtestmethod=testSignifcantTermsStream
>
>
> Here are some snippets from the output:
> --
> --
>
> [junit4]   2> 6440 INFO
> (TEST-StreamExpressionTest.testSignifcantTermsStream-seed#[619661D6DA496076])
> [] o.a.s.SolrTestCaseJ4 ###Ending testSignifcantTermsStream
>
>[junit4] IGNOR/A 0.18s | StreamExpressionTest.testSignifcantTermsStream
>
>[junit4]> Assumption #1: got: , expected: is 
>
> 
>
>  [junit4] Tests summary: 1 suite, 1 test, 1 ignored (1 assumption)
>
> -
>
> -
>
> Is anyone else seeing this behavior or have any idea why this might be
> happening?
>
>
>
> Thanks




[jira] [Commented] (LUCENE-7699) Apply graph articulation points optimization to phrase graph queries

2017-02-21 Thread Matt Weber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876559#comment-15876559
 ] 

Matt Weber commented on LUCENE-7699:


Remove {{GraphQuery}} in LUCENE-7702.

> Apply graph articulation points optimization to phrase graph queries
> 
>
> Key: LUCENE-7699
> URL: https://issues.apache.org/jira/browse/LUCENE-7699
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Matt Weber
> Attachments: LUCENE-7699.patch, LUCENE-7699.patch
>
>
> Follow-up to LUCENE-7638 that applies the same articulation point logic to 
> graph phrases using span queries.






[jira] [Comment Edited] (LUCENE-6819) Deprecate index-time boosts?

2017-02-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876549#comment-15876549
 ] 

David Smiley edited comment on LUCENE-6819 at 2/21/17 7:31 PM:
---

I get your point. It's a shame that the particular use of the bits right now 
was decided to have both 3 terms and 4 terms produce the same norm when, IMO, 
there should be more fidelity for them, for the same reason you mentioned.  
Maybe this specifically could be rectified instead of removing index-time 
boosts? 

(edited/removed paragraph I reconsidered)

On the other hand, I appreciate that removing this feature would be the 
simplest route to take and would reduce overall complexity in Lucene.  And 
it's not as if index-time boosts are a must-have; users can emulate them, 
albeit with some work.  Maybe that could be made easier... hmmm.

Anyway, I'm not standing in your way. I'm curious what others think.


was (Author: dsmiley):
I get your point. It's a shame that the particular use of the bits right now 
was decided to have both 3 terms and 4 terms produce the same norm when, IMO, 
there should be more fidelity for them, for the same reason you mentioned.  
Maybe this specifically could be rectified instead of removing index-time 
boosts? 

Perhaps index-time boost support should be moved to the codec {{NormsFormat}}, 
which could have a method to declare whether it supports index-time boosts or 
not? I.e., we don't support it by default, and if you want index-time boosts 
then you must do something to enable it?

On the other hand, I appreciate that removing this feature would be the 
simplest route to take and would reduce overall complexity in Lucene.  And 
it's not as if index-time boosts are a must-have; users can emulate them, 
albeit with some work.  Maybe that could be made easier... hmmm.

Anyway, I'm not standing in your way. I'm curious what others think.

> Deprecate index-time boosts?
> 
>
> Key: LUCENE-6819
> URL: https://issues.apache.org/jira/browse/LUCENE-6819
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>
> Follow-up of this comment: 
> https://issues.apache.org/jira/browse/LUCENE-6818?focusedCommentId=14934801=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14934801
> Index-time boosts are a very expert feature whose behaviour is tied to the 
> Similarity impl. Additionally, users have often been confused by the poor 
> precision due to the fact that we encode values on a single byte. But now we 
> have doc values, which allow you to encode any values the way you want with as 
> much precision as you need, so maybe we should deprecate index-time boosts and 
> recommend encoding index-time scoring factors into doc values fields instead.
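The description's contrast between single-byte norm precision and doc-values precision can be sketched with a stdlib-only example. The encodeByte quantizer below is a hypothetical stand-in for Lucene's single-byte norm encoding (the real one is SmallFloat), not the actual implementation:

```java
public class BoostPrecisionDemo {
    // Hypothetical one-byte quantizer: 1/32 resolution, capped at ~8.
    // Illustrates, not reproduces, Lucene's single-byte norm encoding.
    static int encodeByte(float boost) {
        return Math.min(255, (int) (boost * 32));
    }

    public static void main(String[] args) {
        float a = 1.50f, b = 1.51f;
        // Single-byte norms: two nearby boosts collapse to the same value...
        System.out.println(encodeByte(a) == encodeByte(b));
        // ...while a doc values field can store the full float bits,
        // keeping the two boosts distinguishable at query time.
        System.out.println(Float.floatToIntBits(a) == Float.floatToIntBits(b));
    }
}
```

This is the precision argument in miniature: once quantized to a byte, the distinction between the two boosts is unrecoverable, whereas a numeric doc values field retains it.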






Re: Test framework ignoring -Dtestmethod

2017-02-21 Thread Joel Bernstein
Hi Steve,

It's misspelled the same way in the method and the command line. I'll fix
the misspelling, but something else is going on here that is causing the
test to be ignored.

Joel Bernstein
http://joelsolr.blogspot.com/

On Tue, Feb 21, 2017 at 1:51 PM, Steve Rowe  wrote:

> Hi Joel,
>
> Looks like testSignifcantTermsStream is misspelled?  (missing “i” between
> “f” and “c”)
>
> —-
> Steve
> www.lucidworks.com
>
> > On Feb 21, 2017, at 1:23 PM, Joel Bernstein  wrote:
> >
> >
> > A test I've just added is being ignored when it's being called with the
> -Dtestmethod.
> >
> > Here is the command line:
> >
> > ant test -Dtestcase=StreamExpressionTest -Dtestmethod=
> testSignifcantTermsStream
> >
> >
> > Here are some snippets from the output:
> > --
> > --
> > [junit4]   2> 6440 INFO  (TEST-StreamExpressionTest.
> testSignifcantTermsStream-seed#[619661D6DA496076]) []
> o.a.s.SolrTestCaseJ4 ###Ending testSignifcantTermsStream
> >
> >[junit4] IGNOR/A 0.18s | StreamExpressionTest.
> testSignifcantTermsStream
> >
> >[junit4]> Assumption #1: got: , expected: is 
> >
> > 
> >
> >  [junit4] Tests summary: 1 suite, 1 test, 1 ignored (1 assumption)
> >
> > -
> >
> > -
> >
> > Is anyone else seeing this behavior or have any idea why this might be
> happening?
> >
> >
> > Thanks
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (LUCENE-7702) Remove GraphQuery

2017-02-21 Thread Matt Weber (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Weber updated LUCENE-7702:
---
Description: With LUCENE-7638 and LUCENE-7699 the {{GraphQuery}} wrapper is 
no longer needed and we can use standard queries.  (was: With LUCENE-7638 and 
LUCENE-7699 the {{GraphQuery}}wrapper is no longer needed and we can use 
standard queries.)

> Remove GraphQuery
> -
>
> Key: LUCENE-7702
> URL: https://issues.apache.org/jira/browse/LUCENE-7702
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Matt Weber
> Attachments: LUCENE-7702.patch
>
>
> With LUCENE-7638 and LUCENE-7699 the {{GraphQuery}} wrapper is no longer 
> needed and we can use standard queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7702) Remove GraphQuery

2017-02-21 Thread Matt Weber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876540#comment-15876540
 ] 

Matt Weber commented on LUCENE-7702:


[~jim.ferenczi] [~mikemccand] Patch to remove {{GraphQuery}}.

> Remove GraphQuery
> -
>
> Key: LUCENE-7702
> URL: https://issues.apache.org/jira/browse/LUCENE-7702
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Matt Weber
> Attachments: LUCENE-7702.patch
>
>
> With LUCENE-7638 and LUCENE-7699 the {{GraphQuery}} wrapper is no longer 
> needed and we can use standard queries.






[jira] [Commented] (LUCENE-6819) Deprecate index-time boosts?

2017-02-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876549#comment-15876549
 ] 

David Smiley commented on LUCENE-6819:
--

I get your point. It's a shame that the particular use of the bits right now 
was decided to have both 3 terms and 4 terms produce the same norm when, IMO, 
there should be more fidelity for them, for the same reason you mentioned. 
Maybe this specifically could be rectified instead of removing index-time 
boosts?

Perhaps index-time boost support should be moved to the codec {{NormsFormat}}, 
which could have a method to declare whether it supports index-time boosts or 
not? I.e., we don't support it by default, and if you want index-time boosts 
then you must do something to enable it?

On the other hand, I appreciate that removing this feature would be the 
simplest route to take and would reduce overall complexity in Lucene. And it's 
not like index-time boosts are a must-have; users can emulate them, albeit with 
some work. Maybe that could be made easier... hmmm.

Anyway, I'm not standing in your way. I'm curious what others think.
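The doc-values alternative under discussion can be sketched without Lucene at all: store the per-document boost as data (as a numeric doc value would) and multiply it into the base similarity score at query time, with full float precision instead of the single-byte norm encoding. Everything below (class name, the hard-coded score arrays) is a hypothetical illustration, not Lucene's API.

```java
// Illustration (no Lucene dependency) of emulating index-time boosts
// with a per-document numeric factor: the boost is stored as data and
// multiplied into the base score at query time.
public class DocValueBoost {
    static float[] baseScores = {1.2f, 0.8f, 2.0f};     // e.g. BM25 scores per doc
    static float[] boostDocValues = {1.0f, 3.5f, 0.5f}; // stored per-doc boost factor

    static float score(int doc) {
        // Final score = similarity score * stored boost, no precision loss
        return baseScores[doc] * boostDocValues[doc];
    }

    public static void main(String[] args) {
        for (int doc = 0; doc < baseScores.length; doc++) {
            System.out.println("doc " + doc + " -> " + score(doc));
        }
    }
}
```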

> Deprecate index-time boosts?
> 
>
> Key: LUCENE-6819
> URL: https://issues.apache.org/jira/browse/LUCENE-6819
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>
> Follow-up of this comment: 
> https://issues.apache.org/jira/browse/LUCENE-6818?focusedCommentId=14934801=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14934801
> Index-time boosts are a very expert feature whose behaviour is tied to the 
> Similarity impl. Additionally, users have often been confused by the poor 
> precision due to the fact that we encode values on a single byte. But now we 
> have doc values that allow you to encode any values the way you want with as 
> much precision as you need, so maybe we should deprecate index-time boosts and 
> recommend encoding index-time scoring factors into doc values fields instead.






[jira] [Updated] (LUCENE-7702) Remove GraphQuery

2017-02-21 Thread Matt Weber (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Weber updated LUCENE-7702:
---
Attachment: LUCENE-7702.patch

Patch.  Assumes LUCENE-7699 is also applied.

> Remove GraphQuery
> -
>
> Key: LUCENE-7702
> URL: https://issues.apache.org/jira/browse/LUCENE-7702
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Matt Weber
> Attachments: LUCENE-7702.patch
>
>
> With LUCENE-7638 and LUCENE-7699 the {{GraphQuery}} wrapper is no longer 
> needed and we can use standard queries.






[jira] [Created] (LUCENE-7702) Remove GraphQuery

2017-02-21 Thread Matt Weber (JIRA)
Matt Weber created LUCENE-7702:
--

 Summary: Remove GraphQuery
 Key: LUCENE-7702
 URL: https://issues.apache.org/jira/browse/LUCENE-7702
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Matt Weber


With LUCENE-7638 and LUCENE-7699 the {{GraphQuery}} wrapper is no longer needed 
and we can use standard queries.






Re: Test framework ignoring -Dtestmethod

2017-02-21 Thread Steve Rowe
Hi Joel,

Looks like testSignifcantTermsStream is misspelled?  (missing “i” between “f” 
and “c”)

--
Steve
www.lucidworks.com

> On Feb 21, 2017, at 1:23 PM, Joel Bernstein  wrote:
> 
> 
> A test I've just added is being ignored when it's being called with the 
> -Dtestmethod.
> 
> Here is the command line:
> 
> ant test -Dtestcase=StreamExpressionTest 
> -Dtestmethod=testSignifcantTermsStream
> 
> 
> Here are some snippets from the output:
> --
> --
> [junit4]   2> 6440 INFO  
> (TEST-StreamExpressionTest.testSignifcantTermsStream-seed#[619661D6DA496076]) 
> [] o.a.s.SolrTestCaseJ4 ###Ending testSignifcantTermsStream
> 
>[junit4] IGNOR/A 0.18s | StreamExpressionTest.testSignifcantTermsStream
> 
>[junit4]> Assumption #1: got: , expected: is 
> 
> 
> 
>  [junit4] Tests summary: 1 suite, 1 test, 1 ignored (1 assumption)
> 
> -
> 
> -
> 
> Is anyone else seeing this behavior or have any idea why this might be 
> happening?
> 
>  
> Thanks
> 





[jira] [Updated] (LUCENE-7699) Apply graph articulation points optimization to phrase graph queries

2017-02-21 Thread Matt Weber (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Weber updated LUCENE-7699:
---
Attachment: LUCENE-7699.patch

Updated patch with fixed tests.

> Apply graph articulation points optimization to phrase graph queries
> 
>
> Key: LUCENE-7699
> URL: https://issues.apache.org/jira/browse/LUCENE-7699
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Matt Weber
> Attachments: LUCENE-7699.patch, LUCENE-7699.patch
>
>
> Follow-up to LUCENE-7638 that applies the same articulation point logic to 
> graph phrases using span queries.






[jira] [Commented] (LUCENE-7696) Remove ancient projects from the dist area

2017-02-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876431#comment-15876431
 ] 

Hoss Man commented on LUCENE-7696:
--

The archive is, by design, supposed to be a permanent archive of everything ever 
released, at the path where it was released.  I'm not sure if we (Lucene) even 
have a mechanism to remove things from it -- pretty sure only infra has that 
power?

(Not saying cleanup wouldn't be nice, just saying I don't think there's much we 
can do about it other than filing an INFRA request)

> Remove ancient projects from the dist area
> --
>
> Key: LUCENE-7696
> URL: https://issues.apache.org/jira/browse/LUCENE-7696
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/website
>Reporter: Jan Høydahl
>  Labels: archive, dist, download
>
> In https://archive.apache.org/dist/lucene/ we have these folders:
> {noformat}
> [DIR] hadoop/ 2008-01-22 23:40-   
> [DIR] java/   2017-02-14 08:33-   
> [DIR] mahout/ 2015-02-17 20:27-   
> [DIR] nutch/  2015-02-17 20:29-   
> [DIR] pylucene/   2017-02-13 22:00-   
> [DIR] solr/   2017-02-14 08:33-   
> [DIR] tika/   2015-02-17 20:29-   
> [   ] KEYS2016-08-30 09:59  148K  
> {noformat}
> Nobody will expect to find hadoop, mahout, nutch and tika here anymore, so 
> why not clean up?
> I double checked, and both https://archive.apache.org/dist/hadoop/core/ and 
> https://archive.apache.org/dist/mahout/ have a full copy of all releases, so 
> we lose nothing. 
> For https://archive.apache.org/dist/nutch/, they do not have 0.6-0.8 releases 
> that we have under lucene, and https://archive.apache.org/dist/tika/ do not 
> have v0.2-0.7 that only exists with us. For these two projects we could ask 
> their PMC to copy over the early versions and then we nuk'em?
> Any other reason to keep these in the lucene area?






Test framework ignoring -Dtestmethod

2017-02-21 Thread Joel Bernstein
A test I've just added is being ignored when it's being called with the
-Dtestmethod.

Here is the command line:

ant test -Dtestcase=StreamExpressionTest
-Dtestmethod=testSignifcantTermsStream

Here are some snippets from the output:
--
--

[junit4]   2> 6440 INFO
(TEST-StreamExpressionTest.testSignifcantTermsStream-seed#[619661D6DA496076])
[] o.a.s.SolrTestCaseJ4 ###Ending testSignifcantTermsStream

   [junit4] IGNOR/A 0.18s | StreamExpressionTest.testSignifcantTermsStream

   [junit4]> Assumption #1: got: , expected: is 



 [junit4] Tests summary: 1 suite, 1 test, 1 ignored (1 assumption)

-

-

Is anyone else seeing this behavior or have any idea why this might be
happening?



Thanks


[jira] [Commented] (SOLR-10020) CoreAdminHandler silently swallows some errors

2017-02-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876411#comment-15876411
 ] 

Mike Drob commented on SOLR-10020:
--

Yea, I think this one is ready.

> CoreAdminHandler silently swallows some errors
> --
>
> Key: SOLR-10020
> URL: https://issues.apache.org/jira/browse/SOLR-10020
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10020.patch, SOLR-10020.patch, SOLR-10020.patch
>
>
> With the setup on SOLR-10006, after removing some index files and starting 
> that Solr instance I tried issuing a REQUESTRECOVERY command and it came back 
> as a success even though nothing actually happened. When the core is 
> accessed, a core init exception is returned by subsequent calls to getCore(). 
> There is no catch block after the try so no error is returned.
> Looking through the code I see several other commands that have a similar 
> pattern:
>  FORCEPREPAREFORLEADERSHIP_OP
> LISTSNAPSHOTS_OP
> getCoreStatus
> and perhaps others. getCore() can throw an exception, about the only explicit 
> one it does throw is if the core has an initialization error.






[jira] [Assigned] (SOLR-10020) CoreAdminHandler silently swallows some errors

2017-02-21 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-10020:
-

Assignee: Erick Erickson

> CoreAdminHandler silently swallows some errors
> --
>
> Key: SOLR-10020
> URL: https://issues.apache.org/jira/browse/SOLR-10020
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10020.patch, SOLR-10020.patch, SOLR-10020.patch
>
>
> With the setup on SOLR-10006, after removing some index files and starting 
> that Solr instance I tried issuing a REQUESTRECOVERY command and it came back 
> as a success even though nothing actually happened. When the core is 
> accessed, a core init exception is returned by subsequent calls to getCore(). 
> There is no catch block after the try so no error is returned.
> Looking through the code I see several other commands that have a similar 
> pattern:
>  FORCEPREPAREFORLEADERSHIP_OP
> LISTSNAPSHOTS_OP
> getCoreStatus
> and perhaps others. getCore() can throw an exception, about the only explicit 
> one it does throw is if the core has an initialization error.






[jira] [Commented] (SOLR-10020) CoreAdminHandler silently swallows some errors

2017-02-21 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876405#comment-15876405
 ] 

Erick Erickson commented on SOLR-10020:
---

[~mdrob] Do you think this patch is ready to commit? If so I'll look it over 
again and commit it sometime Real Soon Now.

Erick

> CoreAdminHandler silently swallows some errors
> --
>
> Key: SOLR-10020
> URL: https://issues.apache.org/jira/browse/SOLR-10020
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
> Attachments: SOLR-10020.patch, SOLR-10020.patch, SOLR-10020.patch
>
>
> With the setup on SOLR-10006, after removing some index files and starting 
> that Solr instance I tried issuing a REQUESTRECOVERY command and it came back 
> as a success even though nothing actually happened. When the core is 
> accessed, a core init exception is returned by subsequent calls to getCore(). 
> There is no catch block after the try so no error is returned.
> Looking through the code I see several other commands that have a similar 
> pattern:
>  FORCEPREPAREFORLEADERSHIP_OP
> LISTSNAPSHOTS_OP
> getCoreStatus
> and perhaps others. getCore() can throw an exception, about the only explicit 
> one it does throw is if the core has an initialization error.






Re: 6.4.2 release?

2017-02-21 Thread Ishan Chattopadhyaya
I would like to volunteer for this 6.4.2 release. Planning to cut an RC as
soon as blockers are resolved.
One of the unresolved blocker issues seems to be LUCENE-7698 (I'll take a
look to see if there are more). If there are more issues that should be
part of the release, please let me know or mark as blockers in jira.

Thanks,
Ishan


On Thu, Feb 16, 2017 at 3:48 AM, Adrien Grand  wrote:

> I had initially planned on releasing tomorrow but the mirrors replicated
> faster than I had thought they would so I finished the release today,
> including the addition of the new 5.5.4 indices for backward testing so I
> am good with proceeding with a new release now.
>
> On Wed, Feb 15, 2017 at 16:13, Adrien Grand  wrote:
>
> +1
>
> One ask I have is to wait for the 5.5.4 release process to be complete so
> that branch_6_4 has the 5.5.4 backward indices when we cut the first RC. I
> will let you know when I am done.
>
> On Wed, Feb 15, 2017 at 15:53, Christine Poerschke (BLOOMBERG/ LONDON) <
> cpoersc...@bloomberg.net> wrote:
>
> Hi,
>
> These two could be minor candidates for inclusion:
>
> * https://issues.apache.org/jira/browse/SOLR-10083
> Fix instanceof check in ConstDoubleSource.equals
>
> * https://issues.apache.org/jira/browse/LUCENE-7676
> FilterCodecReader to override more super-class methods
>
> The former had narrowly missed the 6.4.1 release.
>
> Regards,
>
> Christine
>
> From: dev@lucene.apache.org At: 02/15/17 14:27:52
> To: dev@lucene.apache.org
> Subject: Re:6.4.2 release?
>
> Hi devs,
>
> These two issues seem serious enough to warrant a new release from
> branch_6_4:
> * SOLR-10130: Serious performance degradation in Solr 6.4.1 due to the new
> metrics collection
> * SOLR-10138: Transaction log replay can hit an NPE due to new Metrics
> code.
>
> What do you think? Anything else that should go there?
>
> ---
> Best regards,
>
> Andrzej Bialecki
>
>


[JENKINS] Lucene-Solr-Tests-master - Build # 1685 - Still Unstable

2017-02-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1685/

1 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap

Error Message:
Document mismatch on target after sync expected:<1000> but was:<0>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<1000> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([F9831DC4439C5259:2E5432B3F7C3CA1E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap(CdcrBootstrapTest.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12381 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrBootstrapTest
   [junit4]   2> Creating 

[jira] [Commented] (SOLR-10046) Create UninvertDocValuesMergePolicy

2017-02-21 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876323#comment-15876323
 ] 

Keith Laban commented on SOLR-10046:


Hi Christine, I was able to do the above.

- I created a new commit on top of master to clean up the working branch
- Added javadocs and removed TODOs
- {{ant precommit}} passes

> Create UninvertDocValuesMergePolicy
> ---
>
> Key: SOLR-10046
> URL: https://issues.apache.org/jira/browse/SOLR-10046
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>Assignee: Christine Poerschke
>
> Create a merge policy that can detect schema changes and use 
> UninvertingReader to uninvert fields and write docvalues into merged segments 
> when a field has docvalues enabled.
> The current behavior is to write null values in the merged segment which can 
> lead to data integrity problems when sorting or faceting pending a full 
> reindex. 
> With this patch it would still be recommended to reindex when adding 
> docvalues for performance reasons, as it is not guaranteed that all segments 
> will be merged with docvalues turned on.






[jira] [Commented] (SOLR-8045) Deploy Solr in ROOT (/) path

2017-02-21 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876295#comment-15876295
 ] 

Noble Paul commented on SOLR-8045:
--

[~markrmil...@gmail.com] it allows us to use other paths such as {{/v2/*}} or 
{{/ui/*}} 

> Deploy Solr in ROOT (/) path 
> -
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same






CharTokenizer and hard 255 char limit

2017-02-21 Thread Erick Erickson
What do people think about making this configurable? At the moment
it's a constant that can't be altered. I see at least one situation in
the field where very long payloads are being added (look, it's
special) with a custom tokenizer that subclasses CharTokenizer which
truncates the incoming "word".

Using KeywordTokenizer can get around this as it has a c'tor that
takes a buffer length. But KeywordTokenizer obviously doesn't let you,
well, parse tokens.

Should I raise a JIRA or are there good reasons this is hard-coded?

Erick
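The behavior Erick describes can be shown with a standalone sketch (this is not Lucene's CharTokenizer source; the class, the helper method, and the MAX_WORD_LEN constant are illustrative stand-ins for its fixed internal buffer): a hard-coded maximum token length silently drops the tail of very long tokens, while a configurable limit, like the buffer-length constructor KeywordTokenizer already has, would preserve them.

```java
// Standalone illustration of a tokenizer with a hard-coded maximum
// token length, mirroring the effect of CharTokenizer's fixed buffer:
// characters beyond maxWordLen are silently dropped.
import java.util.ArrayList;
import java.util.List;

public class FixedBufferTokenizer {
    // Hypothetical constant standing in for the hard-coded limit.
    static final int MAX_WORD_LEN = 255;

    static List<String> tokenize(String input, int maxWordLen) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : input.toCharArray()) {
            if (Character.isWhitespace(c)) {
                if (current.length() > 0) {     // end of a token
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else if (current.length() < maxWordLen) {
                current.append(c);              // overflow chars are dropped
            }
        }
        if (current.length() > 0) tokens.add(current.toString());
        return tokens;
    }

    public static void main(String[] args) {
        String longPayload = "x".repeat(300) + " short";
        // With the fixed limit, the 300-char token is truncated to 255.
        System.out.println(tokenize(longPayload, MAX_WORD_LEN).get(0).length()); // 255
        // A configurable limit (the proposal) would preserve it intact.
        System.out.println(tokenize(longPayload, 400).get(0).length()); // 300
    }
}
```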




[jira] [Updated] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency

2017-02-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samuel García Martínez updated SOLR-10181:
--
Description: 
When several CREATEALIAS commands are run at the same time by the OCP, it can 
happen that, even though the API response is OK, some of those CREATEALIAS 
changes are lost.

h3. The problem
The problem happens because the CREATEALIAS cmd implementation relies on 
_zkStateReader.getAliases()_ to create the map that will be stored in ZK. If 
several threads reach that line at the same time it will happen that only one 
will be stored correctly and the others will be overridden.

The code I'm referencing is [this 
piece|https://github.com/apache/lucene-solr/blob/8c1e67e30e071ceed636083532d4598bf6a8791f/solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java#L65].
 As an example, let's say that the current aliases map has {a:colA, b:colB}. If 
two CREATEALIAS (one adding c:colC and other creating d:colD) are submitted to 
the _tpe_ and reach that line at the same time, the resulting maps will look 
like {a:colA, b:colB, c:colC} and {a:colA, b:colB, d:colD} and only one of them 
will be stored correctly in ZK, resulting in "data loss", meaning that API is 
returning OK despite that it didn't work as expected.

On top of this, another concurrency problem can occur when the command checks 
whether the alias has been set using the _checkForAlias_ method. If the two 
CREATEALIAS ZK writes ran at the same time, the alias check for one of the 
threads can time out, since only one of the writes has "survived" and been 
"committed" to the _zkStateReader.getAliases()_ map.

h3. How to fix it
I can post a patch for this if someone gives me directions on how it should be 
fixed. As I see it, there are two places where the issue can be fixed: in the 
processor (OverseerCollectionMessageHandler) in a generic way or inside the 
command itself.

h5. The processor fix
The locking mechanism (_OverseerCollectionMessageHandler#lockTask_) should be 
the place to fix this inside the processor. I thought that adding the operation 
name instead of only "collection" or "name" to the locking key would fix the 
issue, but I realized that the problem will happen anyway if the concurrency 
happens between different operations modifying the same resource (like 
CREATEALIAS and DELETEALIAS do). So, if this should be the path to follow I 
don't know what should be used as a locking key.

h5. The command fix
Fixing it at the command level (_CreateAliasCmd_ and _DeleteAliasCmd_) would be 
relatively easy, using optimistic locking, i.e., using the aliases.json ZK 
version in keeper.setData. To do that, the Aliases class should offer the 
aliases version so the commands can forward that version with the update and 
retry when it fails.

  was:
When several CREATEALIAS are run at the same time by the OCP it could happen 
that, even tho the API response is OK, some of those CREATEALIAS request 
changes are lost.

The problem happens because the CREATEALIAS cmd implementation relies on 
zkStateReader.getAliases() to create the map that will be stored in ZK. If 
several threads reach that line at the same time it will happen that only one 
will be stored correctly and the others will be overridden.

The code I'm referencing is [this 
piece|https://github.com/apache/lucene-solr/blob/8c1e67e30e071ceed636083532d4598bf6a8791f/solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java#L65].
 As an example, let's say that the current aliases map has {a:colA, b:colB}. If 
two CREATEALIAS (one adding c:colC and other creating d:colD) are scheduled in 
the _tpe_ and reach that line at the same time, the resulting maps will look 
like {a:colA, b:colB, c:colC} and {a:colA, b:colB, d:colD} and only one of them 
will be stored correctly in ZK, resulting in "data loss", meaning that API is 
returning OK despite that it didn't work as expected.

On top of this, another concurrency problem could happen when the command 
checks the alias being set using _checkForAlias_ method. After the two 
CREATEALIAS zk write being run at the same time, when the alias is being check 
one of the threads can timeout since only one of them has "survived" and has 
been written to the _zkStateReader.getAliases()_ map.

I can post a patch to this if someone gives me directions on how it sould be 
fixed. As I see this, there are two places where the issue can be fixed: in the 
processor (OverseerCollectionMessageHandler) in a generic way or inside the 
command itself.

The processor fix
The locking mechanism (OverseerCollectionMessageHandler#lockTask) should be the 
place to fix this inside the processor. I thought that adding the operation 
name instead of only "collection" or "name" to the locking key would fix the 
issue, but I realized that the problem will happen anyway if the concurrency 
happens between different operations modifying the same 

[jira] [Created] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency

2017-02-21 Thread JIRA
Samuel García Martínez created SOLR-10181:
-

 Summary: CREATEALIAS and DELETEALIAS commands consistency problems 
under concurrency
 Key: SOLR-10181
 URL: https://issues.apache.org/jira/browse/SOLR-10181
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.4.1, 5.5, 5.4, 5.3
Reporter: Samuel García Martínez


When several CREATEALIAS commands are run at the same time by the OCP, it can 
happen that, even though the API response is OK, some of those CREATEALIAS 
changes are lost.

The problem happens because the CREATEALIAS cmd implementation relies on 
zkStateReader.getAliases() to create the map that will be stored in ZK. If 
several threads reach that line at the same time it will happen that only one 
will be stored correctly and the others will be overridden.

The code I'm referencing is [this 
piece|https://github.com/apache/lucene-solr/blob/8c1e67e30e071ceed636083532d4598bf6a8791f/solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java#L65].
 As an example, let's say that the current aliases map has {a:colA, b:colB}. If 
two CREATEALIAS (one adding c:colC and other creating d:colD) are scheduled in 
the _tpe_ and reach that line at the same time, the resulting maps will look 
like {a:colA, b:colB, c:colC} and {a:colA, b:colB, d:colD} and only one of them 
will be stored correctly in ZK, resulting in "data loss", meaning that API is 
returning OK despite that it didn't work as expected.

On top of this, another concurrency problem can occur when the command checks 
whether the alias has been set, using the _checkForAlias_ method. If the two 
CREATEALIAS ZK writes ran at the same time, the alias check for one of the 
threads can time out, since only one of them has "survived" and has been 
written to the _zkStateReader.getAliases()_ map.

I can post a patch for this if someone gives me directions on how it should be 
fixed. As I see it, there are two places where the issue can be fixed: in the 
processor (OverseerCollectionMessageHandler) in a generic way or inside the 
command itself.

The processor fix
The locking mechanism (OverseerCollectionMessageHandler#lockTask) would be the 
place to fix this inside the processor. I thought that adding the operation 
name, instead of only "collection" or "name", to the locking key would fix the 
issue, but I realized the problem happens anyway when the contention is between 
different operations modifying the same resource (as CREATEALIAS and 
DELETEALIAS do). So if this is the path to follow, I don't know what should be 
used as the locking key.
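One possible shape for a processor-side fix, sketched under the assumption that the lock key is the mutated resource rather than the operation name, so CREATEALIAS and DELETEALIAS serialize against each other (class and key names are hypothetical, not Solr's):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: one lock per mutated resource. Keying on the resource
// (e.g. the aliases store) rather than the operation serializes all commands
// that touch the same data, whatever the operation name is.
public class ResourceKeyedLocks {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    private ReentrantLock lockFor(String resourceKey) {
        return locks.computeIfAbsent(resourceKey, k -> new ReentrantLock());
    }

    public void withLock(String resourceKey, Runnable task) {
        ReentrantLock lock = lockFor(resourceKey);
        lock.lock();
        try {
            task.run(); // the command body runs exclusively for this resource
        } finally {
            lock.unlock();
        }
    }
}
```

Usage would be something like `locks.withLock("aliases", () -> { /* CREATEALIAS or DELETEALIAS body */ });` so both operations contend on the same key.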

The command fix
Fixing it at the command level (CreateAliasCmd and DeleteAliasCmd) would be 
relatively easy using optimistic locking, i.e., passing the aliases.json zk 
version to keeper.setData. To do that, the Aliases class should expose the 
aliases version so the commands can forward it with the update and retry when 
the conditional write fails.
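The command-level fix could be sketched as below. This is a hypothetical, in-memory stand-in for the aliases znode (real code would call setData with the expected version and catch ZooKeeper's BadVersionException); all class and method names here are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the aliases.json znode: writes succeed only when the caller's
// expected version is still current, mimicking ZooKeeper's versioned setData.
class VersionedStore {
    private Map<String, String> data = new HashMap<>();
    private int version = 0;

    // Atomic snapshot of the data plus its version (like getData with a Stat).
    synchronized Object[] snapshot() {
        return new Object[] { new HashMap<>(data), version };
    }

    // Conditional write: rejected if another writer got in first.
    synchronized boolean setData(Map<String, String> next, int expectedVersion) {
        if (expectedVersion != version) {
            return false; // real ZooKeeper would throw BadVersionException
        }
        data = new HashMap<>(next);
        version++;
        return true;
    }

    synchronized Map<String, String> read() {
        return new HashMap<>(data);
    }
}

public class CreateAliasRetryDemo {
    @SuppressWarnings("unchecked")
    static void createAlias(VersionedStore store, String alias, String coll) {
        while (true) { // retry until our conditional write wins
            Object[] snap = store.snapshot();
            Map<String, String> next = (Map<String, String>) snap[0];
            int version = (Integer) snap[1];
            next.put(alias, coll);
            if (store.setData(next, version)) {
                return; // write accepted at the version we read
            }
            // else another thread won the race; re-read and retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VersionedStore store = new VersionedStore();
        Thread t1 = new Thread(() -> createAlias(store, "c", "colC"));
        Thread t2 = new Thread(() -> createAlias(store, "d", "colD"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Both writes survive: a conflict is detected and retried, not lost.
        System.out.println(store.read());
    }
}
```

With this scheme the losing CREATEALIAS re-reads the map (now containing the winner's alias) and reapplies its own change, so neither update is dropped.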



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_121) - Build # 6409 - Unstable!

2017-02-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6409/
Java: 64bit/jdk1.8.0_121 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at http://127.0.0.1:56647/solr/awhollynewcollection_0: 
Expected mime type application/octet-stream but got text/html.
HTTP ERROR 510: Problem accessing /solr/awhollynewcollection_0/select. Reason:
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:7},code=510}
(Powered by Jetty:// 9.3.14.v20161028)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:56647/solr/awhollynewcollection_0: Expected 
mime type application/octet-stream but got text/html.
HTTP ERROR 510: Problem accessing /solr/awhollynewcollection_0/select. Reason:
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:7},code=510}
(Powered by Jetty:// 9.3.14.v20161028)

at 
__randomizedtesting.SeedInfo.seed([4CC93650BA59FFC0:4BC42E4BC6AD055]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:595)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1358)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1212)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1212)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1212)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1212)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1212)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:523)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)

[jira] [Commented] (SOLR-8045) Deploy Solr in ROOT (/) path

2017-02-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876198#comment-15876198
 ] 

Mark Miller commented on SOLR-8045:
---

bq. This does not mean that the path to access Solr will be changed. All paths 
will remain as is and would behave exactly the same

So what are the reasons for the issue then?

> Deploy Solr in ROOT (/) path 
> -
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same






[jira] [Commented] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2017-02-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876188#comment-15876188
 ] 

Michael McCandless commented on LUCENE-7465:


That test failure was actually a real bug in both {{SimplePatternTokenizer}} 
and {{SimpleSplitPatternTokenizer}}!  Yay for {{TestRandomChains}} ;)  I pushed 
a fix.

> Add a PatternTokenizer that uses Lucene's RegExp implementation
> ---
>
> Key: LUCENE-7465
> URL: https://issues.apache.org/jira/browse/LUCENE-7465
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7465.patch, LUCENE-7465.patch
>
>
> I think there are some nice benefits to a version of PatternTokenizer that 
> uses Lucene's RegExp impl instead of the JDK's:
>   * Lucene's RegExp is compiled to a DFA up front, so if a "too hard" RegExp 
> is attempted the user discovers it up front instead of later on when a 
> "lucky" document arrives
>   * It processes the incoming characters as a stream, only pulling 128 
> characters at a time, vs the existing {{PatternTokenizer}} which currently 
> reads the entire string up front (this has caused heap problems in the past)
>   * It should be fast.
> I named it {{SimplePatternTokenizer}}, and it still needs a factory and 
> improved tests, but I think it's otherwise close.
> It currently does not take a {{group}} parameter because Lucene's RegExps 
> don't yet implement sub group capture.  I think we could add that at some 
> point, but it's a bit tricky.
> This doesn't even have group=-1 support (like String.split) ... I think if we 
> did that we should maybe name it differently 
> ({{SimplePatternSplitTokenizer}}?).






[jira] [Commented] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876185#comment-15876185
 ] 

ASF subversion and git services commented on LUCENE-7465:
-

Commit c3028b32207b8837cdaf29918edd4e0cdc9621ad in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c3028b3 ]

LUCENE-7465: fix corner case in SimplePattern/SplitTokenizer when lookahead 
hits end of input








[jira] [Commented] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876181#comment-15876181
 ] 

ASF subversion and git services commented on LUCENE-7465:
-

Commit 2d03aa21a2b674d36e201f6309e646f37771b73b in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2d03aa2 ]

LUCENE-7465: fix corner case in SimplePattern/SplitTokenizer when lookahead 
hits end of input








[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-02-21 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876162#comment-15876162
 ] 

Joel Bernstein edited comment on SOLR-8593 at 2/21/17 3:38 PM:
---

I was thinking about merging 
https://github.com/apache/lucene-solr/tree/jira/solr-8593 into branch_6x rather 
than cherry-picking from master. There is one commit that will need to be 
reverted because it's only valid in master, but that should be fairly easy to 
do, I think.



was (Author: joel.bernstein):
I was thinking about merging  
https://github.com/apache/lucene-solr/tree/jira/solr-8593 into branch_6x rather 
then cherry picking from master. There is one commit that will need to be 
reverted because it's only valid in master,  but that should fairly easy to do 
I think.


> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>  Components: Parallel SQL
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-02-21 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876162#comment-15876162
 ] 

Joel Bernstein commented on SOLR-8593:
--

I was thinking about merging 
https://github.com/apache/lucene-solr/tree/jira/solr-8593 into branch_6x rather 
than cherry-picking from master. There is one commit that will need to be 
reverted because it's only valid in master, but that should be fairly easy to 
do, I think.








[jira] [Commented] (LUCENE-7699) Apply graph articulation points optimization to phrase graph queries

2017-02-21 Thread Matt Weber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876101#comment-15876101
 ] 

Matt Weber commented on LUCENE-7699:


[~jim.ferenczi] That check was intended, but as you said, it is essentially 
pointless. I will remove it. Yes, I think {{GraphQuery}} should go as well. 
It was only needed when we had to detect the graph to apply minimum-should-match 
and phrase slop, which is no longer the case. Should that be a separate 
issue?

> Apply graph articulation points optimization to phrase graph queries
> 
>
> Key: LUCENE-7699
> URL: https://issues.apache.org/jira/browse/LUCENE-7699
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Matt Weber
> Attachments: LUCENE-7699.patch
>
>
> Follow-up to LUCENE-7638 that applies the same articulation point logic to 
> graph phrases using span queries.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-02-21 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876076#comment-15876076
 ] 

Kevin Risden commented on SOLR-8593:


[~joel.bernstein] - Thoughts on back porting this to branch_6x?







[jira] [Commented] (LUCENE-7628) Add a getMatchingChildren() method to DisjunctionScorer

2017-02-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876069#comment-15876069
 ] 

ASF subversion and git services commented on LUCENE-7628:
-

Commit cbe7e87d82a5a64fb8b019b215b2c59814ef5462 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cbe7e87 ]

LUCENE-7628: Scorer.getChildren() returns only matching child scorers


> Add a getMatchingChildren() method to DisjunctionScorer
> ---
>
> Key: LUCENE-7628
> URL: https://issues.apache.org/jira/browse/LUCENE-7628
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.5
>
> Attachments: LUCENE-7628.patch, LUCENE-7628.patch
>
>
> This one is a bit convoluted, so bear with me...
> The luwak highlighter works by rewriting queries into their Span-equivalents, 
> and then running them with a special Collector.  At each matching doc, the 
> highlighter gathers all the Spans objects positioned on the current doc and 
> collects their positions using the SpanCollection API.
> Some queries can't be translated into Spans.  For those queries that generate 
> Scorers with ChildScorers, like BooleanQuery, we can call .getChildren() on 
> the Scorer and see if any of them are SpanScorers, and for those that aren't 
> we can call .getChildren() again and recurse down.  For each child scorer, we 
> check that it's positioned on the current document, so non-matching 
> subscorers can be skipped.
> This all works correctly *except* in the case of a DisjunctionScorer where 
> one of the children is a two-phase iterator that has matched its 
> approximation, but not its refinement query.  A SpanScorer in this situation 
> will be correctly positioned on the current document, but its Spans will be 
> in an undefined state, meaning the highlighter will either collect incorrect 
> hits, or it will throw an Exception and prevent hits being collected from 
> other subspans.
> We've tried various ways around this (including forking SpanNearQuery and 
> adding a bunch of slow position checks to it that are used only by the 
> highlighting code), but it turns out that the simplest fix is to add a new 
> method to DisjunctionScorer that only returns the currently matching child 
> Scorers.  It's a bit of a hack, and it won't be used anywhere else, but it's 
> a fairly small and contained hack.





