[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16559 - Still Failing!

2016-04-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16559/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.request.SchemaTest.testSchemaRequestAccuracy

Error Message:
java.util.LinkedHashMap cannot be cast to org.apache.solr.common.util.NamedList

Stack Trace:
java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to 
org.apache.solr.common.util.NamedList
at 
__randomizedtesting.SeedInfo.seed([EA2EA88E459B033B:6DD2A7204B63FEBC]:0)
at 
org.apache.solr.client.solrj.response.schema.SchemaResponse.setResponse(SchemaResponse.java:252)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.client.solrj.request.SchemaTest.testSchemaRequestAccuracy(SchemaTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
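
For reference, the failing cast is the pattern sketched below. This is only an illustrative sketch (the helper and class names are hypothetical, not the actual SchemaResponse fix): a response entry the parser produced as a LinkedHashMap cannot be cast to NamedList, but it can be converted.

{noformat}
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.solr.common.util.NamedList;

public class NamedListCastSketch {
  // Hypothetical helper: accept either a NamedList or a plain Map instead of
  // casting blindly (the blind cast is what throws the ClassCastException above).
  @SuppressWarnings("unchecked")
  static NamedList<Object> asNamedList(Object value) {
    if (value instanceof NamedList) {
      return (NamedList<Object>) value;
    }
    if (value instanceof Map) {
      NamedList<Object> nl = new NamedList<>();
      for (Map.Entry<String, Object> e : ((Map<String, Object>) value).entrySet()) {
        nl.add(e.getKey(), e.getValue());
      }
      return nl;
    }
    throw new IllegalArgumentException("Unexpected response type: " + value.getClass());
  }

  public static void main(String[] args) {
    Map<String, Object> parsed = new LinkedHashMap<>();  // what the response parser produced
    parsed.put("name", "id");
    parsed.put("type", "string");
    // NamedList<Object> bad = (NamedList<Object>) (Object) parsed;  // would throw ClassCastException
    System.out.println(asNamedList(parsed));              // safe conversion
  }
}
{noformat}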




Build Log:

[JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 1 - Failure

2016-04-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/1/

5 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:60491/sbh/gz

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:60491/sbh/gz
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:586)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:400)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:458)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Updated] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-21 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9028:
-
Attachment: os.x.failure.txt

Fails for me on OS X 10.11.4; {{os.x.failure.txt}} is the test output.  
{{java -version}} says:

{noformat}
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)
{noformat}

> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch, os.x.failure.txt
>
>
> While looking into SOLR-8970 I realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc.)
> I'm working up a patch to fix all of this, and add some much-needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7242) LatLonTree should build a balanced tree

2016-04-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253304#comment-15253304
 ] 

Robert Muir commented on LUCENE-7242:
-

I checked startup cost and everything is ok (numbers are so small that it's all 
noise).

> LatLonTree should build a balanced tree
> ---
>
> Key: LUCENE-7242
> URL: https://issues.apache.org/jira/browse/LUCENE-7242
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7242.patch
>
>
> [~rjernst]'s idea: we create an interval tree of edges, but with randomized 
> order.
> Instead we can speed things up more by creating a balanced tree up front.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8992) Restore Schema API GET method functionality removed by SOLR-8736

2016-04-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253302#comment-15253302
 ] 

Steve Rowe commented on SOLR-8992:
--

+1 to the latest patch.  It did take me a while to grok the default case in 
SchemaHandler.handleGET() though - the local variable names (realName, 
fieldName, name, parts) are confusing.  I suggest rethinking them so it's 
clearer what's going on.

Noble, if you can't fix the SchemaTest failures right away, we should revert 
your e8cc19e commit until a fix is in place.

> Restore Schema API GET method functionality removed by SOLR-8736
> 
>
> Key: SOLR-8992
> URL: https://issues.apache.org/jira/browse/SOLR-8992
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Noble Paul
> Attachments: SOLR-8992.patch, SOLR-8992.patch, SOLR-8992.patch
>
>
> The following schema API GET functionality was removed under SOLR-8736; some 
> of this functionality should be restored:
> * {{schema/copyfields}}:
> ** The following information is no longer output:
> *** {{destDynamicBase}}: the matching dynamic field pattern for the 
> destination
> *** {{sourceDynamicBase}}: the matching dynamic field pattern for the source
> ** The following request parameters are no longer supported:
> *** {{dest.fl}}: include only copyFields that have one of these as a 
> destination
> *** {{source.fl}}: include only copyFields that have one of these as a source
> * {{schema/dynamicfields}}:
> ** The following request parameters are no longer supported:
> *** {{fl}}: a comma and/or space separated list of dynamic field patterns to 
> include 
> * {{schema/fields}} and {{schema/fields/_fieldname_}}:
> ** The following information is no longer output:
> *** {{dynamicBase}}: the matching dynamic field pattern, if the 
> {{includeDynamic}} param is given (see below) 
> ** The following request parameters are no longer supported:
> *** {{fl}}: (only supported without {{/_fieldname_}}): a comma and/or space 
> separated list of fields to include 
> *** {{includeDynamic}}: output the matching dynamic field pattern as 
> {{dynamicBase}}, if {{_fieldname_}}, or field(s) listed in {{fl}} param, are 
> not explicitly declared in the schema
> * {{schema/fieldtypes}} and {{schema/fieldtypes/_typename_}}:
> ** The following information is no longer output: 
> *** {{fields}}: the fields with the given field type
> *** {{dynamicFields}}: the dynamic fields with the given field type  
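
To make the removed parameters concrete, a few illustrative GET requests against a placeholder host and collection (field names and values are hypothetical; this only demonstrates the parameters listed above):

{noformat}
public class SchemaGetParamExamples {
  public static void main(String[] args) {
    String base = "http://localhost:8983/solr/mycollection/schema";
    String[] examples = {
      base + "/copyfields?dest.fl=text",                  // only copyFields whose destination is "text"
      base + "/copyfields?source.fl=title,author",        // only copyFields sourced from these fields
      base + "/dynamicfields?fl=*_s,*_i",                 // only these dynamic field patterns
      base + "/fields?fl=id,title&includeDynamic=true",   // report dynamicBase for dynamically matched fields
      base + "/fieldtypes/text_general"                   // response included fields/dynamicFields using the type
    };
    for (String url : examples) {
      System.out.println("GET " + url);
    }
  }
}
{noformat}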



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7242) LatLonTree should build a balanced tree

2016-04-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7242:

Attachment: LUCENE-7242.patch

Initial patch: seems to make things a bit faster:

Synthetic polygons from luceneUtil
||vertices||old QPS||new QPS||
|5|38.4|41.1|
|50|29.7|33.1|
|500|27.5|30.5|
|5000|18.8|20.1|
Real polygons (33 London districts: 
http://data.london.gov.uk/2011-boundary-files)
||vertices||old QPS||new QPS||
|avg 5.6k|73.0|84.7|

I want to check that startup cost is not hurt; otherwise I think it's better. 
Startup cost may be improved.

> LatLonTree should build a balanced tree
> ---
>
> Key: LUCENE-7242
> URL: https://issues.apache.org/jira/browse/LUCENE-7242
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7242.patch
>
>
> [~rjernst]'s idea: we create an interval tree of edges, but with randomized 
> order.
> Instead we can speed things up more by creating a balanced tree up front.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7240) Remove DocValues from LatLonPoint, add DocValuesField for that

2016-04-21 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253292#comment-15253292
 ] 

Karl Wright commented on LUCENE-7240:
-

+1

This looks like a big help (and will unblock my work too).

> Remove DocValues from LatLonPoint, add DocValuesField for that
> --
>
> Key: LUCENE-7240
> URL: https://issues.apache.org/jira/browse/LUCENE-7240
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7240.patch, LUCENE-7240.patch
>
>
> LatLonPoint needed two-phase intersection initially because of big 
> inefficiencies, but as of LUCENE-7239 all of its query operations:  
> {{newBoxQuery()}}, {{newDistanceQuery()}}, {{newPolygonQuery()}} and 
> {{nearest()}} only need the points datastructure (BKD).
> If you want to do {{newDistanceSort()}} then you need docvalues for that, but 
> I think it should be moved to a separate field: e.g. docvalues is optional 
> just like any other field in lucene. We can add other methods that make sense 
> to that new docvalues field (e.g. facet by distance/region, expressions 
> support, whatever). It is really disjoint from the core query support: and 
> also currently has a heavyish cost of ~64-bits per value in space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7242) LatLonTree should build a balanced tree

2016-04-21 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-7242:
---

 Summary: LatLonTree should build a balanced tree
 Key: LUCENE-7242
 URL: https://issues.apache.org/jira/browse/LUCENE-7242
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


[~rjernst]'s idea: we create an interval tree of edges, but with randomized 
order.

Instead we can speed things up more by creating a balanced tree up front.
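
A minimal sketch of the "balanced up front" idea under placeholder types (not the LatLonTree code): sort the edges once by the split key, then build by recursive median split instead of inserting in randomized order.

{noformat}
public class BalancedTreeSketch {
  static final class Node {
    final int edge;          // index into a pre-sorted edge array (placeholder)
    Node left, right;
    Node(int edge) { this.edge = edge; }
  }

  /** Builds a height-balanced BST over edges[lo..hi], which is already sorted by the split key. */
  static Node build(int[] edges, int lo, int hi) {
    if (lo > hi) return null;
    int mid = (lo + hi) >>> 1;            // the median keeps both subtrees within one level of each other
    Node n = new Node(edges[mid]);
    n.left = build(edges, lo, mid - 1);
    n.right = build(edges, mid + 1, hi);
    return n;
  }

  public static void main(String[] args) {
    int[] edges = {0, 1, 2, 3, 4, 5, 6};
    Node root = build(edges, 0, edges.length - 1);
    System.out.println("root edge = " + root.edge);   // 3, the median
  }
}
{noformat}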



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Jira Spam - And changes made as a result.

2016-04-21 Thread Ryan Josal
Woah, yeah, I have filed a few bugs as well as posted patches and
comments.  Indeed I don't seem to be able to comment anymore.  Anyone want
to add me (rjosal) to a role that can comment or create?

Ryan

On Thursday, April 21, 2016, David Smiley  wrote:

> Wow!  My reading of this is that the general public (i.e. not committers)
> won't be able to really do anything other than view JIRA issues unless we
> expressly add individuals to a specific project group?  :-(  Clearly that
> sucks big time.  Is anyone reading this differently?  Assuming this is
> true... at this point maybe there is nothing to do but wait until the
> inevitable requests come in for people to create/comment.  Maybe send a
> message to the user lists?
>
> ~ David
>
> -- Forwarded message -
> From: Gav
> Date: Fri, Apr 22, 2016 at 12:14 AM
> Subject: Jira Spam - And changes made as a result.
> To: infrastruct...@apache.org Infrastructure
>
>
> Hi All,
>
> Apologies for notifying you after the fact.
>
> Earlier today (slowing down to a halt about 1/2 hr ago due to our changes)
> we had a
> big Spam attack directed at the ASF Jira instance.
>
> Many projects were affected, including :-
>
> TM, ARROW, ACCUMULO, ABDERA, JSPWIKI, QPIDIT, LOGCXX, HAWQ, AMQ, ATLAS,
> AIRFLOW, ACE, APEXCORE, RANGER and KYLIN.
>
> During the process we ended up banning 27 IP addresses, deleting well over
> 200 tickets, and removing about 2 dozen user accounts.
>
> The spammers were creating accounts using the normal system and going
> through the required captchas.
>
> In addition to the ban hammer and deletions, and to prevent more spam
> coming in, we changed the 'Default Permissions Scheme' so that anyone in
> the 'jira-users' group is no longer allowed to 'Create' tickets and is no
> longer allowed to 'Comment' on any tickets.
>
> Obviously that affects genuine users as well as the spammers, we know
> that.
>
> As replacement auth, instead of the jira-users group, those in the
> 'Administrator, PMC, Committer, Contributor and Developer' ROLES in jira are
> now allowed to create and comment.
>
> Projects, would you please assist in making this work: for anyone who is not
> in any of those roles for your project and who needs access to be able to
> create issues and comment, please do add their jira id to one of the
> available roles. (Let us know if you need assistance in this area)
>
> This is a short-term solution. For the medium to long term we are working
> on providing LDAP authentication for Jira and Confluence through Atlassian
> Crowd (likely).
>
> If any projects are still being affected, please notify us as you may be
> using a permissions scheme other than the one altered. Notify us via an INFRA
> jira ticket or reply to this mail to infrastruct...@apache.org or join us on
> hipchat (https://www.hipchat.com/gIjVtYcNy)
>
> Any project seriously adversely impacted by our changes please do come
> talk to us and we'll see what we can work out.
>
> Thanks all for your patience and understanding.
>
> Gav... (ASF Infra)
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


[jira] [Updated] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-04-21 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-7241:

Description: 
This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.

The trick here is to organize edges by some criteria, e.g. z value range, and 
use that to avoid needing to go through all edges and/or tile large irregular 
polygons.  Then we use the ability to quickly determine intersections to figure 
out whether a point is within the polygon, or not.

The current way geo3d polygons are constructed involves finding a single point, 
or "pole", which all polygon points circle.  This point is known to be either 
"in" or "out" based on the direction of the points.  So we have one place of 
"truth" on the globe that is known at polygon setup time.

If edges are organized by z value, where the z values for an edge are computed 
by the standard way of computing bounds for a plane, then we can readily 
organize edges into a tree structure such that it is easy to find all edges we 
need to check for a given z value.  Then, we merely need to compute how many 
intersections to consider as we navigate from the "truth" point to the point 
being tested.  In practice, this means both having a tree that is organized by 
z, and a tree organized by (x,y), since we need to navigate in both directions. 
 But then we can cheaply count the number of intersections, and once we do 
that, we know whether our point is "in" or "out".

The other performance improvement we need is whether a given plane intersects 
the polygon within provided bounds.  This can be done using the same two trees 
(z and (x,y)), by virtue of picking which tree to use based on the plane's 
minimum bounds in z or (x,y).  And, in practice, we might well use three trees: 
one in x, one in y, and one in z, which would mean we didn't have to compute 
longitudes ever.

An implementation like this trades off the cost of finding point membership in 
near O\(log\(n)) time vs. the extra expense per step of finding that 
membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) in 
the current implementation, but once again each individual step is more 
expensive.  Therefore I would expect we'd want to use the current 
implementation for simpler polygons and this sort of implementation for tougher 
polygons.  Choosing which to use is a topic for another ticket.



  was:
This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.

The trick here is to organize edges by some criteria, e.g. z value range, and 
use that to avoid needing to go through all edges and/or tile large irregular 
polygons.  Then we use the ability to quickly determine intersections to figure 
out whether a point is within the polygon, or not.

The current way geo3d polygons are constructed involves finding a single point, 
or "pole", which all polygon points circle.  This point is known to be either 
"in" or "out" based on the direction of the points.  So we have one place of 
"truth" on the globe that is known at polygon setup time.

If edges are organized by z value, where the z values for an edge are computed 
by the standard way of computing bounds for a plane, then we can readily 
organize edges into a tree structure such that it is easy to find all edges we 
need to check for a given z value.  Then, we merely need to compute how many 
intersections to consider as we navigate from the "truth" point to the point 
being tested.  In practice, this means both having a tree that is organized by 
z, and a tree organized by (x,y), since we need to navigate in both directions. 
 But then we can cheaply count the number of intersections, and once we do 
that, we know whether our point is "in" or "out".

The other performance improvement we need is whether a given plane intersects 
the polygon within provided bounds.  This can be done using the same two trees 
(z and (x,y)), by virtue of picking which tree to use based on the plane's 
minimum bounds in z or (x,y).  And, in practice, we might well use three trees: 
one in x, one in y, and one in z, which would mean we didn't have to compute 
longitudes ever.

An implementation like this trades off the cost of finding point membership in 
near O(log(n)) time vs. the extra expense per step of finding that membership.  
Setup of the query is O\(n) in this scheme, rather than O\(n^2) in the current 
implementation, but once again each individual step is more expensive.  
Therefore I would expect we'd want to use the current implementation for 
simpler polygons and this sort of implementation for tougher polygons.  
Choosing which to use is a topic for another ticket.




> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: 

[jira] [Updated] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-04-21 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-7241:

Description: 
This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.

The trick here is to organize edges by some criteria, e.g. z value range, and 
use that to avoid needing to go through all edges and/or tile large irregular 
polygons.  Then we use the ability to quickly determine intersections to figure 
out whether a point is within the polygon, or not.

The current way geo3d polygons are constructed involves finding a single point, 
or "pole", which all polygon points circle.  This point is known to be either 
"in" or "out" based on the direction of the points.  So we have one place of 
"truth" on the globe that is known at polygon setup time.

If edges are organized by z value, where the z values for an edge are computed 
by the standard way of computing bounds for a plane, then we can readily 
organize edges into a tree structure such that it is easy to find all edges we 
need to check for a given z value.  Then, we merely need to compute how many 
intersections to consider as we navigate from the "truth" point to the point 
being tested.  In practice, this means both having a tree that is organized by 
z, and a tree organized by (x,y), since we need to navigate in both directions. 
 But then we can cheaply count the number of intersections, and once we do 
that, we know whether our point is "in" or "out".

The other performance improvement we need is whether a given plane intersects 
the polygon within provided bounds.  This can be done using the same two trees 
(z and (x,y)), by virtue of picking which tree to use based on the plane's 
minimum bounds in z or (x,y).  And, in practice, we might well use three trees: 
one in x, one in y, and one in z, which would mean we didn't have to compute 
longitudes ever.

An implementation like this trades off the cost of finding point membership in 
near O(log(n)) time vs. the extra expense per step of finding that membership.  
Setup of the query is O\(n) in this scheme, rather than O\(n^2) in the current 
implementation, but once again each individual step is more expensive.  
Therefore I would expect we'd want to use the current implementation for 
simpler polygons and this sort of implementation for tougher polygons.  
Choosing which to use is a topic for another ticket.



  was:
This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.

The trick here is to organize edges by some criteria, e.g. z value range, and 
use that to avoid needing to go through all edges and/or tile large irregular 
polygons.  Then we use the ability to quickly determine intersections to figure 
out whether a point is within the polygon, or not.

The current way geo3d polygons are constructed involves finding a single point, 
or "pole", which all polygon points circle.  This point is known to be either 
"in" or "out" based on the direction of the points.  So we have one place of 
"truth" on the globe that is known at polygon setup time.

If edges are organized by z value, where the z values for an edge are computed 
by the standard way of computing bounds for a plane, then we can readily 
organize edges into a tree structure such that it is easy to find all edges we 
need to check for a given z value.  Then, we merely need to compute how many 
intersections to consider as we navigate from the "truth" point to the point 
being tested.  In practice, this means both having a tree that is organized by 
z, and a tree organized by (x,y), since we need to navigate in both directions. 
 But then we can cheaply count the number of intersections, and once we do 
that, we know whether our point is "in" or "out".

The other performance improvement we need is whether a given plane intersects 
the polygon within provided bounds.  This can be done using the same two trees 
(z and (x,y)), by virtue of picking which tree to use based on the plane's 
minimum bounds in z or (x,y).  And, in practice, we might well use three trees: 
one in x, one in y, and one in z, which would mean we didn't have to compute 
longitudes ever.

An implementation like this trades off the cost of finding point membership in 
near O(log(n)) time vs. the extra expense per step of finding that membership.  
Setup of the query is O(n) in this scheme, rather than O(n^2) in the current 
implementation, but once again each individual step is more expensive.  
Therefore I would expect we'd want to use the current implementation for 
simpler polygons and this sort of implementation for tougher polygons.  
Choosing which to use is a topic for another ticket.




> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: 

[jira] [Updated] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-04-21 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-7241:

Description: 
This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.

The trick here is to organize edges by some criteria, e.g. z value range, and 
use that to avoid needing to go through all edges and/or tile large irregular 
polygons.  Then we use the ability to quickly determine intersections to figure 
out whether a point is within the polygon, or not.

The current way geo3d polygons are constructed involves finding a single point, 
or "pole", which all polygon points circle.  This point is known to be either 
"in" or "out" based on the direction of the points.  So we have one place of 
"truth" on the globe that is known at polygon setup time.

If edges are organized by z value, where the z values for an edge are computed 
by the standard way of computing bounds for a plane, then we can readily 
organize edges into a tree structure such that it is easy to find all edges we 
need to check for a given z value.  Then, we merely need to compute how many 
intersections to consider as we navigate from the "truth" point to the point 
being tested.  In practice, this means both having a tree that is organized by 
z, and a tree organized by (x,y), since we need to navigate in both directions. 
 But then we can cheaply count the number of intersections, and once we do 
that, we know whether our point is "in" or "out".

The other performance improvement we need is whether a given plane intersects 
the polygon within provided bounds.  This can be done using the same two trees 
(z and (x,y)), by virtue of picking which tree to use based on the plane's 
minimum bounds in z or (x,y).  And, in practice, we might well use three trees: 
one in x, one in y, and one in z, which would mean we didn't have to compute 
longitudes ever.

An implementation like this trades off the cost of finding point membership in 
near O(log(n)) time vs. the extra expense per step of finding that membership.  
Setup of the query is O(n) in this scheme, rather than O(n^2) in the current 
implementation, but once again each individual step is more expensive.  
Therefore I would expect we'd want to use the current implementation for 
simpler polygons and this sort of implementation for tougher polygons.  
Choosing which to use is a topic for another ticket.



  was:
This ticket corresponds to the LUCENE-7239, except it's for geo3d polygons.

The trick here is to organize edges by some criteria, e.g. z value range, and 
use that to avoid needing to go through all edges and/or tile large irregular 
polygons.  Then we use the ability to quickly determine intersections to figure 
out whether a point is within the polygon, or not.

The current way geo3d polygons are constructed involves finding a single point, 
or "pole", which all polygon points circle.  This point is known to be either 
"in" or "out" based on the direction of the points.  So we have one place of 
"truth" on the globe that is known at polygon setup time.

If edges are organized by z value, where the z values for an edge are computed 
by the standard way of computing bounds for a plane, then we can readily 
organize edges into a tree structure such that it is easy to find all edges we 
need to check for a given z value.  Then, we merely need to compute how many 
intersections to consider as we navigate from the "truth" point to the point 
being tested.  In practice, this means both having a tree that is organized by 
z, and a tree organized by (x,y), since we need to navigate in both directions. 
 But then we can cheaply count the number of intersections, and once we do 
that, we know whether our point is "in" or "out".

The other performance improvement we need is whether a given plane intersects 
the polygon within provided bounds.  This can be done using the same two trees 
(z and (x,y)), by virtue of picking which tree to use based on the plane's 
minimum bounds in z or (x,y).  And, in practice, we might well use three trees: 
one in x, one in y, and one in z, which would mean we didn't have to compute 
longitudes ever.

An implementation like this trades off the cost of finding point membership in 
near O(log(n)) time vs. the extra expense per step of finding that membership.  
Setup of the query is O(n) in this scheme, rather than O(n^2) in the current 
implementation, but once again each individual step is more expensive.  
Therefore I would expect we'd want to use the current implementation for 
simpler polygons and this sort of implementation for tougher polygons.  
Choosing which to use is a topic for another ticket.




> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: 

[jira] [Created] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-04-21 Thread Karl Wright (JIRA)
Karl Wright created LUCENE-7241:
---

 Summary: Improve performance of geo3d for polygons with very large 
numbers of points
 Key: LUCENE-7241
 URL: https://issues.apache.org/jira/browse/LUCENE-7241
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial3d
Affects Versions: master
Reporter: Karl Wright
Assignee: Karl Wright


This ticket corresponds to the LUCENE-7239, except it's for geo3d polygons.

The trick here is to organize edges by some criteria, e.g. z value range, and 
use that to avoid needing to go through all edges and/or tile large irregular 
polygons.  Then we use the ability to quickly determine intersections to figure 
out whether a point is within the polygon, or not.

The current way geo3d polygons are constructed involves finding a single point, 
or "pole", which all polygon points circle.  This point is known to be either 
"in" or "out" based on the direction of the points.  So we have one place of 
"truth" on the globe that is known at polygon setup time.

If edges are organized by z value, where the z values for an edge are computed 
by the standard way of computing bounds for a plane, then we can readily 
organize edges into a tree structure such that it is easy to find all edges we 
need to check for a given z value.  Then, we merely need to compute how many 
intersections to consider as we navigate from the "truth" point to the point 
being tested.  In practice, this means both having a tree that is organized by 
z, and a tree organized by (x,y), since we need to navigate in both directions. 
 But then we can cheaply count the number of intersections, and once we do 
that, we know whether our point is "in" or "out".

The other performance improvement we need is whether a given plane intersects 
the polygon within provided bounds.  This can be done using the same two trees 
(z and (x,y)), by virtue of picking which tree to use based on the plane's 
minimum bounds in z or (x,y).  And, in practice, we might well use three trees: 
one in x, one in y, and one in z, which would mean we didn't have to compute 
longitudes ever.

An implementation like this trades off the cost of finding point membership in 
near O(log(n)) time vs. the extra expense per step of finding that membership.  
Setup of the query is O(n) in this scheme, rather than O(n^2) in the current 
implementation, but once again each individual step is more expensive.  
Therefore I would expect we'd want to use the current implementation for 
simpler polygons and this sort of implementation for tougher polygons.  
Choosing which to use is a topic for another ticket.
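
A rough sketch of the "organize edges by z value" part of this idea, under placeholder types (real geo3d planes, the (x,y) tree, and the crossing count itself are assumed away; only the z-range lookup is shown):

{noformat}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ZOrganizedEdgesSketch {
  static final class Edge {
    final double minZ, maxZ;   // z bounds computed from the edge's plane (placeholder values here)
    Edge(double minZ, double maxZ) { this.minZ = minZ; this.maxZ = maxZ; }
  }

  private final List<Edge> byMinZ = new ArrayList<>();

  ZOrganizedEdgesSketch(List<Edge> edges) {
    byMinZ.addAll(edges);
    byMinZ.sort(Comparator.comparingDouble(e -> e.minZ));   // flat stand-in for the proposed tree
  }

  /** Edges whose z range covers the given z; these are the only ones whose crossings need counting. */
  List<Edge> candidates(double z) {
    List<Edge> out = new ArrayList<>();
    for (Edge e : byMinZ) {
      if (e.minZ > z) break;        // sorted by minZ, so nothing later can match
      if (e.maxZ >= z) out.add(e);
    }
    return out;
  }

  public static void main(String[] args) {
    ZOrganizedEdgesSketch edges = new ZOrganizedEdgesSketch(List.of(
        new Edge(-0.9, -0.2), new Edge(-0.3, 0.4), new Edge(0.1, 0.8)));
    System.out.println(edges.candidates(0.0).size());   // 1: only the middle edge spans z = 0
  }
}
{noformat}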





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7240) Remove DocValues from LatLonPoint, add DocValuesField for that

2016-04-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7240:

Attachment: LUCENE-7240.patch

A couple javadocs improvements/cleanups.

> Remove DocValues from LatLonPoint, add DocValuesField for that
> --
>
> Key: LUCENE-7240
> URL: https://issues.apache.org/jira/browse/LUCENE-7240
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7240.patch, LUCENE-7240.patch
>
>
> LatLonPoint needed two-phase intersection initially because of big 
> inefficiencies, but as of LUCENE-7239 all of its query operations:  
> {{newBoxQuery()}}, {{newDistanceQuery()}}, {{newPolygonQuery()}} and 
> {{nearest()}} only need the points datastructure (BKD).
> If you want to do {{newDistanceSort()}} then you need docvalues for that, but 
> I think it should be moved to a separate field: e.g. docvalues is optional 
> just like any other field in lucene. We can add other methods that make sense 
> to that new docvalues field (e.g. facet by distance/region, expressions 
> support, whatever). It is really disjoint from the core query support: and 
> also currently has a heavyish cost of ~64-bits per value in space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7240) Remove DocValues from LatLonPoint, add DocValuesField for that

2016-04-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7240:

Attachment: LUCENE-7240.patch

Here is a patch splitting it out.

I ran a rough indexing benchmark with luceneutil:

previous (docvalues + points):
INDEX SIZE: 1.0679643917828798 GB
380.419779939 sec to index part 0

patch (points only)
INDEX SIZE: 0.6146336644887924 GB
359.832694579 sec to index part 0

So it doesn't buy you a lot on index time, but it helps index size if you don't 
need sorting or similar. And it keeps the stuff organized similarly to other 
fields in core.
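
As a usage sketch of the split (the field name is a placeholder, and the separate docvalues field is the one this patch proposes, so treat the commented line as illustrative):

{noformat}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LatLonPoint;

public class PointVsDocValuesSketch {
  public static void main(String[] args) {
    Document doc = new Document();
    // BKD points: enough for newBoxQuery()/newDistanceQuery()/newPolygonQuery()/nearest()
    doc.add(new LatLonPoint("location", 51.5072, -0.1276));
    // Only add the separate docvalues field if newDistanceSort() (or faceting etc.) is needed:
    // doc.add(new LatLonDocValuesField("location", 51.5072, -0.1276));
    System.out.println(doc);
  }
}
{noformat}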

> Remove DocValues from LatLonPoint, add DocValuesField for that
> --
>
> Key: LUCENE-7240
> URL: https://issues.apache.org/jira/browse/LUCENE-7240
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7240.patch
>
>
> LatLonPoint needed two-phase intersection initially because of big 
> inefficiencies, but as of LUCENE-7239 all of its query operations:  
> {{newBoxQuery()}}, {{newDistanceQuery()}}, {{newPolygonQuery()}} and 
> {{nearest()}} only need the points datastructure (BKD).
> If you want to do {{newDistanceSort()}} then you need docvalues for that, but 
> I think it should be moved to a separate field: e.g. docvalues is optional 
> just like any other field in lucene. We can add other methods that make sense 
> to that new docvalues field (e.g. facet by distance/region, expressions 
> support, whatever). It is really disjoint from the core query support: and 
> also currently has a heavyish cost of ~64-bits per value in space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_72) - Build # 5794 - Failure!

2016-04-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5794/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.request.SchemaTest.testSchemaRequestAccuracy

Error Message:
java.util.LinkedHashMap cannot be cast to org.apache.solr.common.util.NamedList

Stack Trace:
java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to 
org.apache.solr.common.util.NamedList
at 
__randomizedtesting.SeedInfo.seed([EE56C3E37B0EEFBF:69AACC4D75F61238]:0)
at 
org.apache.solr.client.solrj.response.schema.SchemaResponse.setResponse(SchemaResponse.java:252)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.client.solrj.request.SchemaTest.testSchemaRequestAccuracy(SchemaTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16558 - Still Failing!

2016-04-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16558/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.request.SchemaTest.testSchemaRequestAccuracy

Error Message:
java.util.LinkedHashMap cannot be cast to org.apache.solr.common.util.NamedList

Stack Trace:
java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to 
org.apache.solr.common.util.NamedList
at 
__randomizedtesting.SeedInfo.seed([8E82F7DBFF646C9C:97EF875F19C911B]:0)
at 
org.apache.solr.client.solrj.response.schema.SchemaResponse.setResponse(SchemaResponse.java:252)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.client.solrj.request.SchemaTest.testSchemaRequestAccuracy(SchemaTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:

[jira] [Resolved] (LUCENE-7229) Improve Polygon.relate

2016-04-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7229.
-
   Resolution: Fixed
Fix Version/s: 6.1
   master

> Improve Polygon.relate
> --
>
> Key: LUCENE-7229
> URL: https://issues.apache.org/jira/browse/LUCENE-7229
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7229.patch, LUCENE-7229.patch
>
>
> This method is currently quite slow and in many cases does more work than is 
> required. The speed actually directly impacts queries (tree traversal) and 
> limits the bounds grid size to something tiny, making it less effective.
> I think we should replace it with line intersections based on orientation 
> methods described here http://www.cs.berkeley.edu/~jrs/meshpapers/robnotes.pdf 
> and https://www.cs.cmu.edu/~quake/robust.html
> For one, a naive implementation is considerably faster than the method today: 
> both because it reduces the cost of BKD tree traversals and also because it 
> makes grid construction cheaper. This means we can increase its level of 
> detail with similar or lower startup cost. Now it's more like a Mario Brothers 
> 2 picture of your polygon instead of Space Invaders.
> Synthetic polygons from luceneUtil
> ||vertices||old QPS||new QPS||old startup cost||new startup cost||
> |50|20.4|21.7|1ms|1ms|
> |500|11.2|14.4|5ms|4ms|
> |1000|7.4|10.0|9ms|8ms|
> Real polygons (33 london districts: 
> http://data.london.gov.uk/2011-boundary-files)
> ||vertices||old QPS||new QPS||old startup cost||new startup cost||
> |avg 5.6k|4.9|8.6|94ms|85ms|
> But I also like using this method because it's possible to extend it to remove 
> floating point error completely in the future with techniques described in 
> those links. This may be necessary if we want to do smarter things (e.g. not 
> linear time).
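
For reference, a naive double-precision sketch of the orientation-based segment test referenced above (not the robust/adaptive predicates from the linked papers; touching and collinear cases are ignored):

{noformat}
public class OrientationSketch {
  /** Positive if (a,b,c) turn counter-clockwise, negative if clockwise, zero if collinear. */
  static double orient(double ax, double ay, double bx, double by, double cx, double cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
  }

  /** True if segment (a,b) properly crosses segment (c,d): each segment's endpoints lie on opposite sides of the other. */
  static boolean crosses(double ax, double ay, double bx, double by,
                         double cx, double cy, double dx, double dy) {
    double o1 = orient(ax, ay, bx, by, cx, cy);
    double o2 = orient(ax, ay, bx, by, dx, dy);
    double o3 = orient(cx, cy, dx, dy, ax, ay);
    double o4 = orient(cx, cy, dx, dy, bx, by);
    return o1 * o2 < 0 && o3 * o4 < 0;
  }

  public static void main(String[] args) {
    System.out.println(crosses(0, 0, 2, 2, 0, 2, 2, 0));   // true: the diagonals of a square cross
    System.out.println(crosses(0, 0, 1, 0, 0, 1, 1, 1));   // false: parallel segments
  }
}
{noformat}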



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7239) Speed up LatLonPoint's polygon queries when there are many vertices

2016-04-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7239.
-
   Resolution: Fixed
Fix Version/s: 6.1
   master

> Speed up LatLonPoint's polygon queries when there are many vertices
> ---
>
> Key: LUCENE-7239
> URL: https://issues.apache.org/jira/browse/LUCENE-7239
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7239.patch, LUCENE-7239.patch
>
>
> This is inspired by the "reliability and numerical stability" recommendations 
> at the end of http://www-ma2.upc.es/geoc/Schirra-pointPolygon.pdf.
> Basically our polys need to answer two questions that are slow today:
> contains(point)
> crosses(rectangle)
> Both of these ops only care about a subset of edges: the ones overlapping a y 
> interval range. We can organize these edges in an interval tree to be 
> practical and speed things up a lot. Worst case is still O(n); improving on 
> that requires solutions that are more complex.
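
To show why the y-interval filter matters for contains(point), a naive even-odd crossing count under placeholder arrays (the issue replaces the linear edge scan below with an interval tree lookup):

{noformat}
public class CrossingCountSketch {
  /** Even-odd (ray casting) point-in-polygon; lats/lons form a closed ring (last point == first). */
  static boolean contains(double[] lats, double[] lons, double lat, double lon) {
    boolean inside = false;
    for (int i = 1; i < lats.length; i++) {
      double y1 = lats[i - 1], y2 = lats[i];
      // Only edges whose y interval straddles the test latitude can contribute a crossing;
      // this is exactly the subset the interval tree is meant to find quickly.
      if ((y1 > lat) == (y2 > lat)) continue;
      double x1 = lons[i - 1], x2 = lons[i];
      double xCross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1);
      if (lon < xCross) inside = !inside;
    }
    return inside;
  }

  public static void main(String[] args) {
    double[] lats = {0, 0, 10, 10, 0};    // a 10x10 square ring
    double[] lons = {0, 10, 10, 0, 0};
    System.out.println(contains(lats, lons, 5, 5));    // true: inside
    System.out.println(contains(lats, lons, 5, 15));   // false: outside
  }
}
{noformat}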



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Fwd: Jira Spam - And changes made as a result.

2016-04-21 Thread David Smiley
Wow!  My reading of this is that the general public (i.e. not committers)
won't be able to really do anything other than view JIRA issues unless we
expressly add individuals to a specific project group?  :-(  Clearly that
sucks big time.  Is anyone reading this differently?  Assuming this is
true... at this point maybe there is nothing to do but wait until the
inevitable requests come in for people to create/comment.  Maybe send a
message to the user lists?

~ David

-- Forwarded message -
From: Gav 
Date: Fri, Apr 22, 2016 at 12:14 AM
Subject: Jira Spam - And changes made as a result.
To: infrastruct...@apache.org Infrastructure 


Hi All,

Apologies for notifying you after the fact.

Earlier today (slowing down to a halt about 1/2 hr ago due to our changes)
we had a
big Spam attack directed at the ASF Jira instance.

Many projects were affected, including:

TM, ARROW, ACCUMULO, ABDERA, JSPWIKI, QPIDIT, LOGCXX, HAWQ, AMQ, ATLAS,
AIRFLOW, ACE, APEXCORE, RANGER and KYLIN.

During the process we ended up banning 27 IP addresses, deleting well over
200 tickets, and removing about 2 dozen user accounts.

The spammers were creating accounts using the normal system and going
through the required captchas.

In addition to the ban hammer and deletions and to prevent more spam coming
in, we changed the 'Default Permissions Scheme' so that anyone in the
'jira-users' group is no longer allowed to 'Create' tickets and is no
longer allowed to 'Comment' on any tickets.

Obviously that affects genuine users as well as the spammers; we know that.

As a replacement for the jira-users group, access is now allowed for those in
the 'Administrator, PMC, Committer, Contributor and Developer' ROLES in
Jira.

Projects, would you please assist in making this work: for anyone who is not
in any of those roles for your project and needs access to be able to
create issues and comment, please do add their Jira id to one of the
available roles. (Let us know if you need assistance in this area.)

This is a short term solution. For the medium to long term we are working
on providing LDAP authentication for Jira and Confluence through Atlassian
Crowd (likely).

If any projects are still being affected, please notify us, as you may be
using a different permissions scheme from the one we altered. Notify us via an
INFRA Jira ticket, reply to this mail to infrastruct...@apache.org, or join us
on HipChat (https://www.hipchat.com/gIjVtYcNy).

Any project seriously adversely impacted by our changes, please do come talk
to us and we'll see what we can work out.

Thanks all for your patience and understanding.

Gav... (ASF Infra)
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: VOTE: RC2 Release apache-solr-ref-guide-6.0.pdf

2016-04-21 Thread Tomás Fernández Löbbe
+1

Not a blocker, but it looks like images are often broken across 2 pages
(this happens, for example, with many of the admin UI screenshots, but also with
smaller images like the one in the Spatial Filters section). Is there a way to
prevent this in Confluence?
Also, we should try to avoid pasting extremely long examples; some
example outputs take ~3 pages.

On Thu, Apr 21, 2016 at 1:34 PM, Joel Bernstein  wrote:

> +1
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, Apr 21, 2016 at 3:53 PM, Cassandra Targett 
> wrote:
>
>> Reminder to VOTE on this thread so we can get the Ref Guide released.
>>
>> Thanks,
>> Cassandra
>>
>> On Mon, Apr 18, 2016 at 6:13 PM, Steve Rowe  wrote:
>> > +1
>> >
>> > --
>> > Steve
>> > www.lucidworks.com
>> >
>> >> On Apr 18, 2016, at 5:59 PM, Cassandra Targett 
>> wrote:
>> >>
>> >> Please VOTE to release the Apache Solr Ref Guide for 6.0:
>> >>
>> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.0-RC2/
>> >>
>> >> $ cat apache-solr-ref-guide-6.0.pdf.sha1
>> >> 9073530b89148ce3f641a42e38249bd1fbb25136  apache-solr-ref-guide-6.0.pdf
>> >>
>> >> Here's my +1.
>> >>
>> >> * Note, RC1 was skipped because there were a few other issues to be
>> >> fixed right after I'd committed it.
>> >>
>> >> Thanks,
>> >> Cassandra
>> >>
>> >> -
>> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> >> For additional commands, e-mail: dev-h...@lucene.apache.org
>> >>
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[jira] [Commented] (SOLR-8145) bin/solr script oom_killer arg incorrect

2016-04-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253082#comment-15253082
 ] 

Anshum Gupta commented on SOLR-8145:


branch_5x:
{code}
commit 9b6773120cee43e762e254216ca03eafa75e
Author: thelabdude 
Date:   Wed Mar 2 11:22:27 2016 -0700

SOLR-8145: Fix position of OOM killer script when starting Solr in the 
background
{code}

branch_5_5
{code}
commit 851a6029e889860951fdb480bf2d658c89639862
Author: thelabdude 
Date:   Wed Mar 2 11:22:27 2016 -0700

SOLR-8145: Fix position of OOM killer script when starting Solr in the 
background
{code}

> bin/solr script oom_killer arg incorrect
> 
>
> Key: SOLR-8145
> URL: https://issues.apache.org/jira/browse/SOLR-8145
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Nate Dire
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8145.patch, SOLR-8145.patch, SOLR-8145.patch
>
>
> I noticed the oom_killer script wasn't working in our 5.2 deployment.
> In the {{bin/solr}} script, the {{OnOutOfMemoryError}} option is being passed 
> as an arg to the jar rather than to the JVM.  I moved it ahead of {{-jar}} 
> and verified it shows up in the JVM args in the UI.
> {noformat}
># run Solr in the background
> nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS -jar start.jar \
> "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT 
> $SOLR_LOGS_DIR" "${SOLR_JETTY_CONFIG[@]}" \
> {noformat}
> Also, I'm not sure what the {{SOLR_PORT}} and {{SOLR_LOGS_DIR}} args are 
> doing--they don't appear to be positional arguments to the jar.
> Attaching a patch against 5.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8769) CloudMLTQParser does not use uniqueKey field name for exclusion

2016-04-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253079#comment-15253079
 ] 

Anshum Gupta commented on SOLR-8769:


branch_5x
{code}
commit 8dc61cecdc933b52a8ec15eb34756e50ee2378ab
Author: anshum 
Date:   Thu Mar 3 15:27:04 2016 -0800

SOLR-8769: Fix document exclusion in mlt query parser in Cloud mode for 
schemas that have non-'id' unique field
{code}
branch_5_5
{code}
commit 66f47a53f904bab2d845a1a3baf2e0090830cfc7
Author: anshum 
Date:   Thu Mar 3 15:27:04 2016 -0800

SOLR-8769: Fix document exclusion in mlt query parser in Cloud mode for 
schemas that have non-'id' unique field
{code}


> CloudMLTQParser does not use uniqueKey field name for exclusion
> ---
>
> Key: SOLR-8769
> URL: https://issues.apache.org/jira/browse/SOLR-8769
> Project: Solr
>  Issue Type: Bug
>Reporter: Erik Hatcher
> Fix For: master, 6.0, 5.5.1
>
>
> Using the {{\{!mlt}}} query parser in cloud mode on a schema with a non-"id" 
> uniqueKey, the main "like this" document won't be excluded properly due to 
> this code:
> {code}
> realMLTQuery.add(createIdQuery("id", id), BooleanClause.Occur.MUST_NOT);
> {code}
> See also 
> https://github.com/apache/lucene-solr/blob/813ca77250db29116812bc949e2a466a70f969a3/solr/core/src/java/org/apache/solr/search/mlt/CloudMLTQParser.java#L166
> Like SimpleMLTQParser, it needs to use the uniqueKey field with this type of 
> code: {{req.getSchema().getUniqueKeyField().getName()}}
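
A minimal sketch of what the corrected exclusion could look like, following the
suggestion above; {{req}}, {{realMLTQuery}} and {{createIdQuery}} are the surrounding
CloudMLTQParser context, and this is illustrative rather than the exact committed change:

{code}
// Illustrative sketch, not the exact patch: resolve the uniqueKey field from the
// schema instead of hard-coding "id" when excluding the "like this" document.
String uniqueKeyFieldName = req.getSchema().getUniqueKeyField().getName();
realMLTQuery.add(createIdQuery(uniqueKeyFieldName, id), BooleanClause.Occur.MUST_NOT);
{code}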



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8728) Splitting a shard of a collection created with a rule fails with NPE

2016-04-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253075#comment-15253075
 ] 

Anshum Gupta commented on SOLR-8728:


branch_5x
{code}
commit 07f9cf8aee3523a22c92923f9d4e46a297efc455
Author: anshum 
Date:   Thu Apr 21 15:59:25 2016 -0700

SOLR-8728: Add missing change log entry for 5.5.1
{code}

branch_5_5
{code}
commit 5601f839c5001b1c2cce44b3b6349b1c1de23230
Author: anshum 
Date:   Thu Apr 21 15:59:25 2016 -0700

SOLR-8728: Add missing change log entry for 5.5.1
{code}

> Splitting a shard of a collection created with a rule fails with NPE
> 
>
> Key: SOLR-8728
> URL: https://issues.apache.org/jira/browse/SOLR-8728
> Project: Solr
>  Issue Type: Bug
>Reporter: Shai Erera
>Assignee: Noble Paul
> Fix For: master, 6.0, 5.5.1, 6.1
>
> Attachments: SOLR-8728.patch, SOLR-8728.patch
>
>
> Spinoff from this discussion: http://markmail.org/message/f7liw4hqaagxo7y2
> I wrote a short test which reproduces, will upload shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8779) Fix missing InterruptedException handling in ZkStateReader

2016-04-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253073#comment-15253073
 ] 

Anshum Gupta commented on SOLR-8779:


branch_5x
{code}
commit 1ce5e533c551bf04fd256cd945be2cb9a261f069
Author: Varun Thacker 
Date:   Fri Mar 4 18:08:53 2016 +0530

SOLR-8779: Fix missing InterruptedException handling in ZkStateReader
{code}
branch_5_5
{code}
commit 6024572a53fc3af8fbb2f3d0cf51cf46d7406350
Author: Varun Thacker 
Date:   Fri Mar 4 18:08:53 2016 +0530

SOLR-8779: Fix missing InterruptedException handling in ZkStateReader
{code}

> Fix missing InterruptedException handling in ZkStateReader
> --
>
> Key: SOLR-8779
> URL: https://issues.apache.org/jira/browse/SOLR-8779
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8779.patch
>
>
> I was debugging a zk session expired issue and saw this stack-trace
> {code}
> ERROR - 2016-03-03 06:55:53.873; [   ] org.apache.solr.common.SolrException; 
> OverseerAutoReplicaFailoverThread had an error in its thread work 
> loop.:org.apache.solr.common.SolrException: Error reading cluster properties
>   at 
> org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:738)
>   at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.doWork(OverseerAutoReplicaFailoverThread.java:153)
>   at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:132)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryDelay(ZkCmdExecutor.java:108)
>   at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:76)
>   at 
> org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:731)
>   ... 3 more
> {code}
> So I audited ZKStateReader and found a couple of places where an 
> InterruptedException was caught but the interrupt flag wasn't set back.
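
The usual fix for this class of bug is to restore the interrupt status where the
exception is caught; a generic sketch of that pattern follows (the {{retryDelayMs}}
variable and the wrapping exception are illustrative, not the exact ZkStateReader change):

{code}
// Generic sketch of restoring the interrupt flag; not the exact ZkStateReader patch.
try {
  Thread.sleep(retryDelayMs); // or any other interruptible call
} catch (InterruptedException e) {
  Thread.currentThread().interrupt(); // set the flag back so callers can observe it
  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
      "Interrupted while waiting to retry a ZooKeeper operation", e);
}
{code}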



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7240) Remove DocValues from LatLonPoint, add DocValuesField for that

2016-04-21 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-7240:
---

 Summary: Remove DocValues from LatLonPoint, add DocValuesField for 
that
 Key: LUCENE-7240
 URL: https://issues.apache.org/jira/browse/LUCENE-7240
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir


LatLonPoint needed two-phase intersection initially because of big 
inefficiencies, but as of LUCENE-7239 all of its query operations:  
{{newBoxQuery()}}, {{newDistanceQuery()}}, {{newPolygonQuery()}} and 
{{nearest()}} only need the points data structure (BKD).

If you want to do {{newDistanceSort()}} then you need docvalues for that, but I 
think it should be moved to a separate field: e.g. docvalues is optional just 
like any other field in lucene. We can add other methods that make sense to 
that new docvalues field (e.g. facet by distance/region, expressions support, 
whatever). It is really disjoint from the core query support, and it also 
currently has a heavyish cost of ~64 bits per value in space.
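
A hedged sketch of the indexing side of the proposal; the docvalues field name below is
a placeholder for whatever the new, separate field ends up being called, and {{writer}}
is assumed to be an {{IndexWriter}}:

{code}
// Sketch of the proposed split: the point field serves box/distance/polygon queries
// via BKD, and a *separate*, optional docvalues field is added only when sorting or
// faceting by location is needed. "LatLonDocValuesField" is a placeholder name.
Document doc = new Document();
doc.add(new LatLonPoint("location", 40.7128, -74.0060));          // query support (BKD)
doc.add(new LatLonDocValuesField("location", 40.7128, -74.0060)); // optional: distance sort, etc.
writer.addDocument(doc);
{code}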




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8449) Multiple restores on the same core does not work

2016-04-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253071#comment-15253071
 ] 

Anshum Gupta commented on SOLR-8449:


branch_5x:
{code}
commit 443fd2d29a4326a2d483c33bcbfeb7e2f636f250
Author: Varun Thacker 
Date:   Sat Mar 5 13:15:19 2016 +0530

SOLR-8449: Fix the core restore functionality to allow restoring multiple 
times on the same core
{code}

branch_5_5
{code}
commit efb7d4463ee5f1146ee193a46f6a146ca3f48d67
Author: Varun Thacker 
Date:   Sat Mar 5 13:15:19 2016 +0530

SOLR-8449: Fix the core restore functionality to allow restoring multiple 
times on the same core
{code}

> Multiple restores on the same core does not work
> 
>
> Key: SOLR-8449
> URL: https://issues.apache.org/jira/browse/SOLR-8449
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java), replication (scripts)
>Affects Versions: 5.2.1, 5.4
> Environment: SUSE Linux Enterprise Server 11 (64 bit) and Windows 7 
> Prof SP1
>Reporter: Johannes Brucher
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: Backup/Restore
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8449.patch, SOLR-8449.patch, log_windows7_sp1.txt
>
>
> Hi all, I'm facing the following issue with Solr 5.2.1 and the ongoing version 
> 5.4.
> The restore functionality is not working under Linux and causing an exception 
> on Windows machines each time you want to restore an existing backup twice or 
> even more.
> Steps to reproduce:
> 1. Start a Solr instance pointing the solr_home to e.g. the example-DIH  
> folder.
> 2. Select a core, e.g. the “solr” core.
> 3. Switch to the “Documents” tab
> 4. Add a document {“id”:”1”,”title”:”change.me”}
> 5. Do a backup with the following API call 
> “/solr/replication?command=backup&name=test”
> The backup defaults to the location solr_home/solr/data/snapshot.test
> 6. Add a document to the index {“id”:”2”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 7. Restore the backup with the following call 
> “/solr/replication?command=restore&name=test”
> New index location “solr_home/solr/data/restore.snapshot.test” is created 
> without any physical file in it, except the file write.lock. Num Docs is now 
> 1 as expected!
> 8. Add a document to the index {“id”:”3”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 9. Restore the same previously created backup again with the following call 
> “/solr/replication?command=restore&name=test”. Notice, there are still 2 docs 
> in the index!!!
> 10. Try to restore again, but still the same, 2 docs in the index…
> 11. Shut down Solr, you will see the index folder 
> “solr_home/solr/data/restore.snapshot.test” disappears.
> 12. Restart Solr. You will notice the following log entry “Solr index 
> directory ‘solr_home/solr/data/restore.snapshot.test’ doesn’t exist. Creating 
> new index”, and indeed the Index is empty, showing 0 documents.
> 13. After the restart, I tried to restore the existing backup again without 
> any results…
> I think this behavior is not intended!!!
> Even more problems arise when you run Solr on a Windows machine.
> After step 10 a folder “index” is created under “solr_home/solr/data/” with a 
> write.lock file in it. After that, the following exception is thrown: 
> …Error closing IndexWriter
> java.lang.IllegalStateException: file: 
> MMapDirectory@D:\solr\Solr_versions\solr-5.2.1\...restore.snapshot.test 
> lockFactory=org.apache.lucene.store.Nat
> iveFSLockFactory@3d3d7a0e appears both in delegate and in cache
> The log file from the Windows test is attached.
> Let me know if you have problems reproducing the same behavior,
> Regards Johannes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8771) Multi-threaded core shutdown creates executor per core

2016-04-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15253067#comment-15253067
 ] 

Anshum Gupta commented on SOLR-8771:


For some reason the bot didn't comment on JIRA after the commits. Here are the 
back ports:
branch_5x:
{code}
commit 27ca43a16ed6d9ee83378b0532ba6c84a900eada
Author: anshum 
Date:   Thu Apr 21 16:53:15 2016 -0700

SOLR-8771: Fix broken build that broke during backporting to 5x

commit 34340f540bc8a4ee8cbf80719093c677ffa0f128
Author: Mark Miller 
Date:   Tue Mar 1 12:13:56 2016 -0800

SOLR-8771: Multi-threaded core shutdown creates executor per core.
{code}
branch_5_5
{code}
commit 297bdb63aa6720c6c204ce921f8bfc5854b4cfd4
Author: anshum 
Date:   Thu Apr 21 16:53:15 2016 -0700

SOLR-8771: Fix broken build that broke during backporting to 5x

commit 9698d1bee31eb5f103f8894246acf7f8f5479194
Author: Mark Miller 
Date:   Tue Mar 1 12:13:56 2016 -0800

SOLR-8771: Multi-threaded core shutdown creates executor per core.
{code}

> Multi-threaded core shutdown creates executor per core
> --
>
> Key: SOLR-8771
> URL: https://issues.apache.org/jira/browse/SOLR-8771
> Project: Solr
>  Issue Type: Bug
>Reporter: Mike Drob
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8771.patch
>
>
> The multi-threaded core shutdown that was added in SOLR-8615 has a bug where 
> a new executor is created per core. This means we don't get any benefit from 
> the parallel operations.
> Patch incoming shortly.
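
A generic sketch of the intended shape of the fix, with one shared executor for all
cores rather than one created inside the per-core loop; {{cores}} is assumed to be a
collection of {{SolrCore}} instances, and this is illustrative, not the committed patch:

{code}
// Illustrative sketch (not the committed patch): close all cores on one shared
// executor instead of creating a new ExecutorService inside the per-core loop.
private void closeCores(Collection<SolrCore> cores) {
  ExecutorService coreCloseExecutor = Executors.newFixedThreadPool(8);
  try {
    List<Future<?>> futures = new ArrayList<>();
    for (SolrCore core : cores) {
      futures.add(coreCloseExecutor.submit(core::close));
    }
    for (Future<?> future : futures) {
      try {
        future.get(); // surface any failure from closing a core
      } catch (ExecutionException e) {
        // log and keep closing the remaining cores
      }
    }
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
  } finally {
    coreCloseExecutor.shutdown();
  }
}
{code}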



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8771) Multi-threaded core shutdown creates executor per core

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8771.

Resolution: Fixed

> Multi-threaded core shutdown creates executor per core
> --
>
> Key: SOLR-8771
> URL: https://issues.apache.org/jira/browse/SOLR-8771
> Project: Solr
>  Issue Type: Bug
>Reporter: Mike Drob
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8771.patch
>
>
> The multi-threaded core shutdown that was added in SOLR-8615 has a bug where 
> a new executor is created per core. This means we don't get any benefit from 
> the parallel operations.
> Patch incoming shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8145) bin/solr script oom_killer arg incorrect

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8145.

Resolution: Fixed

> bin/solr script oom_killer arg incorrect
> 
>
> Key: SOLR-8145
> URL: https://issues.apache.org/jira/browse/SOLR-8145
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Nate Dire
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8145.patch, SOLR-8145.patch, SOLR-8145.patch
>
>
> I noticed the oom_killer script wasn't working in our 5.2 deployment.
> In the {{bin/solr}} script, the {{OnOutOfMemoryError}} option is being passed 
> as an arg to the jar rather than to the JVM.  I moved it ahead of {{-jar}} 
> and verified it shows up in the JVM args in the UI.
> {noformat}
># run Solr in the background
> nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS -jar start.jar \
> "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT 
> $SOLR_LOGS_DIR" "${SOLR_JETTY_CONFIG[@]}" \
> {noformat}
> Also, I'm not sure what the {{SOLR_PORT}} and {{SOLR_LOGS_DIR}} args are 
> doing--they don't appear to be positional arguments to the jar.
> Attaching a patch against 5.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8769) CloudMLTQParser does not use uniqueKey field name for exclusion

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8769:
---
Fix Version/s: 5.5.1

> CloudMLTQParser does not use uniqueKey field name for exclusion
> ---
>
> Key: SOLR-8769
> URL: https://issues.apache.org/jira/browse/SOLR-8769
> Project: Solr
>  Issue Type: Bug
>Reporter: Erik Hatcher
> Fix For: master, 6.0, 5.5.1
>
>
> Using the {{\{!mlt}}} query parser in cloud mode on a schema with a non-"id" 
> uniqueKey, the main "like this" document won't be excluded properly due to 
> this code:
> {code}
> realMLTQuery.add(createIdQuery("id", id), BooleanClause.Occur.MUST_NOT);
> {code}
> See also 
> https://github.com/apache/lucene-solr/blob/813ca77250db29116812bc949e2a466a70f969a3/solr/core/src/java/org/apache/solr/search/mlt/CloudMLTQParser.java#L166
> Like SimpleMLTQParser, it needs to use the uniqueKey field with this type of 
> code: {{req.getSchema().getUniqueKeyField().getName()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8728) Splitting a shard of a collection created with a rule fails with NPE

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8728.

Resolution: Fixed

> Splitting a shard of a collection created with a rule fails with NPE
> 
>
> Key: SOLR-8728
> URL: https://issues.apache.org/jira/browse/SOLR-8728
> Project: Solr
>  Issue Type: Bug
>Reporter: Shai Erera
>Assignee: Noble Paul
> Fix For: master, 6.0, 5.5.1, 6.1
>
> Attachments: SOLR-8728.patch, SOLR-8728.patch
>
>
> Spinoff from this discussion: http://markmail.org/message/f7liw4hqaagxo7y2
> I wrote a short test which reproduces, will upload shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8449) Multiple restores on the same core does not work

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8449.

Resolution: Fixed

> Multiple restores on the same core does not work
> 
>
> Key: SOLR-8449
> URL: https://issues.apache.org/jira/browse/SOLR-8449
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java), replication (scripts)
>Affects Versions: 5.2.1, 5.4
> Environment: SUSE Linux Enterprise Server 11 (64 bit) and Windows 7 
> Prof SP1
>Reporter: Johannes Brucher
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: Backup/Restore
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8449.patch, SOLR-8449.patch, log_windows7_sp1.txt
>
>
> Hi all, I'm facing the following issue with Solr 5.2.1 and the ongoing version 
> 5.4.
> The restore functionality is not working under Linux and causing an exception 
> on Windows machines each time you want to restore an existing backup twice or 
> even more.
> Steps to reproduce:
> 1. Start a Solr instance pointing the solr_home to e.g. the example-DIH  
> folder.
> 2. Select a core, e.g. the “solr” core.
> 3. Switch to the “Documents” tab
> 4. Add a document {“id”:”1”,”title”:”change.me”}
> 5. Do a backup with the following API call 
> “/solr/replication?command=backup&name=test”
> The backup defaults to the location solr_home/solr/data/snapshot.test
> 6. Add a document to the index {“id”:”2”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 7. Restore the backup with the following call 
> “/solr/replication?command=restore&name=test”
> New index location “solr_home/solr/data/restore.snapshot.test” is created 
> without any physical file in it, except the file write.lock. Num Docs is now 
> 1 as expected!
> 8. Add a document to the index {“id”:”3”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 9. Restore the same previously created backup again with the following call 
> “/solr/replication?command=restore&name=test”. Notice, there are still 2 docs 
> in the index!!!
> 10. Try to restore again, but still the same, 2 docs in the index…
> 11. Shut down Solr, you will see the index folder 
> “solr_home/solr/data/restore.snapshot.test” disappears.
> 12. Restart Solr. You will notice the following log entry “Solr index 
> directory ‘solr_home/solr/data/restore.snapshot.test’ doesn’t exist. Creating 
> new index”, and indeed the Index is empty, showing 0 documents.
> 13. After the restart, I tried to restore the existing backup again without 
> any results…
> I think this behavior is not intended!!!
> Even more problems arise when you run Solr on a Windows machine.
> After step 10 a folder “index” is created under “solr_home/solr/data/” with a 
> write.lock file in it. After that, the following exception is thrown: 
> …Error closing IndexWriter
> java.lang.IllegalStateException: file: 
> MMapDirectory@D:\solr\Solr_versions\solr-5.2.1\...restore.snapshot.test 
> lockFactory=org.apache.lucene.store.Nat
> iveFSLockFactory@3d3d7a0e appears both in delegate and in cache
> The log file from the Windows test is attached.
> Let me know if you have problems reproducing the same behavior,
> Regards Johannes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8779) Fix missing InterruptedException handling in ZkStateReader

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8779.

Resolution: Fixed

> Fix missing InterruptedException handling in ZkStateReader
> --
>
> Key: SOLR-8779
> URL: https://issues.apache.org/jira/browse/SOLR-8779
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8779.patch
>
>
> I was debugging a zk session expired issue and saw this stack-trace
> {code}
> ERROR - 2016-03-03 06:55:53.873; [   ] org.apache.solr.common.SolrException; 
> OverseerAutoReplicaFailoverThread had an error in its thread work 
> loop.:org.apache.solr.common.SolrException: Error reading cluster properties
>   at 
> org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:738)
>   at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.doWork(OverseerAutoReplicaFailoverThread.java:153)
>   at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:132)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryDelay(ZkCmdExecutor.java:108)
>   at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:76)
>   at 
> org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:731)
>   ... 3 more
> {code}
> So I audited ZKStateReader and found a couple of places where an 
> InterruptedException was caught but the interrupt flag wasn't set back.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Jenkins for branch_5_5

2016-04-21 Thread Steve Rowe
Done.

--
Steve
www.lucidworks.com

> On Apr 21, 2016, at 8:07 PM, Anshum Gupta  wrote:
> 
> Hi,
> 
> Can someone enable the Jenkins builds for branch_5_5? I'm getting close to 
> wrapping up all the backports for the 5.5.1 release. Thanks.
> 
> -- 
> Anshum Gupta


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1097 - Still Failing

2016-04-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1097/

3 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([4B17D181DD650A:1FF1662651BDA3CF]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:135)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh(ZkStateReaderTest.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 2 object(s) that were not released!!! 
[MockDirectoryWrapper, 

Jenkins for branch_5_5

2016-04-21 Thread Anshum Gupta
Hi,

Can someone enable the Jenkins builds for branch_5_5? I'm getting close to
wrapping up all the backports for the 5.5.1 release. Thanks.

-- 
Anshum Gupta


[jira] [Updated] (LUCENE-7239) Speed up LatLonPoint's polygon queries when there are many vertices

2016-04-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7239:

Attachment: LUCENE-7239.patch

I added minor cleanups and comments to make this less sandy. It's passed 100 
beast rounds. I will test some more, get it into Jenkins, and follow up with 
other improvements.

> Speed up LatLonPoint's polygon queries when there are many vertices
> ---
>
> Key: LUCENE-7239
> URL: https://issues.apache.org/jira/browse/LUCENE-7239
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7239.patch, LUCENE-7239.patch
>
>
> This is inspired by the "reliability and numerical stability" recommendations 
> at the end of http://www-ma2.upc.es/geoc/Schirra-pointPolygon.pdf.
> Basically our polys need to answer two questions that are slow today:
> contains(point)
> crosses(rectangle)
> Both of these ops only care about a subset of edges: the ones overlapping a y 
> interval range. We can organize these edges in an interval tree to be 
> practical and speed things up a lot. Worst case is still O(n) but those 
> solutions are more complex to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9029) regular fails since ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy

2016-04-21 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum reassigned SOLR-9029:


Assignee: Scott Blum

> regular fails since  
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy 
> 
>
> Key: SOLR-9029
> URL: https://issues.apache.org/jira/browse/SOLR-9029
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Scott Blum
>
> Jenkins started to semi-regularly complain about 
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy on March 7 (53 
> failures in 45 days at current count).
> March 7th is not coincidentally when commit 
> 093a8ce57c06f1bf2f71ddde52dcc7b40cbd6197 for SOLR-8745 was made, modifying 
> both the test & a bunch of ClusterState code.
> 
> Sample failure...
> https://builds.apache.org/job/Lucene-Solr-Tests-master/1096
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ZkStateReaderTest 
> -Dtests.method=testStateFormatUpdateWithExplicitRefreshLazy 
> -Dtests.seed=78F99EDE682EC04B -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=tr-TR -Dtests.timezone=Europe/Tallinn -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.45s J0 | 
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy <<<
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: Could 
> not find collection : c1
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([78F99EDE682EC04B:13B63EA311211D71]:0)
>[junit4]>  at 
> org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:135)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy(ZkStateReaderTest.java:46)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> ...i've also seen this fail locally, but i've never been able to reproduce it 
> with the same seed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9029) regular fails since ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy

2016-04-21 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252998#comment-15252998
 ] 

Scott Blum commented on SOLR-9029:
--

Scanned through the code, nothing jumps out at me. I'll dig deeper at some 
point.

> regular fails since  
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy 
> 
>
> Key: SOLR-9029
> URL: https://issues.apache.org/jira/browse/SOLR-9029
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Scott Blum
>
> Jenkins started to semi-regularly complain about 
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy on March 7 (53 
> failures in 45 days at current count).
> March 7th is not coincidentally when commit 
> 093a8ce57c06f1bf2f71ddde52dcc7b40cbd6197 for SOLR-8745 was made, modifying 
> both the test & a bunch of ClusterState code.
> 
> Sample failure...
> https://builds.apache.org/job/Lucene-Solr-Tests-master/1096
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ZkStateReaderTest 
> -Dtests.method=testStateFormatUpdateWithExplicitRefreshLazy 
> -Dtests.seed=78F99EDE682EC04B -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=tr-TR -Dtests.timezone=Europe/Tallinn -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.45s J0 | 
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy <<<
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: Could 
> not find collection : c1
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([78F99EDE682EC04B:13B63EA311211D71]:0)
>[junit4]>  at 
> org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:135)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy(ZkStateReaderTest.java:46)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> ...i've also seen this fail locally, but i've never been able to reproduce it 
> with the same seed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 538 - Failure!

2016-04-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/538/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=5441, 
name=Thread-1861, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud]   
  at java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:937) 
at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2488)   
  at org.apache.solr.core.SolrCore$$Lambda$26/184813436.run(Unknown Source) 
at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2425)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSolrConfigHandlerCloud: 
   1) Thread[id=5441, name=Thread-1861, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333)
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:937)
at 
org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2488)
at org.apache.solr.core.SolrCore$$Lambda$26/184813436.run(Unknown 
Source)
at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2425)
at __randomizedtesting.SeedInfo.seed([DFC5836C1D10E053]:0)




Build Log:
[...truncated 10896 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerCloud
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J1/temp/solr.handler.TestSolrConfigHandlerCloud_DFC5836C1D10E053-001/init-core-data-001
   [junit4]   2> 670212 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[DFC5836C1D10E053]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 670212 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[DFC5836C1D10E053]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /uta/jn
   [junit4]   2> 670217 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DFC5836C1D10E053]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 670217 INFO  (Thread-1748) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 670217 INFO  (Thread-1748) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 670317 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DFC5836C1D10E053]) [] 
o.a.s.c.ZkTestServer start zk server on port:48064
   [junit4]   2> 670317 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DFC5836C1D10E053]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 670318 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DFC5836C1D10E053]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 670383 INFO  (zkCallback-667-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@206d1077 
name:ZooKeeperConnection Watcher:127.0.0.1:48064 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 670383 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DFC5836C1D10E053]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 670383 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DFC5836C1D10E053]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 670383 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DFC5836C1D10E053]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 670395 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DFC5836C1D10E053]) [] 

[jira] [Assigned] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-21 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-9028:
--

Assignee: Hoss Man

> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9028.patch
>
>
> While looking into SOLR-8970 I realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc.)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9029) regular fails since ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy

2016-04-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252978#comment-15252978
 ] 

Hoss Man commented on SOLR-9029:


[~shalinmangar] & [~dragonsinth] - anything jump out at you?

> regular fails since  
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy 
> 
>
> Key: SOLR-9029
> URL: https://issues.apache.org/jira/browse/SOLR-9029
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> Jenkins started to semi-regularly complain about 
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy on March 7 (53 
> failures in 45 days at current count).
> March 7th is not coincidentally when commit 
> 093a8ce57c06f1bf2f71ddde52dcc7b40cbd6197 for SOLR-8745 was made, modifying 
> both the test & a bunch of ClusterState code.
> 
> Sample failure...
> https://builds.apache.org/job/Lucene-Solr-Tests-master/1096
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ZkStateReaderTest 
> -Dtests.method=testStateFormatUpdateWithExplicitRefreshLazy 
> -Dtests.seed=78F99EDE682EC04B -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=tr-TR -Dtests.timezone=Europe/Tallinn -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.45s J0 | 
> ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy <<<
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: Could 
> not find collection : c1
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([78F99EDE682EC04B:13B63EA311211D71]:0)
>[junit4]>  at 
> org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:135)
>[junit4]>  at 
> org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy(ZkStateReaderTest.java:46)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> ...i've also seen this fail locally, but i've never been able to reproduce it 
> with the same seed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8449) Multiple restores on the same core does not work

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8449:
---
Fix Version/s: 5.5.1

> Multiple restores on the same core does not work
> 
>
> Key: SOLR-8449
> URL: https://issues.apache.org/jira/browse/SOLR-8449
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java), replication (scripts)
>Affects Versions: 5.2.1, 5.4
> Environment: SUSE Linux Enterprise Server 11 (64 bit) and Windows 7 
> Prof SP1
>Reporter: Johannes Brucher
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: Backup/Restore
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8449.patch, SOLR-8449.patch, log_windows7_sp1.txt
>
>
> Hi all, I'm facing the following issue with Solr 5.2.1 and the ongoing version 
> 5.4.
> The restore functionality is not working under Linux and causing an exception 
> on Windows machines each time you want to restore an existing backup twice or 
> even more.
> Steps to reproduce:
> 1. Start a Solr instance pointing the solr_home to e.g. the example-DIH  
> folder.
> 2. Select a core, e.g. the “solr” core.
> 3. Switch to the “Documents” tab
> 4. Add a document {“id”:”1”,”title”:”change.me”}
> 5. Do a backup with the following API call 
> “/solr/replication?command=backup&name=test”
> The backup defaults to the location solr_home/solr/data/snapshot.test
> 6. Add a document to the index {“id”:”2”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 7. Restore the backup with the following call 
> “/solr/replication?command=restore&name=test”
> New index location “solr_home/solr/data/restore.snapshot.test” is created 
> without any physical file in it, except the file write.lock. Num Docs is now 
> 1 as expected!
> 8. Add a document to the index {“id”:”3”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 9. Restore the same previously created backup again with the following call 
> “/solr/replication?command=restore&name=test”. Notice, there are still 2 docs 
> in the index!!!
> 10. Try to restore again, but still the same, 2 docs in the index…
> 11. Shut down Solr, you will see the index folder 
> “solr_home/solr/data/restore.snapshot.test” disappears.
> 12. Restart Solr. You will notice the following log entry “Solr index 
> directory ‘solr_home/solr/data/restore.snapshot.test’ doesn’t exist. Creating 
> new index”, and indeed the Index is empty, showing 0 documents.
> 13. After the restart, I tried to restore the existing backup again without 
> any results…
> I think this behavior is not intended!!!
> Even more problems arise when you run Solr on a Windows machine.
> After step 10 a folder “index” is created under “solr_home/solr/data/” with a 
> write.lock file in it. After that, the following exception is thrown: 
> …Error closing IndexWriter
> java.lang.IllegalStateException: file: 
> MMapDirectory@D:\solr\Solr_versions\solr-5.2.1\...restore.snapshot.test 
> lockFactory=org.apache.lucene.store.Nat
> iveFSLockFactory@3d3d7a0e appears both in delegate and in cache
> The log file from the Windows test is attached.
> Let me know if you have problems reproducing the same behavior,
> Regards Johannes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9029) regular fails since ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy

2016-04-21 Thread Hoss Man (JIRA)
Hoss Man created SOLR-9029:
--

 Summary: regular fails since  
ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy 
 Key: SOLR-9029
 URL: https://issues.apache.org/jira/browse/SOLR-9029
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


Jenkins started to semi-regularly complain about 
ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy on March 7 (53 
failures in 45 days at current count).

March 7th is not coincidentally when commit 
093a8ce57c06f1bf2f71ddde52dcc7b40cbd6197 for SOLR-8745 was made, modifying both 
the test & a bunch of ClusterState code.



Sample failure...

https://builds.apache.org/job/Lucene-Solr-Tests-master/1096

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=ZkStateReaderTest 
-Dtests.method=testStateFormatUpdateWithExplicitRefreshLazy 
-Dtests.seed=78F99EDE682EC04B -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=tr-TR -Dtests.timezone=Europe/Tallinn -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.45s J0 | 
ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy <<<
   [junit4]> Throwable #1: org.apache.solr.common.SolrException: Could not 
find collection : c1
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([78F99EDE682EC04B:13B63EA311211D71]:0)
   [junit4]>at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
   [junit4]>at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:135)
   [junit4]>at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy(ZkStateReaderTest.java:46)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{noformat}

...i've also seen this fail locally, but i've never been able to reproduce it 
with the same seed.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6968) LSH Filter

2016-04-21 Thread Andy Hind (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252743#comment-15252743
 ] 

Andy Hind edited comment on LUCENE-6968 at 4/21/16 11:06 PM:
-

Hi

It would be quite common to use min hashing after shingling. At this point the 
number of possible word combinations vs the size of the hash is important. With 
shingles of 5 words from 100,000 that is 10e25 combinations. Some naive 
processing of the ~500k  Enron emails (splitting on white space, case folding 
and 5 word shingles) gives ~52M  combinations. So a long hash would be better 
at 1.8e19. I have not yet looked at a larger corpus.

The LSH query is neat. However the logic can give banding where the last band 
is uneven. In the patch I think the last band would be dropped unless bands * 
rows in band  = # of hashes

The underlying state of the source filter may also be lost (if using shingling)

I do not believe the similarity is required at all. I think you can get Jaccard 
distance using constant score queries and disabling coordination on the boolean 
query. 

I went for 128-bit hashes, or a 32 bit hash identifier + 96 bit hash with a bit 
more flexibility allowing a minimum set of hash values for a bunch of hashes. 
There is clearly some trade off for speed of hashing and over representing 
short documents. The minimum set may be a solution to this.  I think there is 
some interesting research there. 

I will add my patch inspired by the original  and apologise for the mixed 
formatting in advance ..



was (Author: andyhind):
Hi

It would be quite common to use min hashing after shingling. At this point the 
number of possible word combinations vs the size of the hash is important. With 
shingles of 5 words from 100,000 that is 10e25 combinations. Some naive 
processing of the ~500k  Enron emails (splitting on white space, case folding 
and 5 word shingles) gives ~1e13  combinations. So a long hash would be better 
at 1.8e19. I have not yet looked at a larger corpus.

The LSH query is neat. However the logic can give banding where the last band 
is uneven. In the patch I think the last band would be dropped unless bands * 
rows in band  = # of hashes

The underlying state of the source filter may also be lost (if using shingling)

I do not believe the similarity is required at all. I think you can get Jaccard 
distance using constant score queries and disabling coordination on the boolean 
query. 

I went for 128-bit hashes, or a 32 bit hash identifier + 96 bit hash with a bit 
more flexibility allowing a minimum set of hash values for a bunch of hashes. 
There is clearly some trade off for speed of hashing and over representing 
short documents. The minimum set may be a solution to this.  I think there is 
some interesting research there. 

I will add my patch inspired by the original  and apologise for the mixed 
formatting in advance ..


> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
> Attachments: LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH. Which support query like this
> {quote}
> Find similar documents that have 0.8 or higher similar score with a given 
> document. Similarity measurement can be cosine, jaccard, euclid..
> {quote}
> For example. Given following corpus
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is an popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We wanna find documents that have 0.6 score in jaccard measurement with this 
> doc
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4)






[jira] [Comment Edited] (SOLR-8449) Multiple restores on the same core does not work

2016-04-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252968#comment-15252968
 ] 

Anshum Gupta edited comment on SOLR-8449 at 4/21/16 11:04 PM:
--

back porting for 5.5.1


was (Author: anshumg):
back porting to 5.5.1

> Multiple restores on the same core does not work
> 
>
> Key: SOLR-8449
> URL: https://issues.apache.org/jira/browse/SOLR-8449
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java), replication (scripts)
>Affects Versions: 5.2.1, 5.4
> Environment: SUSE Linux Enterprise Server 11 (64 bit) and Windows 7 
> Prof SP1
>Reporter: Johannes Brucher
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: Backup/Restore
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8449.patch, SOLR-8449.patch, log_windows7_sp1.txt
>
>
> Hi all, I am facing the following issue with Solr 5.2.1 and the current version 
> 5.4.
> The restore functionality is not working under Linux, and it causes an exception 
> on Windows machines each time you want to restore an existing backup twice or 
> more.
> Steps to reproduce:
> 1. Start a Solr instance pointing the solr_home to e.g. the example-DIH  
> folder.
> 2. Select a core, e.g. the “solr” core.
> 3. Switch to the “Documents” tab
> 4. Add a document {“id”:”1”,”title”:”change.me”}
> 5. Do a backup with the following API call 
> “/solr/replication?command=backup&name=test”
> The backup defaults to the location solr_home/solr/data/snapshot.test
> 6. Add a document to the index {“id”:”2”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 7. Restore the backup with the following call 
> “/solr/replication?command=restore&name=test”
> The new index location “solr_home/solr/data/restore.snapshot.test” is created 
> without any physical file in it, except the file write.lock. Num Docs is now 
> 1, as expected!
> 8. Add a document to the index {“id”:”3”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 9. Restore the same previously created backup again with the following call 
> “/solr/replication?command=restore&name=test”. Notice that there are still 2 docs 
> in the index!
> 10. Try to restore again, but still the same: 2 docs in the index…
> 11. Shut down Solr, you will see the index folder 
> “solr_home/solr/data/restore.snapshot.test” disappears.
> 12. Restart Solr. You will notice the following log entry “Solr index 
> directory ‘solr_home/solr/data/restore.snapshot.test’ doesn’t exist. Creating 
> new index”, and indeed the Index is empty, showing 0 documents.
> 13. After the restart, I tried to restore the existing backup again without 
> any results…
> I think this behavior is not intended!
> Even more problems arise when you run Solr on a Windows machine.
> After step 10 a folder “index” is created under “solr_home/solr/data/” with a 
> write.lock file in it. After that, the following exception is thrown: 
> …Error closing IndexWriter
> java.lang.IllegalStateException: file: 
> MMapDirectory@D:\solr\Solr_versions\solr-5.2.1\...restore.snapshot.test 
> lockFactory=org.apache.lucene.store.Nat
> iveFSLockFactory@3d3d7a0e appears both in delegate and in cache
> The log file from the Windows test is attached.
> Let me know if you have problems reproducing the same behavior,
> Regards Johannes
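
The reproduction above can also be scripted. A minimal sketch using plain JDK HTTP calls, assuming the example "solr" core on localhost:8983 (host, port, and the core name in the path are assumptions for illustration, not taken from the report):

{code}
import java.io.InputStream;
import java.net.URL;

// Sketch of steps 5, 7 and 9 from the report: take a named backup, restore it,
// then restore it a second time (the second restore is the one that misbehaves).
public class BackupRestoreRepro {
  static void get(String url) throws Exception {
    try (InputStream in = new URL(url).openStream()) {
      while (in.read() != -1) { /* drain the response */ }
    }
  }

  public static void main(String[] args) throws Exception {
    String base = "http://localhost:8983/solr/solr/replication";
    get(base + "?command=backup&name=test");    // step 5: snapshot.test is written
    get(base + "?command=restore&name=test");   // step 7: first restore works
    get(base + "?command=restore&name=test");   // step 9: second restore leaves the old docs
  }
}
{code}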






[jira] [Reopened] (SOLR-8449) Multiple restores on the same core does not work

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8449:


back porting to 5.5.1

> Multiple restores on the same core does not work
> 
>
> Key: SOLR-8449
> URL: https://issues.apache.org/jira/browse/SOLR-8449
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java), replication (scripts)
>Affects Versions: 5.2.1, 5.4
> Environment: SUSE Linux Enterprise Server 11 (64 bit) and Windows 7 
> Prof SP1
>Reporter: Johannes Brucher
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: Backup/Restore
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8449.patch, SOLR-8449.patch, log_windows7_sp1.txt
>
>
> Hi all, I am facing the following issue with Solr 5.2.1 and the current version 
> 5.4.
> The restore functionality is not working under Linux, and it causes an exception 
> on Windows machines each time you want to restore an existing backup twice or 
> more.
> Steps to reproduce:
> 1. Start a Solr instance pointing the solr_home to e.g. the example-DIH  
> folder.
> 2. Select a core, e.g. the “solr” core.
> 3. Switch to the “Documents” tab
> 4. Add a document {“id”:”1”,”title”:”change.me”}
> 5. Do a backup with the following API call 
> “/solr/replication?command=backup&name=test”
> The backup defaults to the location solr_home/solr/data/snapshot.test
> 6. Add a document to the index {“id”:”2”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 7. Restore the backup with the following call 
> “/solr/replication?command=restore&name=test”
> The new index location “solr_home/solr/data/restore.snapshot.test” is created 
> without any physical file in it, except the file write.lock. Num Docs is now 
> 1, as expected!
> 8. Add a document to the index {“id”:”3”,”title”:”change.me”}. Now there are 
> two documents in the index.
> 9. Restore the same previously created backup again with the following call 
> “/solr/replication?command=restore&name=test”. Notice that there are still 2 docs 
> in the index!
> 10. Try to restore again, but still the same: 2 docs in the index…
> 11. Shut down Solr, you will see the index folder 
> “solr_home/solr/data/restore.snapshot.test” disappears.
> 12. Restart Solr. You will notice the following log entry “Solr index 
> directory ‘solr_home/solr/data/restore.snapshot.test’ doesn’t exist. Creating 
> new index”, and indeed the Index is empty, showing 0 documents.
> 13. After the restart, I tried to restore the existing backup again without 
> any results…
> I think this behavior is not intended!
> Even more problems arise when you run Solr on a Windows machine.
> After step 10 a folder “index” is created under “solr_home/solr/data/” with a 
> write.lock file in it. After that, the following exception is thrown: 
> …Error closing IndexWriter
> java.lang.IllegalStateException: file: 
> MMapDirectory@D:\solr\Solr_versions\solr-5.2.1\...restore.snapshot.test 
> lockFactory=org.apache.lucene.store.Nat
> iveFSLockFactory@3d3d7a0e appears both in delegate and in cache
> The log file from the Windows test is attached.
> Let me know if you have problems reproducing the same behavior,
> Regards Johannes






[jira] [Updated] (SOLR-8779) Fix missing InterruptedException handling in ZkStateReader

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8779:
---
Fix Version/s: 5.5.1

> Fix missing InterruptedException handling in ZkStateReader
> --
>
> Key: SOLR-8779
> URL: https://issues.apache.org/jira/browse/SOLR-8779
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8779.patch
>
>
> I was debugging a zk session expired issue and saw this stack-trace
> {code}
> ERROR - 2016-03-03 06:55:53.873; [   ] org.apache.solr.common.SolrException; 
> OverseerAutoReplicaFailoverThread had an error in its thread work 
> loop.:org.apache.solr.common.SolrException: Error reading cluster properties
>   at 
> org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:738)
>   at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.doWork(OverseerAutoReplicaFailoverThread.java:153)
>   at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:132)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryDelay(ZkCmdExecutor.java:108)
>   at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:76)
>   at 
> org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:731)
>   ... 3 more
> {code}
> So I audited ZKStateReader and found a couple of places where an 
> InterruptedException was caught but the interrupt flag wasn't set back.
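
The fix pattern is small. A sketch of restoring the interrupt status (illustrative only, not the attached SOLR-8779 patch):

{code}
// Illustrative pattern: when InterruptedException is caught and not rethrown,
// restore the thread's interrupt status so callers such as
// OverseerAutoReplicaFailoverThread can still observe the interruption.
public class InterruptFriendlyDelay {
  static void retryDelay(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();   // set the flag back instead of swallowing it
      throw new RuntimeException("Interrupted while waiting to retry", e);
    }
  }
}
{code}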






[jira] [Reopened] (SOLR-8779) Fix missing InterruptedException handling in ZkStateReader

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8779:


backport for 5.5.1

> Fix missing InterruptedException handling in ZkStateReader
> --
>
> Key: SOLR-8779
> URL: https://issues.apache.org/jira/browse/SOLR-8779
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8779.patch
>
>
> I was debugging a zk session expired issue and saw this stack-trace
> {code}
> ERROR - 2016-03-03 06:55:53.873; [   ] org.apache.solr.common.SolrException; 
> OverseerAutoReplicaFailoverThread had an error in its thread work 
> loop.:org.apache.solr.common.SolrException: Error reading cluster properties
>   at 
> org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:738)
>   at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.doWork(OverseerAutoReplicaFailoverThread.java:153)
>   at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:132)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryDelay(ZkCmdExecutor.java:108)
>   at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:76)
>   at 
> org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:731)
>   ... 3 more
> {code}
> So I audited ZKStateReader and found a couple of places where an 
> InterruptedException was caught but the interrupt flag wasn't set back.






[jira] [Resolved] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8599.

Resolution: Fixed

> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper or its 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}
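
A sketch of the retry idea in point 1 above (hypothetical helper with made-up names; the real fix is in the attached patches):

{code}
import java.util.concurrent.Callable;

// Sketch of point 1: retry a failing reconnect a bounded number of times
// instead of giving up after the first constructor exception.
public class ReconnectWithRetry {
  static <T> T withRetries(Callable<T> reconnect, int maxAttempts, long backoffMillis) throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return reconnect.call();                 // e.g. construct a new ZooKeeper client
      } catch (Exception e) {                    // UnknownHostException, etc.
        last = e;
        Thread.sleep(backoffMillis * attempt);   // simple linear backoff between attempts
      }
    }
    throw last;                                  // still failing: surface the last error
  }
}
{code}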






[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252955#comment-15252955
 ] 

ASF subversion and git services commented on SOLR-8599:
---

Commit 8d24d72ab64d435a5e6bdca11b5e79c22f0057ef in lucene-solr's branch 
refs/heads/branch_6_0 from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8d24d72 ]

SOLR-8599: Improved the tests for this issue to avoid changing a variable to 
non-final


> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper or its 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}






Re: [JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3224 - Still Failing!

2016-04-21 Thread Chris Hostetter


https://issues.apache.org/jira/browse/SOLR-8992



: Date: Thu, 21 Apr 2016 19:37:16 + (UTC)
: From: Policeman Jenkins Server 
: Reply-To: dev@lucene.apache.org
: To: no...@apache.org, nkn...@apache.org, markrmil...@apache.org,
: jpou...@gmail.com, daddy...@gmail.com, dev@lucene.apache.org
: Subject: [JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3224 -
:  Still Failing!
: 
: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3224/
: Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
: 
: 1 tests failed.
: FAILED:  
org.apache.solr.client.solrj.request.SchemaTest.testSchemaRequestAccuracy
: 
: Error Message:
: java.util.LinkedHashMap cannot be cast to 
org.apache.solr.common.util.NamedList
: 
: Stack Trace:
: java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to 
org.apache.solr.common.util.NamedList
:   at 
__randomizedtesting.SeedInfo.seed([B07EE1D8472F7C65:3782EE7649D781E2]:0)
:   at 
org.apache.solr.client.solrj.response.schema.SchemaResponse.setResponse(SchemaResponse.java:252)
:   at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
:   at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
:   at 
org.apache.solr.client.solrj.request.SchemaTest.testSchemaRequestAccuracy(SchemaTest.java:123)
:   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
:   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
:   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
:   at java.lang.reflect.Method.invoke(Method.java:498)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
:   at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
:   at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
:   at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
:   at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
:   at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
:   at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
:   at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
:   at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
:   at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
:   at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
:   at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
:   at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
:   at 

[jira] [Commented] (SOLR-8992) Restore Schema API GET method functionality removed by SOLR-8736

2016-04-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252954#comment-15252954
 ] 

Hoss Man commented on SOLR-8992:


every jenkins build that makes it far enough to run the solrj tests is failing 
on this test...

ant test  -Dtestcase=SchemaTest -Dtests.method=testSchemaRequestAccuracy 
-Dtests.seed=B07EE1D8472F7C65 -Dtests.slow=true -Dtests.locale=es-VE 
-Dtests.timezone=Europe/Nicosia -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

...that seed (along with every other seed i tried) is failing reliably for me...

{noformat}
java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to 
org.apache.solr.common.util.NamedList
at 
__randomizedtesting.SeedInfo.seed([B07EE1D8472F7C65:3782EE7649D781E2]:0)
at 
org.apache.solr.client.solrj.response.schema.SchemaResponse.setResponse(SchemaResponse.java:252)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.client.solrj.request.SchemaTest.testSchemaRequestAccuracy(SchemaTest.java:123)
{noformat}

the problem seems to be commit e8cc19eb885c46d25b56fdd681825712516050c9; revert to the 
previous SHA (2ee8426) and it passes

> Restore Schema API GET method functionality removed by SOLR-8736
> 
>
> Key: SOLR-8992
> URL: https://issues.apache.org/jira/browse/SOLR-8992
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Noble Paul
> Attachments: SOLR-8992.patch, SOLR-8992.patch, SOLR-8992.patch
>
>
> The following schema API GET functionality was removed under SOLR-8736; some 
> of this functionality should be restored:
> * {{schema/copyfields}}:
> ** The following information is no longer output:
> *** {{destDynamicBase}}: the matching dynamic field pattern for the 
> destination
> *** {{sourceDynamicBase}}: the matching dynamic field pattern for the source
> ** The following request parameters are no longer supported:
> *** {{dest.fl}}: include only copyFields that have one of these as a 
> destination
> *** {{source.fl}}: include only copyFields that have one of these as a source
> * {{schema/dynamicfields}}:
> ** The following request parameters are no longer supported:
> *** {{fl}}: a comma and/or space separated list of dynamic field patterns to 
> include 
> * {{schema/fields}} and {{schema/fields/_fieldname_}}:
> ** The following information is no longer output:
> *** {{dynamicBase}}: the matching dynamic field pattern, if the 
> {{includeDynamic}} param is given (see below) 
> ** The following request parameters are no longer supported:
> *** {{fl}}: (only supported without {{/_fieldname_}}): a comma and/or space 
> separated list of fields to include 
> *** {{includeDynamic}}: output the matching dynamic field pattern as 
> {{dynamicBase}}, if {{_fieldname_}}, or field(s) listed in {{fl}} param, are 
> not explicitly declared in the schema
> * {{schema/fieldtypes}} and {{schema/fieldtypes/_typename_}}:
> ** The following information is no longer output: 
> *** {{fields}}: the fields with the given field type
> *** {{dynamicFields}}: the dynamic fields with the given field type  
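
For reference, a sketch of how two of the removed request parameters listed above ({{fl}} and {{includeDynamic}} on {{schema/fields}}) were typically exercised. Plain JDK HTTP GET; the core name, field names, and {{wt=json}} are assumptions for illustration only:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch only: fetch two named fields and ask for the matching dynamic field
// pattern, i.e. the behaviour described above as removed by SOLR-8736.
public class SchemaGetExample {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8983/solr/techproducts/schema/fields"
        + "?fl=id,title&includeDynamic=true&wt=json");
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = r.readLine()) != null) {
        System.out.println(line);   // print the schema response as-is
      }
    }
  }
}
{code}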






[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252953#comment-15252953
 ] 

ASF subversion and git services commented on SOLR-8599:
---

Commit 78176e23bcac5c6e4accd8989dc931ec6cedb188 in lucene-solr's branch 
refs/heads/branch_6x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=78176e2 ]

SOLR-8599: Improved the tests for this issue to avoid changing a variable to 
non-final


> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper or its 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}






[jira] [Reopened] (SOLR-8728) Splitting a shard of a collection created with a rule fails with NPE

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8728:


Reopening to add the change log entry to branch_5_5.

> Splitting a shard of a collection created with a rule fails with NPE
> 
>
> Key: SOLR-8728
> URL: https://issues.apache.org/jira/browse/SOLR-8728
> Project: Solr
>  Issue Type: Bug
>Reporter: Shai Erera
>Assignee: Noble Paul
> Fix For: master, 6.0, 5.5.1, 6.1
>
> Attachments: SOLR-8728.patch, SOLR-8728.patch
>
>
> Spinoff from this discussion: http://markmail.org/message/f7liw4hqaagxo7y2
> I wrote a short test which reproduces, will upload shortly.






[jira] [Commented] (SOLR-8728) Splitting a shard of a collection created with a rule fails with NPE

2016-04-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252948#comment-15252948
 ] 

Anshum Gupta commented on SOLR-8728:


Noble, this was committed to branch_5_5 but seems like you missed the change 
log entry.

> Splitting a shard of a collection created with a rule fails with NPE
> 
>
> Key: SOLR-8728
> URL: https://issues.apache.org/jira/browse/SOLR-8728
> Project: Solr
>  Issue Type: Bug
>Reporter: Shai Erera
>Assignee: Noble Paul
> Fix For: master, 6.0, 5.5.1, 6.1
>
> Attachments: SOLR-8728.patch, SOLR-8728.patch
>
>
> Spinoff from this discussion: http://markmail.org/message/f7liw4hqaagxo7y2
> I wrote a short test which reproduces, will upload shortly.






[jira] [Updated] (SOLR-8145) bin/solr script oom_killer arg incorrect

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8145:
---
Fix Version/s: 5.5.1

> bin/solr script oom_killer arg incorrect
> 
>
> Key: SOLR-8145
> URL: https://issues.apache.org/jira/browse/SOLR-8145
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Nate Dire
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8145.patch, SOLR-8145.patch, SOLR-8145.patch
>
>
> I noticed the oom_killer script wasn't working in our 5.2 deployment.
> In the {{bin/solr}} script, the {{OnOutOfMemoryError}} option is being passed 
> as an arg to the jar rather than to the JVM.  I moved it ahead of {{-jar}} 
> and verified it shows up in the JVM args in the UI.
> {noformat}
># run Solr in the background
> nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS -jar start.jar \
> "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT 
> $SOLR_LOGS_DIR" "${SOLR_JETTY_CONFIG[@]}" \
> {noformat}
> Also, I'm not sure what the {{SOLR_PORT}} and {{SOLR_LOGS_DIR}} args are 
> doing--they don't appear to be positional arguments to the jar.
> Attaching a patch against 5.2.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16557 - Still Failing!

2016-04-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16557/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([171A934ECAAA636D:8A0E2B91ACAA5A8]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:135)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh(ZkStateReaderTest.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10685 lines...]
   [junit4] Suite: org.apache.solr.cloud.overseer.ZkStateReaderTest
   [junit4]   2> Creating 

[jira] [Reopened] (SOLR-8145) bin/solr script oom_killer arg incorrect

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8145:


backport for 5.5.1

> bin/solr script oom_killer arg incorrect
> 
>
> Key: SOLR-8145
> URL: https://issues.apache.org/jira/browse/SOLR-8145
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Nate Dire
>Assignee: Timothy Potter
>Priority: Minor
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8145.patch, SOLR-8145.patch, SOLR-8145.patch
>
>
> I noticed the oom_killer script wasn't working in our 5.2 deployment.
> In the {{bin/solr}} script, the {{OnOutOfMemoryError}} option is being passed 
> as an arg to the jar rather than to the JVM.  I moved it ahead of {{-jar}} 
> and verified it shows up in the JVM args in the UI.
> {noformat}
># run Solr in the background
> nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS -jar start.jar \
> "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT 
> $SOLR_LOGS_DIR" "${SOLR_JETTY_CONFIG[@]}" \
> {noformat}
> Also, I'm not sure what the {{SOLR_PORT}} and {{SOLR_LOGS_DIR}} args are 
> doing--they don't appear to be positional arguments to the jar.
> Attaching a patch against 5.2.






[jira] [Updated] (SOLR-8771) Multi-threaded core shutdown creates executor per core

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8771:
---
Fix Version/s: (was: master)
   5.5.1
   6.0

> Multi-threaded core shutdown creates executor per core
> --
>
> Key: SOLR-8771
> URL: https://issues.apache.org/jira/browse/SOLR-8771
> Project: Solr
>  Issue Type: Bug
>Reporter: Mike Drob
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8771.patch
>
>
> The multi-threaded core shutdown that was added in SOLR-8615 has a bug where 
> a new executor is created per core. This means we don't get any benefit from 
> the parallel operations.
> Patch incoming shortly.
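
A sketch of the intended behaviour (hypothetical shutdown loop, not the attached SOLR-8771 patch): all cores should queue onto one shared executor so their close() calls actually overlap, instead of each core getting its own freshly created executor.

{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the idea described above: one shared pool for all core shutdowns.
public class ParallelCoreShutdownSketch {
  interface Core { void close(); }

  static void closeAll(List<Core> cores) throws InterruptedException {
    ExecutorService shared = Executors.newFixedThreadPool(Math.max(1, Math.min(cores.size(), 8)));
    for (Core core : cores) {
      shared.execute(core::close);   // every core goes onto the same pool
    }
    shared.shutdown();
    shared.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
  }
}
{code}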






[jira] [Reopened] (SOLR-8771) Multi-threaded core shutdown creates executor per core

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8771:

  Assignee: (was: Mark Miller)

Back porting for 5.5.1.

> Multi-threaded core shutdown creates executor per core
> --
>
> Key: SOLR-8771
> URL: https://issues.apache.org/jira/browse/SOLR-8771
> Project: Solr
>  Issue Type: Bug
>Reporter: Mike Drob
> Fix For: master
>
> Attachments: SOLR-8771.patch
>
>
> The multi-threaded core shutdown that was added in SOLR-8615 has a bug where 
> a new executor is created per core. This means we don't get any benefit from 
> the parallel operations.
> Patch incoming shortly.






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 471 - Still Failing

2016-04-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/471/

No tests ran.

Build Log:
[...truncated 40517 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (18.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 28.6 MB in 0.02 sec (1160.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 62.9 MB in 0.25 sec (247.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 73.5 MB in 0.06 sec (1191.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5995 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5995 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 218 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (24.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 37.7 MB in 0.60 sec (62.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 132.0 MB in 2.20 sec (59.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 140.6 MB in 2.01 sec (69.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]   [|]  

[jira] [Resolved] (SOLR-8420) Date statistics: sumOfSquares overflows long

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8420.

Resolution: Fixed

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master, 6.0, 5.5.1
>
> Attachments: 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch, SOLR-8420.patch, StdDev.java
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> In StatsValuesFactory.java, line 755, DateStatsValues#updateTypeSpecificStats, add 
> a cast to double: 
> sumOfSquares += ( (double)value * value * count);
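
A small demonstration of why the cast above matters: current epoch-millisecond values squared (~2e24) are far beyond Long.MAX_VALUE (~9.2e18), so the long multiplication silently wraps while the double accumulation stays correct. The sample value is an illustrative date, not taken from the issue.

{code}
public class DateSumOfSquaresOverflow {
  public static void main(String[] args) {
    long value = 1_461_283_200_000L;   // ~2016-04-22T00:00:00Z in epoch millis (illustrative)
    long count = 1;
    long overflowed = value * value * count;          // wraps around to a garbage value
    double correct = (double) value * value * count;  // ~2.13e24, no overflow
    System.out.println("long   : " + overflowed);
    System.out.println("double : " + correct);
  }
}
{code}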






[jira] [Resolved] (SOLR-8748) OverseerTaskProcessor limits number of concurrent tasks to just 10 even though the thread pool size is 100

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8748.

Resolution: Fixed

> OverseerTaskProcessor limits number of concurrent tasks to just 10 even 
> though the thread pool size is 100
> --
>
> Key: SOLR-8748
> URL: https://issues.apache.org/jira/browse/SOLR-8748
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.4, 5.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8748.patch
>
>
> OverseerTaskProcessor uses maxParallelThreads to limit the number of concurrent 
> tasks, but the same value is not used for creating the thread pool. The default 
> limit of 10 is too small, IMO, and we should change it to 100. The overseer 
> collection processor mostly just waits around on network calls, so there is no 
> harm in increasing this limit.
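
A sketch of the relationship between the concurrency limit and the pool (hypothetical names and numbers, not the OverseerTaskProcessor code): if the semaphore that gates admissions is smaller than the thread pool, only the semaphore's count of tasks ever runs at once, so the two sizes need to agree.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Sketch only: keep the admission limit and the pool size in step.
public class TaskLimitSketch {
  static final int MAX_PARALLEL_TASKS = 100;                        // proposed limit
  static final Semaphore slots = new Semaphore(MAX_PARALLEL_TASKS); // gates admissions
  static final ExecutorService pool = Executors.newFixedThreadPool(MAX_PARALLEL_TASKS);

  static void submit(Runnable task) throws InterruptedException {
    slots.acquire();
    pool.execute(() -> {
      try {
        task.run();
      } finally {
        slots.release();   // free the slot when the task finishes
      }
    });
  }
}
{code}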






[jira] [Resolved] (SOLR-8375) ReplicaAssigner rejects valid positions

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8375.

Resolution: Fixed

> ReplicaAssigner rejects valid positions
> ---
>
> Key: SOLR-8375
> URL: https://issues.apache.org/jira/browse/SOLR-8375
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Kelvin Tan
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8375.patch, patch.txt
>
>
> ReplicaAssigner rejects any position for which a rule does not return 
> NODE_CAN_BE_ASSIGNED.
> However, if the rule's shard does not apply to the position's shard, Rule 
> returns NOT_APPLICABLE. This is not taken into account, and thus valid rules 
> are being rejected at the moment. 






[jira] [Resolved] (SOLR-8758) Add SolrCloudTestCase base class

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8758.

Resolution: Fixed

> Add SolrCloudTestCase base class
> 
>
> Key: SOLR-8758
> URL: https://issues.apache.org/jira/browse/SOLR-8758
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8758.patch
>
>
> At the moment, if you want to write unit tests for Cloud components, you have 
> to extend AbstractDistribZkTestCase, which has a number of disadvantages:
> * the API isn't well-documented
> * you get a default configuration loaded into ZK, and it's not trivial to add 
> separate ones
> * you get a default collection, whether you want one or not
> * the test cluster isn't static, which means that it's started up and 
> shut down after every test function. To avoid tests being incredibly slow, we 
> end up writing single-function tests that call out to sub-functions, losing 
> the benefits of execution-order randomization.
> It would be more useful to have a properly configurable and documented 
> testcase base class.






[jira] [Resolved] (SOLR-8738) invalid DBQ initially sent to a non-leader node will report success

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8738.

Resolution: Fixed

> invalid DBQ initially sent to a non-leader node will report success
> ---
>
> Key: SOLR-8738
> URL: https://issues.apache.org/jira/browse/SOLR-8738
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8738.patch, SOLR-8738.patch, SOLR-8738.patch
>
>
> Discovered this while working on SOLR-445.
> If a Delete By Query gets sent to a node which is not hosting a leader (ie: 
> only hosts replicas, or doesn't host any cores related to the specified 
> collection) then a success will be returned, even if the DBQ is completely 
> malformed and actually failed.






[jira] [Commented] (SOLR-8738) invalid DBQ initially sent to a non-leader node will report success

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252873#comment-15252873
 ] 

ASF subversion and git services commented on SOLR-8738:
---

Commit 66d3c2eb0a1e7b28621557c87c8d5b5219a95add in lucene-solr's branch 
refs/heads/branch_5_5 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=66d3c2e ]

SOLR-8738: Fixed false success response when invalid deleteByQuery requests 
intially hit non-leader cloud nodes


> invalid DBQ initially sent to a non-leader node will report success
> ---
>
> Key: SOLR-8738
> URL: https://issues.apache.org/jira/browse/SOLR-8738
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8738.patch, SOLR-8738.patch, SOLR-8738.patch
>
>
> Discovered this while working on SOLR-445.
> If a Delete By Query gets sent to a node which is not hosting a leader (ie: 
> only hosts replicas, or doesn't host any cores related to the specified 
> collection) then a success will be returned, even if the DBQ is completely 
> malformed and actually failed.






[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252868#comment-15252868
 ] 

ASF subversion and git services commented on SOLR-8599:
---

Commit 983abb1ca14f7ee42678a03f9d754af8e05e8288 in lucene-solr's branch 
refs/heads/branch_5_5 from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=983abb1 ]

SOLR-8599: Improved the tests for this issue to avoid changing a variable to 
non-final


> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper or its 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be address here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}
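
(Editorial note.) A generic sketch of the first point above, a retry loop around client construction; connect() and MAX_RETRIES are placeholders and not what the attached patches actually do.

{code}
// Hypothetical retry-on-construction-failure sketch; connect() stands in for
// whatever really builds the SolrZooKeeper/ZooKeeper client.
public class ReconnectSketch {
  static final int MAX_RETRIES = 5;   // placeholder

  static void connect() throws Exception {
    throw new java.net.UnknownHostException("HOSTNAME");   // simulate the DNS failure
  }

  public static void main(String[] args) throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
      try {
        connect();
        System.out.println("connected on attempt " + attempt);
        return;
      } catch (Exception e) {
        last = e;
        System.out.println("attempt " + attempt + " failed: " + e);
        Thread.sleep(1000L * attempt);   // simple linear backoff
      }
    }
    throw last;   // still failing: don't keep pretending the node is healthy
  }
}
{code}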



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8758) Add SolrCloudTestCase base class

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252872#comment-15252872
 ] 

ASF subversion and git services commented on SOLR-8758:
---

Commit 4a274605b8b426029276b9cccec78a23c095e0da in lucene-solr's branch 
refs/heads/branch_5_5 from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4a27460 ]

SOLR-8758: Add SolrCloudTestCase base class


> Add SolrCloudTestCase base class
> 
>
> Key: SOLR-8758
> URL: https://issues.apache.org/jira/browse/SOLR-8758
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8758.patch
>
>
> At the moment, if you want to write unit tests for Cloud components, you have 
> to extend AbstractDistribZkTestCase, which has a number of disadvantages:
> * the API isn't well-documented
> * you get a default configuration loaded into ZK, and it's not trivial to add 
> separate ones
> * you get a default collection, whether you want one or not
> * the test cluster isn't static, which means that it's started up and 
> shut down after every test function.  To avoid tests being incredibly slow, we 
> end up writing single-function tests that call out to sub-functions, losing 
> the benefits of execution-order randomization.
> It would be more useful to have a properly configurable and documented 
> testcase base class.
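
(Editorial note.) Roughly what a test built on such a base class looks like; the config name, configset path, and the exact builder API are assumptions and may differ from the attached patch.

{code}
// Hypothetical usage sketch, not taken from the patch.
import java.nio.file.Paths;
import org.apache.solr.cloud.SolrCloudTestCase;
import org.junit.BeforeClass;
import org.junit.Test;

public class MyCloudTest extends SolrCloudTestCase {

  @BeforeClass
  public static void setupCluster() throws Exception {
    // Static cluster with an explicitly uploaded configset; no default collection.
    configureCluster(2)
        .addConfig("conf", Paths.get("src/test-files/solr/configsets/minimal/conf"))
        .configure();
  }

  @Test
  public void testClusterIsUp() throws Exception {
    // "cluster" is the shared MiniSolrCloudCluster managed by the base class.
    assertEquals(2, cluster.getJettySolrRunners().size());
  }
}
{code}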



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8420) Date statistics: sumOfSquares overflows long

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252869#comment-15252869
 ] 

ASF subversion and git services commented on SOLR-8420:
---

Commit f9acafbd917b7970b29f12e0c637612d2cd216f7 in lucene-solr's branch 
refs/heads/branch_5_5 from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9acafb ]

SOLR-8420: Fix long overflow in sumOfSquares for Date statistics

Cast operations to double. Changed the test to support a percentage error, 
given the FUZZY flag in doubles


> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master, 6.0, 5.5.1
>
> Attachments: 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch, SOLR-8420.patch, StdDev.java
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> In StatsValuesFactory.java, line 755 (DateStatsValues#updateTypeSpecificStats), add 
> a cast to double: 
> sumOfSquares += ( (double)value * value * count);
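
(Editorial note.) A tiny standalone illustration of the overflow, not Solr code: squaring an epoch-millis value in long arithmetic wraps around, while the cast suggested above keeps the accumulation in double.

{code}
public class DateSumOfSquaresOverflow {
  public static void main(String[] args) {
    long value = 1_450_000_000_000L;   // a 2015-ish date as epoch millis
    int count = 1;

    long overflowed = value * value * count;            // exceeds Long.MAX_VALUE and wraps
    double correct  = (double) value * value * count;   // the cast proposed in the issue

    System.out.println("long   sumOfSquares: " + overflowed);  // garbage (overflowed)
    System.out.println("double sumOfSquares: " + correct);     // ~2.1E24
  }
}
{code}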



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8748) OverseerTaskProcessor limits number of concurrent tasks to just 10 even though the thread pool size is 100

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252870#comment-15252870
 ] 

ASF subversion and git services commented on SOLR-8748:
---

Commit f1127db72c7ac247f25420c34b49b92f3e156dd7 in lucene-solr's branch 
refs/heads/branch_5_5 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f1127db ]

SOLR-8748: OverseerTaskProcessor limits number of concurrent tasks to just 10 
even though the thread pool size is 100. The limit has now been increased to 
100.


> OverseerTaskProcessor limits number of concurrent tasks to just 10 even 
> though the thread pool size is 100
> --
>
> Key: SOLR-8748
> URL: https://issues.apache.org/jira/browse/SOLR-8748
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.4, 5.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8748.patch
>
>
> OverseerTaskProcessor uses maxParallelThreads to limit the number of concurrent 
> tasks, but the same value is not used for creating the thread pool. The default 
> limit of 10 is too small, IMO, and we should change it to 100. The overseer 
> collection processor mostly just waits around on network calls, so there is no 
> harm in increasing this limit.
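
(Editorial note.) A self-contained sketch of the mismatch being described, not the Overseer code: a pool sized for 100 threads whose effective parallelism is capped by a much smaller semaphore.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelismCapSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(100); // pool allows 100 threads
    Semaphore maxParallelTasks = new Semaphore(10);           // but only 10 permits
    AtomicInteger running = new AtomicInteger();

    for (int i = 0; i < 100; i++) {
      maxParallelTasks.acquire();                 // gate acquired before submitting
      pool.submit(() -> {
        try {
          System.out.println("running: " + running.incrementAndGet()); // never exceeds 10
          Thread.sleep(200);                      // simulate a slow network call
          running.decrementAndGet();
        } catch (InterruptedException ignored) {
          Thread.currentThread().interrupt();
        } finally {
          maxParallelTasks.release();
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
  }
}
{code}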



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8375) ReplicaAssigner rejects valid positions

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252871#comment-15252871
 ] 

ASF subversion and git services commented on SOLR-8375:
---

Commit 38156552730eb5865297f28f5660f5427c43d56a in lucene-solr's branch 
refs/heads/branch_5_5 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3815655 ]

SOLR-8375: ReplicaAssigner rejects valid nodes


> ReplicaAssigner rejects valid positions
> ---
>
> Key: SOLR-8375
> URL: https://issues.apache.org/jira/browse/SOLR-8375
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Kelvin Tan
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8375.patch, patch.txt
>
>
> ReplicaAssigner rejects any position for which a rule does not return 
> NODE_CAN_BE_ASSIGNED.
> However, if the rule's shard does not apply to the position's shard, Rule 
> returns NOT_APPLICABLE. This is not taken into account, and thus valid rules 
> are being rejected at the moment. 
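
(Editorial note.) A deliberately simplified sketch of the distinction the report is making; the enum below only mirrors the two statuses named above plus a placeholder rejection value, and is not the real Rule/ReplicaAssigner code.

{code}
// Simplified illustration only.
public class PositionCheckSketch {
  enum MatchStatus { NODE_CAN_BE_ASSIGNED, NOT_APPLICABLE, EXPLICIT_REJECTION /* placeholder */ }

  // Buggy behaviour: anything other than NODE_CAN_BE_ASSIGNED rejects the position.
  static boolean acceptBuggy(MatchStatus s) {
    return s == MatchStatus.NODE_CAN_BE_ASSIGNED;
  }

  // Intended behaviour: a rule whose shard doesn't apply is simply not a veto.
  static boolean acceptFixed(MatchStatus s) {
    return s == MatchStatus.NODE_CAN_BE_ASSIGNED || s == MatchStatus.NOT_APPLICABLE;
  }

  public static void main(String[] args) {
    System.out.println(acceptBuggy(MatchStatus.NOT_APPLICABLE));  // false: valid position rejected
    System.out.println(acceptFixed(MatchStatus.NOT_APPLICABLE));  // true
  }
}
{code}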



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252867#comment-15252867
 ] 

ASF subversion and git services commented on SOLR-8599:
---

Commit 853e1b99b10ccce4029fd77ba88df17dbc77ce3d in lucene-solr's branch 
refs/heads/branch_5_5 from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=853e1b9 ]

SOLR-8599: After a failed connection during construction of SolrZkClient 
attempt to retry until a connection can be made


> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper or its 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-04-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252863#comment-15252863
 ] 

Anshum Gupta commented on SOLR-8599:


Thanks Keith. We need to track these better :)
I'll commit the other one to 6x too. I got both of them to 5x and I'm about to 
commit these to 5.5.

> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper or its 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7239) Speed up LatLonPoint's polygon queries when there are many vertices

2016-04-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252856#comment-15252856
 ] 

Robert Muir commented on LUCENE-7239:
-

I think most of the logN solutions are too tricky: of course, if we can 
implement one for 2D and it outperforms this, we can throw this stuff out for 
it.

But the logN data structures I looked at involved tricky calculations (and I 
don't want to introduce error), whereas this one is doing "obviously" the same 
thing as the slower versions it replaces, since the optimization is only 
based on comparisons, which are exact, and it's the same comparisons the slow 
versions do in each iteration of the loop.  

I also have concerns about those complicated logN data structures 
introducing high overhead (echoed here in "Faster Tests": 
http://erich.realtimerendering.com/ptinpoly/), which might mean they are 
impractical. Another thing I really am trying to keep is "one codepath", without 
specialization for different types of polygons in any way. This makes it easier 
to understand what the adversaries are.

We just have to keep in mind this stuff here is still linear time, but I think 
it's a practical improvement. So maybe there is a similar, more 1980s-style approach 
for geo3d that is "good enough" but not too complicated there as well.


> Speed up LatLonPoint's polygon queries when there are many vertices
> ---
>
> Key: LUCENE-7239
> URL: https://issues.apache.org/jira/browse/LUCENE-7239
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7239.patch
>
>
> This is inspired by the "reliability and numerical stability" recommendations 
> at the end of http://www-ma2.upc.es/geoc/Schirra-pointPolygon.pdf.
> Basically our polys need to answer two questions that are slow today:
> contains(point)
> crosses(rectangle)
> Both of these ops only care about a subset of edges: the ones overlapping a y 
> interval range. We can organize these edges in an interval tree to be 
> practical and speed things up a lot. Worst case is still O(n) but those 
> solutions are more complex to do.
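
(Editorial note.) To make the "only the edges overlapping a y interval matter" point concrete, here is a toy crossing-number test, not the patch itself, that skips every edge whose latitude interval doesn't straddle the query latitude; the interval tree in the patch is a way to find those edges without scanning them all.

{code}
// Toy point-in-polygon sketch over parallel lat/lon vertex arrays (polygon closed implicitly).
public class EdgeFilterSketch {
  static boolean contains(double[] lats, double[] lons, double lat, double lon) {
    boolean inside = false;
    for (int i = 0, j = lats.length - 1; i < lats.length; j = i++) {
      // Only edges whose latitude interval straddles the query latitude can flip the
      // answer; an interval tree lets us jump straight to these instead of looping.
      if ((lats[i] > lat) != (lats[j] > lat)) {
        double lonCross = (lons[j] - lons[i]) * (lat - lats[i]) / (lats[j] - lats[i]) + lons[i];
        if (lon < lonCross) {
          inside = !inside;
        }
      }
    }
    return inside;
  }

  public static void main(String[] args) {
    double[] lats = {0, 0, 10, 10};
    double[] lons = {0, 10, 10, 0};
    System.out.println(contains(lats, lons, 5, 5));   // true
    System.out.println(contains(lats, lons, 5, 15));  // false
  }
}
{code}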



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 93 - Failure!

2016-04-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/93/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:53030/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:53030/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([15674B1DEE069395:9D3374C740FAFE6D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:661)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1073)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:962)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:898)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-04-21 Thread Justin Deoliveira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252850#comment-15252850
 ] 

Justin Deoliveira commented on SOLR-5944:
-

I've been following this patch for a while, and am super excited about recent 
progress. I just applied the latest patch locally and built with Maven, and it 
resulted in some forbidden API failures. I don't know if this helps, but here is 
a minor 
[patch|https://gist.githubusercontent.com/jdeolive/5b56848603fe5cbac804cd5acb8ebcd2/raw/35b3a3150c652738713618e7d8fcf7bd6cf3e0e6/forbiddenapis.patch]
 that addresses them. 

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.
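
(Editorial note.) For context on what LUCENE-5189 provides at the Lucene level: IndexWriter can rewrite a numeric doc-values field in place, keyed by a term, without re-indexing the document. The field and id values below are illustrative.

{code}
// Minimal Lucene sketch, not from the attached patches.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.RAMDirectory;

public class UpdateNumericDocValuesSketch {
  public static void main(String[] args) throws Exception {
    try (RAMDirectory dir = new RAMDirectory();
         IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Document doc = new Document();
      doc.add(new StringField("id", "1", Field.Store.YES));
      doc.add(new NumericDocValuesField("popularity", 1L));
      writer.addDocument(doc);
      writer.commit();

      // The LUCENE-5189 capability: update just the docvalue for docs matching the term.
      writer.updateNumericDocValue(new Term("id", "1"), "popularity", 42L);
      writer.commit();
    }
  }
}
{code}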



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8748) OverseerTaskProcessor limits number of concurrent tasks to just 10 even though the thread pool size is 100

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252817#comment-15252817
 ] 

ASF subversion and git services commented on SOLR-8748:
---

Commit 953949181992351cdad417d5fff05f1fcc5ee510 in lucene-solr's branch 
refs/heads/branch_5x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9539491 ]

SOLR-8748: OverseerTaskProcessor limits number of concurrent tasks to just 10 
even though the thread pool size is 100. The limit has now been increased to 
100.


> OverseerTaskProcessor limits number of concurrent tasks to just 10 even 
> though the thread pool size is 100
> --
>
> Key: SOLR-8748
> URL: https://issues.apache.org/jira/browse/SOLR-8748
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.4, 5.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8748.patch
>
>
> OverseerTaskProcessor uses maxParallelThreads to limit the number of concurrent 
> tasks, but the same value is not used for creating the thread pool. The default 
> limit of 10 is too small, IMO, and we should change it to 100. The overseer 
> collection processor mostly just waits around on network calls, so there is no 
> harm in increasing this limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8738) invalid DBQ initially sent to a non-leader node will report success

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252820#comment-15252820
 ] 

ASF subversion and git services commented on SOLR-8738:
---

Commit 9e77319abcf3ff372b86cec4d66bac11f7e038b6 in lucene-solr's branch 
refs/heads/branch_5x from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9e77319 ]

SOLR-8738: Fixed false success response when invalid deleteByQuery requests 
initially hit non-leader cloud nodes


> invalid DBQ initially sent to a non-leader node will report success
> ---
>
> Key: SOLR-8738
> URL: https://issues.apache.org/jira/browse/SOLR-8738
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8738.patch, SOLR-8738.patch, SOLR-8738.patch
>
>
> Discovered this while working on SOLR-445.
> If a Delete By Query gets sent to a node which is not hosting a leader (i.e. it 
> only hosts replicas, or doesn't host any cores related to the specified 
> collection), then success will be returned, even if the DBQ is completely 
> malformed and actually failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8758) Add SolrCloudTestCase base class

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252819#comment-15252819
 ] 

ASF subversion and git services commented on SOLR-8758:
---

Commit 27e284bb7bcd6536b3c017d76e675f24397cce9c in lucene-solr's branch 
refs/heads/branch_5x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=27e284b ]

SOLR-8758: Add SolrCloudTestCase base class


> Add SolrCloudTestCase base class
> 
>
> Key: SOLR-8758
> URL: https://issues.apache.org/jira/browse/SOLR-8758
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8758.patch
>
>
> At the moment, if you want to write unit tests for Cloud components, you have 
> to extend AbstractDistribZkTestCase, which has a number of disadvantages:
> * the API isn't well-documented
> * you get a default configuration loaded into ZK, and it's not trivial to add 
> separate ones
> * you get a default collection, whether you want one or not
> * the test cluster isn't static, which means that it's started up and 
> shut down after every test function.  To avoid tests being incredibly slow, we 
> end up writing single-function tests that call out to sub-functions, losing 
> the benefits of execution-order randomization.
> It would be more useful to have a properly configurable and documented 
> testcase base class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252815#comment-15252815
 ] 

ASF subversion and git services commented on SOLR-8599:
---

Commit 20e2caba9615e19f84fbcc59a950fb197385592e in lucene-solr's branch 
refs/heads/branch_5x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=20e2cab ]

SOLR-8599: Improved the tests for this issue to avoid changing a variable to 
non-final


> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper or its 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8599) Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent state

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252814#comment-15252814
 ] 

ASF subversion and git services commented on SOLR-8599:
---

Commit d9875832f4798e5f732f4ae5627c7b306ccafa9c in lucene-solr's branch 
refs/heads/branch_5x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d987583 ]

SOLR-8599: After a failed connection during construction of SolrZkClient 
attempt to retry until a connection can be made


> Errors in construction of SolrZooKeeper cause Solr to go into an inconsistent 
> state
> ---
>
> Key: SOLR-8599
> URL: https://issues.apache.org/jira/browse/SOLR-8599
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8599.patch, SOLR-8599.patch, SOLR-8599.patch, 
> SOLR-8599.patch
>
>
> We originally saw this happen due to a DNS exception (see stack trace below), 
> although any exception thrown in the constructor of SolrZooKeeper or its 
> parent class, ZooKeeper, will cause DefaultConnectionStrategy to fail to 
> update the ZooKeeper client. Once it gets into this state, it will not try to 
> connect again until the process is restarted. The node itself will also 
> respond successfully to query requests, but not to update requests.
> Two things should be addressed here:
> 1) Fix the error handling and issue some number of retries
> 2) If we are stuck in a state like this, stop responding to all requests 
> {code}
> 2016-01-23 13:49:20.222 ERROR ConnectionManager [main-EventThread] - 
> :java.net.UnknownHostException: HOSTNAME: unknown error
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
> at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
> at java.net.InetAddress.getAllByName(InetAddress.java:1192)
> at java.net.InetAddress.getAllByName(InetAddress.java:1126)
> at 
> org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
> at org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> 2016-01-23 13:49:20.222 INFO ConnectionManager [main-EventThread] - 
> Connected:false
> 2016-01-23 13:49:20.222 INFO ClientCnxn [main-EventThread] - EventThread shut 
> down
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8375) ReplicaAssigner rejects valid positions

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252818#comment-15252818
 ] 

ASF subversion and git services commented on SOLR-8375:
---

Commit 3c6ef10e9e3455fc5027a2f45b889dd1c025d055 in lucene-solr's branch 
refs/heads/branch_5x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3c6ef10 ]

SOLR-8375: ReplicaAssigner rejects valid nodes


> ReplicaAssigner rejects valid positions
> ---
>
> Key: SOLR-8375
> URL: https://issues.apache.org/jira/browse/SOLR-8375
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Kelvin Tan
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8375.patch, patch.txt
>
>
> ReplicaAssigner rejects any position for which a rule does not return 
> NODE_CAN_BE_ASSIGNED.
> However, if the rule's shard does not apply to the position's shard, Rule 
> returns NOT_APPLICABLE. This is not taken into account, and thus valid rules 
> are being rejected at the moment. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8420) Date statistics: sumOfSquares overflows long

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252816#comment-15252816
 ] 

ASF subversion and git services commented on SOLR-8420:
---

Commit 2beccf469f9e07eb5a05fef9ec3f869d6da4008a in lucene-solr's branch 
refs/heads/branch_5x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2beccf4 ]

SOLR-8420: Fix long overflow in sumOfSquares for Date statistics

Cast operations to double. Changed the test to support a percentage error, 
given the FUZZY flag in doubles


> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master, 6.0, 5.5.1
>
> Attachments: 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch, SOLR-8420.patch, StdDev.java
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> In StatsValuesFactory.java, line 755 (DateStatsValues#updateTypeSpecificStats), add 
> a cast to double: 
> sumOfSquares += ( (double)value * value * count);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7239) Speed up LatLonPoint's polygon queries when there are many vertices

2016-04-21 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252821#comment-15252821
 ] 

Karl Wright commented on LUCENE-7239:
-

So here's a quick brain-dump of the geo3d technology, which may help.  First, 
the basic component of membership is what's called a "sided plane", which is a 
single plane where one side of the plane is in-set and the other side is 
out-of-set.  Being on the plane is considered in-set.  This is computed with an 
inherent level of accuracy, e.g. anything within a perpendicular distance of 
epsilon from the plane is considered to sit on the plane.  This is also 
extremely fast: three multiplications, two additions, and a comparison.  The 
second technology is finding intersections of two planes on the surface of the 
ellipsoid.  Since planes intersect along a line, there is also some requirement 
of "membership", which is typically a set of sided planes that the intersection 
point must lie within.  Slightly less fast but still pretty good; there's a 
square root involved, but otherwise it's comparable to sided plane computation.  
This is also how we detect intersections between edges.  Finally, there's the 
ability to find the bounds of any plane's intersection with the ellipsoid.  
That's useful but considerably slower and less accurate.

All of the geo3d shapes are built using these technologies.  For polygons, 
though, because the inherent limitation of sided planes that go through the 
center of the ellipsoid is that they describe 1/2 of the ellipsoid, we can only 
effectively build convex or concave polygons, where concave polygons are just 
the complement of a convex polygon.  The current code therefore breaks an 
arbitrary messy polygon down into a set of convex and concave polygon tiles 
that are well-behaved.

The problem is that for polygons that have lots of edges, even after you 
construct a tiled representation, all queries about relationships/intersection 
and membership are O(N).  This would have to become O(log(N)) to be practical.  
In addition, the borough data has very closely spaced points that are 
essentially co-linear as far as geo3d is concerned: if you construct a plane 
with any two adjacent borough points you have a pretty good chance that the 
adjacent points on either side also sit on the plane.  So, unless some cleaning 
up is done, sided planes are useless for the borough polygons.  I've worked, 
therefore, on the cleanup problem, and (I think) solved it, but it still 
doesn't fix the O(N) issue.

Now, we could do the following kind of thing instead: build edges from simple 
planes (not sided planes), and use only plane intersection to compute 
membership.  Then, there would be a chance of ordering planes hierarchically to 
achieve O(log(N)) time.  But two caveats: (1) we don't know how to do the 
ordering yet, and (2) there may be similar numerical issues with computing 
intersections for very short edges.

Anyhow, let's keep kicking these ideas around.
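
(Editorial note.) A tiny sketch of the "sided plane" membership test described above; the class, epsilon value, and sign convention are placeholders, not geo3d's actual Plane/SidedPlane types.

{code}
// Illustration only; the real geo3d classes carry much more (bounds, construction, etc.).
public class SidedPlaneSketch {
  static final double EPSILON = 1e-12;   // placeholder tolerance

  final double a, b, c, d;   // plane a*x + b*y + c*z + d = 0 (d is 0 for planes through the center)
  final double sign;         // +1 or -1: which side of the plane is in-set

  SidedPlaneSketch(double a, double b, double c, double d, double sign) {
    this.a = a; this.b = b; this.c = c; this.d = d; this.sign = sign;
  }

  boolean isWithin(double x, double y, double z) {
    double eval = a * x + b * y + c * z + d;   // the cheap evaluation described above
    // Within epsilon of the plane counts as in-set; otherwise the side decides.
    return Math.abs(eval) < EPSILON || eval * sign > 0;
  }

  public static void main(String[] args) {
    SidedPlaneSketch p = new SidedPlaneSketch(1, 0, 0, 0, +1);  // plane x = 0, +x side in-set
    System.out.println(p.isWithin(0.5, 0.2, 0.1));    // true
    System.out.println(p.isWithin(-0.5, 0.2, 0.1));   // false
    System.out.println(p.isWithin(0.0, 0.9, 0.0));    // true (on the plane)
  }
}
{code}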

> Speed up LatLonPoint's polygon queries when there are many vertices
> ---
>
> Key: LUCENE-7239
> URL: https://issues.apache.org/jira/browse/LUCENE-7239
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7239.patch
>
>
> This is inspired by the "reliability and numerical stability" recommendations 
> at the end of http://www-ma2.upc.es/geoc/Schirra-pointPolygon.pdf.
> Basically our polys need to answer two questions that are slow today:
> contains(point)
> crosses(rectangle)
> Both of these ops only care about a subset of edges: the ones overlapping a y 
> interval range. We can organize these edges in an interval tree to be 
> practical and speed things up a lot. Worst case is still O(n) but those 
> solutions are more complex to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-21 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9028:
---
Attachment: SOLR-9028.patch


Here's my current in-progress patch (summary of changes below).  Feedback on 
the changes/tests, or suggestions for additional tests I haven't thought of yet, 
would be appreciated.

In particular: I would really love it if someone on OSX could run the new and 
improved TestMiniSolrCloudClusterSSL and let me know if it passes for you -- 
clientAuth randomization in SolrTestCaseJ4 has been completely disabled on OSX 
for a long time due to some consistent failures that no one ever got to the 
bottom of, and I'm wondering if it was a JVM bug that's still a problem with 
modern JVMs and/or if my changes to SSLTestConfig resolved whatever the 
underlying problem is (if not, I have another avenue to explore - see nocommit 
in SolrTestCaseJ4)



In this patch...

* SSLConfig
** jdocs that clientAuth and all other settings are ignored unless useSSL is 
true
** fix createContextFactory to pay attention to clientAuth setting & only use 
trustStore when it's set
** fix Boolean.getBoolean usage (see the sketch after this list)
* SSLTestConfig
** some refactoring & jdocs
** fix bug when building test *client* SSL Context
*** trust store & keystore have to be swapped from clients perspective
* SolrTestCaseJ4
** make clientAuth randomization more likely
* TestMiniSolrCloudClusterSSL
** don't rely on random sslConfig, test explicit SSL scenarios w/distinct test 
clusters
** add sanity check asserts of things like baseURL when we expect to be using 
SSL
** assert no false positives when requiring clientAuth by doing a HEAD request 
from a client w/o any client certs 


> fix bugs in (add sanity checks for) SSL clientAuth testing
> --
>
> Key: SOLR-9028
> URL: https://issues.apache.org/jira/browse/SOLR-9028
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-9028.patch
>
>
> While looking into SOLR-8970 I realized there was a whole heap of problems 
> with how clientAuth was being handled in tests.  Notably: it wasn't actually 
> being used when the randomization selects it (apparently due to a copy/paste 
> mistake in SOLR-7166).  But there are a few other misc issues (improper usage 
> of sysprops overrides for tests, misuse of keystore/truststore in test 
> clients, etc..)
> I'm working up a patch to fix all of this, and add some much needed tests to 
> *explicitly* verify both SSL and clientAuth that will include some "false 
> positive" verifications, and some "test the test" checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9028) fix bugs in (add sanity checks for) SSL clientAuth testing

2016-04-21 Thread Hoss Man (JIRA)
Hoss Man created SOLR-9028:
--

 Summary: fix bugs in (add sanity checks for) SSL clientAuth testing
 Key: SOLR-9028
 URL: https://issues.apache.org/jira/browse/SOLR-9028
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man



While looking into SOLR-8970 I realized there was a whole heap of problems with 
how clientAuth was being handled in tests.  Notably: it wasn't actually being 
used when the randomization selects it (apparently due to a copy/paste mistake 
in SOLR-7166).  But there are a few other misc issues (improper usage of sysprops 
overrides for tests, misuse of keystore/truststore in test clients, etc..)

I'm working up a patch to fix all of this, and add some much needed tests to 
*explicitly* verify both SSL and clientAuth that will include some "false 
positive" verifications, and some "test the test" checks.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8992) Restore Schema API GET method functionality removed by SOLR-8736

2016-04-21 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8992:
-
Attachment: SOLR-8992.patch

added the missing testcase

> Restore Schema API GET method functionality removed by SOLR-8736
> 
>
> Key: SOLR-8992
> URL: https://issues.apache.org/jira/browse/SOLR-8992
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Noble Paul
> Attachments: SOLR-8992.patch, SOLR-8992.patch, SOLR-8992.patch
>
>
> The following schema API GET functionality was removed under SOLR-8736; some 
> of this functionality should be restored:
> * {{schema/copyfields}}:
> ** The following information is no longer output:
> *** {{destDynamicBase}}: the matching dynamic field pattern for the 
> destination
> *** {{sourceDynamicBase}}: the matching dynamic field pattern for the source
> ** The following request parameters are no longer supported:
> *** {{dest.fl}}: include only copyFields that have one of these as a 
> destination
> *** {{source.fl}}: include only copyFields that have one of these as a source
> * {{schema/dynamicfields}}:
> ** The following request parameters are no longer supported:
> *** {{fl}}: a comma and/or space separated list of dynamic field patterns to 
> include 
> * {{schema/fields}} and {{schema/fields/_fieldname_}}:
> ** The following information is no longer output:
> *** {{dynamicBase}}: the matching dynamic field pattern, if the 
> {{includeDynamic}} param is given (see below) 
> ** The following request parameters are no longer supported:
> *** {{fl}}: (only supported without {{/_fieldname_}}): a comma and/or space 
> separated list of fields to include 
> *** {{includeDynamic}}: output the matching dynamic field pattern as 
> {{dynamicBase}}, if {{_fieldname_}}, or field(s) listed in {{fl}} param, are 
> not explicitly declared in the schema
> * {{schema/fieldtypes}} and {{schema/fieldtypes/_typename_}}:
> ** The following information is no longer output: 
> *** {{fields}}: the fields with the given field type
> *** {{dynamicFields}}: the dynamic fields with the given field type  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8758) Add SolrCloudTestCase base class

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8758:
---
Fix Version/s: 5.5.1

> Add SolrCloudTestCase base class
> 
>
> Key: SOLR-8758
> URL: https://issues.apache.org/jira/browse/SOLR-8758
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8758.patch
>
>
> At the moment, if you want to write unit tests for Cloud components, you have 
> to extend AbstractDistribZkTestCase, which has a number of disadvantages:
> * the API isn't well-documented
> * you get a default configuration loaded into ZK, and it's not trivial to add 
> separate ones
> * you get a default collection, whether you want one or not
> * the test cluster isn't static, which means that it's started up and 
> shut down after every test function.  To avoid tests being incredibly slow, we 
> end up writing single-function tests that call out to sub-functions, losing 
> the benefits of execution-order randomization.
> It would be more useful to have a properly configurable and documented 
> testcase base class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7239) Speed up LatLonPoint's polygon queries when there are many vertices

2016-04-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252776#comment-15252776
 ] 

Robert Muir commented on LUCENE-7239:
-

{quote}
Indeed, very impressive speed-up. Are you using the same borough polygons I am 
looking at, where the total vertex count is 186,000 or thereabouts?
{quote}
Yes, I'm using {{-points -polyMedium}} from the luceneutil benchmark. Total 
vertex count is 186318.

Note that this solution is still sandy. Imagine the russia polygon from 
geonames where you have 1000 components (one for each island). This will still 
be slow, because we don't yet "squash" the polygon all into one gon with 
separators. Our algorithms support that, but we'd still have to keep an 
additional tree of just the holes to answer CELL_OUTSIDE_QUERY when its fully 
contained in the holes. Also I want to make sure it doesn't blow the tree all 
to hell. Followup :)

{quote}
For geo3d, I would love to be able to do some similar edge tree construction, 
but I don't yet have a firm idea what the tree hierarchy criteria would be. 
Can't split on latitude, that's for sure. Maybe the z in (x,y,z)?
{quote}

See https://en.wikipedia.org/wiki/Interval_tree#Higher_dimensions_2 for some 
discussion. I am not familiar enough with how the 3D polygon algorithms work 
to offer anything intelligent. I'm still fighting with 2D :)


> Speed up LatLonPoint's polygon queries when there are many vertices
> ---
>
> Key: LUCENE-7239
> URL: https://issues.apache.org/jira/browse/LUCENE-7239
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7239.patch
>
>
> This is inspired by the "reliability and numerical stability" recommendations 
> at the end of http://www-ma2.upc.es/geoc/Schirra-pointPolygon.pdf.
> Basically our polys need to answer two questions that are slow today:
> contains(point)
> crosses(rectangle)
> Both of these ops only care about a subset of edges: the ones overlapping a y 
> interval range. We can organize these edges in an interval tree to be 
> practical and speed things up a lot. Worst case is still O(n) but those 
> solutions are more complex to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-8758) Add SolrCloudTestCase base class

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-8758:


I'll also back port this to 5x for 5.5.1 so that the bug fixes and tests that 
use SolrCloudTestCase base class from master/6.0 can be easily back ported.

> Add SolrCloudTestCase base class
> 
>
> Key: SOLR-8758
> URL: https://issues.apache.org/jira/browse/SOLR-8758
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Fix For: 6.0
>
> Attachments: SOLR-8758.patch
>
>
> At the moment, if you want to write unit tests for Cloud components, you have 
> to extend AbstractDistribZkTestCase, which has a number of disadvantages:
> * the API isn't well-documented
> * you get a default configuration loaded into ZK, and it's not trivial to add 
> separate ones
> * you get a default collection, whether you want one or not
> * the test cluster isn't static, which means that it's started up and 
> shut down after every test function.  To avoid tests being incredibly slow, we 
> end up writing single-function tests that call out to sub-functions, losing 
> the benefits of execution-order randomization.
> It would be more useful to have a properly configurable and documented 
> testcase base class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 158 - Still Failing

2016-04-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/158/

2 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([65567DB43CFEF2BD:7AEC0C43EC9E3478]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:136)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh(ZkStateReaderTest.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy

Error Message:
Could not find collection : c1

Stack Trace:

[jira] [Resolved] (SOLR-8837) Duplicate leader elector node detection is broken

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8837.

Resolution: Fixed

> Duplicate leader elector node detection is broken
> -
>
> Key: SOLR-8837
> URL: https://issues.apache.org/jira/browse/SOLR-8837
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master, 6.0
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8837.patch
>
>
> LeaderElector.checkIfIAmLeader checks to see if it has duplicate 
> registrations under its election node, but it does this by prefix 
> checking, which means that if core_node1 registers itself after core_node11, 
> it will think core_node11 is a duplicate, and delete the core_node11 node.
> This is causing regular failures in UnloadDistributedZkTest.
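
For illustration, here is a tiny standalone example of why the prefix match misfires. This is not the LeaderElector code itself, and the election node-name format shown is assumed purely for the demo.

    // Not Solr's LeaderElector: a minimal, self-contained demo of the pitfall.
    // Election node names of the form "<coreNodeName>-n_<sequence>" are assumed
    // here only for illustration.
    public class PrefixCheckDemo {
      public static void main(String[] args) {
        String myCore = "core_node1";
        String otherNode = "core_node11-n_0000000007";   // belongs to a different core

        // Buggy check: a bare prefix test also matches core_node11's election
        // node, so it would be flagged as a "duplicate" of core_node1 and deleted.
        System.out.println(otherNode.startsWith(myCore));        // true  (wrong)

        // Safer check: include the delimiter so only this core's own nodes match.
        System.out.println(otherNode.startsWith(myCore + "-"));  // false (correct)
      }
    }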






[jira] [Commented] (LUCENE-7239) Speed up LatLonPoint's polygon queries when there are many vertices

2016-04-21 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252755#comment-15252755
 ] 

Karl Wright commented on LUCENE-7239:
-

Indeed, very impressive speed-up.  Are you using the same borough polygons I am 
looking at, where the total vertex count is 186,000 or thereabouts?

For geo3d, I would love to be able to do some similar edge tree construction, 
but I don't yet have a firm idea what the tree hierarchy criteria would be.  
Can't split on latitude, that's for sure.  Maybe the z in (x,y,z)?


> Speed up LatLonPoint's polygon queries when there are many vertices
> ---
>
> Key: LUCENE-7239
> URL: https://issues.apache.org/jira/browse/LUCENE-7239
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7239.patch
>
>
> This is inspired by the "reliability and numerical stability" recommendations 
> at the end of http://www-ma2.upc.es/geoc/Schirra-pointPolygon.pdf.
> Basically our polys need to answer two questions that are slow today:
> contains(point)
> crosses(rectangle)
> Both of these ops only care about a subset of edges: the ones overlapping a y 
> interval range. We can organize these edges in an interval tree, which is 
> practical and speeds things up a lot. The worst case is still O(n); solutions 
> with better worst-case bounds exist, but they are more complex to implement.
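
As a rough illustration of that idea (not the actual Lucene patch), the sketch below keys each edge by the low end of its y interval in an unbalanced BST and records the maximum y of every subtree, so whole subtrees that cannot overlap a query y are skipped.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch only: an interval tree over edge y-ranges, not the LatLonPoint
    // implementation. Nodes are keyed by min(y1, y2) and carry the maximum y
    // found anywhere in their subtree so non-overlapping subtrees are pruned.
    final class Edge {
      final double y1, y2;        // the y interval covered by this polygon edge
      Edge left, right;
      double subtreeMaxY;

      Edge(double y1, double y2) {
        this.y1 = y1;
        this.y2 = y2;
        this.subtreeMaxY = Math.max(y1, y2);
      }

      double minY() { return Math.min(y1, y2); }

      void insert(Edge e) {
        if (e.minY() < minY()) {
          if (left == null) left = e; else left.insert(e);
        } else {
          if (right == null) right = e; else right.insert(e);
        }
        subtreeMaxY = Math.max(subtreeMaxY, Math.max(e.y1, e.y2));
      }

      // Collect every edge whose y interval contains the query y.
      void overlapping(double y, List<Edge> out) {
        if (y > subtreeMaxY) return;                       // nothing here reaches y
        if (left != null) left.overlapping(y, out);
        if (y >= minY() && y <= Math.max(y1, y2)) out.add(this);
        if (y >= minY() && right != null) right.overlapping(y, out);
      }
    }

    public class EdgeTreeDemo {
      public static void main(String[] args) {
        Edge root = new Edge(0, 2);
        root.insert(new Edge(1, 5));
        root.insert(new Edge(3, 4));
        List<Edge> hits = new ArrayList<>();
        root.overlapping(3.5, hits);
        System.out.println(hits.size());   // 2: edges (1,5) and (3,4) overlap y=3.5
      }
    }

A contains(point) or crosses(rectangle) check then only has to look at the returned edges rather than every edge of the polygon.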






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252752#comment-15252752
 ] 

ASF subversion and git services commented on SOLR-8697:
---

Commit f6fca6901665cab4bea078baa4350ddc8964a2cf in lucene-solr's branch 
refs/heads/branch_5_5 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f6fca69 ]

SOLR-8697: Fix precommit failure


> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Fix For: master, 6.0, 5.5.1
>
> Attachments: OverseerTestFail.log, SOLR-8697-followup.patch, 
> SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.
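
For context, here is a sketch of the standard ZooKeeper election recipe that this scenario stresses (not Solr's LeaderElector itself; the election path shown is illustrative). The key point is that each participant must watch its predecessor's node, so that when a kill -9'd process's stale ephemeral node finally expires, the waiting process re-checks leadership instead of hanging.

    import java.util.Collections;
    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs.Ids;
    import org.apache.zookeeper.ZooKeeper;

    class ElectionSketch {
      private final ZooKeeper zk;
      private final String electionPath = "/overseer_elect/election";  // illustrative path
      private String myNode;

      ElectionSketch(ZooKeeper zk) { this.zk = zk; }

      void join() throws Exception {
        // Ephemeral sequential node: disappears when our session expires.
        myNode = zk.create(electionPath + "/n_", new byte[0],
            Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        checkLeadership();
      }

      private void checkLeadership() throws Exception {
        List<String> nodes = zk.getChildren(electionPath, false);
        Collections.sort(nodes);
        String mine = myNode.substring(myNode.lastIndexOf('/') + 1);
        int idx = nodes.indexOf(mine);
        if (idx == 0) {
          // Lowest sequence number: we are the leader / Overseer.
          return;
        }
        // Watch the node directly ahead of us. When it goes away (including when
        // a dead predecessor's session finally expires), re-run the check.
        String predecessor = electionPath + "/" + nodes.get(idx - 1);
        if (zk.exists(predecessor, event -> {
              try { checkLeadership(); } catch (Exception ignored) {}
            }) == null) {
          checkLeadership();   // predecessor vanished between the list and the watch
        }
      }
    }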






[jira] [Commented] (SOLR-8837) Duplicate leader elector node detection is broken

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252753#comment-15252753
 ] 

ASF subversion and git services commented on SOLR-8837:
---

Commit 55ac1ab95819f307f8056f8ddffbdd349ea51247 in lucene-solr's branch 
refs/heads/branch_5_5 from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=55ac1ab ]

SOLR-8837: Fix duplicate election node detection


> Duplicate leader elector node detection is broken
> -
>
> Key: SOLR-8837
> URL: https://issues.apache.org/jira/browse/SOLR-8837
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master, 6.0
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8837.patch
>
>
> LeaderElector.checkIfIAmLeader checks to see if it has duplicate 
> registrations under its election node, but it does this by prefix 
> checking, which means that if core_node1 registers itself after core_node11, 
> it will think core_node11 is a duplicate, and delete the core_node11 node.
> This is causing regular failures in UnloadDistributedZkTest.






[jira] [Resolved] (SOLR-8697) Fix LeaderElector issues

2016-04-21 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8697.

Resolution: Fixed

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Fix For: master, 6.0, 5.5.1
>
> Attachments: OverseerTestFail.log, SOLR-8697-followup.patch, 
> SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-04-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15252751#comment-15252751
 ] 

ASF subversion and git services commented on SOLR-8697:
---

Commit 78bed536984dbfd4ba2f802deb58b29979d59329 in lucene-solr's branch 
refs/heads/branch_5_5 from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=78bed53 ]

SOLR-8697: Add synchronization around registering as leader and canceling.


> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Fix For: master, 6.0, 5.5.1
>
> Attachments: OverseerTestFail.log, SOLR-8697-followup.patch, 
> SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.





