[jira] [Comment Edited] (SOLR-8798) org.apache.solr.rest.RestManager can't find cyrillic synonyms.

2016-03-18 Thread Kostiantyn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197239#comment-15197239
 ] 

Kostiantyn edited comment on SOLR-8798 at 3/16/16 12:14 PM:


Got the same issue with Danish synonyms.

I think the original problem is in the API. According to the documentation at 
https://cwiki.apache.org/confluence/display/solr/Managed+Resources the example 
request below will add a new synonym mapping.
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["angry","upset"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
If, after that, I execute this request,
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
I will get the resulting mapping merged:
{code}
"managedMap":{"mad":["angry","upset","insane"]}
{code}
If I need replacing rather than merging, I first have to delete the "mad" 
synonym entirely and then re-add it with the new value.
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/mad"
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
That is how I can get the replaced mapping:
{code}
"managedMap":{"mad":["insane"]}
{code}

In my opinion this API cannot be considered totally finished. There 
must also be a method to update a synonym mapping.
The problem comes when you have non-Latin symbols (a Danish example: "åbningstider") 
or Cyrillic symbols.
In this case you cannot perform the deletion command, because Solr will return a 404 
status.

Example. 
Add a synonym mapping for the Danish word for bedroom, "soveværelse":
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"soveværelse":["køkken"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish"
{code}
Then I need to replace the mapping "køkken" (a kitchen) with "værelse" (a room). I 
cannot just execute a PUT request; it will merge "værelse" with the existing "køkken" 
and I will get:
{code}
"managedMap":{"soveværelse":["køkken","værelse"]}
{code}
But I actually need this:
{code}
"managedMap":{"soveværelse":["værelse"]}
{code}
If I try to delete "soveværelse", I get a 404 error from Solr:
{code}
 curl -X DELETE -H 'Content-type:application/json' --data-binary 
'{"mad":["angry","upset"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish/soveværelse"
{
  "responseHeader":{
    "status":404,
    "QTime":10},
  "error":{
    "msg":"sovev%C3%A6relse not found in /schema/analysis/synonyms/danish",
    "code":404}}
{code}


It means that there is no way to maintain such synonym mappings.
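The 404 body is telling: the key is echoed back percent-encoded ("sovev%C3%A6relse"), which suggests Solr is comparing the URL-encoded form of the path segment against the raw stored key. A minimal sketch (class name is mine, purely illustrative) of what a UTF-8 key looks like once percent-encoded on the wire:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodeSynonymKey {
    public static void main(String[] args) throws Exception {
        // "soveværelse" contains æ (U+00E6), which is two bytes in UTF-8
        String key = "soveværelse";
        String encoded = URLEncoder.encode(key, StandardCharsets.UTF_8.name());
        // æ -> %C3%A6, matching the key Solr echoes back in the 404 message
        System.out.println(encoded); // sovev%C3%A6relse
    }
}
```

Any ASCII-only key survives this round trip unchanged, which would explain why deletion works for "mad" but not for "soveværelse" or Cyrillic keys.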




was (Author: koschos):
Got the same issue with Danish synonyms.

I think the original problem is in the API. According to the documentation at 
https://cwiki.apache.org/confluence/display/solr/Managed+Resources the example 
request below will add a new synonym mapping.
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["angry","upset"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
If, after that, I execute this request,
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
I will get the resulting mapping merged:
{code}
"managedMap":{"mad":["angry","upset","insane"]}
{code}
If I need replacing rather than merging, I first have to delete the "mad" 
synonym entirely and then re-add it with the new value.
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/mad"
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
That is how I can get the replaced mapping:
{code}
"managedMap":{"mad":["insane"]}
{code}

In my opinion this API cannot be considered totally finished. There 
must also be a method to update a synonym mapping.
The problem comes when you have non-Latin symbols (a Danish example: "åbningstider") 
or Cyrillic symbols.
In this case you cannot perform the deletion command, because Solr will return a 404 
status.
Example. Add the first synonym mapping for the Danish word for bedroom, "soveværelse":
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"soveværelse":["køkken"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish"
{code}
Then I need to replace the mapping "køkken" (a kitchen) with "værelse" (a room). I 
cannot just execute a PUT request; it will merge "værelse" with the existing "køkken" 
and I will get:

[jira] [Comment Edited] (LUCENE-7111) DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and Long.MIN_VALUE

2016-03-18 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198687#comment-15198687
 ] 

Ishan Chattopadhyaya edited comment on LUCENE-7111 at 3/17/16 1:01 PM:
---

Attaching an attempted fix. Not sure if there's a better way to handle this. 
Could someone please review?

Edit: -Never mind, the fix may not be the correct one. I'm still looking 
deeper.- I think the fix is behaving correctly, but I am looking for 
suggestions from someone who knows that part of the code better.


was (Author: ichattopadhyaya):
-Attaching an attempted fix. Not sure if there's a better way to handle this. 
Could someone please review?-
Never mind, the fix may not be the correct one. I'm still looking deeper.

> DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and 
> Long.MIN_VALUE
> --
>
> Key: LUCENE-7111
> URL: https://issues.apache.org/jira/browse/LUCENE-7111
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7111.patch, LUCENE-7111.patch, LUCENE-7111.patch
>
>
> It seems that the following queries return all documents, which is unexpected:
> {code}
> DocValuesRangeQuery.newLongRange("dv", Long.MAX_VALUE, Long.MAX_VALUE, false, 
> true);
> DocValuesRangeQuery.newLongRange("dv", Long.MIN_VALUE, Long.MIN_VALUE, true, 
> false);
> {code}
> In Solr, floats and doubles are converted to longs and -0d gets converted to 
> Long.MIN_VALUE, and queries like {-0d TO 0d] could fail due to this, 
> returning all documents in the index.
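A plausible mechanism for the reported behavior (my assumption, not taken from the patch): converting an exclusive bound to an inclusive one by adding or subtracting 1 silently wraps at the extremes of the long range:

```java
public class ExclusiveBoundOverflow {
    public static void main(String[] args) {
        // A common way to turn an exclusive bound into an inclusive one
        // is to add or subtract 1 -- which silently wraps at the extremes.
        long exclusiveLower = Long.MAX_VALUE;
        long inclusiveLower = exclusiveLower + 1; // wraps to Long.MIN_VALUE
        System.out.println(inclusiveLower == Long.MIN_VALUE); // true

        long exclusiveUpper = Long.MIN_VALUE;
        long inclusiveUpper = exclusiveUpper - 1; // wraps to Long.MAX_VALUE
        System.out.println(inclusiveUpper == Long.MAX_VALUE); // true

        // After wrapping, a range that should be empty becomes
        // [Long.MIN_VALUE, Long.MAX_VALUE], i.e. it matches every document.
    }
}
```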



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8871) Classification Update Request Processor Improvements

2016-03-18 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201694#comment-15201694
 ] 

Alessandro Benedetti commented on SOLR-8871:


Base issue for the update request processor

> Classification Update Request Processor Improvements
> 
>
> Key: SOLR-8871
> URL: https://issues.apache.org/jira/browse/SOLR-8871
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 6.1
>Reporter: Alessandro Benedetti
>  Labels: classification, classifier, update, update.chain
>
> This task will group a set of modifications to the classification update 
> request processor (and Lucene classification module), based on users' 
> feedback (thanks [~teofili] and Александър Цветанов):
> - include boosting support for inputFields in the solrconfig.xml for the 
> classification update request processor, 
> e.g.
> field1^2, field2^5 ...
> - multi-class assignment (introduce a parameter, default 1, for the max 
> number of classes to assign)
> - separate the classField into:
> classTrainingField
> classOutputField
> The default, when classOutputField is not defined, is classTrainingField.
> - add support for a classification query, to use only a subset of the 
> entire index to classify.
> - improve related tests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 76 - Still Failing

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/76/

No tests ran.

Build Log:
[...truncated 25 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:21: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-18 Thread Koji Sekiguchi

Welcome Kevin!

Koji


On 2016/03/17 2:02, Joel Bernstein wrote:

I'm pleased to announce that Kevin Risden has accepted the PMC's invitation to 
become a committer.

Kevin, it's tradition that you introduce yourself with a brief bio.

I believe your account has been set up and karma has been granted so that you 
can add yourself to the
committers section of the Who We Are page on the website:
.

Congratulations and welcome!


Joel Bernstein




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7111) DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and Long.MIN_VALUE

2016-03-18 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated LUCENE-7111:
-
Attachment: LUCENE-7111.patch

Attaching an attempted fix. Not sure if there's a better way to handle this. 
Could someone please review?

> DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and 
> Long.MIN_VALUE
> --
>
> Key: LUCENE-7111
> URL: https://issues.apache.org/jira/browse/LUCENE-7111
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7111.patch, LUCENE-7111.patch
>
>
> It seems that the following queries return all documents, which is unexpected:
> {code}
> DocValuesRangeQuery.newLongRange("dv", Long.MAX_VALUE, Long.MAX_VALUE, false, 
> true);
> DocValuesRangeQuery.newLongRange("dv", Long.MIN_VALUE, Long.MIN_VALUE, true, 
> false);
> {code}
> In Solr, floats and doubles are converted to longs and -0d gets converted to 
> Long.MIN_VALUE, and queries like {-0d TO 0d] could fail due to this, 
> returning all documents in the index.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-18 Thread Otis Gospodnetic
Congratulations and welcome!

Otis

> On Mar 16, 2016, at 13:02, Joel Bernstein  wrote:
> 
> I'm pleased to announce that Kevin Risden has accepted the PMC's invitation 
> to become a committer.
> 
> Kevin, it's tradition that you introduce yourself with a brief bio.
> 
> I believe your account has been set up and karma has been granted so that you 
> can add yourself to the committers section of the Who We Are page on the 
> website:
> .
> 
> Congratulations and welcome!
> 
> 
> Joel Bernstein
> 


[jira] [Commented] (SOLR-8866) UpdateLog should throw an exception when serializing unknown types

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199979#comment-15199979
 ] 

ASF subversion and git services commented on SOLR-8866:
---

Commit a22099a3986de1f36f926b4e106827c5308708b0 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a22099a ]

SOLR-8866: UpdateLog now throws an error if it can't serialize a field value


> UpdateLog should throw an exception when serializing unknown types
> --
>
> Key: SOLR-8866
> URL: https://issues.apache.org/jira/browse/SOLR-8866
> Project: Solr
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_8866_UpdateLog_show_throw_for_unknown_types.patch
>
>
> When JavaBinCodec encounters a class it doesn't have explicit knowledge of 
> how to serialize, nor does it implement the {{ObjectResolver}} interface, it 
> currently serializes the object as the classname, colon, then toString() of 
> the object.
> This may appear innocent but _not_ throwing an exception hides bugs.  One 
> example is the UpdateLog, which uses JavaBinCodec to save a document.  
> The result is that this bad value winds up there, gets deserialized as a 
> String in PeerSync (which uses /get), and then this value pretends to be a 
> suitable value for the final document in the leader.  But of course it isn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7061) fix remaining api issues with XYZPoint classes

2016-03-18 Thread Matt Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199426#comment-15199426
 ] 

Matt Davis commented on LUCENE-7061:


+1 and -1 for start and end exclusive for Int and Long makes sense but what 
would be the pattern for Float and Double?
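One natural pattern (my suggestion here, not an answer quoted from the thread) is stepping to the adjacent representable value with Math.nextUp / Math.nextDown, which plays the role that +1/-1 plays for integers:

```java
public class ExclusiveFloatBounds {
    public static void main(String[] args) {
        // For floating point there is no "+1": the analogue is moving to the
        // adjacent representable value with Math.nextUp / Math.nextDown.
        double upperExclusive = 1.0;
        double upperInclusive = Math.nextDown(upperExclusive); // largest double < 1.0
        System.out.println(upperInclusive < 1.0); // true

        float lowerExclusive = 0f;
        float lowerInclusive = Math.nextUp(lowerExclusive); // smallest float > 0f
        System.out.println(lowerInclusive > 0f); // true
    }
}
```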

> fix remaining api issues with XYZPoint classes
> --
>
> Key: LUCENE-7061
> URL: https://issues.apache.org/jira/browse/LUCENE-7061
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.0
>
> Attachments: LUCENE-7061.patch
>
>
> There are still some major problems today:
> XYZPoint.newRangeQuery has "brain damage" from variable length terms. nulls 
> for open ranges make no sense: these are fixed-width types and instead you 
> can use things like Integer.maxValue. Removing the nulls is safe, as we can 
> just switch to primitive types instead of boxed types.
> XYZPoint.newRangeQuery requires boolean arrays for inclusive/exclusive, but 
> that's just more brain damage. If you want to exclude an integer, you just 
> subtract 1 from it, and other simple stuff.
> For the apis, this means Instead of:
> {code}
> public static Query newRangeQuery(String field, Long lowerValue, boolean 
> lowerInclusive, Long upperValue, boolean upperInclusive);
>   
> public static Query newMultiRangeQuery(String field, Long[] lowerValue, 
> boolean lowerInclusive[], Long[] upperValue, boolean upperInclusive[]);
> {code}
> we have:
> {code}
> public static Query newRangeQuery(String field, long lowerValue, long 
> upperValue);
> public static Query newRangeQuery(String field, long[] lowerValue, long[] 
> upperValue);
> {code}
> PointRangeQuery is horribly complex due to these nulls and boolean arrays, 
> and need not be. Now it only works "inclusive" and is much simpler.
> XYZPoint.newSetQuery throws IOException, just creating the query. This is 
> very confusing and unnecessary (no i/o happens).
> LatLonPoint's bounding box query is not inclusive like the other geo. And the 
> test does not fail!
> I also found a few missing checks here and there while cleaning up.
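To illustrate the "just subtract 1" convention the issue proposes, here is a caller-side sketch (field name and bounds are made up) of converting exclusive bounds before calling an inclusive-only API; Math.addExact/subtractExact throw on overflow instead of silently wrapping at Long.MIN_VALUE/MAX_VALUE:

```java
public class InclusiveRangeAdjust {
    public static void main(String[] args) {
        // Old style: newRangeQuery("price", 10L, false, 20L, false)
        // New style: the caller adjusts exclusive bounds to inclusive ones.
        long lowerExclusive = 10L, upperExclusive = 20L;

        // addExact/subtractExact throw ArithmeticException on overflow,
        // rather than wrapping around at the extremes of the long range.
        long inclusiveLower = Math.addExact(lowerExclusive, 1);
        long inclusiveUpper = Math.subtractExact(upperExclusive, 1);

        System.out.println(inclusiveLower + " " + inclusiveUpper); // 11 19
        // ...then: newRangeQuery("price", inclusiveLower, inclusiveUpper)
    }
}
```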



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201173#comment-15201173
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit 8163ffd08318da1c525b9377339f93da4950fbf4 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8163ffd ]

SOLR-8029: Merge remote-tracking branch 'remotes/origin/master' into apiv2


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: master
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying a band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7113) OfflineSorter and BKD should verify checksums in their temp files

2016-03-18 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7113:
--

 Summary: OfflineSorter and BKD should verify checksums in their 
temp files
 Key: LUCENE-7113
 URL: https://issues.apache.org/jira/browse/LUCENE-7113
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: master, 6.0


I am trying to index all 3.2 B points from the latest OpenStreetMap export.

My SSDs were not up to this, so I added a spinning magnets disk to beast2.

But then I was hitting scary bug-like exceptions 
({{ArrayIndexOutOfBoundsException}}) when indexing the first 2B points, and I 
finally checked dmesg and saw that my hard drive is dying.

I think it's important that our temp file usages also validate checksums (like 
we do for all our index files, either at reader open or at merge or 
{{CheckIndex}}), so we can hopefully easily differentiate a bit-flipping IO 
system from a possible Lucene bug, in the future.
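The idea can be sketched as follows (illustrative only: Lucene's real index footers go through CodecUtil, not a raw CRC32 like this). A checksum stored next to the data lets the reader tell a bit-flipping IO system apart from a logic bug:

```java
import java.util.zip.CRC32;

public class TempFileChecksum {
    public static void main(String[] args) {
        // Writer side: compute a CRC32 over the bytes and store it
        // alongside the data (Lucene stores it in a file footer).
        byte[] data = "some temp file bytes".getBytes();
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        long expected = crc.getValue();

        // Simulate a single bit flip from failing hardware.
        data[3] ^= 0x01;

        // Reader side: recompute and compare before trusting the bytes.
        CRC32 reread = new CRC32();
        reread.update(data, 0, data.length);
        System.out.println(reread.getValue() != expected); // true: corruption detected
    }
}
```

A mismatch here points at the IO system; a clean checksum with a bad result points back at the code.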



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: SolrCloud: the default cluster state format

2016-03-18 Thread Shalin Shekhar Mangar
Yeah, the collection creation code explicitly sets stateFormat=2 so the
default format is actually "2". But the overseer code assumes 1 if none is
specified by the OverseerCollectionProcessor or a CoreAdmin Create.

+1 to fix.

On Sat, Mar 19, 2016 at 7:17 AM, David Smiley 
wrote:

> Ok; I have this simple change in my patch (and branch) for SOLR-5750.  It
> seems like this bug is not as bad as it may appear because normal
> collection creation takes a code path that expressly states the state
> format to be 2, whereas for this new collection restoration feature a
> different path is taken that doesn't set it.  If people think this needs
> its own issue then I'll file one and commit it.
>
> On Fri, Mar 18, 2016 at 7:20 PM Mark Miller  wrote:
>
>> We certainly discussed making it two and there was consensus, and I would
>> have sworn someone did, but perhaps no one ever did.
>>
>> - Mark
>>
>> On Fri, Mar 18, 2016 at 7:09 PM Scott Blum  wrote:
>>
>>> That seems really bad, the default should be 2.
>>>
>>> On Fri, Mar 18, 2016 at 3:28 PM, David Smiley 
>>> wrote:
>>>
 I noticed ClusterStateMutator.createCollection defaults the state
 format to 1 if it's not explicitly set -- line 104.  Shouldn't it be 2?
 While working on a test for collection restore from a backup (SOLR-5750) I
 see the restored collection ends up being in the old (1) state format
 because of this.

 I'll file an issue unless someone can confirm it's supposed to be this
 way.
 --
 Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
 LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
 http://www.solrenterprisesearchserver.com

>>>
>>> --
>> - Mark
>> about.me/markrmiller
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>



-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Commented] (SOLR-8864) TestTestInjection needs to cleanup after itself -- causes TestCloudDeleteByQuery fail (may be symptom of larger problem?)

2016-03-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199434#comment-15199434
 ] 

Mark Miller commented on SOLR-8864:
---

bq. but i'm surprised this hasn't caused a lot more weird failures since this 
test was added back in december

perhaps just hard to spot - only takes one successful SolrTestCaseJ4 run after 
to clear it.

> TestTestInjection needs to cleanup after itself -- causes 
> TestCloudDeleteByQuery fail (may be symptom of larger problem?)
> -
>
> Key: SOLR-8864
> URL: https://issues.apache.org/jira/browse/SOLR-8864
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Attachments: jenkins.log
>
>
> https://builds.apache.org/job/Lucene-Solr-Tests-6.x/65/ recently reported a 
> failure from TestCloudDeleteByQuery's init methods that made no sense to me 
> -- looking at the logs showed an error from "TestInjection.parseValue" even 
> though this test doesn't do anything to setup TestInjection...
> {noformat}
>[junit4]   2> 527801 ERROR (qtp1490160324-5239) [n:127.0.0.1:48763_solr 
> c:test_col s:shard1 r:core_node4 x:test_col_shard1_replica2] 
> o.a.s.h.RequestHandlerBase java.lang.RuntimeException: No match, probably bad 
> syntax: TRUE:0:
>[junit4]   2>  at 
> org.apache.solr.util.TestInjection.parseValue(TestInjection.java:236)
>[junit4]   2>  at 
> org.apache.solr.util.TestInjection.injectFailReplicaRequests(TestInjection.java:159)
>[junit4]   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.setupRequest(DistributedUpdateProcessor.java:356)
> {noformat}
> ...the immediate problem seems to be that TestTestInjection doesn't do 
> anything to cleanup after itself (it never calls {{TestInjection.reset()}}, 
> and doesn't subclass SolrTestCaseJ4) but i'm surprised this hasn't caused a 
> lot more weird failures since this test was added back in december -- i 
> wonder if this this "bad syntax" RuntimeException, when injected into the 
> distributed updates, isn't causing a problem in most cases because of leader 
> initiated recovery, but maybe something specific about the codepaths used in 
> TestCloudDeleteByQuery (which is only a few weeks old) don't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7099) add newDistanceSort to sandbox LatLonPoint

2016-03-18 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7099.
-
   Resolution: Fixed
Fix Version/s: 6.1
   master

> add newDistanceSort to sandbox LatLonPoint
> --
>
> Key: LUCENE-7099
> URL: https://issues.apache.org/jira/browse/LUCENE-7099
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7099.patch
>
>
> This field does not support sorting by distance, which is a very common use 
> case. 
> We can add {{LatLonPoint.newDistanceSort(field, latitude, longitude)}} which 
> returns a suitable SortField. There are a lot of optimizations esp when e.g. 
> the priority queue gets full to avoid tons of haversin() computations.
> Also, we can make use of the SortedNumeric data to switch 
> newDistanceQuery/newPolygonQuery to the two-phase iterator api, so they 
> aren't doing haversin() calls on bkd leaf nodes. It should look a lot like 
> LUCENE-7019



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7112) WeightedSpanTermExtractor should not always call extractUnknownQuery

2016-03-18 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7112:


 Summary: WeightedSpanTermExtractor should not always call 
extractUnknownQuery
 Key: LUCENE-7112
 URL: https://issues.apache.org/jira/browse/LUCENE-7112
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Priority: Minor


WeightedSpanTermExtractor always calls extractUnknownQuery, even if term 
extraction already succeeded because the query is eg. a phrase query. It should 
only call this method if it could not find how to extract terms otherwise (eg. 
in case of a custom query).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7111) DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and Long.MIN_VALUE

2016-03-18 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198687#comment-15198687
 ] 

Ishan Chattopadhyaya edited comment on LUCENE-7111 at 3/17/16 3:40 AM:
---

-Attaching an attempted fix. Not sure if there's a better way to handle this. 
Could someone please review?-
Never mind, the fix may not be the correct one. I'm still looking deeper.


was (Author: ichattopadhyaya):
Attaching an attempted fix. Not sure if there's a better way to handle this. 
Could someone please review?

> DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and 
> Long.MIN_VALUE
> --
>
> Key: LUCENE-7111
> URL: https://issues.apache.org/jira/browse/LUCENE-7111
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7111.patch, LUCENE-7111.patch, LUCENE-7111.patch
>
>
> It seems that the following queries return all documents, which is unexpected:
> {code}
> DocValuesRangeQuery.newLongRange("dv", Long.MAX_VALUE, Long.MAX_VALUE, false, 
> true);
> DocValuesRangeQuery.newLongRange("dv", Long.MIN_VALUE, Long.MIN_VALUE, true, 
> false);
> {code}
> In Solr, floats and doubles are converted to longs and -0d gets converted to 
> Long.MIN_VALUE, and queries like {-0d TO 0d] could fail due to this, 
> returning all documents in the index.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8861) Fix missing CloudSolrClient.connect() before getZkStateReader in solrj.io classes

2016-03-18 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8861:
---
Summary: Fix missing CloudSolrClient.connect() before getZkStateReader in 
solrj.io classes  (was: Fix missing CloudSolrClient.connect() before 
getZkStateReader in solrj.io)

> Fix missing CloudSolrClient.connect() before getZkStateReader in solrj.io 
> classes
> -
>
> Key: SOLR-8861
> URL: https://issues.apache.org/jira/browse/SOLR-8861
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: master, 6.0
>Reporter: Kevin Risden
>Priority: Critical
> Fix For: 6.0
>
>
> There are a few places in the new solrj.io package that miss calling connect 
> before getZkStateReader. This can cause NPE exceptions with getZkStateReader 
> in some cases if the SolrCache is closed.
> There is probably a better way to fix this moving forward, but for 6.0 this 
> should be resolved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8798) org.apache.solr.rest.RestManager can't find cyrillic synonyms.

2016-03-18 Thread Kostiantyn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197239#comment-15197239
 ] 

Kostiantyn commented on SOLR-8798:
--

Got the same issue with Danish synonyms.

I think the original problem is in the API. According to the documentation at 
https://cwiki.apache.org/confluence/display/solr/Managed+Resources the example 
request below will add a new synonym mapping.
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["angry","upset"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
If, after that, I execute this request,
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
I will get the resulting mapping merged:
{code}
"initArgs":{"ignoreCase":false},
"initializedOn":"2016-03-07T11:57:00.116Z",
"updatedSinceInit":"2016-03-07T12:19:11.174Z",
"managedMap":{
  "mad":["angry","upset","insane"]}}
{code}
If I need replacing rather than merging, I first have to delete the "mad" 
synonym entirely and then re-add it with the new value.
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/mad"
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
That is how I can get the replaced mapping:
{code}
"initArgs":{"ignoreCase":false},
"initializedOn":"2016-03-07T11:57:00.116Z",
"updatedSinceInit":"2016-03-07T12:19:11.174Z",
"managedMap":{
  "mad":["insane"]}}
{code}

In my opinion this API cannot be considered totally finished. There 
must also be a method to update a synonym mapping.
The problem comes when you have non-Latin symbols (a Danish example: "åbningstider") 
or Cyrillic symbols.
In this case you cannot perform the deletion command, because Solr will return a 404 
status.
Example. Add the first synonym mapping for the Danish word for bedroom, "soveværelse":
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"soveværelse":["køkken"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish"
{code}
Then I need to replace the mapping "køkken" (a kitchen) with "værelse" (a 
room). I cannot just execute a PUT request; it merges "værelse" with the 
existing "køkken" and I get
{code}
"managedMap":{"soveværelse":["køkken","værelse"]}
{code}
But what I actually need is this:
{code}
"managedMap":{"soveværelse":["værelse"]}
{code}
If I try to delete "soveværelse", I get a 404 error from Solr:
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/daniish/soveværelse"
{
  "responseHeader":{
"status":404,
"QTime":10},
  "error":{
"msg":"sovev%C3%A6relse not found in /schema/analysis/synonyms/2",
"code":404}}
{code}
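Until the API grows a real replace operation, the delete-then-PUT workaround can be scripted. The sketch below is illustrative only (the host, core, and resource name are taken from the examples above, not from a real deployment); the key point is that a non-Latin synonym key must be percent-encoded before it can appear in the URL path:

```python
# Sketch of a "replace synonym" helper for the managed synonyms REST API.
# Assumes a local Solr with a managed synonym resource named "english";
# host, core, and resource names are illustrative.
import json
import urllib.error
import urllib.parse
import urllib.request

BASE = "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"

def synonym_url(term: str) -> str:
    # Non-Latin keys must be percent-encoded to form a valid URL path segment.
    return BASE + "/" + urllib.parse.quote(term, safe="")

def replace_synonym(term: str, values: list) -> None:
    # There is no single "replace" call, so emulate it: DELETE the old
    # mapping (ignoring 404 for a missing key), then PUT the new one.
    req = urllib.request.Request(synonym_url(term), method="DELETE")
    try:
        urllib.request.urlopen(req)
    except urllib.error.HTTPError as e:
        if e.code != 404:
            raise
    body = json.dumps({term: values}).encode("utf-8")
    put = urllib.request.Request(
        BASE, data=body, method="PUT",
        headers={"Content-type": "application/json"})
    urllib.request.urlopen(put)

print(synonym_url("soveværelse"))
```

Note that `urllib.parse.quote("soveværelse", safe="")` produces the `sovev%C3%A6relse` form that appears in the 404 error message, so the encoding itself is straightforward; the failure happens on the server side.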



> org.apache.solr.rest.RestManager can't find cyrillic synonyms.
> --
>
> Key: SOLR-8798
> URL: https://issues.apache.org/jira/browse/SOLR-8798
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9.1
>Reporter: Vitalii
>
> RestManager doesn't work well with Cyrillic symbols.
> I'm able to create new synonyms via the REST interface. But I get an error when 
> I try to fetch a created synonym via this request:
> http://localhost:8983/solr/collection1/schema/analysis/synonyms/18/ліжко
> I get this message in console log:
> {code}
> # solr/console.log
> 4591823 [qtp1281335597-14] INFO  org.apache.solr.rest.RestManager  – Resource 
> not found for /schema/analysis/synonyms/18/%D0%BB%D1%96%D0%B6%D0%BA%D0%BE, 
> looking for parent: /schema/analysis/synonyms/18
> {code}
> But in synonyms file I have row with this word:
> {code}
> # /solr/collection1/conf/_schema_analysis_synonyms_18.json
>   "initArgs":{"ignoreCase":false},
>   "initializedOn":"2016-03-07T11:57:00.116Z",
>   "updatedSinceInit":"2016-03-07T12:19:11.174Z",
>   "managedMap":{
> "ліжко":["кровать"],
> "стілець":["стул"]}}
> {code}
> This issue has been tested by multiple people, who confirm that they 
> faced this problem too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-18 Thread Anshum Gupta
Congratulations and Welcome Kevin!

On Wed, Mar 16, 2016 at 10:03 AM, David Smiley 
wrote:

> Welcome Kevin!
>
> (corrected misspelling of your last name in the subject)
>
> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  wrote:
>
>> I'm pleased to announce that Kevin Risden has accepted the PMC's invitation
>> to become a committer.
>>
>> Kevin, it's tradition that you introduce yourself with a brief bio.
>>
>> I believe your account has been setup and karma has been granted so that
>> you can add yourself to the committers section of the Who We Are page on
>> the website:
>> .
>>
>> Congratulations and welcome!
>>
>>
>> Joel Bernstein
>>
>> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>



-- 
Anshum Gupta


[jira] [Commented] (SOLR-8798) org.apache.solr.rest.RestManager can't find cyrillic synonyms.

2016-03-18 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200433#comment-15200433
 ] 

Steve Rowe commented on SOLR-8798:
--

I can't reproduce the problem from unit tests - I added the following test to 
{{TestManagedSynonymFilterFactory}}, and it passes for me:

{code:java}
  /**
   * Can we add and remove synonyms with non-Latin chars
   */
  @Test
  public void testCanHandleDecodingAndEncodingForSynonyms2() throws Exception  {
String endpoint = "/schema/analysis/synonyms/nonlatin";

assertJQ(endpoint,
"/synonymMappings/initArgs/ignoreCase==false",
"/synonymMappings/managedMap=={}");

// does not exist
assertJQ(endpoint+"/ліжко", "/error/code==404");

Map<String,List<String>> syns = new HashMap<>();

// now put a synonym
syns.put("ліжко", Collections.singletonList("кровать"));
assertJPut(endpoint, JSONUtil.toJSON(syns), "/responseHeader/status==0");

// and check if it exists
assertJQ(endpoint, "/synonymMappings/managedMap/ліжко==['кровать']");

// verify get works
assertJQ(endpoint+"/ліжко", "/responseHeader/status==0");
assertJQ(endpoint+"/%D0%BB%D1%96%D0%B6%D0%BA%D0%BE", 
"/responseHeader/status==0");

// verify delete works
assertJDelete(endpoint+"/ліжко", "/responseHeader/status==0");

// was it really deleted?
assertJDelete(endpoint+"/ліжко", "/error/code==404");
  }
{code}

I'll try some manual testing.







[jira] [Updated] (SOLR-8867) frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not match documents w/o a value

2016-03-18 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8867:
---
Description: 
{!frange} currently can match documents w/o a value (because of a default value 
of 0).
This only existed historically because we didn't have info about what fields 
had a value for numerics, and didn't have exists() on FunctionValues.

> frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not 
> match documents w/o a value
> --
>
> Key: SOLR-8867
> URL: https://issues.apache.org/jira/browse/SOLR-8867
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>
> {!frange} currently can match documents w/o a value (because of a default 
> value of 0).
> This only existed historically because we didn't have info about what fields 
> had a value for numerics, and didn't have exists() on FunctionValues.






[jira] [Updated] (SOLR-7339) Upgrade Jetty from 9.2 to 9.3

2016-03-18 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-7339:
-
Fix Version/s: 6.0

> Upgrade Jetty from 9.2 to 9.3
> -
>
> Key: SOLR-7339
> URL: https://issues.apache.org/jira/browse/SOLR-7339
> Project: Solr
>  Issue Type: Improvement
>Reporter: Gregg Donovan
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: SOLR-7339-jetty-9.3.8.patch, 
> SOLR-7339-jetty-9.3.8.patch, SOLR-7339-revert.patch, SOLR-7339.patch, 
> SOLR-7339.patch, SOLR-7339.patch, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty92.pcapng, 
> SolrExampleStreamingBinaryTest.testUpdateField-jetty93.pcapng
>
>
> Jetty 9.3 offers support for HTTP/2. Interest in HTTP/2 or its predecessor 
> SPDY was shown in [SOLR-6699|https://issues.apache.org/jira/browse/SOLR-6699] 
> and [on the mailing list|http://markmail.org/message/jyhcmwexn65gbdsx].
> Among the HTTP/2 benefits over HTTP/1.1 relevant to Solr are:
> * multiplexing requests over a single TCP connection ("streams")
> * canceling a single request without closing the TCP connection
> * removing [head-of-line 
> blocking|https://http2.github.io/faq/#why-is-http2-multiplexed]
> * header compression
> Caveats:
> * Jetty 9.3 is at M2, not released.
> * Full Solr support for HTTP/2 would require more work than just upgrading 
> Jetty. The server configuration would need to change and a new HTTP client 
> ([Jetty's own 
> client|https://github.com/eclipse/jetty.project/tree/master/jetty-http2], 
> [Square's OkHttp|http://square.github.io/okhttp/], 
> [etc.|https://github.com/http2/http2-spec/wiki/Implementations]) would need 
> to be selected and wired up. Perhaps this is worthy of a branch?






[JENKINS] Lucene-Artifacts-6.x - Build # 18 - Still Failing

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-6.x/18/

No tests ran.

Build Log:
[...truncated 27 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/build.xml:24: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Solr-Artifacts-6.x - Build # 18 - Still Failing

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-6.x/18/

No tests ran.

Build Log:
[...truncated 29 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/build.xml:39: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/common-build.xml:55:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/module-build.xml:27:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Solr-Artifacts-master - Build # 2821 - Failure

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-master/2821/

No tests ran.

Build Log:
[...truncated 29 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-master/solr/build.xml:39: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-master/solr/common-build.xml:55:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-master/lucene/module-build.xml:27:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-master/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Solr-Artifacts-6.x - Build # 17 - Still Failing

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-6.x/17/

No tests ran.

Build Log:
[...truncated 29 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/build.xml:39: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/common-build.xml:55:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/module-build.xml:27:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-Tests-master - Build # 1013 - Failure

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1013/

No tests ran.

Build Log:
[...truncated 28 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:21: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-Tests-6.x - Build # 75 - Failure

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/75/

No tests ran.

Build Log:
[...truncated 25 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:21: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS-MAVEN] Lucene-Solr-Maven-6.x #17: POMs out of sync

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-6.x/17/

No tests ran.

Build Log:
[...truncated 25 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.x/build.xml:21: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.x/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 16 - Still Failing

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/16/

No tests ran.

Build Log:
[...truncated 25 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/build.xml:21: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
No prior successful build to compare, so performing full copy of artifacts
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Artifacts-master - Build # 2931 - Failure

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-master/2931/

No tests ran.

Build Log:
[...truncated 27 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-master/lucene/build.xml:24:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-master/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS-MAVEN] Lucene-Solr-Maven-master #1711: POMs out of sync

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/1711/

No tests ran.

Build Log:
[...truncated 25 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:21: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (LUCENE-7109) LatLonPoint newPolygonQuery should use two-phase iterator

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197309#comment-15197309
 ] 

ASF subversion and git services commented on LUCENE-7109:
-

Commit e68dc4a330bed0d3cc90167b74b83261ed29fd0a in lucene-solr's branch 
refs/heads/branch_6x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e68dc4a ]

LUCENE-7109: LatLonPoint.newPolygonQuery should use two-phase iterator


> LatLonPoint newPolygonQuery should use two-phase iterator
> -
>
> Key: LUCENE-7109
> URL: https://issues.apache.org/jira/browse/LUCENE-7109
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7109.patch
>
>
> Currently, the calculation this thing does is very expensive, and gets slower 
> the more complex the polygon is. Doing everything in one phase is really bad 
> for performance.
> Later, there are a lot of optimizations we can do. But I think we should try 
> to beef up testing first. This is just to improve from 
> galapagos-tortoise-slow to turtle-slow.






[JENKINS] Lucene-Artifacts-6.x - Build # 17 - Failure

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-6.x/17/

No tests ran.

Build Log:
[...truncated 27 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/build.xml:24: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Tests-MMAP-master - Build # 31 - Failure

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Tests-MMAP-master/31/

No tests ran.

Build Log:
[...truncated 25 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Tests-MMAP-master/lucene/build.xml:24:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Tests-MMAP-master/lucene/common-build.xml:302:
 Minimum supported Java version is 1.8.

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Updated] (SOLR-8842) security should use an API to expose the permission name instead of using HTTP params

2016-03-18 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8842:
-
Labels: security  (was: )

> security should use an API to expose the permission name instead of using 
> HTTP params
> -
>
> Key: SOLR-8842
> URL: https://issues.apache.org/jira/browse/SOLR-8842
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: security
> Attachments: SOLR-8842.patch, SOLR-8842.patch
>
>
> Currently the well-known permissions use HTTP attributes, such as 
> method, uri, params etc., to identify the corresponding permission name, such 
> as 'read', 'update' etc. Expose this value through an API so that it can be 
> more accurate and handle various versions of the API.
> RequestHandlers will be able to implement an interface to provide the name
> {code}
> interface PermissionNameProvider {
>  Name getPermissionName(SolrQueryRequest req)
> }
> {code} 
> This means many significant changes to the API
> 1) {{name}} no longer means a set of HTTP attributes. The name is decided by 
> the request handler, which means it's possible to use the same name across 
> different permissions. Examples:
> {code}
> {
> "permissions": [
> {//this permission applies to all collections
>   "name": "read",
>   "role": "dev"
> },
> {
>  
>  // this applies to only collection x. But both means you are hitting a 
> read type API
>   "name": "read",
>   "collection": "x",
>   "role": "x_dev"
> }
>   ]
> }
> {code} 
> 2) So far the name has been unique: we used it for {{update-permission}}, 
> {{delete-permission}}, and even to insert a permission before another one. 
> Going forward that is not possible; every permission gets an implicit index. 
> Example:
> {code}
> {
>   "permissions": [
> {
>   "name": "read",
>   "role": "dev",
>//this attribute is automatically assigned by the system
>   "index" : 1
> },
> {
>   "name": "read",
>   "collection": "x",
>   "role": "x_dev",
>   "index" : 2
> }
>   ]
> }
> {code}
> 3) example update commands
> {code}
> {
>   "set-permission" : {
> "index": 2,
> "name": "read",
> "collection" : "x",
> "role" :["xdev","admin"]
>   },
>   //this deletes the permission at index 2
>   "delete-permission" : 2,
>   //this will insert the command before the first item
>   "set-permission": {
> "name":"config-edit",
> "role":"admin",
> "before":1
>   }
> }
> {code}
> 4) You can construct a permission purely with HTTP attributes, without any 
> name. As expected, it will be appended at the end of the list of permissions:
> {code}
> {
>   "set-permission": {
>  "collection": null,
>  "path":"/admin/collections",
>  "params":{"action":[LIST, CREATE]},
>  "role": "admin"}
> }
> {code}
> Users with an existing configuration will not observe any change in behavior, 
> but the commands issued to manipulate the permissions will be different.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3144 - Failure!

2016-03-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3144/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds

Error Message:
soft529 wasn't fast enough

Stack Trace:
java.lang.AssertionError: soft529 wasn't fast enough
at 
__randomizedtesting.SeedInfo.seed([5FBAEE4A6CD0392A:E6E17CADDA3098D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds(SoftAutoCommitTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10551 lines...]
   [junit4] Suite: org.apache.solr.update.SoftAutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-8870) AngularJS Query tab breaks through proxy

2016-03-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201250#comment-15201250
 ] 

Jan Høydahl commented on SOLR-8870:
---

Also, the AngularJS code for the query panel does not handle a qt that is not 
prefixed with a slash (legacy handleSelect=true and qt=foo) at all. It simply 
generates a URL with the core name and qt concatenated together, causing a 404. 
I'll attempt to fix that as well.
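The encoded-slash problem is easy to demonstrate outside the UI; the snippet below is a sketch of the encoding difference only, not the actual admin UI code (the core and handler names are illustrative):

```python
# Illustrates why "%2Fselect" breaks path-based proxy rules: an encoded
# slash is not a path separator, so a rule keyed on a ".../select" path
# segment never matches the encoded form.
import urllib.parse

handler = "/select"
core_path = "/solr/techproducts"

broken = core_path + urllib.parse.quote(handler, safe="")   # %2F in path
working = core_path + handler                               # literal slash

print(broken)    # /solr/techproducts%2Fselect
print(working)   # /solr/techproducts/select

# A proxy rule matching on a "/select" path segment sees only the second form:
assert "/select" not in broken
assert working.endswith("/select")
```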

> AngularJS Query tab breaks through proxy
> 
>
> Key: SOLR-8870
> URL: https://issues.apache.org/jira/browse/SOLR-8870
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.5
>Reporter: Jan Høydahl
>Priority: Minor
>  Labels: 404-error, angularjs, encoding, newdev
>
> The AngularJS Query tab generates a request URL on this form: 
> http://localhost:8983/solr/techproducts%2Fselect?_=1458291250691=on=ram=json
>  Notice the urlencoded {{%2Fselect}} part.
> This works well locally with Jetty, but a customer has httpd as a proxy in 
> front, and we get a 404 error since the web server does not parse {{%2F}} as 
> a path separator and thus does not match the proxy rules for select.






[jira] [Updated] (SOLR-8842) security should use an API to expose the permission name instead of using HTTP params

2016-03-18 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8842:
-
Description: 
Currently the well-known permissions use HTTP attributes, such as 
method, uri, params etc., to identify the corresponding permission name, such as 
'read', 'update' etc. Expose this value through an API so that it can be more 
accurate and handle various versions of the API.

RequestHandlers will be able to implement an interface to provide the name
{code}
interface PermissionNameProvider {
 Name getPermissionName(SolrQueryRequest req)
}
{code} 

This means many significant changes to the API
1) {{name}} no longer means a set of HTTP attributes. The name is decided by 
the request handler, which means it's possible to use the same name across 
different permissions. Examples:
{code}
{
"permissions": [
{//this permission applies to all collections
  "name": "read",
  "role": "dev"
},
{
 
 // this applies only to collection x, but both mean you are hitting a
 // read-type API
  "name": "read",
  "collection": "x",
  "role": "x_dev"
}
  ]
}
{code} 

2) So far we have treated the name as something unique: we used it to do an
{{update-permission}} or {{delete-permission}}, and even to insert a permission
before another permission. Going forward that is not possible; every permission
will get an implicit index. Example:
{code}
{
  "permissions": [
{
  "name": "read",
  "role": "dev",
   //this attribute is automatically assigned by the system
  "index" : 1
},
{
  "name": "read",
  "collection": "x",
  "role": "x_dev",
  "index" : 2
}
  ]
}
{code}

3) Example update commands:
{code}
{
  "set-permission" : {
"index": 2,
"name": "read",
"collection" : "x",
"role" :["xdev","admin"]
  },
  //this deletes the permission at index 2
  "delete-permission" : 2,
  //this will insert the command before the first item
  "set-permission": {
"name":"config-edit",
"role":"admin",
"before":1
  }
}
{code}

4) You can construct a permission purely from HTTP attributes, without any
name. As expected, such a permission will be appended at the end of the
list of permissions.
{code}
{
  "set-permission": {
 "collection": null,
 "path":"/admin/collections",
 "params":{"action":["LIST", "CREATE"]},
 "role": "admin"}
}
{code}
Users with existing configurations will not observe any change in behavior,
but the commands issued to manipulate the permissions will be different.

  was:
Currently the well-known permissions use HTTP attributes, such as method, URI,
and params, to identify the corresponding permission name, such as 'read' or
'update'. Expose this value through an API so that it can be more accurate and
handle various versions of the API.

RequestHandlers will be able to implement an interface to provide the name
{code}
interface PermissionNameProvider {
String getPermissionName(SolrQueryRequest req);
}
{code} 

This means several significant changes to the API:
1) {{name}} no longer means a set of HTTP attributes. The name is decided by
the request handler, which means it's possible to use the same name across
different permissions.
Examples:
{code}
{
"permissions": [
{//this permission applies to all collections
  "name": "read",
  "role": "dev"
},
{
 
 // this applies only to collection x, but both mean you are hitting a
 // read-type API
  "name": "read",
  "collection": "x",
  "role": "x_dev"
}
  ]
}
{code} 

2) So far we have treated the name as something unique: we used it to do an
{{update-permission}} or {{delete-permission}}, and even to insert a permission
before another permission. Going forward that is not possible; every permission
will get an implicit index. Example:
{code}
{
  "permissions": [
{
  "name": "read",
  "role": "dev",
   //this attribute is automatically assigned by the system
  "index" : 1
},
{
  "name": "read",
  "collection": "x",
  "role": "x_dev",
  "index" : 2
}
  ]
}
{code}

3) Example update commands:
{code}
{
  "set-permission" : {
"index": 2,
"name": "read",
"collection" : "x",
"role" :["xdev","admin"]
  },
  //this deletes the permission at index 2
  "delete-permission" : 2,
  //this will insert the command before the first item
  "set-permission": {
"name":"config-edit",
"role":"admin",
"before":1
  }
}
{code}

4) You can construct a permission purely from HTTP attributes, without any
name. As expected, such a permission will be appended at the end of the
list of permissions.
{code}
{
  "set-permission": {
 "collection": null,
 

[jira] [Commented] (SOLR-8798) org.apache.solr.rest.RestManager can't find cyrillic synonyms.

2016-03-18 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201955#comment-15201955
 ] 

Steve Rowe commented on SOLR-8798:
--

Sorry, I still can't reproduce with manual testing on Solr 5.5.0, on OS X 
10.11.3 with Oracle JDK 1.8.0_72.

Here's what I did from a freshly unpacked distribution (most responses left 
out, whitespace compressed on responses):

{code}
$ bin/solr start
$ bin/solr create -c test_managed_resource -d data_driven_schema_configs
$ curl -X POST -H 'Content-type: application/json' --data-binary '{
"add-field-type":{ "name":"managed_non_latin", "class":"solr.TextField",
  "analyzer":{ "tokenizer": { "class": "solr.StandardTokenizerFactory" },
"filters":[{ "class":"solr.ManagedSynonymFilterFactory", 
"managed":"nonlatin" }] } } }' \ 
http://localhost:8983/solr/test_managed_resource/schema

$ curl 
"http://localhost:8983/solr/test_managed_resource/schema/analysis/synonyms/nonlatin/ліжко"
{"responseHeader":{"status":404, "QTime":2},
  "error":{ "metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.common.SolrException"],
"msg":"ліжко not found in /schema/analysis/synonyms/nonlatin","code":404}}

$ curl -X PUT -H 'Content-type: application/json' --data-binary 
'{"ліжко":["кровать"]}' \ 
http://localhost:8983/solr/test_managed_resource/schema/analysis/synonyms/nonlatin
{ "responseHeader":{ "status":0, "QTime":8}}

$ curl 
"http://localhost:8983/solr/test_managed_resource/schema/analysis/synonyms/nonlatin/ліжко"
{"responseHeader":{ "status":0, "QTime":1}, "ліжко":["кровать"]}

$ curl -X DELETE 
"http://localhost:8983/solr/test_managed_resource/schema/analysis/synonyms/nonlatin/ліжко"
{ "responseHeader":{ "status":0, "QTime":3}}

$ curl 
"http://localhost:8983/solr/test_managed_resource/schema/analysis/synonyms/nonlatin/ліжко"
{ "responseHeader":{ "status":404, "QTime":1},
  "error":{ "metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.common.SolrException"],
"msg":"ліжко not found in /schema/analysis/synonyms/nonlatin", "code":404}}
{code}
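A hedged aside (not part of the original thread; assumes a Java 10+ JDK for the Charset overload): the percent-encoded form in the reporter's console log can be reproduced from the raw Cyrillic key, confirming the client sends ordinary UTF-8 percent-encoding.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodeDemo {
  public static void main(String[] args) {
    // URLEncoder is strictly form-encoding (spaces would become '+'), but for
    // an all-Cyrillic token it matches the path encoding seen in the Solr log.
    String enc = URLEncoder.encode("ліжко", StandardCharsets.UTF_8);
    System.out.println(enc); // %D0%BB%D1%96%D0%B6%D0%BA%D0%BE
  }
}
```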

> org.apache.solr.rest.RestManager can't find cyrillic synonyms.
> --
>
> Key: SOLR-8798
> URL: https://issues.apache.org/jira/browse/SOLR-8798
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9.1
>Reporter: Vitalii
>
> RestManager doesn't work well with Cyrillic symbols.
> I'm able to create new synonyms via the REST interface, but I get an error when 
> I try to fetch the created synonyms via this request:
> http://localhost:8983/solr/collection1/schema/analysis/synonyms/18/ліжко
> I get this message in console log:
> {code}
> # solr/console.log
> 4591823 [qtp1281335597-14] INFO  org.apache.solr.rest.RestManager  – Resource 
> not found for /schema/analysis/synonyms/18/%D0%BB%D1%96%D0%B6%D0%BA%D0%BE, 
> looking for parent: /schema/analysis/synonyms/18
> {code}
> But in synonyms file I have row with this word:
> {code}
> # /solr/collection1/conf/_schema_analysis_synonyms_18.json
>   "initArgs":{"ignoreCase":false},
>   "initializedOn":"2016-03-07T11:57:00.116Z",
>   "updatedSinceInit":"2016-03-07T12:19:11.174Z",
>   "managedMap":{
> "ліжко":["кровать"],
> "стілець":["стул"]}}
> {code}
> This issue has been tested by multiple people, who confirm that they faced 
> this problem too.






[jira] [Updated] (SOLR-8856) Do not cache merge or 'read once' contexts in the hdfs block cache.

2016-03-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8856:
--
Attachment: SOLR-8856.patch

> Do not cache merge or 'read once' contexts in the hdfs block cache.
> ---
>
> Key: SOLR-8856
> URL: https://issues.apache.org/jira/browse/SOLR-8856
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8856.patch, SOLR-8856.patch, SOLR-8856.patch
>
>
> Generally the block cache will not be large enough to contain the whole index 
> and merges can thrash the cache for queries. Even if we still look in the 
> cache, we should not populate it.






[jira] [Commented] (SOLR-8856) Do not cache merge or read once contexts in the hdfs block cache.

2016-03-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197862#comment-15197862
 ] 

Mark Miller commented on SOLR-8856:
---

Next I will look at making this configurable; I think this new patch is the 
right default, though.
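The policy described in this issue can be sketched as follows ({{Context}} here is a stand-in enum loosely modeled on Lucene's IOContext; the real API differs, so this only illustrates the caching decision):

```java
// Hedged sketch: decide whether a read should populate the block cache based
// on its I/O context. Merges and read-once scans would thrash the cache, so
// lookups are still allowed but new entries are not stored for them.
enum Context { MERGE, READ, READONCE, DEFAULT }

public class CachePolicyDemo {
  static boolean shouldPopulateCache(Context ctx) {
    return ctx != Context.MERGE && ctx != Context.READONCE;
  }

  public static void main(String[] args) {
    System.out.println(shouldPopulateCache(Context.MERGE)); // false
    System.out.println(shouldPopulateCache(Context.READ));  // true
  }
}
```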

> Do not cache merge or read once contexts in the hdfs block cache.
> -
>
> Key: SOLR-8856
> URL: https://issues.apache.org/jira/browse/SOLR-8856
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8856.patch
>
>
> Generally the block cache will not be large enough to contain the whole index 
> and merges can thrash the cache for queries. Even if we still look in the 
> cache, we should not populate it.






[jira] [Commented] (SOLR-8765) Enforce required parameters in SolrJ Collection APIs

2016-03-18 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200403#comment-15200403
 ] 

Anshum Gupta commented on SOLR-8765:


numShards shouldn't be mandatory, so the constructor should have excluded it. 
I think we need tests for all of this, or else we won't know what's even 
broken.
Ideally, something that randomizes getting the CollectionsAdminRequest 
objects through the deprecated constructor and the new approach would be good, 
but that could be a lot of effort to manage a deprecated API, so I guess we 
should just add more tests for the new API.

> Enforce required parameters in SolrJ Collection APIs
> 
>
> Key: SOLR-8765
> URL: https://issues.apache.org/jira/browse/SOLR-8765
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.1
>
> Attachments: SOLR-8765-splitshard.patch, SOLR-8765-splitshard.patch, 
> SOLR-8765.patch, SOLR-8765.patch
>
>
> Several Collection API commands have required parameters.  We should make 
> these constructor parameters, to enforce setting these in the API.






[jira] [Commented] (LUCENE-7111) DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and Long.MIN_VALUE

2016-03-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199531#comment-15199531
 ] 

Robert Muir commented on LUCENE-7111:
-

And, if it's an overflow in logic that should not happen (it looks like it 
might be), I think it's worth changing the adds/subtracts here to 
Math.addExact/Math.subtractExact.
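To illustrate the suggestion: Math.addExact turns silent long overflow into an ArithmeticException, so a computed endpoint that would wrap from Long.MAX_VALUE to Long.MIN_VALUE fails fast instead.

```java
// Math.addExact/Math.subtractExact detect overflow instead of wrapping.
public class ExactMathDemo {
  public static void main(String[] args) {
    long wrapped = Long.MAX_VALUE + 1;             // silently wraps around
    System.out.println(wrapped == Long.MIN_VALUE); // true

    try {
      Math.addExact(Long.MAX_VALUE, 1L);           // detects the overflow
    } catch (ArithmeticException e) {
      System.out.println("overflow: " + e.getMessage());
    }
  }
}
```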

> DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and 
> Long.MIN_VALUE
> --
>
> Key: LUCENE-7111
> URL: https://issues.apache.org/jira/browse/LUCENE-7111
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ishan Chattopadhyaya
> Attachments: LUCENE-7111.patch, LUCENE-7111.patch, LUCENE-7111.patch
>
>
> It seems that the following queries return all documents, which is unexpected:
> {code}
> DocValuesRangeQuery.newLongRange("dv", Long.MAX_VALUE, Long.MAX_VALUE, false, 
> true);
> DocValuesRangeQuery.newLongRange("dv", Long.MIN_VALUE, Long.MIN_VALUE, true, 
> false);
> {code}
> In Solr, floats and doubles are converted to longs and -0d gets converted to 
> Long.MIN_VALUE, and queries like {-0d TO 0d] could fail due to this, 
> returning all documents in the index.






[jira] [Commented] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-03-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197866#comment-15197866
 ] 

ASF GitHub Bot commented on SOLR-8858:
--

GitHub user maedhroz opened a pull request:

https://github.com/apache/lucene-solr/pull/21

SOLR-8858 SolrIndexSearcher#doc() Completely Ignores Field Filters Unless 
Lazy Field Loading is Enabled

Instead of just discarding fields if lazy loading is not enabled, 
SolrIndexSearcher now passes them through to IndexReader. This means 
IndexReader creates a DocumentStoredFieldVisitor that we can use to later 
determine which fields need to be read.

https://issues.apache.org/jira/browse/SOLR-8858
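The pass-through described above can be sketched with a stand-in for Lucene's stored-field visitor machinery ({{FieldFilterVisitor}} is a hypothetical simplification for illustration, not the actual class in the PR):

```java
// Hedged sketch: when a field filter is supplied, only the named fields are
// collected, regardless of whether lazy field loading is enabled.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class FieldFilterVisitor {
  private final Set<String> wanted;           // null means "load everything"
  final List<String> loaded = new ArrayList<>();

  FieldFilterVisitor(Set<String> wanted) { this.wanted = wanted; }

  // Stand-in for StoredFieldVisitor.needsField(FieldInfo)
  boolean needsField(String name) {
    boolean needed = wanted == null || wanted.contains(name);
    if (needed) loaded.add(name);
    return needed;
  }
}

public class FieldFilterDemo {
  public static void main(String[] args) {
    FieldFilterVisitor v =
        new FieldFilterVisitor(new HashSet<>(Arrays.asList("id", "name")));
    for (String f : Arrays.asList("id", "name", "price", "popularity")) {
      v.needsField(f);
    }
    System.out.println(v.loaded); // [id, name]
  }
}
```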

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maedhroz/lucene-solr SOLR-8858

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/21.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #21


commit fa8075c7861dbc331588dfb5c9e28576e2eb31f2
Author: Caleb Rackliffe 
Date:   2016-03-16T18:15:20Z

SOLR-8858 SolrIndexSearcher#doc() Completely Ignores Field Filters Unless 
Lazy Field Loading is Enabled

Instead of just discarding fields if lazy loading is not enabled, 
SolrIndexSearcher now passes them through to IndexReader. This means 
IndexReader creates a DocumentStoredFieldVisitor that we can use to later 
determine which fields need to be read.




> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Reporter: Caleb Rackliffe
>  Labels: easyfix
> Fix For: 5.5.1
>
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 460 - Failure!

2016-03-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/460/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([1C3BD10017CBEAE7:946FEEDAB937871F]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Created] (SOLR-8861) Fix missing CloudSolrClient.connect() before getZkStateReader

2016-03-18 Thread Kevin Risden (JIRA)
Kevin Risden created SOLR-8861:
--

 Summary: Fix missing CloudSolrClient.connect() before 
getZkStateReader
 Key: SOLR-8861
 URL: https://issues.apache.org/jira/browse/SOLR-8861
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: master, 6.0
Reporter: Kevin Risden
Priority: Critical
 Fix For: 6.0


There are a few places in the new solrj.io package that miss calling connect() 
before getZkStateReader(). This can cause NPEs from getZkStateReader() in 
some cases if the SolrCache is closed.

There is probably a better way to fix this moving forward, but for 6.0 this 
should be resolved.
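A minimal sketch of the fix pattern, using simplified stand-ins that mimic the SolrJ names (the behavior here is illustrative only, not the real client): connect() must run before getZkStateReader(), otherwise the reader may still be null and callers hit an NPE.

```java
// Stand-in for org.apache.solr.common.cloud.ZkStateReader (illustrative).
class ZkStateReader { }

// Stand-in for CloudSolrClient: the reader is only created by connect().
class CloudSolrClient {
  private ZkStateReader zkStateReader; // null until the ZK session is set up

  void connect() {
    if (zkStateReader == null) {
      zkStateReader = new ZkStateReader();
    }
  }

  ZkStateReader getZkStateReader() {
    return zkStateReader; // may be null if connect() was never called
  }
}

public class ConnectBeforeReaderDemo {
  public static void main(String[] args) {
    CloudSolrClient client = new CloudSolrClient();
    client.connect(); // the missing call this issue adds
    System.out.println(client.getZkStateReader() != null); // true
  }
}
```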






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_72) - Build # 5714 - Still Failing!

2016-03-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5714/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20160317003713803, index.20160317003716670, index.properties, 
replication.properties] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20160317003713803, index.20160317003716670, 
index.properties, replication.properties] expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([1DE0945A78842956:C64B949C7DAC40E5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:823)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:790)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-8867) frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not match documents w/o a value

2016-03-18 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-8867:
--

 Summary: frange / ValueSourceRangeFilter / 
FunctionValues.getRangeScorer should not match documents w/o a value
 Key: SOLR-8867
 URL: https://issues.apache.org/jira/browse/SOLR-8867
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley









[jira] [Updated] (SOLR-8819) Implement DatabaseMetaDataImpl getTables() and fix getSchemas()

2016-03-18 Thread Trey Cahill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trey Cahill updated SOLR-8819:
--
Attachment: SOLR-8819.patch

> Implement DatabaseMetaDataImpl getTables() and fix getSchemas()
> ---
>
> Key: SOLR-8819
> URL: https://issues.apache.org/jira/browse/SOLR-8819
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master, 6.0
>Reporter: Kevin Risden
> Attachments: SOLR-8819.patch, SOLR-8819.patch, SOLR-8819.patch, 
> SOLR-8819.patch, SOLR-8819.patch, SOLR-8819.patch
>
>
> DbVisualizer throws an NPE when clicking on the DB References tab: after 
> connecting, double-click on "DB" under the connection name, then click on the 
> References tab.






[jira] [Commented] (SOLR-6528) hdfs cluster with replication min set to 2 / Solr does not honor dfs.replication in hdfs-site.xml

2016-03-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197657#comment-15197657
 ] 

Mark Miller commented on SOLR-6528:
---

Yeah, only index files will use an hdfs config file. Tlog replication factor 
needs to be specified independently as shown above.

> hdfs cluster with replication min set to 2 / Solr does not honor 
> dfs.replication in hdfs-site.xml 
> --
>
> Key: SOLR-6528
> URL: https://issues.apache.org/jira/browse/SOLR-6528
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
> Environment: RedHat JDK 1.7 hadoop 2.4.1
>Reporter: davidchiu
> Fix For: 4.10.5
>
>
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): file 
> /user/solr/test1/core_node1/data/tlog/tlog.000 on client 
> 192.161.1.91.\nRequested replication 1 is less than the required minimum 2\n\t






[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+109) - Build # 16225 - Failure!

2016-03-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16225/
Java: 32bit/jdk-9-ea+109 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=5361, 
name=testExecutor-2563-thread-6, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=5361, name=testExecutor-2563-thread-6, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:41661
at __randomizedtesting.SeedInfo.seed([4D6351181BC4A816]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
at java.lang.Thread.run(Thread.java:804)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:41661
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11368 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_4D6351181BC4A816-001/init-core-data-001
   [junit4]   2> 698262 INFO  
(SUITE-UnloadDistributedZkTest-seed#[4D6351181BC4A816]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 698263 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[4D6351181BC4A816]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 698264 INFO  (Thread-1766) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 698264 INFO  (Thread-1766) [] o.a.s.c.ZkTestServer 
Starting server
   

Re: [JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 14 - Still Failing

2016-03-18 Thread Steve Rowe
These tests failed with connection resets while using Turkish locales. The 
commit SHA is just before I committed the upgrade to Jetty 9.3.8.v20160314, 
which includes a fix for the Turkish locale problem, so hopefully this is the 
last time we see this failure.

--
Steve
www.lucidworks.com

> On Mar 17, 2016, at 2:41 AM, Apache Jenkins Server 
>  wrote:
> 
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/14/
> 
> 2 tests failed.
> FAILED:  org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic
> 
> Error Message:
> Connection reset
> 
> Stack Trace:
> java.net.SocketException: Connection reset
>   at 
> __randomizedtesting.SeedInfo.seed([D188CC207EB051F4:7A72D135A16CD7DA]:0)
>   at java.net.SocketInputStream.read(SocketInputStream.java:209)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
>   at 
> org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
>   at 
> org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
>   at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
>   at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
>   at 
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
>   at 
> org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
>   at 
> org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
>   at 
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
>   at 
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
>   at 
> org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
>   at 
> org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
>   at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
>   at 
> org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
>   at 
> org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>   at 
> org.apache.lucene.replicator.http.HttpClientBase.executeGET(HttpClientBase.java:158)
>   at 
> org.apache.lucene.replicator.http.HttpReplicator.checkForUpdate(HttpReplicator.java:50)
>   at 
> org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:195)
>   at 
> org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:401)
>   at 
> org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic(HttpReplicatorTest.java:116)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
>   at 
> 

[jira] [Resolved] (SOLR-8860) Remove back-compat handling of router format made in SOLR-4221

2016-03-18 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-8860.
-
Resolution: Fixed

> Remove back-compat handling of router format made in SOLR-4221
> --
>
> Key: SOLR-8860
> URL: https://issues.apache.org/jira/browse/SOLR-8860
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: SOLR_8860.patch
>
>
> SOLR-4221 changed how router information is stored in cluster state from a 
> simple string to a map. There was back-compat handling added to ensure that 
> new clients can continue to index to an old cluster so that rolling upgrades 
> are supported. We don't need that anymore. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8798) org.apache.solr.rest.RestManager can't find cyrillic synonyms.

2016-03-18 Thread Kostiantyn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197239#comment-15197239
 ] 

Kostiantyn edited comment on SOLR-8798 at 3/16/16 12:11 PM:


Got the same issue with Danish synonyms.

I think the original problem is in the API. According to the documentation 
(https://cwiki.apache.org/confluence/display/solr/Managed+Resources), the 
example request below adds a new synonym mapping.
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["angry","upset"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
If I then execute this request,
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
the resulting mapping is merged:
{code}
"managedMap":{"mad":["angry","upset","insane"]}
{code}
If I need replacement rather than merging, I first have to delete the "mad" 
synonym entirely and then re-add it with the new value.
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/mad"
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
That is how I get the replaced mapping:
{code}
"managedMap":{"mad":["insane"]}
{code}

In my opinion this API cannot be considered complete: there must also be a 
method to update a synonym mapping in place.
The problem appears when the key contains non-Latin characters (Danish 
example: "åbningstider") or Cyrillic characters.
In that case you cannot perform the delete command, because Solr returns a 
404 status.
Example: first add a synonym mapping for the Danish word for bedroom, 
"soveværelse"
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"soveværelse":["køkken"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish"
{code}
Then I need to replace the mapping "køkken" (a kitchen) with "værelse" (a 
room). I cannot just execute a PUT request; it would merge "værelse" with the 
existing "køkken" and I would get
{code}
"managedMap":{"soveværelse":["køkken","værelse"]}
{code}
But I actually need this:
{code}
"managedMap":{"soveværelse":["værelse"]}
{code}
If I try to delete "soveværelse", I get a 404 error from Solr:
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish/soveværelse"
{
  "responseHeader":{
    "status":404,
    "QTime":10},
  "error":{
    "msg":"sovev%C3%A6relse not found in /schema/analysis/synonyms/2",
    "code":404}}
{code}


It means that there is no way to maintain such synonym mappings.
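For completeness, here is a client-side sketch of the delete-then-re-add 
workaround using only Python 3's standard library. The host, core, and 
resource names are the documentation examples, not verified endpoints, and on 
affected Solr versions the DELETE is reported to return 404 for non-ASCII 
keys regardless of how the path segment is percent-encoded.

```python
# Hedged sketch: percent-encode the non-ASCII synonym key per RFC 3986
# before building the DELETE URL, then re-add the mapping with a PUT.
import json
import urllib.parse
import urllib.request

base = "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish"
key = "soveværelse"

# safe="" encodes every reserved character; "æ" becomes %C3%A6 (UTF-8 bytes).
encoded_key = urllib.parse.quote(key, safe="")

# Step 1: delete the existing mapping (reported to 404 on affected versions).
delete_req = urllib.request.Request(f"{base}/{encoded_key}", method="DELETE")

# Step 2: re-add the key with the replacement value only.
put_req = urllib.request.Request(
    base,
    data=json.dumps({key: ["værelse"]}).encode("utf-8"),
    headers={"Content-type": "application/json"},
    method="PUT",
)

# urllib.request.urlopen(delete_req)  # uncomment against a live Solr
# urllib.request.urlopen(put_req)
```

The encoded key matches the form Solr echoes back in the 404 message 
("sovev%C3%A6relse"), which suggests the request reaches the manager but the 
lookup itself fails for the decoded value.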




was (Author: koschos):
Got the same issue with Danish synonyms.

I think the original problem is in the API. According to the documentation 
(https://cwiki.apache.org/confluence/display/solr/Managed+Resources), the 
example request below adds a new synonym mapping.
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["angry","upset"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
If I then execute this request,
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
the resulting mapping is merged:
{code}
"initArgs":{"ignoreCase":false},
  "initializedOn":"2016-03-07T11:57:00.116Z",
  "updatedSinceInit":"2016-03-07T12:19:11.174Z",
  "managedMap":{
    "mad":["angry","upset","insane"]}}
{code}
If I need replacement rather than merging, I first have to delete the "mad" 
synonym entirely and then re-add it with the new value.
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/mad"
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
That is how I get the replaced mapping:
{code}
"initArgs":{"ignoreCase":false},
  "initializedOn":"2016-03-07T11:57:00.116Z",
  "updatedSinceInit":"2016-03-07T12:19:11.174Z",
  "managedMap":{
    "mad":["insane"]}}
{code}

In my opinion this API cannot be considered complete: there must also be a 
method to update a synonym mapping in place.
The problem appears when the key contains non-Latin characters (Danish 
example: "åbningstider") or Cyrillic characters.
In that case you cannot perform the delete command, because Solr returns a 
404 status.
Example: first add a synonym mapping for the Danish word for bedroom, 
"soveværelse"
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"soveværelse":["køkken"]}' 

[jira] [Updated] (SOLR-8859) AbstractSpatialFieldType can use ShapeContext to read/write shapes (WKT, GeoJSON)

2016-03-18 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley updated SOLR-8859:

Attachment: SOLR-8859.patch

> AbstractSpatialFieldType can use ShapeContext to read/write shapes (WKT, 
> GeoJSON)
> -
>
> Key: SOLR-8859
> URL: https://issues.apache.org/jira/browse/SOLR-8859
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ryan McKinley
>Assignee: Ryan McKinley
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: SOLR-8859.patch
>
>
> Right now the AbstractSpatialFieldType throws exceptions if it needs to 
> convert to/from a string.  We should use the context to convert



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8335) HdfsLockFactory does not allow core to come up after a node was killed

2016-03-18 Thread lvchuanwen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15202567#comment-15202567
 ] 

lvchuanwen commented on SOLR-8335:
--

Hi,
On Solr 4.10.3 I have occasionally encountered this problem: when Solr has 
just started, why does a lock already exist? Is it a bug?
{code:title=Directory.java|borderStyle=solid}
  @Override
  public String toString() {
return getClass().getSimpleName() + '@' + Integer.toHexString(hashCode()) + 
" lockFactory=" + getLockFactory();
  }
{code}
Log as follows:

{code:xml}
2016-03-16 15:51:31,327 INFO 
org.apache.solr.servlet.SolrHadoopAuthenticationFilter: Connecting to ZooKeeper 
without authentication
2016-03-16 15:51:31,434 INFO 
org.apache.curator.framework.imps.CuratorFrameworkImpl: Starting
2016-03-16 15:51:31,445 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.4.5-test5.4.2--1, built on 05/19/2015 23:53 GMT
2016-03-16 15:51:31,445 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:host.name=Impala03
2016-03-16 15:51:31,445 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.version=1.7.0_55
2016-03-16 15:51:31,445 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.vendor=Oracle Corporation
2016-03-16 15:51:31,445 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.home=/usr/java/jdk/jre
2016-03-16 15:51:31,445 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.class.path=/home/mr/tomcat/bin/bootstrap.jar
2016-03-16 15:51:31,445 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.library.path=/home/hdfs/hdfs/lib/native/Linux-amd64-64/
2016-03-16 15:51:31,445 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.io.tmpdir=/var/lib/solr/
2016-03-16 15:51:31,445 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.compiler=
2016-03-16 15:51:31,446 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:os.name=Linux
2016-03-16 15:51:31,446 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:os.arch=amd64
2016-03-16 15:51:31,446 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:os.version=3.0.13-0.27-default
2016-03-16 15:51:31,446 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:user.name=mr
2016-03-16 15:51:31,446 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:user.home=/home/mr
2016-03-16 15:51:31,446 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:user.dir=/home/mr/solr/bin
2016-03-16 15:51:31,447 INFO org.apache.zookeeper.ZooKeeper: Initiating client 
connection, connectString=Impala04:2181,Impala02:2181,Impala03:2181 
sessionTimeout=6 watcher=org.apache.curator.ConnectionState@2590f83a
2016-03-16 15:51:31,472 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server Impala03/10.233.85.238:2181. Will not attempt to 
authenticate using SASL (unknown error)
2016-03-16 15:51:31,483 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established, initiating session, client: /10.233.85.238:42992, server: 
Impala03/10.233.85.238:2181
2016-03-16 15:51:31,492 INFO org.apache.zookeeper.ClientCnxn: Session 
establishment complete on server Impala03/10.233.85.238:2181, sessionid = 
0x35379cb0b604e1f, negotiated timeout = 6
2016-03-16 15:51:31,499 INFO 
org.apache.curator.framework.state.ConnectionStateManager: State change: 
CONNECTED
2016-03-16 15:51:32,533 INFO 
org.apache.hadoop.security.authentication.util.ZKSignerSecretProvider: The 
secret znode already exists, retrieving data
2016-03-16 15:51:33,392 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Updating the current master key for generating delegation tokens
2016-03-16 15:51:33,420 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Starting expired delegation token remover thread, tokenRemoverScanInterval=60 
min(s)
2016-03-16 15:51:33,536 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Updating the current master key for generating delegation tokens
2016-03-16 15:51:33,570 INFO org.apache.solr.servlet.SolrDispatchFilter: 
SolrDispatchFilter.init()
2016-03-16 15:51:33,588 INFO org.apache.solr.core.SolrResourceLoader: No 
/solr/home in JNDI
2016-03-16 15:51:33,588 INFO org.apache.solr.core.SolrResourceLoader: using 
system property solr.solr.home: /var/lib/solr/
2016-03-16 15:51:33,588 INFO org.apache.solr.core.SolrResourceLoader: new 
SolrResourceLoader for directory: '/var/lib/solr/'
2016-03-16 15:51:33,767 INFO org.apache.solr.servlet.SolrDispatchFilter: Trying 
to read solr.xml from Impala04:2181,Impala02:2181,Impala03:2181/solr
2016-03-16 15:51:33,781 INFO org.apache.solr.common.cloud.SolrZkClient: Using 
default ZkCredentialsProvider
2016-03-16 15:51:33,786 INFO org.apache.zookeeper.ZooKeeper: Initiating client 
connection, connectString=Impala04:2181,Impala02:2181,Impala03:2181/solr 
sessionTimeout=3 

Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-18 Thread David Smiley
Welcome Kevin!

(corrected misspelling of your last name in the subject)

On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  wrote:

> I'm pleased to announce that Kevin Risden has accepted the PMC's invitation
> to become a committer.
>
> Kevin, it's tradition that you introduce yourself with a brief bio.
>
> I believe your account has been setup and karma has been granted so that
> you can add yourself to the committers section of the Who We Are page on
> the website:
> .
>
> Congratulations and welcome!
>
>
> Joel Bernstein
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2016-03-18 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200515#comment-15200515
 ] 

Hrishikesh Gadre commented on SOLR-5750:


[~varunthacker] Is it possible that when the backup command is sent, one of 
the shards (or specifically the shard leader) is in the "recovering" state? 
If yes, what happens with the current implementation?

> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
> Fix For: 5.2, master
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 963 - Still Failing

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/963/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:52666

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:52666
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:381)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:497)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Comment Edited] (SOLR-8798) org.apache.solr.rest.RestManager can't find cyrillic synonyms.

2016-03-18 Thread Kostiantyn (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197239#comment-15197239
 ] 

Kostiantyn edited comment on SOLR-8798 at 3/16/16 12:12 PM:


Got the same issue with Danish synonyms.

I think the original problem is in the API. According to the documentation 
(https://cwiki.apache.org/confluence/display/solr/Managed+Resources), the 
example request below adds a new synonym mapping.
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["angry","upset"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
If I then execute this request,
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
the resulting mapping is merged:
{code}
"managedMap":{"mad":["angry","upset","insane"]}
{code}
If I need replacement rather than merging, I first have to delete the "mad" 
synonym entirely and then re-add it with the new value.
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/mad"
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
That is how I get the replaced mapping:
{code}
"managedMap":{"mad":["insane"]}
{code}

In my opinion this API cannot be considered complete: there must also be a 
method to update a synonym mapping in place.
The problem appears when the key contains non-Latin characters (Danish 
example: "åbningstider") or Cyrillic characters.
In that case you cannot perform the delete command, because Solr returns a 
404 status.
Example: first add a synonym mapping for the Danish word for bedroom, 
"soveværelse"
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"soveværelse":["køkken"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish"
{code}
Then I need to replace the mapping "køkken" (a kitchen) with "værelse" (a 
room). I cannot just execute a PUT request; it would merge "værelse" with the 
existing "køkken" and I would get
{code}
"managedMap":{"soveværelse":["køkken","værelse"]}
{code}
But I actually need this:
{code}
"managedMap":{"soveværelse":["værelse"]}
{code}
If I try to delete "soveværelse", I get a 404 error from Solr:
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish/soveværelse"
{
  "responseHeader":{
    "status":404,
    "QTime":10},
  "error":{
    "msg":"sovev%C3%A6relse not found in /schema/analysis/synonyms/danish",
    "code":404}}
{code}


It means that there is no way to maintain such synonym mappings.




was (Author: koschos):
Got the same issue with Danish synonyms.

I think the original problem is in the API. According to the documentation 
(https://cwiki.apache.org/confluence/display/solr/Managed+Resources), the 
example request below adds a new synonym mapping.
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["angry","upset"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
If I then execute this request,
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
the resulting mapping is merged:
{code}
"managedMap":{"mad":["angry","upset","insane"]}
{code}
If I need replacement rather than merging, I first have to delete the "mad" 
synonym entirely and then re-add it with the new value.
{code}
curl -X DELETE -H 'Content-type:application/json' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/mad"
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"mad":["insane"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english"
{code}
That is how I get the replaced mapping:
{code}
"managedMap":{"mad":["insane"]}
{code}

In my opinion this API cannot be considered complete: there must also be a 
method to update a synonym mapping in place.
The problem appears when the key contains non-Latin characters (Danish 
example: "åbningstider") or Cyrillic characters.
In that case you cannot perform the delete command, because Solr returns a 
404 status.
Example: first add a synonym mapping for the Danish word for bedroom, 
"soveværelse"
{code}
curl -X PUT -H 'Content-type:application/json' --data-binary 
'{"soveværelse":["køkken"]}' 
"http://localhost:8983/solr/techproducts/schema/analysis/synonyms/danish"
{code}
Then I need to replace the mapping "køkken" (a kitchen) with "værelse" (a 
room). I cannot just execute a PUT request; it would merge "værelse" with the 
existing "køkken" and I would get
{code}

Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-18 Thread Shalin Shekhar Mangar
Congratulations and Welcome Kevin!

On Wed, Mar 16, 2016 at 10:33 PM, David Smiley  wrote:
> Welcome Kevin!
>
> (corrected misspelling of your last name in the subject)
>
> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  wrote:
>>
>> I'm pleased to announce that Kevin Risden has accepted the PMC's
>> invitation to become a committer.
>>
>> Kevin, it's tradition that you introduce yourself with a brief bio.
>>
>> I believe your account has been setup and karma has been granted so that
>> you can add yourself to the committers section of the Who We Are page on the
>> website:
>> .
>>
>> Congratulations and welcome!
>>
>>
>> Joel Bernstein
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com



-- 
Regards,
Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8870) AngularJS Query tab breaks through proxy

2016-03-18 Thread JIRA
Jan Høydahl created SOLR-8870:
-

 Summary: AngularJS Query tab breaks through proxy
 Key: SOLR-8870
 URL: https://issues.apache.org/jira/browse/SOLR-8870
 Project: Solr
  Issue Type: Bug
  Components: UI
Affects Versions: 5.5
Reporter: Jan Høydahl
Priority: Minor


The AngularJS Query tab generates a request URL of this form: 
http://localhost:8983/solr/techproducts%2Fselect?_=1458291250691=on=ram=json
Notice the URL-encoded {{%2Fselect}} part.

This works well locally with Jetty, but a customer has httpd as a proxy in 
front, and we get a 404 error since the web server does not parse {{%2F}} as a 
path separator and thus does not match the proxy rules for select.
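One commonly used httpd-side mitigation is to let the encoded slash pass 
through to Jetty undecoded so the proxy rules still match. This fragment is a 
hedged sketch: the directive names come from the httpd 2.x documentation, but 
the customer's actual proxy configuration, paths, and ports are unknown, and 
the proper fix is still for the admin UI to emit {{/select}} unencoded.

{code}
# Hypothetical httpd reverse-proxy fragment in front of Solr.
# NoDecode keeps %2F intact instead of rejecting or decoding it,
# and nocanon forwards the raw request URL to the backend.
AllowEncodedSlashes NoDecode
ProxyPass        /solr http://localhost:8983/solr nocanon
ProxyPassReverse /solr http://localhost:8983/solr
{code}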



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8856) Do not cache merge or read once contexts in the hdfs block cache.

2016-03-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8856:
--
Attachment: SOLR-8856.patch

First patch.

> Do not cache merge or read once contexts in the hdfs block cache.
> -
>
> Key: SOLR-8856
> URL: https://issues.apache.org/jira/browse/SOLR-8856
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8856.patch
>
>
> Generally the block cache will not be large enough to contain the whole index 
> and merges can thrash the cache for queries. Even if we still look in the 
> cache, we should not populate it.
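The policy described above can be sketched with hypothetical stand-in types (this is not the attached patch): always allow cache lookups, but only populate the cache for ordinary reads.

```java
public class BlockCachePolicySketch {
    // Stand-in for Lucene's IOContext usage types; names are assumptions.
    enum IoContext { DEFAULT, READ, MERGE, READ_ONCE }

    // Merges and read-once streams would evict hot query blocks,
    // so never populate the cache for them.
    static boolean shouldPopulate(IoContext ctx) {
        return ctx != IoContext.MERGE && ctx != IoContext.READ_ONCE;
    }

    // Still look in the cache for every context, in case of a hit.
    static boolean shouldLookup(IoContext ctx) {
        return true;
    }

    public static void main(String[] args) {
        System.out.println(shouldPopulate(IoContext.MERGE));   // false
        System.out.println(shouldPopulate(IoContext.DEFAULT)); // true
    }
}
```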



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr UpdateLog & UpdateRequestProcessors

2016-03-18 Thread Ishan Chattopadhyaya
I agree that we should throw an exception if JavaBinCodec's fallback
serialization is hit, since it won't be deserialized during a log
replay/peersync.
Just curious, if the field value was not properly serialized by the
JavaBinCodec, how was it handled by the DUH2 and written to the index?

On Thu, Mar 17, 2016 at 12:58 AM, David Smiley 
wrote:

> For a project I work on, I have an URP that adds a Lucene Field object to
> the SolrInputField.  Normally it's the job of a FieldType to produce a
> Lucene Field (createFields()) but my use-case requires data from other
> fields.  An URP can do this but a FieldType cannot (somewhat related
> to SOLR-4329).  Note that Solr's DocumentBuilder will skip invoking the
> FieldType's createField() to get the field if the SolrInputField already
> has a Lucene Field.  So far so good.
>
> The problem is that the UpdateLog, invoked by DirectUpdateHandler2,
> invoked by RunUpdateProcessor URP (the last URP) passes the final
> SolrInputDocument to the UpdateLog to get serialized.  Of course, since
> it's the last URP to pass the doc along.  The UpdateLog will in turn
> consult JavaBinCodec which has a fallback for types it doesn't know about
> to emit the classname string, colon, then toString of the object.  In my
> opinion, it should return an error, or at the very least a warning!  And it
> doesn't know about Field (nor could it support that), of course.  Note that
> SolrCloud PeerSync consults the UpdateLog of replicas to get a new Leader
> up to date, and an error will get triggered (and we probably lose the doc).
>
> Is it pointless to have an URP produce something that JavaBinCodec
> can't serialize (assuming use of the UpdateLog/SolrCloud)? Maybe.  At least
> there's the JavaBinCodec.ObjectResolver interface.  And as I mentioned if
> there was an early warning/error, an insidious problem wouldn't creep up on
> you later.  Before I noticed ObjectResolver I was thinking of filing an
> issue related to controlling which URPs apply when, relative to the
> UpdateLog.  I wonder if anyone else has any thoughts on all of this.
>
> ~ David
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


[jira] [Commented] (SOLR-8814) Support GeoJSON response format

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15197689#comment-15197689
 ] 

ASF subversion and git services commented on SOLR-8814:
---

Commit 5731331be1f5fcef829950fcfa9edcb3632babae in lucene-solr's branch 
refs/heads/branch_6x from [~ryantxu]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5731331 ]

SOLR-8814: Support GeoJSON response format


> Support GeoJSON response format
> ---
>
> Key: SOLR-8814
> URL: https://issues.apache.org/jira/browse/SOLR-8814
> Project: Solr
>  Issue Type: New Feature
>  Components: Response Writers
>Reporter: Ryan McKinley
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: SOLR-8814-add-GeoJSONResponseWriter.patch, 
> SOLR-8814-add-GeoJSONResponseWriter.patch, 
> SOLR-8814-add-GeoJSONResponseWriter.patch
>
>
> With minor changes, we can modify the existing JSON writer to produce a 
> GeoJSON `FeatureCollection` for every SolrDocumentList.  We can then pick a 
> field to use as the geometry type, and use that for the Feature#geometry
> {code}
> "response":{"type":"FeatureCollection","numFound":1,"start":0,"features":[
>   {"type":"Feature",
> "geometry":{"type":"Point","coordinates":[1,2]},
> "properties":{
>   ... the normal solr doc fields here ...}}]
>   }}
> {code}
> This will allow adding solr results directly to various mapping clients like 
> [Leaflet|http://leafletjs.com/]
> 
> This patch will work with Documents that have a spatial field that either:
> 1. Extends AbstractSpatialFieldType
> 2. has a stored value with geojson
> 3. has a stored value that can be parsed by spatial4j (WKT, etc)
> The spatial field is identified with the parameter `geojson.field`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8862) MiniSolrCloudCluster.createCollection can complain there are no live servers immediately after construction

2016-03-18 Thread Hoss Man (JIRA)
Hoss Man created SOLR-8862:
--

 Summary: MiniSolrCloudCluster.createCollection can complain there 
are no live servers immediately after construction
 Key: SOLR-8862
 URL: https://issues.apache.org/jira/browse/SOLR-8862
 Project: Solr
  Issue Type: Bug
  Components: Tests
Reporter: Hoss Man


I haven't been able to make sense of this yet, but what i'm seeing in a new 
SolrCloudTestCase subclass i'm writing is that the code below, which 
(reasonably) attempts to create a collection immediately after configuring the 
MiniSolrCloudCluster, gets a "SolrServerException: No live SolrServers available 
to handle this request" -- in spite of the fact that (as far as i can tell at 
first glance) MiniSolrCloudCluster's constructor is supposed to block until all 
the servers are live..

{code}
configureCluster(numServers)
  .addConfig(configName, configDir.toPath())
  .configure();
Map collectionProperties = ...;
assertNotNull(cluster.createCollection(COLLECTION_NAME, numShards, repFactor,
                                       configName, null, null, collectionProperties));
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8864) TestTestInjection needs to cleanup after itself -- causes TestCloudDeleteByQuery fail (may be symptom of larger problem?)

2016-03-18 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8864:
---
Attachment: jenkins.log

attaching full jenkins log from 
https://builds.apache.org/job/Lucene-Solr-Tests-6.x/65/ (branch_6x @ 
7687667b5ff7867249762d104707a91834d30ce3) ...

{noformat}
  [junit4]   2> NOTE: test params are: codec=Asserting(Lucene60): 
{expected_shard_s=FSTOrd50, _version_=FST50,
id=FSTOrd50}, docValues:{}, maxPointsInLeafNode=212, 
maxMBSortInHeap=5.567883717673721, sim=ClassicSimilarity,
locale=pl, timezone=America/Mendoza
   [junit4]   2> NOTE: Linux 3.13.0-52-generic amd64/Oracle Corporation 1.8.0_66
(64-bit)/cpus=4,threads=1,free=304603432,total=524812288
   [junit4]   2> NOTE: All tests run in this JVM: [ReplicationFactorTest, 
TestRuleBasedAuthorizationPlugin,
TestSchemaVersionResource, TestSweetSpotSimilarityFactory, SuggesterWFSTTest, 
TestCharFilters, TestCoreDiscovery,
DirectUpdateHandlerTest, TestConfigOverlay, TestStandardQParsers, 
GraphQueryTest, StressHdfsTest,
CurrencyFieldXmlFileTest, StatelessScriptUpdateProcessorFactoryTest, 
TestSolrQueryParserResource, RecoveryZkTest,
TestRecovery, SuggesterTest, TestFastWriter, 
VMParamsZkACLAndCredentialsProvidersTest,
FieldAnalysisRequestHandlerTest, TestStressReorder, BitVectorTest, 
DistributedFacetPivotSmallAdvancedTest,
StatsComponentTest, DirectSolrConnectionTest, SolrTestCaseJ4Test, 
DirectUpdateHandlerOptimizeTest,
TestPivotHelperCode, IgnoreCommitOptimizeUpdateProcessorFactoryTest, 
FacetPivotSmallTest,
ClassificationUpdateProcessorFactoryTest, SharedFSAutoReplicaFailoverTest, 
BadIndexSchemaTest,
HLLSerializationTest, TestFuzzyAnalyzedSuggestions, SortSpecParsingTest, 
HardAutoCommitTest, UpdateParamsTest,
RegexBoostProcessorTest, SliceStateTest, TestSolrQueryParser, 
TestSchemaNameResource, SolrCloudExampleTest,
TestDocBasedVersionConstraints, TestRebalanceLeaders, TestIndexSearcher, 
SpatialRPTFieldTypeTest,
TestSimpleQParserPlugin, ConnectionManagerTest, CoreAdminHandlerTest, 
ZkStateWriterTest, TestExtendedDismaxParser,
TestFieldResource, DeleteLastCustomShardedReplicaTest, TestReqParamsAPI, 
TestSolrDynamicMBean, BlockCacheTest,
XmlUpdateRequestHandlerTest, TestNamedUpdateProcessors, 
BinaryUpdateRequestHandlerTest, UnloadDistributedZkTest,
SSLMigrationTest, TestJsonRequest, TestObjectReleaseTracker, 
LeaderInitiatedRecoveryOnCommitTest,
DebugComponentTest, TestSolrIndexConfig, TestManagedResourceStorage, 
TestCSVResponseWriter,
TestPostingsSolrHighlighter, SampleTest, TestSolrQueryResponse, 
HighlighterMaxOffsetTest, TestTestInjection,
HdfsTlogReplayBufferedWhileIndexingTest, TestCloudDeleteByQuery]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestCloudDeleteByQuery -Dtests.seed=F6D0A21946A344B8
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=pl 
-Dtests.timezone=America/Mendoza -Dtests.asserts=true
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.00s J2 | TestCloudDeleteByQuery (suite) <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<2> but 
was:<1>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([F6D0A21946A344B8]:0)
   [junit4]>at
org.apache.solr.cloud.TestCloudDeleteByQuery.createMiniSolrCloudCluster(TestCloudDeleteByQuery.java:173)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4] Completed [191/581 (1!)] on J2 in 15.37s, 0 tests, 1 failure <<< 
FAILURES!
{noformat}

FWIW, even when i tried adding this code to TestCloudDeleteByQuery on branch_6x 
i couldn't get that reproduce line to fail...

{code}
  static {
// nocommit:
org.apache.solr.util.TestInjection.failReplicaRequests = "BADSYNTAX";
  }
{code}
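The failure mode described above boils down to JVM-wide static state leaking between test suites that share a JVM. A self-contained sketch of the leak and the cleanup it needs (hypothetical names, standing in for TestInjection's static fields and an @AfterClass-style reset hook):

```java
public class StaticLeakSketch {
    // Stand-in for TestInjection.failReplicaRequests: static, JVM-wide state.
    static String failReplicaRequests = null;

    static void suiteThatSetsInjection() {
        failReplicaRequests = "BADSYNTAX"; // set, but never cleaned up
    }

    // A later suite in the same JVM sees the leaked value unless
    // something calls reset() between suites.
    static boolean suiteThatExpectsCleanState() {
        return failReplicaRequests == null;
    }

    static void reset() {
        failReplicaRequests = null;
    }

    public static void main(String[] args) {
        suiteThatSetsInjection();
        System.out.println(suiteThatExpectsCleanState()); // false: state leaked
        reset();
        System.out.println(suiteThatExpectsCleanState()); // true: state cleaned
    }
}
```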

[~markrmil...@gmail.com] - can you take a look?



> TestTestInjection needs to cleanup after itself -- causes 
> TestCloudDeleteByQuery fail (may be symptom of larger problem?)
> -
>
> Key: SOLR-8864
> URL: https://issues.apache.org/jira/browse/SOLR-8864
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: jenkins.log
>
>
> https://builds.apache.org/job/Lucene-Solr-Tests-6.x/65/ recently reported a 
> failure from TestCloudDeleteByQuery's init methods that made no sense to me 
> -- looking at the logs showed an error from "TestInjection.parseValue" even 
> though this test doesn't do anything to setup TestInjection...
> {noformat}
>[junit4]   2> 527801 ERROR (qtp1490160324-5239) [n:127.0.0.1:48763_solr 
> c:test_col s:shard1 r:core_node4 x:test_col_shard1_replica2] 
> o.a.s.h.RequestHandlerBase java.lang.RuntimeException: No match, probably bad 
> syntax: TRUE:0:
>[junit4]   2>  at 
> org.apache.solr.util.TestInjection.parseValue(TestInjection.java:236)
>[junit4]   2>  at 
> 

[jira] [Commented] (LUCENE-7108) Test2BPoints.test2D() failure

2016-03-18 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201443#comment-15201443
 ] 

Steve Rowe commented on LUCENE-7108:


I did this, but in the same directory as the one Jenkins uses.  I wasn't quick 
enough to check on the results before Jenkins started again.  Fortunately, the 
failing test (and all others) succeeded: 
[http://jenkins.sarowe.net/job/Lucene-core-nightly-monster-master/254/] - took 
almost 27 hours :).  I'll resolve this issue.

> Test2BPoints.test2D() failure
> -
>
> Key: LUCENE-7108
> URL: https://issues.apache.org/jira/browse/LUCENE-7108
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>
> From my Jenkins:
> bq. Checking out Revision 3c7e55da3a29224a90a8fc71815a7a52433a6a90 
> (refs/remotes/origin/master)
> {noformat}
>[junit4] Suite: org.apache.lucene.index.Test2BPoints
>[junit4]   1> DIR: 
> /slow/jenkins/HDD-workspaces/Lucene-core-nightly-monster-master/lucene/build/core/test/J3/temp/lucene.index.Test2BPoints_41B6ADF997AA6777-001/2BPoints1D-001
>[junit4]   1> TEST: now CheckIndex
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=Test2BPoints 
> -Dtests.method=test2D -Dtests.seed=41B6ADF997AA6777 -Dtests.nightly=true 
> -Dtests.slow=true -Dtests.locale=uk-UA -Dtests.timezone=America/Guyana 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 77421s J3 | Test2BPoints.test2D <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<82595525> 
> but was:<0>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([41B6ADF997AA6777:6C131FA9712D2694]:0)
>[junit4]>  at 
> org.apache.lucene.index.Test2BPoints.test2D(Test2BPoints.java:137)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /slow/jenkins/HDD-workspaces/Lucene-core-nightly-monster-master/lucene/build/core/test/J3/temp/lucene.index.Test2BPoints_41B6ADF997AA6777-001
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene60): {}, 
> docValues:{}, maxPointsInLeafNode=1961, maxMBSortInHeap=6.2420286301663985, 
> sim=ClassicSimilarity, locale=uk-UA, timezone=America/Guyana
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_45 (64-bit)/cpus=16,threads=1,free=3014352688,total=6316621824
>[junit4]   2> NOTE: All tests run in this JVM: [TestField, 
> TestTransactionRollback, TestIndexReaderClose, TestParallelCompositeReader, 
> TestCodecs, TestMaxTermFrequency, TestUnicodeUtil, TestMatchNoDocsQuery, 
> TestDemoParallelLeafReader, TestStressAdvance, TestFSTs, TestDocIdSetBuilder, 
> TestBytesRefHash, TestInfoStream, TestAllFilesDetectTruncation, 
> TestForTooMuchCloning, TestNoMergeScheduler, TestRegexpQuery, TestBoostQuery, 
> TestPrefixRandom, TestTermsEnum2, TestConsistentFieldNumbers, 
> TestExceedMaxTermLength, TestSortRandom, TestReaderClosed, TestSetOnce, 
> TestQueryRescorer, TestStressDeletes, TestRecyclingByteBlockAllocator, 
> TestSearchAfter, TestBooleanQueryVisitSubscorers, TestOfflineSorter, 
> TestReadOnlyIndex, TestLucene50SegmentInfoFormat, TestComplexExplanations, 
> TestTopDocsMerge, TestIndexCommit, TestPrefixInBooleanQuery, 
> TestSleepingLockWrapper, TestIndexWriterUnicode, TestDirectory, 
> TestIndexWriterMerging, Test2BPagedBytes, TestSimilarity2, TestByteBlockPool, 
> TestSloppyMath, TestDocCount, TestBinaryDocument, TestSameScoresWithThreads, 
> TestScorerPerf, TestScoreCachingWrappingScorer, TestIndexWriter, 
> TestPerFieldDocValuesFormat, TestIndexWriterWithThreads, 
> TestIndexWriterExceptions, TestMultiMMap, TestBooleanOr, TestBasics, 
> TestIndexWriterMergePolicy, TestNRTThreads, MultiCollectorTest, 
> TestTermdocPerf, Test2BPositions, TestPackedTokenAttributeImpl, 
> TestGrowableByteArrayDataOutput, TestBlockPostingsFormat3, 
> TestLucene50CompoundFormat, TestLucene50TermVectorsFormat, 
> TestLucene53NormsFormat, TestLucene54DocValuesFormat, 
> TestLucene60PointsFormat, TestFieldType, Test2BPoints]
>[junit4] Completed [415/415 (1!)] on J3 in 86675.45s, 2 tests, 1 failure 
> <<< FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8586) Implement hash over all documents to check for shard synchronization

2016-03-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8586.
---
Resolution: Fixed

> Implement hash over all documents to check for shard synchronization
> 
>
> Key: SOLR-8586
> URL: https://issues.apache.org/jira/browse/SOLR-8586
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: master, 5.5
>
> Attachments: SOLR-8586.patch, SOLR-8586.patch, SOLR-8586.patch, 
> SOLR-8586.patch
>
>
> An order-independent hash across all of the versions in the index should 
> suffice.  The hash itself is pretty easy, but we need to figure out 
> when/where to do this check (for example, I think PeerSync is currently used 
> in multiple contexts and this check would perhaps not be appropriate for all 
> PeerSync calls?)
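The order-independent hash mentioned above can be as simple as summing a per-value hash of each {{_version_}}; since addition is commutative, replicas that applied the same updates in different orders still agree. A sketch under that assumption, not the committed implementation:

```java
import java.util.List;

public class VersionSetHash {
    // Sum of per-version hashes: commutative and associative, so the
    // iteration order over the index does not affect the result.
    static long orderIndependentHash(List<Long> versions) {
        long acc = 0;
        for (long v : versions) {
            acc += Long.hashCode(v); // mix each version independently, then combine
        }
        return acc;
    }

    public static void main(String[] args) {
        long a = orderIndependentHash(List.of(3L, 1L, 2L));
        long b = orderIndependentHash(List.of(2L, 3L, 1L));
        System.out.println(a == b); // true: order does not matter
    }
}
```

Two shards whose version sets differ will (with high probability) produce different sums, which is enough for a synchronization check.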



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8082) can't query against negative float or double values when indexed="false" docValues="true" multiValued="false"

2016-03-18 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201529#comment-15201529
 ] 

Ishan Chattopadhyaya edited comment on SOLR-8082 at 3/18/16 4:24 PM:
-

Indeed! +1 to fixing the issue for 6.0 with the current patch (except for the 
stale comment {{If min is negative (or -0d) and max is positive (or +0d), then 
issue two range queries}}, which was left over from an older patch).


was (Author: ichattopadhyaya):
Indeed! +1 to fixing the issue for 6.0 with the current patch (except for the 
stale comment mentioning the Boolean query, which was left over from an older patch).

> can't query against negative float or double values when indexed="false" 
> docValues="true" multiValued="false"
> -
>
> Key: SOLR-8082
> URL: https://issues.apache.org/jira/browse/SOLR-8082
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Blocker
> Fix For: 6.0
>
> Attachments: SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, 
> SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch, SOLR-8082.patch
>
>
> Haven't dug into this yet, but something is evidently wrong in how the 
> DocValues based queries get build for single valued float or double fields 
> when negative numbers are involved.
> Steps to reproduce...
> {noformat}
> $ bin/solr -e schemaless -noprompt
> ...
> $ curl -X POST -H 'Content-type:application/json' --data-binary '{ 
> "add-field":{ "name":"f_dv_multi", "type":"tfloat", "stored":"true", 
> "indexed":"false", "docValues":"true", "multiValued":"true" }, "add-field":{ 
> "name":"f_dv_single", "type":"tfloat", "stored":"true", "indexed":"false", 
> "docValues":"true", "multiValued":"false" } }' 
> http://localhost:8983/solr/gettingstarted/schema
> {
>   "responseHeader":{
> "status":0,
> "QTime":84}}
> $ curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"test", "f_dv_multi":-4.3, "f_dv_single":-4.3}]' 
> 'http://localhost:8983/solr/gettingstarted/update/json/docs?commit=true'
> {"responseHeader":{"status":0,"QTime":57}}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_multi:\"-4.3\""}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:"-4.3"'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"f_dv_single:\"-4.3\""}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}
> Explicit range queries (which is how numeric "field" queries are implemented 
> under the cover) are equally problematic...
> {noformat}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_multi:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_multi:[-4.3 TO -4.3]"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"test",
> "f_dv_multi":[-4.3],
> "f_dv_single":-4.3,
> "_version_":1512962117004689408}]
>   }}
> $ curl 
> 'http://localhost:8983/solr/gettingstarted/query?q=f_dv_single:%5B-4.3+TO+-4.3%5D'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"f_dv_single:[-4.3 TO -4.3]"}},
>   "response":{"numFound":0,"start":0,"docs":[]
>   }}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Request access for CWIKI and Wiki

2016-03-18 Thread Kevin Risden
Solr CWIKI (
https://cwiki.apache.org/confluence/display/solr/Apache+Solr+Reference+Guide
)
- committer access
- username - risdenk

Solr Wiki (https://wiki.apache.org/solr/)
- add to ContributorsGroup
- username - KevinRisden

Thanks.

Kevin Risden


[jira] [Commented] (LUCENE-7115) Speed up FieldCache.CacheEntry toString by setting initial StringBuilder capacity

2016-03-18 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202202#comment-15202202
 ] 

Gregory Chanan commented on LUCENE-7115:


I uploaded a patch based on a previous version, going to upload a new one 
shortly.

> Speed up FieldCache.CacheEntry toString by setting initial StringBuilder 
> capacity
> -
>
> Key: LUCENE-7115
> URL: https://issues.apache.org/jira/browse/LUCENE-7115
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>
> Solr can end up printing a lot of these objects via the JmxMonitoredMap, see 
> SOLR-8869 and SOLR-6747 as examples.
> From looking at some profiles, a lot of time and memory are spent resizing 
> the StringBuilder, which doesn't set the initial capacity.
> On my cluster, the strings are a bit over 200 chars; I set the initial 
> capacity to 250 and ran tests calling toString 1000 times.  Tests 
> consistently show 10-15% improvement when setting the initial capacity.
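The effect is easy to reproduce in isolation. A hedged micro-demo of the idea (not the attached patch; exact timings will vary by JVM): a default-sized StringBuilder starts at 16 chars and must copy its backing array several times to reach ~200 chars, while a presized one never resizes.

```java
public class PresizedBuilderDemo {
    static String build(int capacity) {
        // capacity <= 0 means "use the default" (16 chars), forcing several
        // internal array doublings while building a ~200-char string.
        StringBuilder sb = capacity > 0 ? new StringBuilder(capacity)
                                        : new StringBuilder();
        for (int i = 0; i < 20; i++) {
            sb.append("segment-").append(i).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Presized at 250: no resizing for strings under 250 chars.
        String s = build(250);
        System.out.println(s.length());
        System.out.println(s.equals(build(0))); // identical output either way
    }
}
```

The output is byte-for-byte identical; only the allocation behavior changes, which is why this is safe to apply to toString().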



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8814) Support GeoJSON response format

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15197709#comment-15197709
 ] 

ASF subversion and git services commented on SOLR-8814:
---

Commit 36145d02ccc838f50538a8b9d6ff9c68f3ccce22 in lucene-solr's branch 
refs/heads/master from [~ryantxu]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=36145d0 ]

SOLR-8814: Support GeoJSON response format


> Support GeoJSON response format
> ---
>
> Key: SOLR-8814
> URL: https://issues.apache.org/jira/browse/SOLR-8814
> Project: Solr
>  Issue Type: New Feature
>  Components: Response Writers
>Reporter: Ryan McKinley
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: SOLR-8814-add-GeoJSONResponseWriter.patch, 
> SOLR-8814-add-GeoJSONResponseWriter.patch, 
> SOLR-8814-add-GeoJSONResponseWriter.patch
>
>
> With minor changes, we can modify the existing JSON writer to produce a 
> GeoJSON `FeatureCollection` for every SolrDocumentList.  We can then pick a 
> field to use as the geometry type, and use that for the Feature#geometry
> {code}
> "response":{"type":"FeatureCollection","numFound":1,"start":0,"features":[
>   {"type":"Feature",
> "geometry":{"type":"Point","coordinates":[1,2]},
> "properties":{
>   ... the normal solr doc fields here ...}}]
>   }}
> {code}
> This will allow adding solr results directly to various mapping clients like 
> [Leaflet|http://leafletjs.com/]
> 
> This patch will work with Documents that have a spatial field that either:
> 1. Extends AbstractSpatialFieldType
> 2. has a stored value with geojson
> 3. has a stored value that can be parsed by spatial4j (WKT, etc)
> The spatial field is identified with the parameter `geojson.field`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-03-18 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned SOLR-8838:


Assignee: Steve Rowe

> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8823) Implement DatabaseMetaDataImpl.getColumns(String catalog, String schemaPattern, String tableNamePattern, String columnNamePattern)

2016-03-18 Thread Trey Cahill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trey Cahill updated SOLR-8823:
--
Attachment: SOLR-8823.patch

> Implement DatabaseMetaDataImpl.getColumns(String catalog, String 
> schemaPattern, String tableNamePattern, String columnNamePattern)
> --
>
> Key: SOLR-8823
> URL: https://issues.apache.org/jira/browse/SOLR-8823
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: master
>Reporter: Kevin Risden
> Attachments: SOLR-8823.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8865) real-time get does not retrieve values from docValues

2016-03-18 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8865:
---
Attachment: SOLR-8865.patch

> real-time get does not retrieve values from docValues
> -
>
> Key: SOLR-8865
> URL: https://issues.apache.org/jira/browse/SOLR-8865
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Attachments: SOLR-8865.patch, SOLR-8865.patch
>
>
> Uncovered during ad-hoc testing... the _version_ field, which has 
> stored=false docValues=true is not retrieved with realtime-get



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8861) Fix missing CloudSolrClient.connect() before getZkStateReader in solrj.io classes

2016-03-18 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8861:
---
Description: There are a few places in the new solrj.io package that miss 
calling connect before getZkStateReader. This can cause NPE exceptions with 
getZkStateReader in some cases if the SolrCache is closed.  (was: There are a 
few places in the new solrj.io package that miss calling connect before 
getZkStateReader. This can cause NPE exceptions with getZkStateReader in some 
cases if the SolrCache is closed.

There is probably a better way to fix this moving forward, but for 6.0 this 
should be resolved.)

> Fix missing CloudSolrClient.connect() before getZkStateReader in solrj.io 
> classes
> -
>
> Key: SOLR-8861
> URL: https://issues.apache.org/jira/browse/SOLR-8861
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: master, 6.0
>Reporter: Kevin Risden
>Priority: Critical
>
> There are a few places in the new solrj.io package that miss calling connect 
> before getZkStateReader. This can cause NPE exceptions with getZkStateReader 
> in some cases if the SolrCache is closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8740) use docValues by default

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198362#comment-15198362
 ] 

ASF subversion and git services commented on SOLR-8740:
---

Commit 14752476f445436944618a6f1dde9bd787a1f3c9 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1475247 ]

SOLR-8740: use docValues for non-text fields in schema templates


> use docValues by default
> 
>
> Key: SOLR-8740
> URL: https://issues.apache.org/jira/browse/SOLR-8740
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Yonik Seeley
> Fix For: master
>
> Attachments: SOLR-8740.patch, SOLR-8740.patch
>
>
> We should consider switching to docValues for most of our non-text fields.  
> This may be a better default since it is more NRT friendly and acts to avoid 
> OOM errors due to large field cache or UnInvertedField entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-18 Thread Adrien Grand
Welcome Kevin!

On Thu, Mar 17, 2016 at 21:41, Tommaso Teofili wrote:

> Welcome Kevin!
>
>
> Tommaso
>
> On Thu, Mar 17, 2016 at 16:59, Noble Paul wrote:
>
>> Welcome Kevin
>>
>> On Thu, Mar 17, 2016 at 4:53 PM, Mikhail Khludnev <
>> mkhlud...@griddynamics.com> wrote:
>>
>>> Congratulations, Kevin!
>>>
>>> On Wed, Mar 16, 2016 at 11:23 PM, Kevin Risden >> > wrote:
>>>
 Thanks for the warm welcome. It's an honor to be invited to work on
 this project and with so many great people.

 Bio:
 I graduated from Rose-Hulman Institute of Technology in 2012. My
 undergrad revolved around software development, software testing, and
 robotics. In early 2013, I joined Avalon Consulting, LLC, moved down
 to Austin, TX, and first started using Solr. The focus at the time was
 to use Solr as an analytics engine to power charts/graphs. From 2013
 on, I worked a lot on Hadoop and Solr integrations with a continued
 focus on analytics. Providing training and education are two areas
 that I am really passionate about. In addition to my regular work, I
 have been improving the SolrJ JDBC driver to enable more analytics use
 cases.
 Kevin Risden


 On Wed, Mar 16, 2016 at 12:55 PM, Anshum Gupta 
 wrote:
 > Congratulations and Welcome Kevin!
 >
 > On Wed, Mar 16, 2016 at 10:03 AM, David Smiley <
 david.w.smi...@gmail.com>
 > wrote:
 >>
 >> Welcome Kevin!
 >>
 >> (corrected misspelling of your last name in the subject)
 >>
 >> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein 
 wrote:
 >>>
 >>> I'm pleased to announce that Kevin Risden has accepted the PMC's
 >>> invitation to become a committer.
 >>>
 >>> Kevin, it's tradition that you introduce yourself with a brief bio.
 >>>
 >>> I believe your account has been setup and karma has been granted so
 that
 >>> you can add yourself to the committers section of the Who We Are
 page on the
 >>> website:
 >>> .
 >>>
 >>> Congratulations and welcome!
 >>>
 >>>
 >>> Joel Bernstein
 >>>
 >> --
 >> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
 >> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
 >> http://www.solrenterprisesearchserver.com
 >
 >
 >
 >
 > --
 > Anshum Gupta

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


>>>
>>>
>>> --
>>> Sincerely yours
>>> Mikhail Khludnev
>>> Principal Engineer,
>>> Grid Dynamics
>>>
>>> 
>>> 
>>>
>>
>>
>>
>> --
>> -
>> Noble Paul
>>
>


[jira] [Commented] (SOLR-7537) Could not find or load main class org.apache.solr.util.SimplePostTool

2016-03-18 Thread Reji A (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200472#comment-15200472
 ] 

Reji A commented on SOLR-7537:
--

At the very simplest, the workaround to run "post" on Windows is through 
Cygwin; be sure to edit the "post" file as below:

## SOLR_TIP=`dirname "$THIS_SCRIPT"`/..
## SOLR_TIP=`cd "$SOLR_TIP"; pwd`
#SOLR_TIP="C:\MY_SOLR_INSTALL_DIR"
# in my case it was as below
SOLR_TIP="D:\apache\apache-solr\solr-5.5.0"

I ran the 'post' command:
$ ./post -c gettingstarted "/DIRECTORY TO SCAN/"

..
..

312 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/gettingstarted/update...
Time spent: 0:06:22.215






> Could not find or load main class org.apache.solr.util.SimplePostTool
> -
>
> Key: SOLR-7537
> URL: https://issues.apache.org/jira/browse/SOLR-7537
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.1
> Environment: Windows 8.1, cygwin4.3.33
>Reporter: Peng Li
>
> In the "solr-5.1.0/bin" folder, I typed the command below (the "../doc" folder 
> has "readme.docx"):
> sh post -c gettingstarted ../doc
> And I got below exception:
> c:\Java\jdk1.8.0_20/bin/java -classpath 
> /cygdrive/c/Users/lipeng/_Main/Servers/solr-5.1.0/dist/solr-core-5.1.0.jar 
> -Dauto=yes -Dc=gettingstarted -Ddata=files -Drecursive=yes 
> org.apache.solr.util.SimplePostTool ../doc
> Error: Could not find or load main class org.apache.solr.util.SimplePostTool
> I followed instruction from here: 
> http://lucene.apache.org/solr/quickstart.html
> Can you help me take a look? Thank you!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198396#comment-15198396
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit a0d48f873c21ca0ab5ba02748c1659a983aad886 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a0d48f8 ]

SOLR-445: start of a new randomized/chaosmonkey test, currently blocked by 
SOLR-8862 (no monkey yet)


> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
>Assignee: Hoss Man
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures 
> mid-batch? I.e.:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="date">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now Solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  
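The "log a message and continue" option can be sketched as a tolerant batch loop. Everything below (the `indexDoc` helper, the `date` field, the document shape) is hypothetical and for illustration only; it is not Solr's update handler:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class TolerantBatch {
    /** Hypothetical per-document indexer: rejects docs whose "date" field is unparsable. */
    static void indexDoc(Map<String, String> doc) {
        if (doc.containsKey("date")) {
            java.time.LocalDate.parse(doc.get("date")); // throws on bad input
        }
    }

    /** Keep going past bad documents, reporting failed ids instead of aborting the batch. */
    static List<String> addBatch(List<Map<String, String>> batch) {
        List<String> failedIds = new ArrayList<>();
        for (Map<String, String> doc : batch) {
            try {
                indexDoc(doc);
            } catch (RuntimeException e) {
                failedIds.add(doc.get("id")); // record and continue with the next doc
            }
        }
        return failedIds;
    }
}
```

With the three-document batch from the issue, this indexes docs 1 and 3 and reports doc 2 as failed.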



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7114) analyzers-common tests fail with JDK9 EA 110 build

2016-03-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199975#comment-15199975
 ] 

Dawid Weiss commented on LUCENE-7114:
-

Can't remember whether the string buffer optimizations (by Alexey Shipilev) have 
been folded into this release or not. I'll tip him off.

> analyzers-common tests fail with JDK9 EA 110 build
> --
>
> Key: LUCENE-7114
> URL: https://issues.apache.org/jira/browse/LUCENE-7114
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Looks like this:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.fr.TestFrenchLightStemFilter
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFrenchLightStemFilter -Dtests.method=testVocabulary 
> -Dtests.seed=4044297F9BFA5E32 -Dtests.locale=az-Cyrl-AZ -Dtests.timezone=ACT 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.44s J0 | TestFrenchLightStemFilter.testVocabulary <<<
>[junit4]> Throwable #1: org.junit.ComparisonFailure: term 0 
> expected: but was:
> {noformat}
> So far I see these failing with French and Portuguese. It may be a HotSpot 
> issue, as these tests stem more than 10,000 words.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-8858 SolrIndexSearcher#doc() Comple...

2016-03-18 Thread maedhroz
GitHub user maedhroz opened a pull request:

https://github.com/apache/lucene-solr/pull/21

SOLR-8858 SolrIndexSearcher#doc() Completely Ignores Field Filters Unless 
Lazy Field Loading is Enabled

Instead of just discarding fields if lazy loading is not enabled, 
SolrIndexSearcher now passes them through to IndexReader. This means 
IndexReader creates a DocumentStoredFieldVisitor that we can use to later 
determine which fields need to be read.

https://issues.apache.org/jira/browse/SOLR-8858

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maedhroz/lucene-solr SOLR-8858

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/21.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #21


commit fa8075c7861dbc331588dfb5c9e28576e2eb31f2
Author: Caleb Rackliffe 
Date:   2016-03-16T18:15:20Z

SOLR-8858 SolrIndexSearcher#doc() Completely Ignores Field Filters Unless 
Lazy Field Loading is Enabled

Instead of just discarding fields if lazy loading is not enabled, 
SolrIndexSearcher now passes them through to IndexReader. This means 
IndexReader creates a DocumentStoredFieldVisitor that we can use to later 
determine which fields need to be read.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8859) AbstractSpatialFieldType can use ShapeContext to read/write shapes (WKT, GeoJSON)

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198026#comment-15198026
 ] 

ASF subversion and git services commented on SOLR-8859:
---

Commit 022877fefabadd5865c335a5b289874d182ed852 in lucene-solr's branch 
refs/heads/master from [~ryantxu]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=022877f ]

SOLR-8859: read/write Shapes to String


> AbstractSpatialFieldType can use ShapeContext to read/write shapes (WKT, 
> GeoJSON)
> -
>
> Key: SOLR-8859
> URL: https://issues.apache.org/jira/browse/SOLR-8859
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ryan McKinley
>Assignee: Ryan McKinley
>Priority: Minor
> Fix For: master, 6.1
>
> Attachments: SOLR-8859.patch
>
>
> Right now the AbstractSpatialFieldType throws exceptions if it needs to 
> convert to/from a string.  We should use the context to convert



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance.

2016-03-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200780#comment-15200780
 ] 

Mark Miller commented on SOLR-4509:
---

I've just about got a new patch ready to show here.

I still don't have a decent way to hook into httpclient close anymore, so 
that's an issue that needs to be worked around outside of just tests (as I've 
done).

Some things to consider:

Supposedly the stale connection check is not as bad a performance killer as it 
used to be, since it's no longer done on every request.
Without the stale check, when a server drops, even if it comes back up, the 
client might try to use a bad connection; in the past the stale connection 
check could catch that.
However, the stale connection check is still not 100% reliable, and I assume the 
perf optimization has a similar issue as above in the right 
circumstance.
It still would be nice to try and control connection lifecycle from the client 
as much as possible.

> Disable HttpClient stale check for performance.
> ---
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
>  Components: search
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, master
>
> Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and reduction of over 100ms.  This patch was made in 
> the context of a project I'm leading, called Yokozuna, which relies on 
> distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198520#comment-15198520
 ] 

ASF subversion and git services commented on SOLR-8838:
---

Commit 11fd447860f5400c2fcf880bde9477e164606971 in lucene-solr's branch 
refs/heads/branch_5_5 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=11fd447 ]

SOLR-8838: Returning non-stored docValues is incorrect for negative floats and 
doubles.


> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.
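The underlying pitfall is that raw IEEE-754 bit patterns sort in reverse for negative values, so the encoding must flip the low-order bits when the sign bit is set. A minimal sketch of that standard trick (the method name is illustrative; Lucene's NumericUtils uses the same idea):

```java
class SortableFloat {
    /**
     * Map a float to an int whose signed ordering matches the float ordering.
     * Positive floats already sort correctly by their raw bits; negative floats
     * sort in reverse, so the low 31 bits are flipped when the sign bit is set.
     */
    static int floatToSortableInt(float value) {
        int bits = Float.floatToIntBits(value);
        return bits ^ ((bits >> 31) & 0x7fffffff);
    }
}
```

Without the XOR step, -2f compares above -1f by raw bits, which is exactly the kind of sign-handling mistake this issue describes.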



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8838) Returning non-stored docValues is incorrect for floats and doubles

2016-03-18 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-8838.
--
Resolution: Fixed

> Returning non-stored docValues is incorrect for floats and doubles
> --
>
> Key: SOLR-8838
> URL: https://issues.apache.org/jira/browse/SOLR-8838
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Ishan Chattopadhyaya
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8838.patch, SOLR-8838.patch, SOLR-8838.patch
>
>
> In SOLR-8220, we introduced returning non-stored docValues as if they were 
> regular stored fields. The handling of doubles and floats, as introduced 
> there, was incorrect for negative values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8867) frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not match documents w/o a value

2016-03-18 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-8867.

Resolution: Fixed

> frange / ValueSourceRangeFilter / FunctionValues.getRangeScorer should not 
> match documents w/o a value
> --
>
> Key: SOLR-8867
> URL: https://issues.apache.org/jira/browse/SOLR-8867
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8867.patch, SOLR-8867.patch
>
>
> {!frange} currently can match documents w/o a value (because of a default 
> value of 0).
> This only existed historically because we didn't have info about what fields 
> had a value for numerics, and didn't have exists() on FunctionValues.
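The intended behavior can be illustrated with a toy range matcher that consults an exists() check before comparing values. The classes below are simplified stand-ins, not Solr's FunctionValues:

```java
import java.util.HashMap;
import java.util.Map;

class RangeMatch {
    /** Toy stand-in for per-document function values with an exists() check. */
    static class ToyValues {
        private final Map<Integer, Double> values = new HashMap<>();
        void put(int doc, double v) { values.put(doc, v); }
        boolean exists(int doc) { return values.containsKey(doc); }
        double get(int doc) { return values.getOrDefault(doc, 0.0); } // default 0: the old behavior
    }

    /** Correct behavior: a doc without a value never matches the range. */
    static boolean matches(ToyValues vals, int doc, double low, double high) {
        if (!vals.exists(doc)) return false; // without this check, the default 0.0 can match [low, high]
        double v = vals.get(doc);
        return low <= v && v <= high;
    }
}
```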



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4221) Custom sharding

2016-03-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15198715#comment-15198715
 ] 

ASF subversion and git services commented on SOLR-4221:
---

Commit f5a4b0419cd3e8fa3a9c707503ab0f42adfd59f0 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f5a4b04 ]

SOLR-8860: Remove back-compat handling of router format made in SOLR-4221 in 
4.5.0
(cherry picked from commit ae846bf)


> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Fix For: 4.5, master
>
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8740) use docValues by default

2016-03-18 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8740:
---
Attachment: SOLR-8740.patch

Here's an update w/ the other template schemas converted to use docValues.
I'll do some more testing to validate that things work and then commit.

> use docValues by default
> 
>
> Key: SOLR-8740
> URL: https://issues.apache.org/jira/browse/SOLR-8740
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Yonik Seeley
> Fix For: master
>
> Attachments: SOLR-8740.patch, SOLR-8740.patch
>
>
> We should consider switching to docValues for most of our non-text fields.  
> This may be a better default since it is more NRT friendly and acts to avoid 
> OOM errors due to large field cache or UnInvertedField entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-18 Thread Kevin Risden
Thanks for the warm welcome. It's an honor to be invited to work on
this project and with so many great people.

Bio:
I graduated from Rose-Hulman Institute of Technology in 2012. My
undergrad revolved around software development, software testing, and
robotics. In early 2013, I joined Avalon Consulting, LLC, moved down
to Austin, TX, and first started using Solr. The focus at the time was
to use Solr as an analytics engine to power charts/graphs. From 2013
on, I worked a lot on Hadoop and Solr integrations with a continued
focus on analytics. Providing training and education are two areas
that I am really passionate about. In addition to my regular work, I
have been improving the SolrJ JDBC driver to enable more analytics use
cases.
Kevin Risden


On Wed, Mar 16, 2016 at 12:55 PM, Anshum Gupta  wrote:
> Congratulations and Welcome Kevin!
>
> On Wed, Mar 16, 2016 at 10:03 AM, David Smiley 
> wrote:
>>
>> Welcome Kevin!
>>
>> (corrected misspelling of your last name in the subject)
>>
>> On Wed, Mar 16, 2016 at 1:02 PM Joel Bernstein  wrote:
>>>
>>> I'm pleased to announce that Kevin Risden has accepted the PMC's
>>> invitation to become a committer.
>>>
>>> Kevin, it's tradition that you introduce yourself with a brief bio.
>>>
>>> I believe your account has been setup and karma has been granted so that
>>> you can add yourself to the committers section of the Who We Are page on the
>>> website:
>>> .
>>>
>>> Congratulations and welcome!
>>>
>>>
>>> Joel Bernstein
>>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>
>
>
>
> --
> Anshum Gupta

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Kevin Risden as Lucene/Solr committer

2016-03-18 Thread Varun Thacker
Welcome Kevin!

On Wed, Mar 16, 2016 at 10:44 PM, Steve Rowe  wrote:

> Welcome Kevin!
>
> --
> Steve
> www.lucidworks.com
>
> > On Mar 16, 2016, at 1:02 PM, Joel Bernstein  wrote:
> >
> > I'm pleased to announce that Kevin Risden has accepted the PMC's
> invitation to become a committer.
> >
> > Kevin, it's tradition that you introduce yourself with a brief bio.
> >
> > I believe your account has been setup and karma has been granted so that
> you can add yourself to the committers section of the Who We Are page on
> the website:
> > .
> >
> > Congratulations and welcome!
> >
> >
> > Joel Bernstein
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 


Regards,
Varun Thacker


[jira] [Commented] (SOLR-6021) Always persist router.field in cluster state so CloudSolrServer can route documents correctly

2016-03-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15197987#comment-15197987
 ] 

Noble Paul commented on SOLR-6021:
--

router.field is not the same as uniqueKey. You could use an alternate field to 
route your docs. Imagine I have houses as docs and I choose to route them based 
on zip codes.
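That kind of field-based routing can be sketched with a simple hash-modulo. Solr's compositeId router actually uses MurmurHash over hash ranges, so this is only an illustration of the concept:

```java
class Router {
    /** Route a document to a shard by hashing its routing field (e.g. a zip code). */
    static int shardFor(String routeValue, int numShards) {
        // floorMod keeps the result non-negative even for negative hashCodes
        return Math.floorMod(routeValue.hashCode(), numShards);
    }
}
```

The point of persisting router.field is that the client can compute this deterministically for any document without assuming the routing field is "id".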

> Always persist router.field in cluster state so CloudSolrServer can route 
> documents correctly
> -
>
> Key: SOLR-6021
> URL: https://issues.apache.org/jira/browse/SOLR-6021
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
> Attachments: SOLR-6021.patch, SOLR-6021.patch
>
>
> CloudSolrServer has idField as "id" which is used for hashing and 
> distributing documents. There is a setter to change it as well.
> IMO, we should use the correct uniqueKey automatically. I propose that we 
> start storing router.field always in cluster state and set it to the 
> uniqueKey field name by default. Then CloudSolrServer would not need to 
> assume an "id" field by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-8586) Implement hash over all documents to check for shard synchronization

2016-03-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-8586:
---
  Assignee: Yonik Seeley

> Implement hash over all documents to check for shard synchronization
> 
>
> Key: SOLR-8586
> URL: https://issues.apache.org/jira/browse/SOLR-8586
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 5.5, master
>
> Attachments: SOLR-8586.patch, SOLR-8586.patch, SOLR-8586.patch, 
> SOLR-8586.patch
>
>
> An order-independent hash across all of the versions in the index should 
> suffice.  The hash itself is pretty easy, but we need to figure out 
> when/where to do this check (for example, I think PeerSync is currently used 
> in multiple contexts and this check would perhaps not be appropriate for all 
> PeerSync calls?)
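An order-independent hash like the one described can be built by mixing each version and summing, since addition is commutative. This sketches the idea only, not the patch's implementation:

```java
class IndexHash {
    /** Order-independent hash over version numbers: mix each, then sum (commutative). */
    static long hashOfVersions(long[] versions) {
        long sum = 0;
        for (long v : versions) {
            long x = v * 0x9E3779B97F4A7C15L; // multiply by a 64-bit golden-ratio constant
            x ^= x >>> 32;                    // fold high bits down before summing
            sum += x;
        }
        return sum;
    }
}
```

Two replicas with the same set of versions produce the same hash regardless of the order in which documents arrived, which is what makes this usable as a shard-synchronization check.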



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7061) fix remaining api issues with XYZPoint classes

2016-03-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199439#comment-15199439
 ] 

Robert Muir commented on LUCENE-7061:
-

Math.nextUp/Math.nextDown
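For floating-point bounds, the analogue of subtracting 1 from an integer bound is stepping to the adjacent representable value with Math.nextUp/Math.nextDown. A minimal sketch of both conversions (helper names are illustrative):

```java
class Bounds {
    /** Convert an exclusive integer upper bound to an inclusive one. */
    static long inclusiveUpper(long exclusiveUpper) {
        return exclusiveUpper - 1;
    }

    /** Exclusive double upper bound -> inclusive: the next representable double below it. */
    static double inclusiveUpper(double exclusiveUpper) {
        return Math.nextDown(exclusiveUpper);
    }

    /** Exclusive double lower bound -> inclusive: the next representable double above it. */
    static double inclusiveLower(double exclusiveLower) {
        return Math.nextUp(exclusiveLower);
    }
}
```

With helpers like these, an inclusive-only PointRangeQuery loses nothing, since any exclusive bound can be rewritten before constructing the query.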

> fix remaining api issues with XYZPoint classes
> --
>
> Key: LUCENE-7061
> URL: https://issues.apache.org/jira/browse/LUCENE-7061
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: master, 6.0
>
> Attachments: LUCENE-7061.patch
>
>
> There are still some major problems today:
> XYZPoint.newRangeQuery has "brain damage" from variable length terms. nulls 
> for open ranges make no sense: these are fixed-width types and instead you 
> can use things like Integer.maxValue. Removing the nulls is safe, as we can 
> just switch to primitive types instead of boxed types.
> XYZPoint.newRangeQuery requires boolean arrays for inclusive/exclusive, but 
> thats just more brain damage. If you want to exclude an integer, you just 
> subtract 1 from it and other simple stuff.
> For the apis, this means Instead of:
> {code}
> public static Query newRangeQuery(String field, Long lowerValue, boolean 
> lowerInclusive, Long upperValue, boolean upperInclusive);
>   
> public static Query newMultiRangeQuery(String field, Long[] lowerValue, 
> boolean lowerInclusive[], Long[] upperValue, boolean upperInclusive[]);
> {code}
> we have:
> {code}
> public static Query newRangeQuery(String field, long lowerValue, long 
> upperValue);
> public static Query newRangeQuery(String field, long[] lowerValue, long[] 
> upperValue);
> {code}
> PointRangeQuery is horribly complex due to these nulls and boolean arrays, 
> and need not be. Now it only works "inclusive" and is much simpler.
> XYZPoint.newSetQuery throws IOException, just creating the query. This is 
> very confusing and unnecessary (no i/o happens).
> LatLonPoint's bounding box query is not inclusive like the other geo. And the 
> test does not fail!
> I also found a few missing checks here and there while cleaning up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance.

2016-03-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200893#comment-15200893
 ] 

Mark Miller commented on SOLR-4509:
---

Nice, if we move to the new APIs we also get most of the work I've been doing 
here built in, including the idle connection sweeper.


> Disable HttpClient stale check for performance.
> ---
>
> Key: SOLR-4509
> URL: https://issues.apache.org/jira/browse/SOLR-4509
> Project: Solr
>  Issue Type: Improvement
>  Components: search
> Environment: 5 node SmartOS cluster (all nodes living in same global 
> zone - i.e. same physical machine)
>Reporter: Ryan Zezeski
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, master
>
> Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
> SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
> baremetal-stale-nostale-med-latency.svg, 
> baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg
>
>
> By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
> increase in throughput and reduction of over 100ms.  This patch was made in 
> the context of a project I'm leading, called Yokozuna, which relies on 
> distributed search.
> Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
> Here's a write-up I did on my findings: 
> http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
> I'm happy to answer any questions or make changes to the patch to make it 
> acceptable.
> ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_72) - Build # 172 - Still Failing!

2016-03-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/172/
Java: 32bit/jdk1.8.0_72 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=7330, 
name=testExecutor-3743-thread-4, state=RUNNABLE, 
group=TGRP-UnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=7330, name=testExecutor-3743-thread-4, 
state=RUNNABLE, group=TGRP-UnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:39101/kfyh
at __randomizedtesting.SeedInfo.seed([9728485661A72789]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:583)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: http://127.0.0.1:39101/kfyh
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$0(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 11358 lines...]
   [junit4] Suite: org.apache.solr.cloud.UnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.UnloadDistributedZkTest_9728485661A72789-001/init-core-data-001
   [junit4]   2> 767784 INFO  
(SUITE-UnloadDistributedZkTest-seed#[9728485661A72789]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /kfyh/
   [junit4]   2> 767787 INFO  
(TEST-UnloadDistributedZkTest.test-seed#[9728485661A72789]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 767787 INFO  (Thread-2183) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 767787 INFO  (Thread-2183) [] o.a.s.c.ZkTestServer 
Starting server
   

[jira] [Commented] (LUCENE-7111) DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and Long.MIN_VALUE

2016-03-18 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15202276#comment-15202276
 ] 

Steve Rowe commented on LUCENE-7111:


+1 to the patch.

> DocValuesRangeQuery.newLongRange behaves incorrectly for Long.MAX_VALUE and 
> Long.MIN_VALUE
> --
>
> Key: LUCENE-7111
> URL: https://issues.apache.org/jira/browse/LUCENE-7111
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ishan Chattopadhyaya
> Fix For: 6.0
>
> Attachments: LUCENE-7111.patch, LUCENE-7111.patch, LUCENE-7111.patch
>
>
> It seems that the following queries return all documents, which is unexpected:
> {code}
> DocValuesRangeQuery.newLongRange("dv", Long.MAX_VALUE, Long.MAX_VALUE, false, 
> true);
> DocValuesRangeQuery.newLongRange("dv", Long.MIN_VALUE, Long.MIN_VALUE, true, 
> false);
> {code}
> In Solr, floats and doubles are converted to longs and -0d gets converted to 
> Long.MIN_VALUE, and queries like {-0d TO 0d] could fail due to this, 
> returning all documents in the index.
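
For context, here is a minimal standalone sketch of the likely failure mode. This is an assumption about the mechanism, not the actual Lucene code or patch: exclusive bounds are commonly converted to inclusive ones by adding or subtracting 1, and at Long.MAX_VALUE / Long.MIN_VALUE that arithmetic wraps around, turning an empty range into one that covers every value:

```java
// Hypothetical illustration (not the Lucene implementation) of how an
// exclusive bound at the extremes of the long range can overflow when it
// is naively converted to an inclusive bound.
public class ExclusiveBoundOverflow {

    // Naive conversion of an exclusive lower bound to an inclusive one.
    static long inclusiveLower(long lower, boolean includeLower) {
        return includeLower ? lower : lower + 1; // wraps at Long.MAX_VALUE
    }

    // Naive conversion of an exclusive upper bound to an inclusive one.
    static long inclusiveUpper(long upper, boolean includeUpper) {
        return includeUpper ? upper : upper - 1; // wraps at Long.MIN_VALUE
    }

    public static void main(String[] args) {
        // The range (Long.MAX_VALUE, Long.MAX_VALUE] should match nothing,
        // but the lower bound wraps to Long.MIN_VALUE, so the effective
        // range becomes [Long.MIN_VALUE, Long.MAX_VALUE]: all values.
        long min = inclusiveLower(Long.MAX_VALUE, false);
        long max = Long.MAX_VALUE;
        System.out.println(min);        // -9223372036854775808
        System.out.println(min <= max); // true: the "empty" range matches everything
    }
}
```

A fix along these lines would need to detect the empty-range cases (exclusive bound at Long.MAX_VALUE or Long.MIN_VALUE) before the conversion and return a match-nothing query instead.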



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 964 - Still Failing

2016-03-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/964/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=113603, name=testExecutor-8999-thread-3, state=RUNNABLE, group=TGRP-HdfsUnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=113603, name=testExecutor-8999-thread-3, state=RUNNABLE, group=TGRP-HdfsUnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:39699/ojyv/zy
at __randomizedtesting.SeedInfo.seed([ED5B5F3CAA587275]:0)
at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$3(BasicDistributedZkTest.java:583)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$6(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:39699/ojyv/zy
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCores$3(BasicDistributedZkTest.java:581)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:482)
... 8 more




Build Log:
[...truncated 12518 lines...]
   [junit4] Suite: org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsUnloadDistributedZkTest_ED5B5F3CAA587275-001/init-core-data-001
   [junit4]   2> 3960719 INFO  
(SUITE-HdfsUnloadDistributedZkTest-seed#[ED5B5F3CAA587275]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
/ojyv/zy
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 3960753 WARN  
(SUITE-HdfsUnloadDistributedZkTest-seed#[ED5B5F3CAA587275]-worker) [] 
o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   
