Re: [VOTE] Release Lucene/Solr 5.5.2 RC2

2016-06-21 Thread Shalin Shekhar Mangar
+1

SUCCESS! [2:19:37.075305]

On Tue, Jun 21, 2016 at 10:18 PM, Steve Rowe  wrote:

> Please vote for release candidate 2 for Lucene/Solr 5.5.2
>
> The artifacts can be downloaded from:
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC2-rev8e5d40b22a3968df065dfc078ef81cbb031f0e4a/
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC2-rev8e5d40b22a3968df065dfc078ef81cbb031f0e4a/
>
> +1 from me - Docs, changes and javadocs look good, and smoke tester says:
> SUCCESS! [0:32:02.113685]
>
> --
> Steve
> www.lucidworks.com
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Commented] (SOLR-9230) Change of default to BinaryRequestWriter breaks some use cases

2016-06-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343707#comment-15343707
 ] 

David Smiley commented on SOLR-9230:


Also related to SOLR-8866.  Perhaps the same approach could be taken here -- 
throw an exception if we don't know the type.  Better to fail than silently do 
the wrong thing.

It's a separate matter, I think, to handle BigDecimal/BigInteger.

> Change of default to BinaryRequestWriter breaks some use cases
> --
>
> Key: SOLR-9230
> URL: https://issues.apache.org/jira/browse/SOLR-9230
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Eirik Lygre
>
> From Solr 6.0 onwards, SOLR-8595 changes the default writer in HttpSolrClient 
> (et al) from RequestWriter to BinaryRequestWriter.
> The RequestWriter writes java.math.BigDecimal values using a simple 
> toString() on the value. This means that a BigDecimal-value is passed to the 
> server using its text representation, which is then mapped into whatever the 
> server wants. (The RequestWriter probably uses toString() on anything it sees)
> The BinaryRequestWriter instead handles unknown value types by writing a 
> string containing the class name, a colon, and then the toString() value. 
> This means that a BigDecimal-value is passed to the server as a text 
> representation "java.math.BigDecimal:12345", which the server cannot convert 
> to a number, and which then stops indexing.
> I'm not entirely sure that this behaviour is a bug, but I'm fairly sure that 
> the quiet change of behaviour qualifies. The "Trivial Patch" (quote from 
> SOLR-8595) isn't, when straightforward indexing scenarios quietly stop 
> working.
> There are several possible paths forward:
> * Have the BinaryRequestWriter (really the JavaBinCodec) encode 
> java.lang.Number values as Strings, the way the RequestWriter does
> * Add something in release notes to inform users about the change
> SOLR-4021 describes the problem, but the change of default writer increases 
> problem visibility. SOLR-6165 somehow seems relevant. 
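To make the reported difference concrete, here is a minimal, invented sketch (plain Java, not the actual RequestWriter/JavaBinCodec code; all names are hypothetical) of the two fallback strategies for an unknown value type, plus the fail-fast alternative suggested in the comment above:

```java
import java.math.BigDecimal;

// Hypothetical illustration of the fallback strategies described in this
// issue; class and method names are invented, not SolrJ API.
public class UnknownTypeFallback {

    // RequestWriter-style fallback: plain toString(), which the server can
    // usually coerce into the field's type.
    static String textFallback(Object value) {
        return value.toString();
    }

    // BinaryRequestWriter-style fallback as described in the report:
    // class name + ':' + toString(), which the server cannot parse as a number.
    static String binaryFallback(Object value) {
        return value.getClass().getName() + ':' + value.toString();
    }

    // Fail-fast alternative suggested in the comment: refuse to guess.
    static String failFast(Object value) {
        throw new IllegalArgumentException(
            "Unsupported value type: " + value.getClass().getName());
    }

    public static void main(String[] args) {
        BigDecimal v = new BigDecimal("12345");
        System.out.println(textFallback(v));   // 12345
        System.out.println(binaryFallback(v)); // java.math.BigDecimal:12345
    }
}
```

The second form is the "java.math.BigDecimal:12345" string the reporter observed breaking indexing.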



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overridden

2016-06-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343659#comment-15343659
 ] 

David Smiley commented on SOLR-9237:


+1.  Personally I wouldn't have bothered with that constructor only to call it 
with (null,null) but whatever.

> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be 
> overridden
> ---
>
> Key: SOLR-9237
> URL: https://issues.apache.org/jira/browse/SOLR-9237
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 6.1
>Reporter: Gethin James
>Assignee: Jan Høydahl
> Fix For: 6.1.1, 6.2, master (7.0)
>
> Attachments: SOLR-9237.patch
>
>
> With *6.1.0* the *protected* method 
> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter passes in a 
> *private* class called FvhContainer which makes it very difficult to extend.
> I have code that extends DefaultSolrHighlighter which I can't fix due to this 
> issue.
> Could you consider making FvhContainer "protected" and giving it a constructor?
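The requested fix boils down to a standard visibility pattern; this invented, stand-alone sketch (the names only mirror the report, it is not Solr's highlighter code) shows why a protected nested class with a usable constructor restores extensibility, whereas a private one would make the overriding signature unwritable:

```java
// Invented illustration of the fix requested above: if a protected method's
// parameter type is a *private* nested class, subclasses cannot even write
// the overriding signature. Making the nested class protected (with a
// constructor) makes the method overridable.
class HighlighterBase {

    // Protected rather than private, as the issue requests, so subclasses
    // can reference and construct it.
    protected static class Context {
        protected final String fieldName;

        protected Context(String fieldName) {
            this.fieldName = fieldName;
        }
    }

    protected String highlight(Context ctx) {
        return "base:" + ctx.fieldName;
    }
}

class CustomHighlighter extends HighlighterBase {
    @Override
    protected String highlight(Context ctx) { // compiles only because Context is visible
        return "custom:" + ctx.fieldName;
    }

    public static void main(String[] args) {
        HighlighterBase h = new CustomHighlighter();
        System.out.println(h.highlight(new Context("title"))); // custom:title
    }
}
```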







[jira] [Comment Edited] (LUCENE-7287) New lemmatizer plugin for the Ukrainian language.

2016-06-21 Thread Andriy Rysin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343258#comment-15343258
 ] 

Andriy Rysin edited comment on LUCENE-7287 at 6/22/16 3:07 AM:
---

I don't know much about Solr, but I think MorfologikFilterFactory uses the 
dictionary= parameter instead of dictionary-resource=; see
https://lucene.apache.org/core/6_1_0/analyzers-morfologik/org/apache/lucene/analysis/morfologik/MorfologikFilterFactory.html

Also would that mean that we don't get the stop words filter and 
apostrophe/stress character normalization?


was (Author: arysin):
I don't know much about solr, but I think MorfologikFilterFactory uses 
dictionary= parameter instead of dictionary-resource=
https://lucene.apache.org/core/6_1_0/analyzers-morfologik/org/apache/lucene/analysis/morfologik/MorfologikFilterFactory.html

> New lemmatizer plugin for the Ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303







[jira] [Commented] (LUCENE-7287) New lemmatizer plugin for the Ukrainian language.

2016-06-21 Thread Andriy Rysin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343258#comment-15343258
 ] 

Andriy Rysin commented on LUCENE-7287:
--

I don't know much about Solr, but I think MorfologikFilterFactory uses the 
dictionary= parameter instead of dictionary-resource=; see
https://lucene.apache.org/core/6_1_0/analyzers-morfologik/org/apache/lucene/analysis/morfologik/MorfologikFilterFactory.html
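For reference, a Solr field type using MorfologikFilterFactory passes the dictionary with the `dictionary` attribute. This is an illustrative snippet only — the field-type name and the dictionary resource path below are placeholders, not taken from the patch under discussion:

```xml
<fieldType name="text_uk" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- 'dictionary' (not 'dictionary-resource') names the Morfologik
         dictionary resource; the path here is a placeholder -->
    <filter class="solr.MorfologikFilterFactory"
            dictionary="org/example/morfologik/ukrainian.dict"/>
  </analyzer>
</fieldType>
```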

> New lemmatizer plugin for the Ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303







[jira] [Commented] (SOLR-4584) Request proxy mechanism does not work if rows param is equal to zero

2016-06-21 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343256#comment-15343256
 ] 

Anshum Gupta commented on SOLR-4584:


This isn't a 6.0 fix. Seems like something is off here.

> Request proxy mechanism does not work if rows param is equal to zero
> ---
>
> Key: SOLR-4584
> URL: https://issues.apache.org/jira/browse/SOLR-4584
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.2
> Environment: Linux Centos 6, Tomcat 7
>Reporter: Yago Riveiro
>Assignee: Mark Miller
> Fix For: 4.3, 6.0
>
> Attachments: Screen Shot 00.png, Screen Shot 01.png, Screen Shot 
> 02.png, Screen Shot 03.png, select
>
>
> If I try to do a request like:
> http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select?q=*:*&rows=0
> The request fails. The Solr UI logging shows this error:
> {code:java} 
> null:org.apache.solr.common.SolrException: Error trying to proxy request for 
> url: http://192.168.20.47:8983/solr/ST-3A856BBCA3_12/select
> {code} 
> Chrome says:
> This webpage is not available
> The webpage at 
> http://192.168.20.47:8983/solr/ST-038412DCC2_0612/query?q=id:*&rows=0 might 
> be temporarily down or it may have moved permanently to a new web address.
> Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error.
> If the rows param is set to 4 or higher, the query returns data as 
> expected.







[jira] [Commented] (SOLR-6492) Solr field type that supports multiple, dynamic analyzers

2016-06-21 Thread Trey Grainger (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343254#comment-15343254
 ] 

Trey Grainger commented on SOLR-6492:
-

Hi [~krantiparisa] and [~dannytei1]. Apologies for the long lapse without a 
response on this issue. I won't get into the reasons here (combination of 
personal and professional commitments), but I just wanted to say that I expect 
to pick this issue back up in the near future and continue work on this patch.

In the meantime, I have added an ASL 2.0 license to the current code (from Solr 
in Action) so that folks can feel free to use what's there now: 
https://github.com/treygrainger/solr-in-action/tree/master/src/main/java/sia/ch14

I'll turn what's there now into a patch, update it to Solr trunk, and keep 
iterating on it until the folks commenting on this issue are satisfied with the 
design and capabilities. Stay tuned...

> Solr field type that supports multiple, dynamic analyzers
> -
>
> Key: SOLR-6492
> URL: https://issues.apache.org/jira/browse/SOLR-6492
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Reporter: Trey Grainger
> Fix For: 5.0
>
>
> A common request - particularly for multilingual search - is to be able to 
> support one or more dynamically-selected analyzers for a field. For example, 
> someone may have a "content" field and pass in a document in Greek (using an 
> Analyzer with Tokenizer/Filters for Greek), a separate document in English 
> (using an English Analyzer), and possibly even a field with mixed-language 
> content in Greek and English. This latter case could pass the content 
> separately through both an analyzer defined for Greek and another Analyzer 
> defined for English, stacking or concatenating the token streams based upon 
> the use-case.
> There are some distinct advantages in terms of index size and query 
> performance which can be obtained by stacking terms from multiple analyzers 
> in the same field instead of duplicating content in separate fields and 
> searching across multiple fields. 
> Other non-multilingual use cases may include things like switching to a 
> different analyzer for the same field to remove a feature (i.e. turning 
> on/off query-time synonyms against the same field on a per-query basis).
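The "stacking" idea above can be sketched independently of Lucene. This invented, stdlib-only example (real analyzers would also carry position and offset attributes, which are omitted here) simply concatenates the token lists that two hypothetical analyzers produce for one field value:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// Invented, Lucene-free sketch of stacking tokens from multiple analyzers
// into one field, as described in the issue. A token is just a string here.
public class StackedTokens {

    // Run every analyzer over the same text and concatenate their streams.
    static List<String> stack(String text, List<Function<String, List<String>>> analyzers) {
        List<String> out = new ArrayList<>();
        for (Function<String, List<String>> analyzer : analyzers) {
            out.addAll(analyzer.apply(text));
        }
        return out;
    }

    // Two toy "analyzers": whitespace tokens, and lowercased tokens.
    static List<String> demo() {
        Function<String, List<String>> whitespace =
            s -> Arrays.asList(s.split("\\s+"));
        Function<String, List<String>> lowercased =
            s -> Arrays.asList(s.toLowerCase().split("\\s+"));
        return stack("Hello World", Arrays.asList(whitespace, lowercased));
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [Hello, World, hello, world]
    }
}
```

Indexing the stacked list into a single field is what saves the duplicated content and cross-field queries the issue mentions.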







[jira] [Commented] (SOLR-9241) Rebalance API for SolrCloud

2016-06-21 Thread Trey Grainger (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343228#comment-15343228
 ] 

Trey Grainger commented on SOLR-9241:
-

I'm also very excited to see this patch. For the next evolution of Solr's 
scalability (and ultimately auto-scaling), these are exactly the kinds of core 
capabilities we need for seamlessly scaling up/down, resharding, and 
redistributing shards and replicas across a cluster. 

The smart merge looks interesting - seems like effectively a way to index into 
a larger number of shards (for indexing throughput) while merging them into a 
smaller number of shards for searching, enabling scaling of indexing and 
searching resources independently. This obviously won't work well with 
Near-Realtime Searching, but I'd be curious to hear more explanation about how 
this works in practice for SolrCloud clusters that don't need NRT search.

Agreed with Joel's comments about the update to trunk vs. 4.6.1. One thing that 
seems to have been added since 4.6.1 that probably overlaps with this patch is 
the Replica Placement Strategies (SOLR-6220) vs. the Allocation Strategies 
implemented here.

The rest of the patch seems like all new objects that don't overlap much with 
the current code base. Would be interesting to know how much has changed 
between 4.6.1 to 6.1 collections/SolrCloud-wise that would create conflicts 
with this patch. Am obviously hoping not too much...

Either way, very excited about the contribution and about the potential for 
getting these capabilities integrated into Solr.

> Rebalance API for SolrCloud
> ---
>
> Key: SOLR-9241
> URL: https://issues.apache.org/jira/browse/SOLR-9241
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.6.1
> Environment: Ubuntu, Mac OsX
>Reporter: Nitin Sharma
>  Labels: Cluster, SolrCloud
> Fix For: 4.6.1
>
> Attachments: rebalance.diff
>
>   Original Estimate: 2,016h
>  Remaining Estimate: 2,016h
>
> This is v1 of the patch for the SolrCloud Rebalance API (as described in 
> http://engineering.bloomreach.com/solrcloud-rebalance-api/), built at 
> Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API is to 
> provide a zero-downtime mechanism to perform data manipulation and efficient 
> core allocation in SolrCloud. This API was envisioned to be the base layer 
> that enables SolrCloud to be an auto-scaling platform (and work in unison 
> with other complementary monitoring and scaling features).
> Patch Status:
> ===
> The patch is work in progress and incremental. We have done a few rounds of 
> code cleanup. We wanted to get the patch going first to get initial 
> feedback. We will continue to work on making it more open-source friendly and 
> easily testable.
>  Deployment Status:
> 
> The platform is deployed in production at Bloomreach and has been 
> battle-tested under large-scale load (millions of documents and hundreds of 
> collections).
>  Internals:
> =
> The internals of the API and performance : 
> http://engineering.bloomreach.com/solrcloud-rebalance-api/
> It is built on top of the admin Collections API as an action (with various 
> flavors). At a high level, the rebalance API provides two constructs:
> Scaling Strategy: Decides how to move the data. Every flavor has multiple 
> options, which can be reviewed in the API spec.
> Re-distribute - Move data around the cluster based on capacity/allocation.
> Auto Shard - Dynamically shard a collection to any size.
> Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
> into a smaller one (the source shard count should be divisible by the 
> destination's).
> Scale Up - Add replicas on the fly.
> Scale Down - Remove replicas on the fly.
> Allocation Strategy: Decides where to put the data (nodes with the least 
> cores, nodes that do not have this collection, etc.). Custom implementations 
> can be built on top as well. Another example is availability-zone awareness: 
> distribute data such that every replica is placed in a different availability 
> zone to support HA.
>  Detailed API Spec:
> 
>   https://github.com/bloomreach/solrcloud-rebalance-api
>  Contributors:
> =
>   Nitin Sharma
>   Suruchi Shah
>  Questions/Comments:
> =
>   You can reach me at nitin.sha...@bloomreach.com
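An allocation strategy like "nodes with the least cores" reduces to a simple comparison over cluster state. This invented, stdlib-only sketch (the interface and names are hypothetical, not the patch's API) shows the shape of such a pluggable strategy:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Invented sketch of a pluggable allocation strategy as described above:
// given a node -> core-count map, pick the node where a new replica goes.
public class AllocationStrategyDemo {

    interface AllocationStrategy {
        String pickNode(Map<String, Integer> coresPerNode);
    }

    // "Nodes with the least cores" flavor.
    static final AllocationStrategy LEAST_CORES = cores ->
        cores.entrySet().stream()
             .min(Map.Entry.comparingByValue())
             .map(Map.Entry::getKey)
             .orElseThrow(() -> new IllegalStateException("no live nodes"));

    public static void main(String[] args) {
        Map<String, Integer> cores = new LinkedHashMap<>();
        cores.put("node1", 5);
        cores.put("node2", 2);
        cores.put("node3", 7);
        System.out.println(LEAST_CORES.pickNode(cores)); // node2
    }
}
```

Other flavors (nodes without this collection, availability-zone-aware placement) would be further implementations of the same interface.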







[JENKINS] Lucene-Solr-Tests-6.x - Build # 282 - Still Failing

2016-06-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/282/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [SolrCore, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([F41004301C5B6057]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=981, name=searcherExecutor-364-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=981, name=searcherExecutor-364-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

[JENKINS] Lucene-Solr-5.5-Windows (64bit/jdk1.7.0_80) - Build # 95 - Still Failing!

2016-06-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/95/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 8 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, TransactionLog, MockDirectoryWrapper, 
MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 8 object(s) that were not 
released!!! [MDCAwareThreadPoolExecutor, TransactionLog, MockDirectoryWrapper, 
MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([CD2CA3F263D747AD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:238)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_CD2CA3F263D747AD-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog\tlog.001:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_CD2CA3F263D747AD-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog\tlog.001:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_CD2CA3F263D747AD-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_CD2CA3F263D747AD-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_CD2CA3F263D747AD-001\tempDir-001\node2\testschemaapi_shard1_replica1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_CD2CA3F263D747AD-001\tempDir-001\node2\testschemaapi_shard1_replica1\data


Re: VOTE: Apache Solr Ref Guide for 6.1

2016-06-21 Thread Steve Rowe
+1

--
Steve
www.lucidworks.com

> On Jun 21, 2016, at 2:19 PM, Cassandra Targett  wrote:
> 
> Please VOTE to release the Apache Solr Ref Guide for 6.1.
> 
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.1-RC0/
> 
> $ more /apache-solr-ref-guide-6.1.pdf.sha1
> 5929b03039e99644bc4ef23b37088b343e2ff0c8  apache-solr-ref-guide-6.1.pdf
> 
> Here's my +1.
> 
> Thanks,
> Cassandra
> 
> 





[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 98 - Still Failing

2016-06-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/98/

1 tests failed.
FAILED:  
org.apache.lucene.spatial.geopoint.search.TestLegacyGeoPointQuery.testRandomBig

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at 
__randomizedtesting.SeedInfo.seed([F85175520DCDF9C2:7F0608DD9C948542]:0)
at 
org.apache.lucene.util.fst.ByteSequenceOutputs.read(ByteSequenceOutputs.java:129)
at 
org.apache.lucene.util.fst.ByteSequenceOutputs.read(ByteSequenceOutputs.java:35)
at org.apache.lucene.util.fst.FST.readNextRealArc(FST.java:1088)
at org.apache.lucene.util.fst.FST.pack(FST.java:1769)
at org.apache.lucene.util.fst.Builder.finish(Builder.java:500)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.finish(MemoryPostingsFormat.java:267)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$MemoryFieldsConsumer.write(MemoryPostingsFormat.java:401)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:198)
at 
org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:216)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:101)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4316)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3893)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2055)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1888)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1845)
at 
org.apache.lucene.geo.BaseGeoPointTestCase.verifyRandomRectangles(BaseGeoPointTestCase.java:786)
at 
org.apache.lucene.geo.BaseGeoPointTestCase.verify(BaseGeoPointTestCase.java:743)
at 
org.apache.lucene.geo.BaseGeoPointTestCase.doTestRandom(BaseGeoPointTestCase.java:692)
at 
org.apache.lucene.geo.BaseGeoPointTestCase.testRandomBig(BaseGeoPointTestCase.java:623)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)




Build Log:
[...truncated 9223 lines...]
   [junit4] Suite: org.apache.lucene.spatial.geopoint.search.TestLegacyGeoPointQuery
   [junit4] IGNOR/A 0.11s J1 | TestLegacyGeoPointQuery.testRandomDistance
   [junit4]> Assumption #1: legacy encoding is too slow/hangs on this test
   [junit4] IGNOR/A 0.00s J1 | TestLegacyGeoPointQuery.testRandomDistanceHuge
   [junit4]> Assumption #1: legacy encoding is too slow/hangs on this test
   [junit4] IGNOR/A 0.00s J1 | TestLegacyGeoPointQuery.testSamePointManyTimes
   [junit4]> Assumption #1: legacy encoding goes OOM on this test
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestLegacyGeoPointQuery -Dtests.method=testRandomBig -Dtests.seed=F85175520DCDF9C2 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt -Dtests.locale=ar-QA -Dtests.timezone=Australia/Brisbane -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   120s J1 | TestLegacyGeoPointQuery.testRandomBig <<<
   [junit4]> Throwable #1: java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4]>at __randomizedtesting.SeedInfo.seed([F85175520DCDF9C2:7F0608DD9C948542]:0)
   [junit4]>at org.apache.lucene.util.fst.ByteSequenceOutputs.read(ByteSequenceOutputs.java:129)
   [junit4]>at org.apache.lucene.util.fst.ByteSequenceOutputs.read(ByteSequenceOutputs.java:35)
   [junit4]>at org.apache.lucene.util.fst.FST.readNextRealArc(FST.java:1088)
   

[jira] [Created] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use

2016-06-21 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-9242:
--

 Summary: Collection level backup/restore should provide a param 
for specifying the repository implementation it should use
 Key: SOLR-9242
 URL: https://issues.apache.org/jira/browse/SOLR-9242
 Project: Solr
  Issue Type: Bug
Reporter: Hrishikesh Gadre


SOLR-7374 provides BackupRepository interface to enable storing Solr index data 
to a configured file-system (e.g. HDFS, local file-system etc.). This JIRA is 
to track the work required to extend this functionality at the collection level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use

2016-06-21 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-9242:
---
Issue Type: Improvement  (was: Bug)

> Collection level backup/restore should provide a param for specifying the 
> repository implementation it should use
> -
>
> Key: SOLR-9242
> URL: https://issues.apache.org/jira/browse/SOLR-9242
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hrishikesh Gadre
>
> SOLR-7374 provides BackupRepository interface to enable storing Solr index 
> data to a configured file-system (e.g. HDFS, local file-system etc.). This 
> JIRA is to track the work required to extend this functionality at the 
> collection level.






[jira] [Comment Edited] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-21 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343072#comment-15343072
 ] 

Hrishikesh Gadre edited comment on SOLR-7374 at 6/22/16 12:01 AM:
--

[~varunthacker] I have filed SOLR-9242 to track the work required to enable 
this feature at the collection level backup/restore.


was (Author: hgadre):
[~varunthacker] I have filed a JIRA to track the work required to enable this 
feature at the collection level backup/restore.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.
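The pluggable-directory idea described above can be sketched with a minimal repository abstraction. The names below (BackupRepo, LocalFsRepo) are hypothetical and only illustrate the concept; Solr's actual interface from SOLR-7374 is BackupRepository, whose methods differ.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of a pluggable backup target; not Solr's API.
interface BackupRepo {
    void write(String name, byte[] data) throws IOException;
    byte[] read(String name) throws IOException;
}

// One concrete choice: plain local file system. An HDFS-backed
// implementation would provide the same two methods, so the
// backup/restore logic never hardcodes the storage layer.
class LocalFsRepo implements BackupRepo {
    private final Path root;

    LocalFsRepo(Path root) throws IOException {
        this.root = Files.createDirectories(root);
    }

    @Override
    public void write(String name, byte[] data) throws IOException {
        Files.write(root.resolve(name), data);
    }

    @Override
    public byte[] read(String name) throws IOException {
        return Files.readAllBytes(root.resolve(name));
    }
}

public class BackupRepoDemo {
    public static void main(String[] args) throws IOException {
        // The caller picks the implementation; a param like the proposed
        // directoryImpl/type would select it by name instead.
        BackupRepo repo = new LocalFsRepo(Files.createTempDirectory("backup-demo"));
        repo.write("segments_1", "index bytes".getBytes());
        System.out.println(new String(repo.read("segments_1")));
    }
}
```

The point of the abstraction is exactly what the issue asks for: restoring must use the same implementation that wrote the backup.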






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-21 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343072#comment-15343072
 ] 

Hrishikesh Gadre commented on SOLR-7374:


[~varunthacker] I have filed a JIRA to track the work required to enable this 
feature at the collection level backup/restore.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






Re: position padding instead of positionIncrementGap

2016-06-21 Thread David Smiley
I don't understand the question. Maybe I do... I once proposed a change on
this list to the analysis API to make this easier... some mechanism for a
Tokenstream to know if it's processing a subsequent value vs the first
(very related to your inquiry and might theoretically be adapted to expose
the last posInc) but there was a -1. Ah well.

On Tue, Jun 21, 2016 at 6:36 PM Mikhail Khludnev  wrote:

> OK. Got it! Thanks.
> But do you consider it an opportunity for extension (not necessarily an
> improvement), or is everything fine?
> On June 21, 2016 at 16:04, "David Smiley" <
> david.w.smi...@gmail.com> wrote:
>
>> The other answers are fine; I've done this a couple times before to play
>> tricks with span queries.  It's a PITA if you want to integrate this with
>> Solr; you may end up writing an URP that puts together the merged
>> TokenStream then passes along a Field instance with this TS.  Solr's
>> DocumentBuilder will pass this straight through to the Lucene document and
>> skip the FieldType.  Alternatively if you really want to do the majority of
>> the work in a custom FieldType, you could write an URP that just wraps up
>> the values into something custom that will get passed into
>> FieldType.createFields by the DocumentBuilder.
>>
>> Good luck.
>>
>
>> On Mon, Jun 20, 2016 at 5:27 PM Mikhail Khludnev  wrote:
>>
>>> Hello! Devs,
>>>
>>> I'm sure it's been discussed many times or was in the air. If I have a few
>>> 3-token values in a multivalued field, how can I assign positions
>>> 0,1,2...10,11,12,...20,21,22...
>>> instead of
>>> 0,1,2, 12,13,14, 24,25,26..., given that positionIncrementGap=10?
>>>
>>> --
>>> Sincerely yours
>>> Mikhail Khludnev
>>> Principal Engineer,
>>> Grid Dynamics
>>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overidden

2016-06-21 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342977#comment-15342977
 ] 

Joel Bernstein commented on SOLR-9237:
--

+1

> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be 
> overidden
> ---
>
> Key: SOLR-9237
> URL: https://issues.apache.org/jira/browse/SOLR-9237
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 6.1
>Reporter: Gethin James
>Assignee: Jan Høydahl
> Fix For: 6.1.1, 6.2, master (7.0)
>
> Attachments: SOLR-9237.patch
>
>
> With *6.1.0* the *protected* method 
> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter passes in a 
> *private* class called FvhContainer which makes it very difficult to extend.
> I have code that extends DefaultSolrHighlighter which I can't fix due to this 
> issue.
> Could you consider making FvhContainer  "protected" and use a constructor?
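A compact illustration of the visibility issue, using hypothetical names (Base, Ctx, Custom) standing in for DefaultSolrHighlighter and FvhContainer: once the nested parameter class is protected, a subclass can name it and override the method; while it is private, the override cannot even be declared.

```java
class Base {
    // Making the nested class protected lets subclasses name it...
    protected static class Ctx {
        final String field;
        Ctx(String field) { this.field = field; }
    }

    // ...and therefore override methods that take it as a parameter.
    // With a private Ctx, Custom.highlight below would not compile.
    protected String highlight(Ctx ctx) { return "base:" + ctx.field; }
}

class Custom extends Base {
    @Override
    protected String highlight(Ctx ctx) { return "custom:" + ctx.field; }
}

public class Demo {
    public static void main(String[] args) {
        Base b = new Custom();
        // Virtual dispatch reaches the subclass override.
        System.out.println(b.highlight(new Base.Ctx("title"))); // prints custom:title
    }
}
```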






[jira] [Updated] (SOLR-9240) Add partitionKeys parameter to the topic() Streaming Expressi

2016-06-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9240:
-
Summary: Add partitionKeys parameter to the topic() Streaming Expressi  
(was: Add the partitionKeys parameter to the topic() Streaming Expression)

> Add partitionKeys parameter to the topic() Streaming Expressi
> -
>
> Key: SOLR-9240
> URL: https://issues.apache.org/jira/browse/SOLR-9240
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> Currently the topic() function doesn't accept a partitionKeys parameter like 
> the search() function does. This means the topic() function can't be wrapped 
> by the parallel() function to run across worker nodes.
> It would be useful to support parallelizing the topic function because it 
> would provide a general purpose parallelized approach for processing batches 
> of data as they enter the index.
> For example this would allow a classify() function to be wrapped around a 
> topic() function to classify documents in parallel across worker nodes. 
> Sample syntax:
> {code}
> parallel(daemon(update(classify(topic(..., partitionKeys="id")))))
> {code}
> The example above would send a daemon to worker nodes that would classify all 
> new documents returned by the topic() function. The update function would 
> send the output of classify() to a SolrCloud collection for indexing.
> The partitionKeys parameter would ensure that each worker would receive a 
> partition of the results returned by the topic() function. This allows the 
> classify() function to be run in parallel.
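The contract that partitionKeys implies (every worker receives a disjoint, deterministic slice of the results, keyed by a field) can be illustrated with a simple hash partitioner. This is only a sketch of the semantics, not Solr's actual routing code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PartitionDemo {
    // Assign a document to one of numWorkers partitions by its key field,
    // so every doc with the same key always lands on the same worker.
    static int partition(String key, int numWorkers) {
        return Math.floorMod(key.hashCode(), numWorkers);
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("doc1", "doc2", "doc3", "doc4", "doc5");
        int workers = 3;
        Map<Integer, List<String>> byWorker = new TreeMap<>();
        for (String id : ids) {
            byWorker.computeIfAbsent(partition(id, workers), w -> new ArrayList<>()).add(id);
        }
        // Each worker's slice is disjoint and the union covers all docs,
        // which is what lets parallel(topic(...)) avoid duplicate processing.
        System.out.println(byWorker);
    }
}
```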






[jira] [Updated] (SOLR-9240) Add partitionKeys parameter to the topic() Streaming Expression

2016-06-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9240:
-
Summary: Add partitionKeys parameter to the topic() Streaming Expression  
(was: Add partitionKeys parameter to the topic() Streaming Expressi)

> Add partitionKeys parameter to the topic() Streaming Expression
> ---
>
> Key: SOLR-9240
> URL: https://issues.apache.org/jira/browse/SOLR-9240
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> Currently the topic() function doesn't accept a partitionKeys parameter like 
> the search() function does. This means the topic() function can't be wrapped 
> by the parallel() function to run across worker nodes.
> It would be useful to support parallelizing the topic function because it 
> would provide a general purpose parallelized approach for processing batches 
> of data as they enter the index.
> For example this would allow a classify() function to be wrapped around a 
> topic() function to classify documents in parallel across worker nodes. 
> Sample syntax:
> {code}
> parallel(daemon(update(classify(topic(..., partitionKeys="id")))))
> {code}
> The example above would send a daemon to worker nodes that would classify all 
> new documents returned by the topic() function. The update function would 
> send the output of classify() to a SolrCloud collection for indexing.
> The partitionKeys parameter would ensure that each worker would receive a 
> partition of the results returned by the topic() function. This allows the 
> classify() function to be run in parallel.






[jira] [Commented] (SOLR-9241) Rebalance API for SolrCloud

2016-06-21 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342974#comment-15342974
 ] 

Joel Bernstein commented on SOLR-9241:
--

Very excited to see this patch! 

The ticket has this for 4.6.1. Do you believe it will be difficult to get this 
working on master?

One small suggestion, if you name the patch SOLR-9241.patch it will conform to 
the standard practice.

Thanks for submitting the patch.

> Rebalance API for SolrCloud
> ---
>
> Key: SOLR-9241
> URL: https://issues.apache.org/jira/browse/SOLR-9241
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.6.1
> Environment: Ubuntu, Mac OsX
>Reporter: Nitin Sharma
>  Labels: Cluster, SolrCloud
> Fix For: 4.6.1
>
> Attachments: rebalance.diff
>
>   Original Estimate: 2,016h
>  Remaining Estimate: 2,016h
>
> This is v1 of the patch for the SolrCloud Rebalance API (as described in 
> http://engineering.bloomreach.com/solrcloud-rebalance-api/), built at 
> Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API is to 
> provide a zero-downtime mechanism to perform data manipulation and efficient 
> core allocation in SolrCloud. This API was envisioned to be the base layer 
> that enables SolrCloud to be an auto-scaling platform (and to work in unison 
> with other complementary monitoring and scaling features).
> Patch Status:
> ===
> The patch is work in progress and incremental. We have done a few rounds of 
> code clean up. We wanted to get the patch going first to get initial feed 
> back.  We will continue to work on making it more open source friendly and 
> easily testable.
>  Deployment Status:
> 
> The platform is deployed in production at bloomreach and has been battle 
> tested for large scale load. (millions of documents and hundreds of 
> collections).
>  Internals:
> =
> The internals of the API and performance : 
> http://engineering.bloomreach.com/solrcloud-rebalance-api/
> It is built on top of the admin collections API as an action (with various 
> flavors). At a high level, the rebalance api provides 2 constructs:
> Scaling Strategy:  Decides how to move the data.  Every flavor has multiple 
> options which can be reviewed in the api spec.
> Re-distribute  - Move around data in the cluster based on capacity/allocation.
> Auto Shard  - Dynamically shard a collection to any size.
> Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
> into a smaller one (the source should be divisible by the destination).
> Scale up -  Add replicas on the fly
> Scale Down - Remove replicas on the fly
> Allocation Strategy:  Decides where to put the data.  (Nodes with least 
> cores, Nodes that do not have this collection etc). Custom implementations 
> can be built on top as well. One other example is Availability Zone aware. 
> Distribute data such that every replica is placed on different availability 
> zone to support HA.
>  Detailed API Spec:
> 
>   https://github.com/bloomreach/solrcloud-rebalance-api
>  Contributors:
> =
>   Nitin Sharma
>   Suruchi Shah
>  Questions/Comments:
> =
>   You can reach me at nitin.sha...@bloomreach.com






[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-21 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342944#comment-15342944
 ] 

Ahmet Arslan commented on LUCENE-7287:
--

Can we use this analyzer in Solr?

{code:xml}
 
{code}


> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between Ukrainian word forms and their lemmas. Some tests and docs come 
> out of the box =).
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Updated] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overidden

2016-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9237:
--
Attachment: SOLR-9237.patch

Proposed patch attached. [~dsmiley]

> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be 
> overidden
> ---
>
> Key: SOLR-9237
> URL: https://issues.apache.org/jira/browse/SOLR-9237
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 6.1
>Reporter: Gethin James
>Assignee: Jan Høydahl
> Fix For: 6.1.1, 6.2, master (7.0)
>
> Attachments: SOLR-9237.patch
>
>
> With *6.1.0* the *protected* method 
> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter passes in a 
> *private* class called FvhContainer which makes it very difficult to extend.
> I have code that extends DefaultSolrHighlighter which I can't fix due to this 
> issue.
> Could you consider making FvhContainer  "protected" and use a constructor?






[jira] [Commented] (SOLR-7739) Lucene Classification Integration - UpdateRequestProcessor

2016-06-21 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342902#comment-15342902
 ] 

Alessandro Benedetti commented on SOLR-7739:


Thanks again Adrien!
It sounds good!

For the wiki, let's try with [~ctargett]; can you help us?

> Lucene Classification Integration - UpdateRequestProcessor
> --
>
> Key: SOLR-7739
> URL: https://issues.apache.org/jira/browse/SOLR-7739
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Reporter: Alessandro Benedetti
>Assignee: Tommaso Teofili
>Priority: Minor
>  Labels: classification, index-time, update.chain, 
> updateProperties
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-7739.1.patch, SOLR-7739.patch, SOLR-7739.patch, 
> SOLR-7739.patch
>
>
> It would be nice to have an UpdateRequestProcessor to interact with the 
> Lucene Classification Module and provide an easy way of auto classifying Solr 
> Documents on indexing.
> Documentation will be provided with the patch
> A first design will be provided soon.






[jira] [Updated] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overidden

2016-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9237:
--
Fix Version/s: 6.1.1

> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be 
> overidden
> ---
>
> Key: SOLR-9237
> URL: https://issues.apache.org/jira/browse/SOLR-9237
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 6.1
>Reporter: Gethin James
>Assignee: Jan Høydahl
> Fix For: master (7.0), 6.1.1, 6.2
>
>
> With *6.1.0* the *protected* method 
> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter passes in a 
> *private* class called FvhContainer which makes it very difficult to extend.
> I have code that extends DefaultSolrHighlighter which I can't fix due to this 
> issue.
> Could you consider making FvhContainer  "protected" and use a constructor?






[jira] [Commented] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overidden

2016-06-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342888#comment-15342888
 ] 

Jan Høydahl commented on SOLR-9237:
---

Nice catch. I'm gonna make both {{FvhContainer}} and 
{{doHighlightingOfField()}} protected just to follow existing practice for this 
class, ok?

> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be 
> overidden
> ---
>
> Key: SOLR-9237
> URL: https://issues.apache.org/jira/browse/SOLR-9237
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 6.1
>Reporter: Gethin James
> Fix For: master (7.0), 6.2
>
>
> With *6.1.0* the *protected* method 
> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter passes in a 
> *private* class called FvhContainer which makes it very difficult to extend.
> I have code that extends DefaultSolrHighlighter which I can't fix due to this 
> issue.
> Could you consider making FvhContainer  "protected" and use a constructor?






[jira] [Assigned] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overidden

2016-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-9237:
-

Assignee: Jan Høydahl

> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be 
> overidden
> ---
>
> Key: SOLR-9237
> URL: https://issues.apache.org/jira/browse/SOLR-9237
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 6.1
>Reporter: Gethin James
>Assignee: Jan Høydahl
> Fix For: master (7.0), 6.2
>
>
> With *6.1.0* the *protected* method 
> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter passes in a 
> *private* class called FvhContainer which makes it very difficult to extend.
> I have code that extends DefaultSolrHighlighter which I can't fix due to this 
> issue.
> Could you consider making FvhContainer  "protected" and use a constructor?






[jira] [Updated] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overidden

2016-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9237:
--
Fix Version/s: 6.2
   master (7.0)

> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be 
> overidden
> ---
>
> Key: SOLR-9237
> URL: https://issues.apache.org/jira/browse/SOLR-9237
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 6.1
>Reporter: Gethin James
> Fix For: master (7.0), 6.2
>
>
> With *6.1.0* the *protected* method 
> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter passes in a 
> *private* class called FvhContainer which makes it very difficult to extend.
> I have code that extends DefaultSolrHighlighter which I can't fix due to this 
> issue.
> Could you consider making FvhContainer  "protected" and use a constructor?






[jira] [Updated] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overidden

2016-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9237:
--
Component/s: highlighter

> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be 
> overidden
> ---
>
> Key: SOLR-9237
> URL: https://issues.apache.org/jira/browse/SOLR-9237
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 6.1
>Reporter: Gethin James
> Fix For: master (7.0), 6.2
>
>
> With *6.1.0* the *protected* method 
> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter passes in a 
> *private* class called FvhContainer which makes it very difficult to extend.
> I have code that extends DefaultSolrHighlighter which I can't fix due to this 
> issue.
> Could you consider making FvhContainer  "protected" and use a constructor?






[jira] [Updated] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-21 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-7374:
---
Attachment: SOLR-7374.patch

[~varunthacker] Please find the latest patch.

bq. For the patch, one thing I'd like to address would be - in 
TestHdfsBackupRestore make runCoreAdminCommand use the Replication handler 
instead since that's the current documented way of running core backups/restores.

Instead of replacing the usage of core admin API with replication handler, I 
just added another test which uses replication handler. This way we can test 
both the APIs.

[~markrmil...@gmail.com] I made a small change in HdfsDirectory class to define 
a constant for the buffer size. This way we can use the same value for both 
HdfsDirectory as well as HdfsBackupRepository.


> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Comment Edited] (LUCENE-7194) Ban Math.toRadians/toDegrees and remove all usages of it

2016-06-21 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342606#comment-15342606
 ] 

Karl Wright edited comment on LUCENE-7194 at 6/21/16 9:45 PM:
--

[~rcmuir]: Is this still needed?  In SloppyMath.java I see the following:

{code}
  // haversin
  // TODO: remove these for java 9, they fixed Math.toDegrees()/toRadians() to 
work just like this.
  public static final double TO_RADIANS = Math.PI / 180D;
  public static final double TO_DEGREES = 180D / Math.PI;
{code}

... which leads me to wonder if Java 9 was fixed and we should instead be using 
Math.toDegrees()/toRadians() everywhere?  Maybe [~thetaphi] knows?



was (Author: kwri...@metacarta.com):
[~rcmuir]: Is this still needed?  In SloppyMath.java I see the following:

{code}
  // haversin
  // TODO: remove these for java 9, they fixed Math.toDegrees()/toRadians() to 
work just like this.
  public static final double TO_RADIANS = Math.PI / 180D;
  public static final double TO_DEGREES = 180D / Math.PI;
{code}

... which leads me to wonder if Java 9 was fixed and we should instead be using 
Math.toDegrees()/toRadians() everywhere?


> Ban Math.toRadians/toDegrees and remove all usages of it
> 
>
> Key: LUCENE-7194
> URL: https://issues.apache.org/jira/browse/LUCENE-7194
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Karl Wright
>
> The result of these methods is unreliable and changes across jvm versions: we 
> cannot use these methods.
> The following program prints 0.7722082215479366 on previous versions of java 
> but 0.7722082215479367 on java 9 because Math.toRadians is no longer doing 
> the same thing:
> {code}
> public class test {
>   public static void main(String args[]) throws Exception {
> System.out.println(Math.toRadians(44.244272));
>   }
> }
> {code}
> This is because of https://bugs.openjdk.java.net/browse/JDK-4477961. 
> I am not really sure its a bug, because the method says that the conversion 
> is "generally inexact". 
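The constant-factor workaround already used in SloppyMath sidesteps the JVM-version dependence, because a plain double multiplication by a fixed constant behaves identically on every JVM; a self-contained sketch:

```java
public class RadiansDemo {
    // Same constants as Lucene's SloppyMath: one double multiplication,
    // identical on every JVM version, unlike Math.toRadians before Java 9.
    public static final double TO_RADIANS = Math.PI / 180D;
    public static final double TO_DEGREES = 180D / Math.PI;

    public static void main(String[] args) {
        double deg = 44.244272;
        double rad = deg * TO_RADIANS;
        System.out.println(rad);
        // Round-tripping recovers the original degrees to within
        // double-precision rounding error.
        System.out.println(rad * TO_DEGREES);
    }
}
```

Whether the JDK-9 Math.toRadians change makes this unnecessary is exactly the open question in the comment above; the constants remain the portable choice across JVMs.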






Re: position padding instead of positionIncrementGap

2016-06-21 Thread Mikhail Khludnev
OK. Got it! Thanks.
But do you consider it an opportunity for extension (not necessarily an
improvement), or is everything fine?
On June 21, 2016 at 16:04, "David Smiley" wrote:

> The other answers are fine; I've done this a couple times before to play
> tricks with span queries.  It's a PITA if you want to integrate this with
> Solr; you may end up writing an URP that puts together the merged
> TokenStream then passes along a Field instance with this TS.  Solr's
> DocumentBuilder will pass this straight through to the Lucene document and
> skip the FieldType.  Alternatively if you really want to do the majority of
> the work in a custom FieldType, you could write an URP that just wraps up
> the values into something custom that will get passed into
> FieldType.createFields by the DocumentBuilder.
>
> Good luck.
>
> On Mon, Jun 20, 2016 at 5:27 PM Mikhail Khludnev  wrote:
>
>> Hello! Devs,
>>
>> I'm sure this has been discussed many times or has been in the air. If I have a few
>> 3-token values in a multivalued field, how can I assign positions:
>> 0,1,2...10,11,12,...20,21,22...
>> instead of
>> 0,1,2, 12,13,14, 24,25,26.. given that positionIncrementGap=10 ?
>>
>> --
>> Sincerely yours
>> Mikhail Khludnev
>> Principal Engineer,
>> Grid Dynamics
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
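The padding Mikhail asks for (each value starting at the next multiple of the gap,
rather than a fixed gap after the previous value's last token) boils down to a little
arithmetic. A minimal sketch, independent of any Lucene API; `paddedPositions` is a
hypothetical helper, not something in Lucene:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PositionPadding {
    /**
     * Computes absolute token positions for a multi-valued field so that each
     * value starts at the next multiple of gap: 0,1,2, 10,11,12, 20,21,22...
     * tokensPerValue holds the token count of each field value.
     */
    static List<Integer> paddedPositions(int[] tokensPerValue, int gap) {
        List<Integer> positions = new ArrayList<>();
        int start = 0;
        for (int tokens : tokensPerValue) {
            for (int i = 0; i < tokens; i++) {
                positions.add(start + i);
            }
            // next value begins at the next multiple of gap after this value ends
            start = ((start + tokens + gap - 1) / gap) * gap;
        }
        return positions;
    }

    public static void main(String[] args) {
        // three 3-token values, gap 10
        System.out.println(paddedPositions(new int[]{3, 3, 3}, 10));
        // prints [0, 1, 2, 10, 11, 12, 20, 21, 22]
    }
}
```

In an actual TokenStream you would express these as increments via
PositionIncrementAttribute: each token's increment is its absolute position minus the
previous token's position, so the example above yields increments 1,1,1,8,1,1,8,1,1.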


[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 938 - Failure!

2016-06-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/938/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'P val' for path 'response/params/y/p' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":2, "params":{   "x":{ "a":"A val", 
"b":"B val", "":{"v":0}},   "y":{ "c":"CY val modified",
 "b":"BY val", "i":20, "d":[   "val 1",   
"val 2"], "e":"EY val", "":{"v":1},  from server:  
http://127.0.0.1:35460/w/zs/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'P val' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":2,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"":{"v":0}},
  "y":{
"c":"CY val modified",
"b":"BY val",
"i":20,
"d":[
  "val 1",
  "val 2"],
"e":"EY val",
"":{"v":1},  from server:  http://127.0.0.1:35460/w/zs/collection1
at __randomizedtesting.SeedInfo.seed([658A971E53819DBC:EDDEA8C4FD7DF044]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:215)
at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
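The failing assertion walks a slash-separated path ('response/params/y/p') through the
parsed JSON response. A minimal sketch of that kind of lookup (not Solr's actual test
helper) shows why a missing 'p' key surfaces as this failure:

```java
import java.util.Map;

public class JsonPathLookup {
    /** Walks a slash-separated path through nested maps; returns null when a segment is missing. */
    static Object getByPath(Map<?, ?> root, String path) {
        Object current = root;
        for (String segment : path.split("/")) {
            if (!(current instanceof Map)) {
                return null;
            }
            current = ((Map<?, ?>) current).get(segment);
        }
        return current;
    }

    public static void main(String[] args) {
        // a tiny slice of the response structure from the failure above
        Map<String, Object> doc = Map.of(
            "response", Map.of("params", Map.of("y", Map.of("b", "BY val"))));
        System.out.println(getByPath(doc, "response/params/y/b")); // prints BY val
        System.out.println(getByPath(doc, "response/params/y/p")); // prints null
    }
}
```

When the lookup comes back null instead of the expected 'P val', the test fails with
the "Could not get expected value" message seen in the stack trace.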

[jira] [Comment Edited] (LUCENE-7194) Ban Math.toRadians/toDegrees and remove all usages of it

2016-06-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342656#comment-15342656
 ] 

Uwe Schindler edited comment on LUCENE-7194 at 6/21/16 8:53 PM:


You should supply an error message; the one there was copied from the earlier forbidden 
signatures: "[Use NIO.2 instead]" (that's simply wrong). The error message can be 
appended with "@" after the signature.

bq. And then run ant precommit and you should see failures from places using 
these APIs...

Much faster is "ant check-forbidden-apis"!
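For reference, the "@" syntax looks like this in a forbidden-apis signatures file (the
message text here is illustrative, not the wording that was committed):

```
java.lang.Math#toRadians(double) @ Result differs across JVM versions, multiply by a constant instead
java.lang.Math#toDegrees(double) @ Result differs across JVM versions, multiply by a constant instead
```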


was (Author: thetaphi):
You should supply an error message; the one there was copied from the earlier forbidden 
signatures: "[Use NIO.2 instead]" (that's simply wrong). The error message can be 
appended with "@" after the signature.

> Ban Math.toRadians/toDegrees and remove all usages of it
> 
>
> Key: LUCENE-7194
> URL: https://issues.apache.org/jira/browse/LUCENE-7194
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Karl Wright
>
> The result of these methods is unreliable and changes across jvm versions: we 
> cannot use these methods.
> The following program prints 0.7722082215479366 on previous versions of java 
> but 0.7722082215479367 on java 9 because Math.toRadians is no longer doing 
> the same thing:
> {code}
> public class test {
>   public static void main(String args[]) throws Exception {
> System.out.println(Math.toRadians(44.244272));
>   }
> }
> {code}
> This is because of https://bugs.openjdk.java.net/browse/JDK-4477961. 
> I am not really sure its a bug, because the method says that the conversion 
> is "generally inexact". 






[jira] [Commented] (LUCENE-7194) Ban Math.toRadians/toDegrees and remove all usages of it

2016-06-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342656#comment-15342656
 ] 

Uwe Schindler commented on LUCENE-7194:
---

You should supply an error message; the one there was copied from the earlier forbidden 
signatures: "[Use NIO.2 instead]" (that's simply wrong). The error message can be 
appended with "@" after the signature.

> Ban Math.toRadians/toDegrees and remove all usages of it
> 
>
> Key: LUCENE-7194
> URL: https://issues.apache.org/jira/browse/LUCENE-7194
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Karl Wright
>
> The result of these methods is unreliable and changes across jvm versions: we 
> cannot use these methods.
> The following program prints 0.7722082215479366 on previous versions of java 
> but 0.7722082215479367 on java 9 because Math.toRadians is no longer doing 
> the same thing:
> {code}
> public class test {
>   public static void main(String args[]) throws Exception {
> System.out.println(Math.toRadians(44.244272));
>   }
> }
> {code}
> This is because of https://bugs.openjdk.java.net/browse/JDK-4477961. 
> I am not really sure its a bug, because the method says that the conversion 
> is "generally inexact". 






[jira] [Commented] (LUCENE-7194) Ban Math.toRadians/toDegrees and remove all usages of it

2016-06-21 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342606#comment-15342606
 ] 

Karl Wright commented on LUCENE-7194:
-

[~rcmuir]: Is this still needed?  In SloppyMath.java I see the following:

{code}
  // haversin
  // TODO: remove these for java 9, they fixed Math.toDegrees()/toRadians() to 
work just like this.
  public static final double TO_RADIANS = Math.PI / 180D;
  public static final double TO_DEGREES = 180D / Math.PI;
{code}

... which leads me to wonder whether this was fixed in Java 9, and whether we should 
instead be using Math.toDegrees()/toRadians() everywhere?
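To see the difference concretely, here is a minimal, self-contained check; the constants
are copied from the SloppyMath snippet above, and which values get printed depends on
the JDK version:

```java
public class RadiansCheck {
    // Constants copied from the SloppyMath snippet quoted above.
    public static final double TO_RADIANS = Math.PI / 180D;
    public static final double TO_DEGREES = 180D / Math.PI;

    public static void main(String[] args) {
        double deg = 44.244272;                 // the value from the bug report
        double viaConstant = deg * TO_RADIANS;  // what SloppyMath does
        double viaMath = Math.toRadians(deg);   // JDK-dependent result
        System.out.println(viaConstant);
        System.out.println(viaMath);
        // The two may differ in the last ulp depending on the JDK,
        // which is exactly why the build wants to ban Math.toRadians/toDegrees.
        System.out.println("identical: " + (viaConstant == viaMath));
    }
}
```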


> Ban Math.toRadians/toDegrees and remove all usages of it
> 
>
> Key: LUCENE-7194
> URL: https://issues.apache.org/jira/browse/LUCENE-7194
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Karl Wright
>
> The result of these methods is unreliable and changes across jvm versions: we 
> cannot use these methods.
> The following program prints 0.7722082215479366 on previous versions of java 
> but 0.7722082215479367 on java 9 because Math.toRadians is no longer doing 
> the same thing:
> {code}
> public class test {
>   public static void main(String args[]) throws Exception {
> System.out.println(Math.toRadians(44.244272));
>   }
> }
> {code}
> This is because of https://bugs.openjdk.java.net/browse/JDK-4477961. 
> I am not really sure its a bug, because the method says that the conversion 
> is "generally inexact". 






[jira] [Updated] (SOLR-9241) Rebalance API for SolrCloud

2016-06-21 Thread Nitin Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Sharma updated SOLR-9241:
---
Description: 
This is the v1 of the patch for Solrcloud Rebalance api (as described in 
http://engineering.bloomreach.com/solrcloud-rebalance-api/) , built at 
Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API  is to provide 
a zero downtime mechanism to perform data manipulation and  efficient core 
allocation in solrcloud. This API was envisioned to be the base layer that 
enables Solrcloud to be an auto scaling platform. (and work in unison with 
other complementing monitoring and scaling features).


Patch Status:
===
The patch is work in progress and incremental. We have done a few rounds of 
code cleanup. We wanted to get the patch going first to get initial feedback. 
 We will continue to work on making it more open source friendly and easily 
testable.

 Deployment Status:

The platform is deployed in production at bloomreach and has been battle tested 
for large scale load. (millions of documents and hundreds of collections).

 Internals:
=
The internals of the API and performance : 
http://engineering.bloomreach.com/solrcloud-rebalance-api/

It is built on top of the admin collections API as an action (with various 
flavors). At a high level, the rebalance api provides 2 constructs:

Scaling Strategy:  Decides how to move the data.  Every flavor has multiple 
options which can be reviewed in the api spec.
Re-distribute  - Move around data in the cluster based on capacity/allocation.
Auto Shard  - Dynamically shard a collection to any size.
Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
into a smaller one (the source should be divisible by the destination).
Scale up -  Add replicas on the fly
Scale Down - Remove replicas on the fly

Allocation Strategy:  Decides where to put the data.  (Nodes with least cores, 
Nodes that do not have this collection etc). Custom implementations can be 
built on top as well. One other example is Availability Zone aware. Distribute 
data such that every replica is placed on different availability zone to 
support HA.
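As an illustration only (not code from the patch), a "nodes with least cores"
allocation strategy reduces to picking the node carrying the fewest cores; the node
names and core-count map below are hypothetical inputs:

```java
import java.util.Map;

public class LeastCoresAllocation {
    /**
     * Picks the node carrying the fewest cores. The inputs are illustrative,
     * not the actual API from the rebalance patch.
     */
    static String pickNode(Map<String, Integer> coresPerNode) {
        return coresPerNode.entrySet().stream()
            .min(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElseThrow(() -> new IllegalArgumentException("no live nodes"));
    }

    public static void main(String[] args) {
        System.out.println(pickNode(Map.of("node1", 8, "node2", 3, "node3", 5)));
        // prints node2
    }
}
```

An availability-zone-aware strategy would apply the same selection within the
constraint that replicas of one shard land in distinct zones.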

# Detailed API Spec:

  https://github.com/bloomreach/solrcloud-rebalance-api

# Contributors:
=
  Nitin Sharma
  Suruchi Shah

# Questions/Comments:
=
  You can reach me at nitin.sha...@bloomreach.com

  was:
This is the v1 of the patch for Solrcloud Rebalance api (as described in 
http://engineering.bloomreach.com/solrcloud-rebalance-api/) , built at 
Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API  is to provide 
a zero downtime mechanism to perform data manipulation and  efficient core 
allocation in solrcloud. This API was envisioned to be the base layer that 
enables Solrcloud to be an auto scaling platform. (and work in unison with 
other complementing monitoring and scaling features).


# Patch Status:
===
The patch is work in progress and incremental. We have done a few rounds of 
code cleanup. We wanted to get the patch going first to get initial feedback. 
 We will continue to work on making it more open source friendly and easily 
testable.

# Deployment Status:

The platform is deployed in production at bloomreach and has been battle tested 
for large scale load. (millions of documents and hundreds of collections).

# Internals:
=
The internals of the API and performance : 
http://engineering.bloomreach.com/solrcloud-rebalance-api/

It is built on top of the admin collections API as an action (with various 
flavors). At a high level, the rebalance api provides 2 constructs:

Scaling Strategy:  Decides how to move the data.  Every flavor has multiple 
options which can be reviewed in the api spec.
Re-distribute  - Move around data in the cluster based on capacity/allocation.
Auto Shard  - Dynamically shard a collection to any size.
Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
into a smaller one (the source should be divisible by the destination).
Scale up -  Add replicas on the fly
Scale Down - Remove replicas on the fly

Allocation Strategy:  Decides where to put the data.  (Nodes with least cores, 
Nodes that do not have this collection etc). Custom implementations can be 
built on top as well. One other example is Availability Zone aware. Distribute 
data such that every replica is placed on different availability zone to 
support HA.

# Detailed API Spec:

  https://github.com/bloomreach/solrcloud-rebalance-api

# Contributors:
=
  Nitin Sharma
  Suruchi Shah

# Questions/Comments:
=
  You can reach me at nitin.sha...@bloomreach.com


> Rebalance API for SolrCloud
> ---
>
> Key: SOLR-9241
> URL: 

[jira] [Updated] (SOLR-9241) Rebalance API for SolrCloud

2016-06-21 Thread Nitin Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Sharma updated SOLR-9241:
---
Description: 
This is the v1 of the patch for Solrcloud Rebalance api (as described in 
http://engineering.bloomreach.com/solrcloud-rebalance-api/) , built at 
Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API  is to provide 
a zero downtime mechanism to perform data manipulation and  efficient core 
allocation in solrcloud. This API was envisioned to be the base layer that 
enables Solrcloud to be an auto scaling platform. (and work in unison with 
other complementing monitoring and scaling features).


Patch Status:
===
The patch is work in progress and incremental. We have done a few rounds of 
code cleanup. We wanted to get the patch going first to get initial feedback. 
 We will continue to work on making it more open source friendly and easily 
testable.

 Deployment Status:

The platform is deployed in production at bloomreach and has been battle tested 
for large scale load. (millions of documents and hundreds of collections).

 Internals:
=
The internals of the API and performance : 
http://engineering.bloomreach.com/solrcloud-rebalance-api/

It is built on top of the admin collections API as an action (with various 
flavors). At a high level, the rebalance api provides 2 constructs:

Scaling Strategy:  Decides how to move the data.  Every flavor has multiple 
options which can be reviewed in the api spec.
Re-distribute  - Move around data in the cluster based on capacity/allocation.
Auto Shard  - Dynamically shard a collection to any size.
Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
into a smaller one (the source should be divisible by the destination).
Scale up -  Add replicas on the fly
Scale Down - Remove replicas on the fly

Allocation Strategy:  Decides where to put the data.  (Nodes with least cores, 
Nodes that do not have this collection etc). Custom implementations can be 
built on top as well. One other example is Availability Zone aware. Distribute 
data such that every replica is placed on different availability zone to 
support HA.

 Detailed API Spec:

  https://github.com/bloomreach/solrcloud-rebalance-api

 Contributors:
=
  Nitin Sharma
  Suruchi Shah

 Questions/Comments:
=
  You can reach me at nitin.sha...@bloomreach.com

  was:
This is the v1 of the patch for Solrcloud Rebalance api (as described in 
http://engineering.bloomreach.com/solrcloud-rebalance-api/) , built at 
Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API  is to provide 
a zero downtime mechanism to perform data manipulation and  efficient core 
allocation in solrcloud. This API was envisioned to be the base layer that 
enables Solrcloud to be an auto scaling platform. (and work in unison with 
other complementing monitoring and scaling features).


Patch Status:
===
The patch is work in progress and incremental. We have done a few rounds of 
code cleanup. We wanted to get the patch going first to get initial feedback. 
 We will continue to work on making it more open source friendly and easily 
testable.

 Deployment Status:

The platform is deployed in production at bloomreach and has been battle tested 
for large scale load. (millions of documents and hundreds of collections).

 Internals:
=
The internals of the API and performance : 
http://engineering.bloomreach.com/solrcloud-rebalance-api/

It is built on top of the admin collections API as an action (with various 
flavors). At a high level, the rebalance api provides 2 constructs:

Scaling Strategy:  Decides how to move the data.  Every flavor has multiple 
options which can be reviewed in the api spec.
Re-distribute  - Move around data in the cluster based on capacity/allocation.
Auto Shard  - Dynamically shard a collection to any size.
Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
into a smaller one (the source should be divisible by the destination).
Scale up -  Add replicas on the fly
Scale Down - Remove replicas on the fly

Allocation Strategy:  Decides where to put the data.  (Nodes with least cores, 
Nodes that do not have this collection etc). Custom implementations can be 
built on top as well. One other example is Availability Zone aware. Distribute 
data such that every replica is placed on different availability zone to 
support HA.

# Detailed API Spec:

  https://github.com/bloomreach/solrcloud-rebalance-api

# Contributors:
=
  Nitin Sharma
  Suruchi Shah

# Questions/Comments:
=
  You can reach me at nitin.sha...@bloomreach.com


> Rebalance API for SolrCloud
> ---
>
> Key: SOLR-9241
> URL: 

[jira] [Comment Edited] (LUCENE-7194) Ban Math.toRadians/toDegrees and remove all usages of it

2016-06-21 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342425#comment-15342425
 ] 

Karl Wright edited comment on LUCENE-7194 at 6/21/16 6:46 PM:
--

Here's what it spits out:

{code}
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) 
[Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:94)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) 
[Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:95)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) 
[Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) 
[Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) 
[Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) 
[Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) 
[Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:151)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) 
[Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:169)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) 
[Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.util.SloppyMath (SloppyMath.java:212)
[forbidden-apis] Scanned 2733 (and 585 related) class file(s) for forbidden API
invocations (in 2.98s), 9 error(s).
{code}


was (Author: kwri...@metacarta.com):
Here's what it spits out:

{code}
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:94)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:95)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:151)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:169)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.util.SloppyMath (SloppyMath.java:212)
[forbidden-apis] Scanned 2733 (and 585 related) class file(s) for forbidden API
invocations (in 2.98s), 9 error(s).
{code}

> Ban Math.toRadians/toDegrees and remove all usages of it
> 
>
> Key: LUCENE-7194
> URL: https://issues.apache.org/jira/browse/LUCENE-7194
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Karl Wright
>
> The result of these methods is unreliable and changes across jvm versions: we 
> cannot use these methods.
> The following program prints 0.7722082215479366 on previous versions of java 
> but 0.7722082215479367 on java 9 because Math.toRadians is no longer doing 
> the same thing:
> {code}
> public class test {
>   public static void main(String args[]) throws Exception {
> System.out.println(Math.toRadians(44.244272));
>   }
> }
> {code}
> This is because of https://bugs.openjdk.java.net/browse/JDK-4477961. 
> I am not really sure its a bug, because the method says that the conversion 
> is "generally inexact". 




[jira] [Commented] (LUCENE-7194) Ban Math.toRadians/toDegrees and remove all usages of it

2016-06-21 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342425#comment-15342425
 ] 

Karl Wright commented on LUCENE-7194:
-

Here's what it spits out:

{code}
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:94)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:95)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:121)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:151)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toDegrees(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.geo.Rectangle (Rectangle.java:169)
[forbidden-apis] Forbidden method invocation: java.lang.Math#toRadians(double) [
Use NIO.2 instead]
[forbidden-apis]   in org.apache.lucene.util.SloppyMath (SloppyMath.java:212)
[forbidden-apis] Scanned 2733 (and 585 related) class file(s) for forbidden API
invocations (in 2.98s), 9 error(s).
{code}

> Ban Math.toRadians/toDegrees and remove all usages of it
> 
>
> Key: LUCENE-7194
> URL: https://issues.apache.org/jira/browse/LUCENE-7194
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Karl Wright
>
> The result of these methods is unreliable and changes across jvm versions: we 
> cannot use these methods.
> The following program prints 0.7722082215479366 on previous versions of java 
> but 0.7722082215479367 on java 9 because Math.toRadians is no longer doing 
> the same thing:
> {code}
> public class test {
>   public static void main(String args[]) throws Exception {
> System.out.println(Math.toRadians(44.244272));
>   }
> }
> {code}
> This is because of https://bugs.openjdk.java.net/browse/JDK-4477961. 
> I am not really sure its a bug, because the method says that the conversion 
> is "generally inexact". 






[jira] [Updated] (SOLR-9241) Rebalance API for SolrCloud

2016-06-21 Thread Nitin Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Sharma updated SOLR-9241:
---
Description: 
This is the v1 of the patch for Solrcloud Rebalance api (as described in 
http://engineering.bloomreach.com/solrcloud-rebalance-api/) , built at 
Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API  is to provide 
a zero downtime mechanism to perform data manipulation and  efficient core 
allocation in solrcloud. This API was envisioned to be the base layer that 
enables Solrcloud to be an auto scaling platform. (and work in unison with 
other complementing monitoring and scaling features).


# Patch Status:
===
The patch is work in progress and incremental. We have done a few rounds of 
code cleanup. We wanted to get the patch going first to get initial feedback. 
 We will continue to work on making it more open source friendly and easily 
testable.

# Deployment Status:

The platform is deployed in production at bloomreach and has been battle tested 
for large scale load. (millions of documents and hundreds of collections).

# Internals:
=
The internals of the API and performance : 
http://engineering.bloomreach.com/solrcloud-rebalance-api/

It is built on top of the admin collections API as an action (with various 
flavors). At a high level, the rebalance api provides 2 constructs:

Scaling Strategy:  Decides how to move the data.  Every flavor has multiple 
options which can be reviewed in the api spec.
Re-distribute  - Move around data in the cluster based on capacity/allocation.
Auto Shard  - Dynamically shard a collection to any size.
Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
into a smaller one (the source should be divisible by the destination).
Scale up -  Add replicas on the fly
Scale Down - Remove replicas on the fly

Allocation Strategy:  Decides where to put the data.  (Nodes with least cores, 
Nodes that do not have this collection etc). Custom implementations can be 
built on top as well. One other example is Availability Zone aware. Distribute 
data such that every replica is placed on different availability zone to 
support HA.

# Detailed API Spec:

  https://github.com/bloomreach/solrcloud-rebalance-api

# Contributors:
=
  Nitin Sharma
  Suruchi Shah

# Questions/Comments:
=
  You can reach me at nitin.sha...@bloomreach.com

  was:
This is the v1 of the patch for Solrcloud Rebalance api, built at Bloomreach by 
Nitin Sharma and Suruchi Shah. The goal of the API  is to provide a zero 
downtime mechanism to perform data manipulation and  efficient core allocation 
in solrcloud. This API was envisioned to be the base layer that enables 
Solrcloud to be an auto scaling platform. (and work in unison with other 
complementing monitoring and scaling features).


# Patch Status:
===
The patch is work in progress and incremental. We have done a few rounds of 
code cleanup. We wanted to get the patch going first to get initial feedback. 
 We will continue to work on making it more open source friendly and easily 
testable.

# Deployment Status:

The platform is deployed in production at bloomreach and has been battle tested 
for large scale load. (millions of documents and hundreds of collections).

# Internals:
=
The internals of the API and performance : 
http://engineering.bloomreach.com/solrcloud-rebalance-api/

It is built on top of the admin Collections API as an action (with various 
flavors). At a high level, the rebalance API provides two constructs:

Scaling Strategy: Decides how to move the data. Every flavor has multiple 
options, which can be reviewed in the API spec.
Re-distribute - Move data around the cluster based on capacity/allocation.
Auto Shard - Dynamically shard a collection to any size.
Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
into a smaller one (the source shard count should be divisible by the 
destination's).
Scale Up - Add replicas on the fly.
Scale Down - Remove replicas on the fly.

Allocation Strategy: Decides where to put the data (nodes with the fewest 
cores, nodes that do not yet host this collection, etc.). Custom 
implementations can be built on top as well. Another example is an 
availability-zone-aware strategy: distribute data such that every replica is 
placed in a different availability zone to support HA.

# Detailed API Spec:

  https://github.com/bloomreach/solrcloud-rebalance-api

# Contributors:
=
  Nitin Sharma
  Suruchi Shah

# Questions/Comments:
=
  You can reach me at nitin.sha...@bloomreach.com


> Rebalance API for SolrCloud
> ---
>
> Key: SOLR-9241
> URL: https://issues.apache.org/jira/browse/SOLR-9241
> Project: 

[jira] [Created] (SOLR-9241) Rebalance API for SolrCloud

2016-06-21 Thread Nitin Sharma (JIRA)
Nitin Sharma created SOLR-9241:
--

 Summary: Rebalance API for SolrCloud
 Key: SOLR-9241
 URL: https://issues.apache.org/jira/browse/SOLR-9241
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.6.1
 Environment: Ubuntu, Mac OsX
Reporter: Nitin Sharma
 Fix For: 4.6.1
 Attachments: rebalance.diff

This is v1 of the patch for the SolrCloud Rebalance API, built at Bloomreach 
by Nitin Sharma and Suruchi Shah. The goal of the API is to provide a 
zero-downtime mechanism to perform data manipulation and efficient core 
allocation in SolrCloud. This API was envisioned to be the base layer that 
enables SolrCloud to be an auto-scaling platform (and work in unison with 
other complementing monitoring and scaling features).


# Patch Status:
===
The patch is work in progress and incremental. We have done a few rounds of 
code clean-up. We wanted to get the patch going first to get initial feedback. 
We will continue to work on making it more open-source friendly and easily 
testable.

# Deployment Status:

The platform is deployed in production at Bloomreach and has been 
battle-tested under large-scale load (millions of documents and hundreds of 
collections).

# Internals:
=
The internals of the API and its performance are described at: 
http://engineering.bloomreach.com/solrcloud-rebalance-api/

It is built on top of the admin Collections API as an action (with various 
flavors). At a high level, the rebalance API provides two constructs:

Scaling Strategy: Decides how to move the data. Every flavor has multiple 
options, which can be reviewed in the API spec.
Re-distribute - Move data around the cluster based on capacity/allocation.
Auto Shard - Dynamically shard a collection to any size.
Smart Merge - Distributed Mode - Helps merge data from a larger shard setup 
into a smaller one (the source shard count should be divisible by the 
destination's).
Scale Up - Add replicas on the fly.
Scale Down - Remove replicas on the fly.

Allocation Strategy: Decides where to put the data (nodes with the fewest 
cores, nodes that do not yet host this collection, etc.). Custom 
implementations can be built on top as well. Another example is an 
availability-zone-aware strategy: distribute data such that every replica is 
placed in a different availability zone to support HA.

# Detailed API Spec:

  https://github.com/bloomreach/solrcloud-rebalance-api

# Contributors:
=
  Nitin Sharma
  Suruchi Shah

# Questions/Comments:
=
  You can reach me at nitin.sha...@bloomreach.com



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9241) Rebalance API for SolrCloud

2016-06-21 Thread Nitin Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Sharma updated SOLR-9241:
---
Attachment: rebalance.diff

Rebalance API for SolrCloud

> Rebalance API for SolrCloud
> ---
>
> Key: SOLR-9241
> URL: https://issues.apache.org/jira/browse/SOLR-9241
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.6.1
> Environment: Ubuntu, Mac OsX
>Reporter: Nitin Sharma
>  Labels: Cluster, SolrCloud
> Fix For: 4.6.1
>
> Attachments: rebalance.diff
>
>   Original Estimate: 2,016h
>  Remaining Estimate: 2,016h
>
> This is the v1 of the patch for Solrcloud Rebalance api, built at Bloomreach 
> by Nitin Sharma and Suruchi Shah. The goal of the API  is to provide a zero 
> downtime mechanism to perform data manipulation and  efficient core 
> allocation in solrcloud. This API was envisioned to be the base layer that 
> enables Solrcloud to be an auto scaling platform. (and work in unison with 
> other complementing monitoring and scaling features).
> # Patch Status:
> ===
> The patch is work in progress and incremental. We have done a few rounds of 
> code clean up. We wanted to get the patch going first to get initial feed 
> back.  We will continue to work on making it more open source friendly and 
> easily testable.
> # Deployment Status:
> 
> The platform is deployed in production at bloomreach and has been battle 
> tested for large scale load. (millions of documents and hundreds of 
> collections).
> # Internals:
> =
> The internals of the API and performance : 
> http://engineering.bloomreach.com/solrcloud-rebalance-api/
> It is built on top of the admin collections API as an action (with various 
> flavors). At a high level, the rebalance api provides 2 constructs:
> Scaling Strategy:  Decides how to move the data.  Every flavor has multiple 
> options which can be reviewed in the api spec.
> Re-distribute  - Move around data in the cluster based on capacity/allocation.
> Auto Shard  - Dynamically shard a collection to any size.
> Smart Merge - Distributed Mode - Helps merging data from a larger shard setup 
> into smaller one.  (the source should be divisible by destination)
> Scale up -  Add replicas on the fly
> Scale Down - Remove replicas on the fly
> Allocation Strategy:  Decides where to put the data.  (Nodes with least 
> cores, Nodes that do not have this collection etc). Custom implementations 
> can be built on top as well. One other example is Availability Zone aware. 
> Distribute data such that every replica is placed on different availability 
> zone to support HA.
> # Detailed API Spec:
> 
>   https://github.com/bloomreach/solrcloud-rebalance-api
> # Contributors:
> =
>   Nitin Sharma
>   Suruchi Shah
> # Questions/Comments:
> =
>   You can reach me at nitin.sha...@bloomreach.com



For additional commands, e-mail: dev-h...@lucene.apache.org



VOTE: Apache Solr Ref Guide for 6.1

2016-06-21 Thread Cassandra Targett
Please VOTE to release the Apache Solr Ref Guide for 6.1.

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.1-RC0/

$ more /apache-solr-ref-guide-6.1.pdf.sha1
5929b03039e99644bc4ef23b37088b343e2ff0c8  apache-solr-ref-guide-6.1.pdf
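
Before voting, the published SHA-1 can be re-computed locally. A minimal 
sketch (the file name is taken from the artifact above; adjust the path to 
wherever the PDF was downloaded):

```python
import hashlib

def sha1_of(path, chunk_size=65536):
    # Stream the file in chunks so large artifacts are not loaded into memory.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published .sha1 value before casting a vote, e.g.:
# sha1_of("apache-solr-ref-guide-6.1.pdf") == "5929b03039e99644bc4ef23b37088b343e2ff0c8"
```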

Here's my +1.

Thanks,
Cassandra




[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-21 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342235#comment-15342235
 ] 

Varun Thacker commented on SOLR-7374:
-

Yeah, we can tackle that in another JIRA. Whether or not we enforce a default 
repository, it could warrant a param in the replication handler and hence 
could be tackled separately.


For the patch, one thing I'd like to address: in TestHdfsBackupRestore, make 
{{runCoreAdminCommand}} use the ReplicationHandler instead, since that's the 
current documented way of running core backups/restores.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.
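
Once a repository parameter is supported, a backup request against the 
ReplicationHandler might be built like this; "command" and "name" are existing 
ReplicationHandler parameters, while the "repository" name and its value 
("hdfs") are assumptions pending the final patch:

```python
from urllib.parse import urlencode

def build_backup_url(host, core, name, repository=None):
    # 'command' and 'name' are existing ReplicationHandler parameters;
    # 'repository' follows the patch discussion above, and its accepted
    # values (e.g. an HDFS-backed repository name) are assumptions.
    params = [("command", "backup"), ("name", name)]
    if repository is not None:
        params.append(("repository", repository))
    return "http://%s/solr/%s/replication?%s" % (host, core, urlencode(params))

url = build_backup_url("localhost:8983", "collection1", "nightly", repository="hdfs")
print(url)
```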



For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9219) Make hdfs blockcache read buffer size configurable.

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342208#comment-15342208
 ] 

ASF subversion and git services commented on SOLR-9219:
---

Commit 740198f33d31de8b07c3ba25ef510f60e0ddafc9 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=740198f ]

SOLR-9219: Make hdfs blockcache read buffer sizes configurable and improve 
cache concurrency.


> Make hdfs blockcache read buffer size configurable.
> ---
>
> Key: SOLR-9219
> URL: https://issues.apache.org/jira/browse/SOLR-9219
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-9219.patch
>
>







[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 222 - Still Failing!

2016-06-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/222/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 63193 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj4141107
 [ecj-lint] Compiling 932 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj4141107
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 226)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 1186)
 [ecj-lint] DirectoryReader reader = s==null ? null : 
s.get().getIndexReader();
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/handler/SQLHandler.java
 (at line 254)
 [ecj-lint] ParallelStream parallelStream = new 
ParallelStream(workerZkHost, workerCollection, tupleStream, numWorkers, comp);
 [ecj-lint]

[VOTE] Release Lucene/Solr 5.5.2 RC2

2016-06-21 Thread Steve Rowe
Please vote for release candidate 2 for Lucene/Solr 5.5.2

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC2-rev8e5d40b22a3968df065dfc078ef81cbb031f0e4a/

You can run the smoke tester directly with this command:

python3 -u dev-tools/scripts/smokeTestRelease.py \
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC2-rev8e5d40b22a3968df065dfc078ef81cbb031f0e4a/

+1 from me - Docs, changes and javadocs look good, and smoke tester says: 
SUCCESS! [0:32:02.113685]

--
Steve
www.lucidworks.com





[jira] [Created] (SOLR-9240) Add the partitionKeys parameter to the topic() Streaming Expression

2016-06-21 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-9240:


 Summary: Add the partitionKeys parameter to the topic() Streaming 
Expression
 Key: SOLR-9240
 URL: https://issues.apache.org/jira/browse/SOLR-9240
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein


Currently the topic() function doesn't accept a partitionKeys parameter like 
the search() function does. This means the topic() function can't be wrapped by 
the parallel() function to run across worker nodes.

It would be useful to support parallelizing the topic function because it would 
provide a general purpose parallelized approach for processing batches of data 
as they enter the index.

For example this would allow a classify() function to be wrapped around a 
topic() function to classify documents in parallel across worker nodes. 

Sample syntax:

{code}
parallel(daemon(update(classify(topic(..., partitionKeys="id")))))
{code}

The example above would send a daemon out to worker nodes that would classify 
all new documents returned by the topic() function. The update function would 
send the output of classify() to a SolrCloud collection for indexing.
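
For context, a streaming expression like the sample above is submitted to a 
collection's /stream handler as an "expr" parameter. The sketch below only 
builds the request; the expression itself uses the proposed partitionKeys 
syntax, which is not yet implemented, and the host/collection names are 
placeholders:

```python
from urllib.parse import urlencode

# Proposed syntax from this issue; the "..." elides the topic() arguments.
expr = 'parallel(daemon(update(classify(topic(..., partitionKeys="id")))))'

def build_stream_request(host, collection, expression):
    # Streaming expressions are sent to the collection's /stream handler
    # as an 'expr' parameter (Solr 6.x).
    url = "http://%s/solr/%s/stream" % (host, collection)
    body = urlencode({"expr": expression})
    return url, body

url, body = build_stream_request("localhost:8983", "workerCollection", expr)
print(url)
```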











[jira] [Updated] (SOLR-9240) Add the partitionKeys parameter to the topic() Streaming Expression

2016-06-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9240:
-
Description: 
Currently the topic() function doesn't accept a partitionKeys parameter like 
the search() function does. This means the topic() function can't be wrapped by 
the parallel() function to run across worker nodes.

It would be useful to support parallelizing the topic function because it would 
provide a general purpose parallelized approach for processing batches of data 
as they enter the index.

For example this would allow a classify() function to be wrapped around a 
topic() function to classify documents in parallel across worker nodes. 

Sample syntax:

{code}
parallel(daemon(update(classify(topic(..., partitionKeys="id")))))
{code}

The example above would send a daemon to worker nodes that would classify all 
new documents returned by the topic() function. The update function would send 
the output of classify() to a SolrCloud collection for indexing.

The partitionKeys parameter would ensure that each worker would receive a 
partition of the results returned by the topic() function. This allows the 
classify() function to be run in parallel.






  was:
Currently the topic() function doesn't accept a partitionKeys parameter like 
the search() function does. This means the topic() function can't be wrapped by 
the parallel() function to run across worker nodes.

It would be useful to support parallelizing the topic function because it would 
provide a general purpose parallelized approach for processing batches of data 
as they enter the index.

For example this would allow a classify() function to be wrapped around a 
topic() function to classify documents in parallel across worker nodes. 

Sample syntax:

{code}
parallel(daemon(update(classify(topic(..., partitionKeys="id")))))
{code}

The example above would send a daemon out to worker nodes that would classify 
all new documents returned by the topic() function. The update function would 
send the output of classify() to a SolrCloud collection for indexing.







> Add the partitionKeys parameter to the topic() Streaming Expression
> ---
>
> Key: SOLR-9240
> URL: https://issues.apache.org/jira/browse/SOLR-9240
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> Currently the topic() function doesn't accept a partitionKeys parameter like 
> the search() function does. This means the topic() function can't be wrapped 
> by the parallel() function to run across worker nodes.
> It would be useful to support parallelizing the topic function because it 
> would provide a general purpose parallelized approach for processing batches 
> of data as they enter the index.
> For example this would allow a classify() function to be wrapped around a 
> topic() function to classify documents in parallel across worker nodes. 
> Sample syntax:
> {code}
> parallel(daemon(update(classify(topic(..., partitionKeys="id")))))
> {code}
> The example above would send a daemon to worker nodes that would classify all 
> new documents returned by the topic() function. The update function would 
> send the output of classify() to a SolrCloud collection for indexing.
> The partitionKeys parameter would ensure that each worker would receive a 
> partition of the results returned by the topic() function. This allows the 
> classify() function to be run in parallel.






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342117#comment-15342117
 ] 

Mark Miller commented on SOLR-7374:
---

I have no strong preference on whether that needs to use a Repository here or 
not, but [~varunthacker] has not seemed very amenable to ignoring it yet.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-21 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342101#comment-15342101
 ] 

Hrishikesh Gadre commented on SOLR-7374:


[~markrmil...@gmail.com]

bq. So I think we do want it configurable in ReplicationHandler config, but if 
it's not config'd, perhaps we could default to the first repo defined? And 
local if no repos are defined? Forcing a default for this seems annoying, same 
as forcing config.

Personally I think the code under consideration is quite unrelated to the 
backup/restore functionality. It was added as part of SOLR-561 to implement 
replication, so my suggestion is to not consider it as part of this JIRA. We 
can file a separate JIRA for tracking. The backup/restore API in 
ReplicationHandler already accepts a repository parameter in my latest patch.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Resolved] (LUCENE-7337) MultiTermQuery are sometimes rewritten into an empty boolean query

2016-06-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7337.

       Resolution: Fixed
Fix Version/s: 6.2
               master (7.0)

> MultiTermQuery are sometimes rewritten into an empty boolean query
> --
>
> Key: LUCENE-7337
> URL: https://issues.apache.org/jira/browse/LUCENE-7337
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7337.patch
>
>
> MultiTermQuery are sometimes rewritten to an empty boolean query (depending 
> on the rewrite method), it can happen when no expansions are found on a fuzzy 
> query for instance.
> It can be problematic when the multi term query is boosted. 
> For instance consider the following query:
> `((title:bar~1)^100 text:bar)`
> This is a boolean query with two optional clauses. The first one is a fuzzy 
> query on the field title with a boost of 100. 
> If there is no expansion for "title:bar~1" the query is rewritten into:
> `(()^100 text:bar)`
> ... and when expansions are found:
> `((title:bars | title:bar)^100 text:bar)`
> The scoring of those two queries will differ because the normalization factor 
> and the norm for the first query will be equal to 1 (the boost is ignored 
> because the empty boolean query is not taken into account for the computation 
> of the normalization factor) whereas the second query will have a 
> normalization factor of 10,000 (100*100) and a norm equal to 0.01. 
> This kind of discrepancy can happen in a single index because the expansions 
> for the fuzzy query are done at the segment level. It can also happen when 
> multiple indices are requested (Solr/ElasticSearch case).
> A simple fix would be to replace the empty boolean query produced by the 
> multi term query with a MatchNoDocsQuery but I am not sure that it's the best 
> way to fix. WDYT ?
>  
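
The arithmetic behind those numbers can be checked directly. Assuming classic 
(pre-BM25) TF-IDF query normalization, where queryNorm = 1/sqrt(sumOfSquaredWeights) 
and a single boosted clause contributes a weight equal to its boost:

```python
import math

# Classic Lucene TF-IDF query normalization:
#   queryNorm = 1 / sqrt(sumOfSquaredWeights)
boost = 100.0
sum_of_squared_weights = boost * boost                # 100 * 100 = 10,000
query_norm = 1.0 / math.sqrt(sum_of_squared_weights)  # 1 / 100 = 0.01

print(sum_of_squared_weights, query_norm)
```

When the boosted clause rewrites to an empty boolean query, its weight drops 
out of sumOfSquaredWeights entirely, which is the source of the discrepancy 
described above.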






[jira] [Commented] (LUCENE-7337) MultiTermQuery are sometimes rewritten into an empty boolean query

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342072#comment-15342072
 ] 

ASF subversion and git services commented on LUCENE-7337:
-

Commit a3fc7efbccfa547add864e58268e40960bff571b in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a3fc7ef ]

LUCENE-7337: empty boolean query now rewrites to MatchNoDocsQuery instead of 
vice/versa


> MultiTermQuery are sometimes rewritten into an empty boolean query
> --
>
> Key: LUCENE-7337
> URL: https://issues.apache.org/jira/browse/LUCENE-7337
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
> Attachments: LUCENE-7337.patch
>
>
> MultiTermQuery are sometimes rewritten to an empty boolean query (depending 
> on the rewrite method), it can happen when no expansions are found on a fuzzy 
> query for instance.
> It can be problematic when the multi term query is boosted. 
> For instance consider the following query:
> `((title:bar~1)^100 text:bar)`
> This is a boolean query with two optional clauses. The first one is a fuzzy 
> query on the field title with a boost of 100. 
> If there is no expansion for "title:bar~1" the query is rewritten into:
> `(()^100 text:bar)`
> ... and when expansions are found:
> `((title:bars | title:bar)^100 text:bar)`
> The scoring of those two queries will differ because the normalization factor 
> and the norm for the first query will be equal to 1 (the boost is ignored 
> because the empty boolean query is not taken into account for the computation 
> of the normalization factor) whereas the second query will have a 
> normalization factor of 10,000 (100*100) and a norm equal to 0.01. 
> This kind of discrepancy can happen in a single index because the expansions 
> for the fuzzy query are done at the segment level. It can also happen when 
> multiple indices are requested (Solr/ElasticSearch case).
> A simple fix would be to replace the empty boolean query produced by the 
> multi term query with a MatchNoDocsQuery but I am not sure that it's the best 
> way to fix. WDYT ?
>  






Re: [JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3354 - Still Failing!

2016-06-21 Thread Tommaso Teofili
sorry for this, thanks Adrien!

Tommaso

On Tue, Jun 21, 2016 at 17:59, Adrien Grand 
wrote:

> I pushed a fix. Please run precommit before pushing, thanks!
>
> On Tue, Jun 21, 2016 at 17:08, Policeman Jenkins Server <
> jenk...@thetaphi.de> wrote:
>
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3354/
>> Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC
>>
>> All tests passed
>>
>> Build Log:
>> [...truncated 51523 lines...]
>> -ecj-javadoc-lint-src:
>> [mkdir] Created dir:
>> /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj775548066
>>  [ecj-lint] Compiling 15 source files to
>> /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj775548066
>>  [ecj-lint] --
>>  [ecj-lint] 1. ERROR in
>> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/BooleanPerceptronClassifier.java
>> (at line 31)
>>  [ecj-lint] import org.apache.lucene.index.LeafReader;
>>  [ecj-lint]^^
>>  [ecj-lint] The import org.apache.lucene.index.LeafReader is never used
>>  [ecj-lint] --
>>  [ecj-lint] --
>>  [ecj-lint] 2. ERROR in
>> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java
>> (at line 31)
>>  [ecj-lint] import org.apache.lucene.index.LeafReader;
>>  [ecj-lint]^^
>>  [ecj-lint] The import org.apache.lucene.index.LeafReader is never used
>>  [ecj-lint] --
>>  [ecj-lint] --
>>  [ecj-lint] 3. ERROR in
>> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/document/SimpleNaiveBayesDocumentClassifier.java
>> (at line 37)
>>  [ecj-lint] import org.apache.lucene.index.LeafReader;
>>  [ecj-lint]^^
>>  [ecj-lint] The import org.apache.lucene.index.LeafReader is never used
>>  [ecj-lint] --
>>  [ecj-lint] --
>>  [ecj-lint] 4. WARNING in
>> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
>> (at line 78)
>>  [ecj-lint] IndexWriter testWriter = new IndexWriter(testIndex, new
>> IndexWriterConfig(analyzer));
>>  [ecj-lint] ^^
>>  [ecj-lint] Resource leak: 'testWriter' is never closed
>>  [ecj-lint] --
>>  [ecj-lint] 5. WARNING in
>> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
>> (at line 79)
>>  [ecj-lint] IndexWriter cvWriter = new
>> IndexWriter(crossValidationIndex, new IndexWriterConfig(analyzer));
>>  [ecj-lint] 
>>  [ecj-lint] Resource leak: 'cvWriter' is never closed
>>  [ecj-lint] --
>>  [ecj-lint] 6. WARNING in
>> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
>> (at line 80)
>>  [ecj-lint] IndexWriter trainingWriter = new
>> IndexWriter(trainingIndex, new IndexWriterConfig(analyzer));
>>  [ecj-lint] ^^
>>  [ecj-lint] Resource leak: 'trainingWriter' is never closed
>>  [ecj-lint] --
>>  [ecj-lint] 7. WARNING in
>> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
>> (at line 87)
>>  [ecj-lint] throw new IllegalStateException("the classFieldName \"" +
>> classFieldName + "\" must index sorted doc values");
>>  [ecj-lint]
>>  
>> ^^
>>  [ecj-lint] Resource leak: 'testWriter' is not closed at this location
>>  [ecj-lint] --
>>  [ecj-lint] 8. WARNING in
>> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
>> (at line 87)
>>  [ecj-lint] throw new IllegalStateException("the classFieldName \"" +
>> classFieldName + "\" must index sorted doc values");
>>  [ecj-lint]
>>  
>> ^^
>>  [ecj-lint] Resource leak: 'trainingWriter' is not closed at this location
>>  [ecj-lint] --
>>  [ecj-lint] 9. WARNING in
>> /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
>> (at line 87)
>>  [ecj-lint] throw new IllegalStateException("the classFieldName \"" +
>> classFieldName + "\" must index sorted doc values");
>>  [ecj-lint]
>>  
>> ^^
>>  

Re: [JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3354 - Still Failing!

2016-06-21 Thread Adrien Grand
I pushed a fix. Please run precommit before pushing, thanks!

On Tue, Jun 21, 2016 at 17:08, Policeman Jenkins Server
wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3354/
> Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC
>
> All tests passed
>
> Build Log:
> [...truncated 51523 lines...]

[jira] [Commented] (LUCENE-7350) Let classifiers be constructed from IndexReaders

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342049#comment-15342049
 ] 

ASF subversion and git services commented on LUCENE-7350:
-

Commit 5e2f340cfaf0943948c990769991ba2cbb443a8e in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5e2f340 ]

LUCENE-7350: Remove unused imports.


> Let classifiers be constructed from IndexReaders
> 
>
> Key: LUCENE-7350
> URL: https://issues.apache.org/jira/browse/LUCENE-7350
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
>
> Current {{Classifier}} implementations are built from {{LeafReaders}}; this
> is a legacy of using certain Lucene 4.x {{AtomicReader}}-specific APIs. This
> is no longer required, as current implementations only rely on
> {{IndexReader}} APIs, so it makes more sense to use {{IndexReader}} as the
> constructor parameter: requiring a {{LeafReader}} gives no additional
> benefit, yet it forces client code to deal with classifiers that are tied to
> individual segments (which doesn't make much sense).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7350) Let classifiers be constructed from IndexReaders

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342032#comment-15342032
 ] 

ASF subversion and git services commented on LUCENE-7350:
-

Commit 281af8b89c3f624d99a2060cc392ff55b34cf051 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=281af8b ]

LUCENE-7350: Remove unused imports.


> Let classifiers be constructed from IndexReaders
> 
>
> Key: LUCENE-7350
> URL: https://issues.apache.org/jira/browse/LUCENE-7350
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
>
> Current {{Classifier}} implementations are built from {{LeafReaders}}; this
> is a legacy of using certain Lucene 4.x {{AtomicReader}}-specific APIs. This
> is no longer required, as current implementations only rely on
> {{IndexReader}} APIs, so it makes more sense to use {{IndexReader}} as the
> constructor parameter: requiring a {{LeafReader}} gives no additional
> benefit, yet it forces client code to deal with classifiers that are tied to
> individual segments (which doesn't make much sense).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1048 - Still Failing

2016-06-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1048/

11 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrReplicationHandlerTest

Error Message:
ObjectTracker found 12 object(s) that were not released!!! [InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 12 object(s) that were not 
released!!! [InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient]
at __randomizedtesting.SeedInfo.seed([7CDD016B851C5D43]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrReplicationHandlerTest

Error Message:
12 threads leaked from SUITE scope at
org.apache.solr.cloud.CdcrReplicationHandlerTest:
   1) Thread[id=190495, name=Connection evictor, state=TIMED_WAITING,
      group=TGRP-CdcrReplicationHandlerTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
        at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=190773, name=Connection evictor, state=TIMED_WAITING,
      group=TGRP-CdcrReplicationHandlerTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
        at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=192207, name=Connection evictor, state=TIMED_WAITING,
      group=TGRP-CdcrReplicationHandlerTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
        at java.lang.Thread.run(Thread.java:745)
   4) Thread[id=192450, name=Connection evictor, state=TIMED_WAITING,
      group=TGRP-CdcrReplicationHandlerTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
        at java.lang.Thread.run(Thread.java:745)
   5) Thread[id=190496, name=Connection evictor, state=TIMED_WAITING,
      group=TGRP-CdcrReplicationHandlerTest]
        at

[jira] [Commented] (LUCENE-7337) MultiTermQuery are sometimes rewritten into an empty boolean query

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341973#comment-15341973
 ] 

ASF subversion and git services commented on LUCENE-7337:
-

Commit 7b5d7b396254998c0d4d1a6139134639aea1904f in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7b5d7b3 ]

LUCENE-7337: empty boolean query now rewrites to MatchNoDocsQuery instead of 
vice/versa


> MultiTermQuery are sometimes rewritten into an empty boolean query
> --
>
> Key: LUCENE-7337
> URL: https://issues.apache.org/jira/browse/LUCENE-7337
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
> Attachments: LUCENE-7337.patch
>
>
> A MultiTermQuery is sometimes rewritten to an empty boolean query (depending
> on the rewrite method); this can happen, for instance, when no expansions
> are found for a fuzzy query.
> This is problematic when the multi-term query is boosted.
> For instance consider the following query:
> `((title:bar~1)^100 text:bar)`
> This is a boolean query with two optional clauses. The first one is a fuzzy 
> query on the field title with a boost of 100. 
> If there is no expansion for "title:bar~1" the query is rewritten into:
> `(()^100 text:bar)`
> ... and when expansions are found:
> `((title:bars | title:bar)^100 text:bar)`
> The scoring of those two queries will differ because the normalization factor 
> and the norm for the first query will be equal to 1 (the boost is ignored 
> because the empty boolean query is not taken into account for the computation 
> of the normalization factor) whereas the second query will have a 
> normalization factor of 10,000 (100*100) and a norm equal to 0.01. 
> This kind of discrepancy can happen in a single index because the expansions 
> for the fuzzy query are done at the segment level. It can also happen when 
> multiple indices are requested (Solr/ElasticSearch case).
> A simple fix would be to replace the empty boolean query produced by the
> multi-term query with a MatchNoDocsQuery, but I am not sure that it's the
> best way to fix this. WDYT?
>  
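The arithmetic behind this discrepancy can be sketched in plain Java, using the classic formula queryNorm = 1 / sqrt(sum of squared weights) described above. This is an illustration only: `queryNorm` below is a local helper, not Lucene's API.

```java
public class BoostNormSketch {
  // Classic Lucene-style query normalization: the normalization factor is
  // the sum of squared clause weights, and queryNorm = 1 / sqrt(factor).
  static double queryNorm(double sumOfSquaredWeights) {
    return 1.0 / Math.sqrt(sumOfSquaredWeights);
  }

  public static void main(String[] args) {
    double boost = 100.0;

    // Case 1: the fuzzy clause found expansions, so its boost of 100
    // contributes 100*100 = 10,000 to the normalization factor.
    double withExpansions = boost * boost;

    // Case 2: the clause rewrote to an empty boolean query, so the boost is
    // ignored and the normalization factor stays at 1.
    double withoutExpansions = 1.0;

    System.out.println(queryNorm(withExpansions));    // prints 0.01
    System.out.println(queryNorm(withoutExpansions)); // prints 1.0
  }
}
```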



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+122) - Build # 17031 - Still Failing!

2016-06-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17031/
Java: 32bit/jdk-9-ea+122 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.spelling.SpellCheckCollatorTest.testEstimatedHitCounts

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([E65E534C6240E03F:D7E5ED79C77FF0EF]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:780)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:747)
at 
org.apache.solr.spelling.SpellCheckCollatorTest.testEstimatedHitCounts(SpellCheckCollatorTest.java:562)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//lst[@name='spellcheck']/lst[@name='collations']/lst[@name='collation']/int[@name='hits'

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3354 - Still Failing!

2016-06-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3354/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 51523 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj775548066
 [ecj-lint] Compiling 15 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj775548066
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/BooleanPerceptronClassifier.java
 (at line 31)
 [ecj-lint] import org.apache.lucene.index.LeafReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.lucene.index.LeafReader is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java
 (at line 31)
 [ecj-lint] import org.apache.lucene.index.LeafReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.lucene.index.LeafReader is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/document/SimpleNaiveBayesDocumentClassifier.java
 (at line 37)
 [ecj-lint] import org.apache.lucene.index.LeafReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.lucene.index.LeafReader is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
 (at line 78)
 [ecj-lint] IndexWriter testWriter = new IndexWriter(testIndex, new 
IndexWriterConfig(analyzer));
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'testWriter' is never closed
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
 (at line 79)
 [ecj-lint] IndexWriter cvWriter = new IndexWriter(crossValidationIndex, 
new IndexWriterConfig(analyzer));
 [ecj-lint] 
 [ecj-lint] Resource leak: 'cvWriter' is never closed
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
 (at line 80)
 [ecj-lint] IndexWriter trainingWriter = new IndexWriter(trainingIndex, new 
IndexWriterConfig(analyzer));
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'trainingWriter' is never closed
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
 (at line 87)
 [ecj-lint] throw new IllegalStateException("the classFieldName \"" + 
classFieldName + "\" must index sorted doc values");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'testWriter' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
 (at line 87)
 [ecj-lint] throw new IllegalStateException("the classFieldName \"" + 
classFieldName + "\" must index sorted doc values");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'trainingWriter' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/classification/src/java/org/apache/lucene/classification/utils/DatasetSplitter.java
 (at line 87)
 [ecj-lint] throw new IllegalStateException("the classFieldName \"" + 
classFieldName + "\" must index sorted doc values");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'cvWriter' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 9 problems (3 errors, 6 warnings)

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:740: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:101: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build.xml:204: The 
following error occurred while executing this line:
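The DatasetSplitter resource-leak warnings above would typically be addressed with try-with-resources. The following is a minimal sketch of that pattern using a stand-in Closeable, since Lucene's IndexWriter is not available here; the variable names follow the lint report.

```java
import java.io.Closeable;

public class TryWithResourcesSketch {
  static int closedCount = 0;

  // Stand-in for Lucene's IndexWriter (which implements Closeable); the real
  // fix in DatasetSplitter would wrap its three writers the same way.
  static class StubWriter implements Closeable {
    final String name;
    StubWriter(String name) { this.name = name; }
    @Override public void close() { closedCount++; }
  }

  public static void main(String[] args) {
    // try-with-resources closes every writer (in reverse declaration order)
    // even if the body throws, e.g. the IllegalStateException flagged at
    // DatasetSplitter.java:87, which silences the "never closed" warnings.
    try (StubWriter testWriter = new StubWriter("testWriter");
         StubWriter cvWriter = new StubWriter("cvWriter");
         StubWriter trainingWriter = new StubWriter("trainingWriter")) {
      // ... write the training/test/cross-validation splits here ...
    }
  }
}
```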

[jira] [Commented] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overidden

2016-06-21 Thread Gethin James (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341931#comment-15341931
 ] 

Gethin James commented on SOLR-9237:


The commit that added the FvhContainer was 
https://github.com/covolution/lucene-solr/commit/e37e49ed46c42da4ea4fbd74f08de1ba10af7923
 by [~janhoy]

> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be 
> overidden
> ---
>
> Key: SOLR-9237
> URL: https://issues.apache.org/jira/browse/SOLR-9237
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.1
>Reporter: Gethin James
>
> With *6.1.0* the *protected* method
> DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter takes a
> *private* class called FvhContainer as a parameter, which makes the method
> very difficult to override.
> I have code that extends DefaultSolrHighlighter which I can't fix due to
> this issue.
> Could you consider making FvhContainer "protected" and giving it a
> constructor?
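The requested change can be sketched as follows. This is a hypothetical illustration, not the actual DefaultSolrHighlighter code; the class and method names simply follow the report, and the string bodies are placeholders.

```java
public class HighlighterSketch {
  // Making the container class protected and giving it a constructor lets
  // subclasses both override the method and build the argument it takes.
  protected static class FvhContainer {           // was private in 6.1
    protected final String fieldName;
    protected FvhContainer(String fieldName) {    // the requested constructor
      this.fieldName = fieldName;
    }
  }

  protected String doHighlightingByFastVectorHighlighter(FvhContainer fvh) {
    return "default:" + fvh.fieldName;
  }

  // Demonstrates that client code can now override the method.
  public static String demo() {
    HighlighterSketch custom = new HighlighterSketch() {
      @Override
      protected String doHighlightingByFastVectorHighlighter(FvhContainer fvh) {
        return "custom:" + fvh.fieldName;
      }
    };
    return custom.doHighlightingByFastVectorHighlighter(new FvhContainer("title"));
  }

  public static void main(String[] args) {
    System.out.println(demo());  // prints "custom:title"
  }
}
```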



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341899#comment-15341899
 ] 

ASF subversion and git services commented on LUCENE-7287:
-

Commit 21eb654e408727b56a78c1c6a00541efe6eda31e in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=21eb654 ]

LUCENE-7287: don't use full paths to resources


> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a
> mapping between Ukrainian word forms and their lemmas. Some tests and docs
> come out of the box =).
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341896#comment-15341896
 ] 

ASF subversion and git services commented on LUCENE-7287:
-

Commit ceb6e21f84414b42f6b1b3866fc5b62e7ab474c0 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ceb6e21 ]

LUCENE-7287: don't use full paths to resources


> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a
> mapping between Ukrainian word forms and their lemmas. Some tests and docs
> come out of the box =).
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7194) Ban Math.toRadians/toDegrees and remove all usages of it

2016-06-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341862#comment-15341862
 ] 

Michael McCandless commented on LUCENE-7194:


[~daddywri] Oh, I think you just add it to 
{{lucene/tools/forbiddenApis/lucene.txt}}?  And then run {{ant precommit}} and 
you should see failures from places using these APIs...
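For reference, banning these methods in that signatures file would look roughly like the following. This is a sketch in the forbidden-apis signatures format (`class#method(args) @ message`); the exact messages are assumptions, not the entries actually committed.

```text
java.lang.Math#toRadians(double) @ result differs across JVM versions, multiply by PI/180 explicitly
java.lang.Math#toDegrees(double) @ result differs across JVM versions, multiply by 180/PI explicitly
```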

> Ban Math.toRadians/toDegrees and remove all usages of it
> 
>
> Key: LUCENE-7194
> URL: https://issues.apache.org/jira/browse/LUCENE-7194
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Karl Wright
>
> The result of these methods is unreliable and changes across jvm versions: we 
> cannot use these methods.
> The following program prints 0.7722082215479366 on previous versions of java 
> but 0.7722082215479367 on java 9 because Math.toRadians is no longer doing 
> the same thing:
> {code}
> public class test {
>   public static void main(String args[]) throws Exception {
> System.out.println(Math.toRadians(44.244272));
>   }
> }
> {code}
> This is because of https://bugs.openjdk.java.net/browse/JDK-4477961. 
> I am not really sure its a bug, because the method says that the conversion 
> is "generally inexact". 
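One way to sidestep the JVM-version dependence (a sketch, not necessarily Lucene's actual fix) is to inline the conversion as a single multiplication by a fixed constant, so the floating-point operation sequence cannot change between JVM versions:

```java
public class StableRadians {
    // A single IEEE 754 multiply by a compile-time constant is
    // deterministic, unlike Math.toRadians, whose internal computation
    // changed in Java 9 (JDK-4477961).
    static final double TO_RADIANS = Math.PI / 180.0;

    static double toRadians(double degrees) {
        return degrees * TO_RADIANS;
    }

    public static void main(String[] args) {
        System.out.println(toRadians(44.244272));
    }
}
```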






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-21 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341858#comment-15341858
 ] 

Varun Thacker commented on SOLR-7374:
-

Regarding the cloud level changes required, I agree with you all - let's just 
do it in another Jira, as long as it's not SOLR-9055, since that one has other 
enhancements. I guess I got thrown off by this comment previously.

bq. The collection level changes are being captured in the patch for SOLR-9055

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341848#comment-15341848
 ] 

Michael McCandless commented on LUCENE-7287:


[~thetaphi] oh yeah I'll fix that!

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Commented] (SOLR-9234) srcField works only when all fields are captured in the /update/json/docs endpoint

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341838#comment-15341838
 ] 

ASF subversion and git services commented on SOLR-9234:
---

Commit 8e5d40b22a3968df065dfc078ef81cbb031f0e4a in lucene-solr's branch 
refs/heads/branch_5_5 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8e5d40b ]

SOLR-9234: java 7 compile errors


> srcField works only when all fields are captured in the /update/json/docs 
> endpoint
> --
>
> Key: SOLR-9234
> URL: https://issues.apache.org/jira/browse/SOLR-9234
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.5.2, 6.2
>
> Attachments: SOLR-9234.patch
>
>
> {code}
> $ cat ~/Desktop/nested.json
> {
>   "id" : "123",
>   "description": "Testing /json/docs srcField",
>   "nested_data" : {
> "nested_inside" : "check check check"
>   }
> }
> $ curl 
> "http://localhost:8983/solr/test/update/json/docs?srcField=original_json_s=/=description_s:/descriptio=id:/id=true=true;
>  -H "Content-type:application/json" -d @/Users/erikhatcher/Desktop/nested.json
> {"responseHeader":{"status":0,"QTime":1},"docs":[{"id":"123","description_s":"Testing
>  /json/docs srcField","original_json_s":"{  \"id\" : \"123\",  
> \"description\": \"Testing /json/docs srcField\",  \"nested_data\" : {\" 
> : \"  }}"}]}
> {code}






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-21 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341836#comment-15341836
 ] 

Varun Thacker commented on SOLR-7374:
-

FYI, I created SOLR-9239 for discussing the two approaches that we have for 
core-level backup/restore.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 6.2
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Created] (SOLR-9239) Deprecate backup/restore via replication handler in favour of an equivalent core admin api

2016-06-21 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-9239:
---

 Summary: Deprecate backup/restore via replication handler in 
favour of an equivalent core admin api
 Key: SOLR-9239
 URL: https://issues.apache.org/jira/browse/SOLR-9239
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor


In SOLR-5750 we added core backup/restore hooks via the core admin API. This 
was done at the time to leverage the backup/restore code from the cloud 
classes. A discussion on why we have two ways for core backup/restore came up 
in SOLR-7374.

Currently we document core backup/restore only via the replication handler. I 
think we should move in favour of it being a core admin operation. Here are 
some of the reasons why I think that's a good idea:
- SolrCloud backup/restore is implemented as a collection API. The logical 
equivalent of it for standalone mode should be core admin, not the replication 
handler.
- More importantly, core admin supports async calls, so using the 
backup/restore API will be a lot cleaner. We don't need a separate 
backup/restore status API.






[jira] [Updated] (SOLR-9238) HashQParserPlugin should build a segment level filter

2016-06-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9238:
-
Summary: HashQParserPlugin should build a segment level filter  (was: 
HashQParserPlugin should create a segment level filter)

> HashQParserPlugin should build a segment level filter
> -
>
> Key: SOLR-9238
> URL: https://issues.apache.org/jira/browse/SOLR-9238
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> Currently the HashQParserPlugin creates a standard top level filter used with 
> the filter cache. This is not real-time friendly on a large index or when 
> there are many partition filters that need to be created.
> This ticket will change the HashQParserPlugin to create a segment level 
> filter.






[jira] [Updated] (SOLR-9238) HashQParserPlugin should create a segment level filter

2016-06-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9238:
-
Description: 
Currently the HashQParserPlugin creates a standard top level filter used with 
the filter cache. This is not real-time friendly on a large index or when there 
are many partition filters that need to be created.

This ticket will change the HashQParserPlugin to create a segment level filter.

  was:
Currently the HashQParserPlugin creates a standard top level filter used with 
the filter cache. This is not real-time friendly on a large index or when there 
are many partitions filters that need to be created.

This ticket will change the HashQParserPlugin to create a segment level filter.


> HashQParserPlugin should create a segment level filter
> --
>
> Key: SOLR-9238
> URL: https://issues.apache.org/jira/browse/SOLR-9238
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> Currently the HashQParserPlugin creates a standard top level filter used with 
> the filter cache. This is not real-time friendly on a large index or when 
> there are many partition filters that need to be created.
> This ticket will change the HashQParserPlugin to create a segment level 
> filter.






[jira] [Updated] (SOLR-9238) HashQParserPlugin should build a segment level filter

2016-06-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9238:
-
Description: 
Currently the HashQParserPlugin creates a standard top level filter used with 
the filter cache. This is not real-time friendly on a large index or when there 
are many partition filters that need to be created.

This ticket will change the HashQParserPlugin to build a segment level filter.

  was:
Currently the HashQParserPlugin creates a standard top level filter used with 
the filter cache. This is not real-time friendly on a large index or when there 
are many partition filters that need to be created.

This ticket will change the HashQParserPlugin to create a segment level filter.


> HashQParserPlugin should build a segment level filter
> -
>
> Key: SOLR-9238
> URL: https://issues.apache.org/jira/browse/SOLR-9238
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> Currently the HashQParserPlugin creates a standard top level filter used with 
> the filter cache. This is not real-time friendly on a large index or when 
> there are many partition filters that need to be created.
> This ticket will change the HashQParserPlugin to build a segment level filter.






[jira] [Created] (SOLR-9238) HashQParserPlugin should create a segment level filter

2016-06-21 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-9238:


 Summary: HashQParserPlugin should create a segment level filter
 Key: SOLR-9238
 URL: https://issues.apache.org/jira/browse/SOLR-9238
 Project: Solr
  Issue Type: Improvement
Reporter: Joel Bernstein


Currently the HashQParserPlugin creates a standard top level filter used with 
the filter cache. This is not real-time friendly on a large index or when there 
are many partitions filters that need to be created.

This ticket will change the HashQParserPlugin to create a segment level filter.
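For background, the hash-partitioning idea behind this plugin can be sketched as follows. This is illustrative only: the real plugin hashes indexed field values inside a query parser (applied as a filter, e.g. an fq of the form {!hash workers=N worker=i}), while this sketch just shows the worker-assignment logic:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class HashPartitionSketch {
    // Assigns a key to one of `workers` partitions; each worker applies
    // this as a filter, so every document lands in exactly one partition.
    static int workerFor(String key, int workers) {
        return Math.floorMod(key.hashCode(), workers);
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("doc1", "doc2", "doc3", "doc4");
        int workers = 2;
        for (int w = 0; w < workers; w++) {
            List<String> partition = new ArrayList<>();
            for (String id : ids) {
                if (workerFor(id, workers) == w) {
                    partition.add(id);
                }
            }
            System.out.println("worker " + w + " -> " + partition);
        }
    }
}
```

The issue is about where that filter lives: a top-level filter must be rebuilt (and re-cached) against the whole index after every reopen, whereas a segment-level filter can be computed per segment, which is friendlier to near-real-time reopens.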






[jira] [Created] (SOLR-9237) DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overridden

2016-06-21 Thread Gethin James (JIRA)
Gethin James created SOLR-9237:
--

 Summary: 
DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter can't be overridden
 Key: SOLR-9237
 URL: https://issues.apache.org/jira/browse/SOLR-9237
 Project: Solr
  Issue Type: Bug
Affects Versions: 6.1
Reporter: Gethin James


With *6.1.0* the *protected* method 
DefaultSolrHighlighter.doHighlightingByFastVectorHighlighter passes in a 
*private* class called FvhContainer, which makes it very difficult to extend.

I have code that extends DefaultSolrHighlighter which I can't fix because of 
this issue.
Could you consider making FvhContainer "protected" and giving it a constructor?
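A minimal illustration (generic Java, not Solr's actual classes) of why a protected method taking a private inner class is effectively impossible to override from outside:

```java
public class PrivateParamDemo {
    static class Base {
        // Private nested type: invisible outside the enclosing class.
        private static class Ctx { int calls; }

        // Protected, but its parameter type is private, so an external
        // subclass cannot even declare a matching override signature.
        protected void doWork(Ctx ctx) { ctx.calls++; }

        public int run() {
            Ctx ctx = new Ctx();
            doWork(ctx);
            return ctx.calls;
        }
    }

    public static void main(String[] args) {
        System.out.println(new Base().run()); // prints 1
    }
}
```

Widening Ctx's visibility to protected (with a usable constructor) is what makes doWork overridable by subclasses in other packages.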






[jira] [Updated] (SOLR-9236) AutoAddReplicas feature with one replica loses some documents not committed during failover

2016-06-21 Thread Eungsop Yoo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated SOLR-9236:
--
Attachment: SOLR-9236.patch

> AutoAddReplicas feature with one replica loses some documents not committed 
> during failover
> ---
>
> Key: SOLR-9236
> URL: https://issues.apache.org/jira/browse/SOLR-9236
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs, SolrCloud
>Reporter: Eungsop Yoo
>Priority: Minor
> Attachments: SOLR-9236.patch
>
>
> I need to index a huge amount of logs, so I decided to use the 
> AutoAddReplicas feature with only one replica.
> When using AutoAddReplicas with one replica, some benefits are expected:
> - no redundant data files for replicas
> -- saving disk usage
> - best indexing performance
> I expected that Solr fails over just like HBase.
> The feature worked almost as expected, except for some missing documents 
> during failover.
> I found two reasons for the missing documents.
> 1. The leader replica does not replay any transaction logs. But when there is 
> only one replica, it should be the leader.
> So I made the leader replica replay the transaction logs when using 
> AutoAddReplicas with one replica.
> But the above fix did not resolve the problem.
> 2. On each failover, the transaction log directory gained an extra level of 
> depth, like this: tlog/tlog/tlog/...
> The transaction log could not be replayed, because the transaction log 
> directory was changed during failover.
> So I made the transaction log directory stay unchanged during failover.
> After these fixes, AutoAddReplicas with one replica fails over well without 
> losing any documents.






[jira] [Created] (SOLR-9236) AutoAddReplicas feature with one replica loses some documents not committed during failover

2016-06-21 Thread Eungsop Yoo (JIRA)
Eungsop Yoo created SOLR-9236:
-

 Summary: AutoAddReplicas feature with one replica loses some 
documents not committed during failover
 Key: SOLR-9236
 URL: https://issues.apache.org/jira/browse/SOLR-9236
 Project: Solr
  Issue Type: Bug
  Components: hdfs, SolrCloud
Reporter: Eungsop Yoo
Priority: Minor


I need to index a huge amount of logs, so I decided to use the AutoAddReplicas 
feature with only one replica.
When using AutoAddReplicas with one replica, some benefits are expected:
- no redundant data files for replicas
-- saving disk usage
- best indexing performance

I expected that Solr fails over just like HBase.
The feature worked almost as expected, except for some missing documents 
during failover.
I found two reasons for the missing documents.

1. The leader replica does not replay any transaction logs. But when there is 
only one replica, it should be the leader.
So I made the leader replica replay the transaction logs when using 
AutoAddReplicas with one replica.

But the above fix did not resolve the problem.

2. On each failover, the transaction log directory gained an extra level of 
depth, like this: tlog/tlog/tlog/...
The transaction log could not be replayed, because the transaction log 
directory was changed during failover.
So I made the transaction log directory stay unchanged during failover.

After these fixes, AutoAddReplicas with one replica fails over well without 
losing any documents.







[jira] [Resolved] (SOLR-9035) New cwiki page: IndexUpgrader

2016-06-21 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-9035.
-
   Resolution: Fixed
Fix Version/s: 6.1

Moved page to be child of 
https://cwiki.apache.org/confluence/display/solr/Upgrading+a+Solr+Cluster for 
the 6.1 Ref Guide.

> New cwiki page: IndexUpgrader
> -
>
> Key: SOLR-9035
> URL: https://issues.apache.org/jira/browse/SOLR-9035
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 6.0
>Reporter: Bram Van Dam
>Assignee: Cassandra Targett
>  Labels: documentation
> Fix For: 6.1
>
> Attachments: indexupgrader.html
>
>
> The cwiki does not contain any IndexUpgrader documentation, but it is 
> mentioned in passing in the "Major Changes"-pages.
> I'm attaching a file containing some basic usage instructions and admonitions 
> found in the IndexUpgrader javadoc. 
> Once the page is created, it would ideally be linked to from the Major 
> Changes page as well as the Upgrading Solr page.






[CANCELLED][VOTE] Release Lucene/Solr 5.5.2 RC1

2016-06-21 Thread Steve Rowe
Okay, I’ll go make RC2 now.

--
Steve
www.lucidworks.com

> On Jun 21, 2016, at 9:25 AM, Noble Paul  wrote:
> 
> Thanks,
> I have committed it already
> 
> On Tue, Jun 21, 2016 at 6:33 PM, Steve Rowe  wrote:
>> Sure, sounds like a good bug to squash.
>> 
>> --
>> Steve
>> www.lucidworks.com
>> 
>>> On Jun 21, 2016, at 8:58 AM, Noble Paul  wrote:
>>> 
>>> HI Steve,
>>> Sorry for the trouble.
>>> Is it possible to include
>>> 
>>> SOLR-9234: srcField works only when all fields are captured in the
>>> /update/json/docs endpoint
>>> 
>>> 
>>> 
>>> On Tue, Jun 21, 2016 at 5:48 PM, Steve Rowe  wrote:
 Please vote for release candidate 1 for Lucene/Solr 5.5.2.
 
 The artifacts can be downloaded from:
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/
 
 You can run the smoke tester directly with this command:
 
 python3 -u dev-tools/scripts/smokeTestRelease.py \
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/
 
 +1 from me: Docs, changes and javadocs look good, and smoke tester says: 
 SUCCESS! [0:26:23.825686]
 
 --
 Steve
 www.lucidworks.com
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
>>> 
>>> 
>>> 
>>> --
>>> -
>>> Noble Paul
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 
> 
> 
> -- 
> -
> Noble Paul
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 





[JENKINS] Lucene-Solr-5.5-Windows (64bit/jdk1.8.0_92) - Build # 94 - Still Failing!

2016-06-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/94/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.test

Error Message:
There should be 3 documents because there should be two id=1 docs due to 
overwrite=false expected:<3> but was:<1>

Stack Trace:
java.lang.AssertionError: There should be 3 documents because there should be 
two id=1 docs due to overwrite=false expected:<3> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([BB3AD86D363B42DA:336EE7B798C72F22]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testOverwriteOption(CloudSolrClientTest.java:174)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.test(CloudSolrClientTest.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-9234) srcField works only when all fields are captured in the /update/json/docs endpoint

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341757#comment-15341757
 ] 

ASF subversion and git services commented on SOLR-9234:
---

Commit cd981cec50617f070fcd535d0cdcafce9019e5d1 in lucene-solr's branch 
refs/heads/branch_5_5 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cd981ce ]

SOLR-9234: srcField parameter works only when all fields are captured in the 
/update/json/docs endpoint


> srcField works only when all fields are captured in the /update/json/docs 
> endpoint
> --
>
> Key: SOLR-9234
> URL: https://issues.apache.org/jira/browse/SOLR-9234
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.5.2, 6.2
>
> Attachments: SOLR-9234.patch
>
>
> {code}
> $ cat ~/Desktop/nested.json
> {
>   "id" : "123",
>   "description": "Testing /json/docs srcField",
>   "nested_data" : {
> "nested_inside" : "check check check"
>   }
> }
> $ curl 
> "http://localhost:8983/solr/test/update/json/docs?srcField=original_json_s=/=description_s:/descriptio=id:/id=true=true;
>  -H "Content-type:application/json" -d @/Users/erikhatcher/Desktop/nested.json
> {"responseHeader":{"status":0,"QTime":1},"docs":[{"id":"123","description_s":"Testing
>  /json/docs srcField","original_json_s":"{  \"id\" : \"123\",  
> \"description\": \"Testing /json/docs srcField\",  \"nested_data\" : {\" 
> : \"  }}"}]}
> {code}






[jira] [Commented] (SOLR-9234) srcField works only when all fields are captured in the /update/json/docs endpoint

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341753#comment-15341753
 ] 

ASF subversion and git services commented on SOLR-9234:
---

Commit 0db382e96a3e12e073c96b3dfc8bb7b0c69c8bbd in lucene-solr's branch 
refs/heads/branch_5_5 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0db382e ]

SOLR-9234: srcField parameter works only when all fields are captured in the 
/update/json/docs endpoint


> srcField works only when all fields are captured in the /update/json/docs 
> endpoint
> --
>
> Key: SOLR-9234
> URL: https://issues.apache.org/jira/browse/SOLR-9234
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.5.2, 6.2
>
> Attachments: SOLR-9234.patch
>
>
> {code}
> $ cat ~/Desktop/nested.json
> {
>   "id" : "123",
>   "description": "Testing /json/docs srcField",
>   "nested_data" : {
> "nested_inside" : "check check check"
>   }
> }
> $ curl 
> "http://localhost:8983/solr/test/update/json/docs?srcField=original_json_s=/=description_s:/descriptio=id:/id=true=true;
>  -H "Content-type:application/json" -d @/Users/erikhatcher/Desktop/nested.json
> {"responseHeader":{"status":0,"QTime":1},"docs":[{"id":"123","description_s":"Testing
>  /json/docs srcField","original_json_s":"{  \"id\" : \"123\",  
> \"description\": \"Testing /json/docs srcField\",  \"nested_data\" : {\" 
> : \"  }}"}]}
> {code}






Re: [VOTE] Release Lucene/Solr 5.5.2 RC1

2016-06-21 Thread Noble Paul
Thanks,
I have committed it already

On Tue, Jun 21, 2016 at 6:33 PM, Steve Rowe  wrote:
> Sure, sounds like a good bug to squash.
>
> --
> Steve
> www.lucidworks.com
>
>> On Jun 21, 2016, at 8:58 AM, Noble Paul  wrote:
>>
>> HI Steve,
>> Sorry for the trouble.
>> Is it possible to include
>>
>> SOLR-9234: srcField works only when all fields are captured in the
>> /update/json/docs endpoint
>>
>>
>>
>> On Tue, Jun 21, 2016 at 5:48 PM, Steve Rowe  wrote:
>>> Please vote for release candidate 1 for Lucene/Solr 5.5.2.
>>>
>>> The artifacts can be downloaded from:
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/
>>>
>>> You can run the smoke tester directly with this command:
>>>
>>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/
>>>
>>> +1 from me: Docs, changes and javadocs look good, and smoke tester says: 
>>> SUCCESS! [0:26:23.825686]
>>>
>>> --
>>> Steve
>>> www.lucidworks.com
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>>
>>
>> --
>> -
>> Noble Paul
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>



-- 
-
Noble Paul




[jira] [Resolved] (SOLR-9234) srcField works only when all fields are captured in the /update/json/docs endpoint

2016-06-21 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-9234.
--
   Resolution: Fixed
Fix Version/s: 6.2
   5.5.2

> srcField works only when all fields are captured in the /update/json/docs 
> endpoint
> --
>
> Key: SOLR-9234
> URL: https://issues.apache.org/jira/browse/SOLR-9234
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.5.2, 6.2
>
> Attachments: SOLR-9234.patch
>
>
> {code}
> $ cat ~/Desktop/nested.json
> {
>   "id" : "123",
>   "description": "Testing /json/docs srcField",
>   "nested_data" : {
> "nested_inside" : "check check check"
>   }
> }
> $ curl 
> "http://localhost:8983/solr/test/update/json/docs?srcField=original_json_s=/=description_s:/descriptio=id:/id=true=true;
>  -H "Content-type:application/json" -d @/Users/erikhatcher/Desktop/nested.json
> {"responseHeader":{"status":0,"QTime":1},"docs":[{"id":"123","description_s":"Testing
>  /json/docs srcField","original_json_s":"{  \"id\" : \"123\",  
> \"description\": \"Testing /json/docs srcField\",  \"nested_data\" : {\" 
> : \"  }}"}]}
> {code}






[jira] [Commented] (SOLR-9234) srcField works only when all fields are captured in the /update/json/docs endpoint

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341745#comment-15341745
 ] 

ASF subversion and git services commented on SOLR-9234:
---

Commit 73d5f1c52cd7a9531f07aea6c9f88d1ff253ac64 in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=73d5f1c ]

SOLR-9234: srcField parameter works only when all fields are captured in the 
/update/json/docs endpoint


> srcField works only when all fields are captured in the /update/json/docs 
> endpoint
> --
>
> Key: SOLR-9234
> URL: https://issues.apache.org/jira/browse/SOLR-9234
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-9234.patch
>
>
> {code}
> $ cat ~/Desktop/nested.json
> {
>   "id" : "123",
>   "description": "Testing /json/docs srcField",
>   "nested_data" : {
> "nested_inside" : "check check check"
>   }
> }
> $ curl 
> "http://localhost:8983/solr/test/update/json/docs?srcField=original_json_s=/=description_s:/descriptio=id:/id=true=true;
>  -H "Content-type:application/json" -d @/Users/erikhatcher/Desktop/nested.json
> {"responseHeader":{"status":0,"QTime":1},"docs":[{"id":"123","description_s":"Testing
>  /json/docs srcField","original_json_s":"{  \"id\" : \"123\",  
> \"description\": \"Testing /json/docs srcField\",  \"nested_data\" : {\" 
> : \"  }}"}]}
> {code}






[jira] [Commented] (SOLR-9234) srcField works only when all fields are captured in the /update/json/docs endpoint

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341743#comment-15341743
 ] 

ASF subversion and git services commented on SOLR-9234:
---

Commit 060cacfdab25ab3ce345cd79d4d10ded9a40c09a in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=060cacf ]

SOLR-9234: srcField parameter works only when all fields are captured in the 
/update/json/docs endpoint


> srcField works only when all fields are captured in the /update/json/docs 
> endpoint
> --
>
> Key: SOLR-9234
> URL: https://issues.apache.org/jira/browse/SOLR-9234
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-9234.patch
>
>
> {code}
> $ cat ~/Desktop/nested.json
> {
>   "id" : "123",
>   "description": "Testing /json/docs srcField",
>   "nested_data" : {
> "nested_inside" : "check check check"
>   }
> }
> $ curl 
> "http://localhost:8983/solr/test/update/json/docs?srcField=original_json_s=/=description_s:/descriptio=id:/id=true=true;
>  -H "Content-type:application/json" -d @/Users/erikhatcher/Desktop/nested.json
> {"responseHeader":{"status":0,"QTime":1},"docs":[{"id":"123","description_s":"Testing
>  /json/docs srcField","original_json_s":"{  \"id\" : \"123\",  
> \"description\": \"Testing /json/docs srcField\",  \"nested_data\" : {\" 
> : \"  }}"}]}
> {code}






[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 325 - Failure!

2016-06-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/325/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:33167/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:33167/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([9A10A170080020D0:12449EAAA6FC4D28]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Created] (SOLR-9235) Indexing stuck after delete by range query

2016-06-21 Thread Anders Melchiorsen (JIRA)
Anders Melchiorsen created SOLR-9235:


 Summary: Indexing stuck after delete by range query
 Key: SOLR-9235
 URL: https://issues.apache.org/jira/browse/SOLR-9235
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 6.1, 6.0.1
Reporter: Anders Melchiorsen


After upgrading from Solr 4.0.0 to 6.0.1/6.1.0, this old query suddenly got our 
indexing stuck:

{noformat}
lastdate_a:{* TO 20160620} AND lastdate_p:{* TO 20160620} AND 
country:9
{noformat}

with this error logged:

{noformat}
2016-06-20 02:20:36.429 ERROR (commitScheduler-15-thread-1) [   x:mycore] 
o.a.s.u.CommitTracker auto commit error...:java.lang.NullPointerException
at 
org.apache.solr.query.SolrRangeQuery.createDocSet(SolrRangeQuery.java:156)
at 
org.apache.solr.query.SolrRangeQuery.access$200(SolrRangeQuery.java:57)
at 
org.apache.solr.query.SolrRangeQuery$ConstWeight.getSegState(SolrRangeQuery.java:412)
at 
org.apache.solr.query.SolrRangeQuery$ConstWeight.scorer(SolrRangeQuery.java:484)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:617)
at org.apache.lucene.search.BooleanWeight.scorer(BooleanWeight.java:389)
at 
org.apache.solr.update.DeleteByQueryWrapper$1.scorer(DeleteByQueryWrapper.java:89)
at 
org.apache.lucene.index.BufferedUpdatesStream.applyQueryDeletes(BufferedUpdatesStream.java:694)
at 
org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:262)
at 
org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3187)
at 
org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3173)
at 
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2825)
at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2989)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2956)
at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:619)
at org.apache.solr.update.CommitTracker.run(CommitTracker.java:217)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

The types were:

{noformat}
  
  
  
  
{noformat}

but changing the date fields into "integer" seems to avoid the problem:

{noformat}
  
{noformat}
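For reference, the exclusive-bound semantics of the {{* TO 20160620}} queries above can be modeled with a small sketch (illustrative Python, not Solr code; the field names and dates are taken from the report):

```python
def in_range(value, lower=None, upper=None, inc_lower=True, inc_upper=True):
    """Model Solr range syntax: [a TO b] includes the endpoints, {a TO b}
    excludes them, and '*' (None here) leaves that end of the range open."""
    if lower is not None and (value < lower or (not inc_lower and value == lower)):
        return False
    if upper is not None and (value > upper or (not inc_upper and value == upper)):
        return False
    return True

# lastdate_a:{* TO 20160620} -- open lower bound, exclusive upper bound
dates = [20160618, 20160619, 20160620, 20160621]
matched = [d for d in dates if in_range(d, upper=20160620, inc_upper=False)]
print(matched)  # [20160618, 20160619]
```

So the delete targets every document strictly before 20160620; the NPE occurs while Solr evaluates that range as a deletion filter during commit.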







Re: position padding instead of positionIncrementGap

2016-06-21 Thread David Smiley
The other answers are fine; I've done this a couple times before to play
tricks with span queries.  It's a PITA if you want to integrate this with
Solr; you may end up writing a URP that puts together the merged
TokenStream and then passes along a Field instance with this TS.  Solr's
DocumentBuilder will pass this straight through to the Lucene document and
skip the FieldType.  Alternatively, if you really want to do the majority of
the work in a custom FieldType, you could write a URP that just wraps up
the values into something custom that will get passed into
FieldType.createFields by the DocumentBuilder.

Good luck.

On Mon, Jun 20, 2016 at 5:27 PM Mikhail Khludnev  wrote:

> Hello, devs!
>
> I'm sure this has been discussed many times, or has been in the air. If I
> have a few 3-token values in a multivalued field, how can I assign positions:
> 0,1,2...10,11,12,...20,21,22...
> instead of
> 0,1,2, 12,13,14, 24,25,26.., given that positionIncrementGap=10?
>
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer,
> Grid Dynamics
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com
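The gap arithmetic being asked about can be sketched as follows (illustrative Python of the position bookkeeping only, not Lucene API; assumes every value's tokens have increment 1):

```python
def default_positions(values, gap=10):
    """Default Lucene behaviour: the first token of each subsequent value
    is offset from the previous value's last position by the gap,
    giving 0,1,2, 12,13,14, 24,25,26 ... for 3-token values with gap=10."""
    positions, last = [], -1
    for i, tokens in enumerate(values):
        first = last + (gap if i > 0 else 1)
        positions.append(list(range(first, first + len(tokens))))
        last = positions[-1][-1]
    return positions

def padded_positions(values, gap=10):
    """Padded variant: each value starts at the next multiple of `gap`,
    giving 0,1,2, 10,11,12, 20,21,22 ... for 3-token values with gap=10."""
    positions, start = [], 0
    for tokens in values:
        positions.append(list(range(start, start + len(tokens))))
        # jump to the next multiple of `gap` after this value's last token
        start = (positions[-1][-1] // gap + 1) * gap
    return positions

values = [["a", "b", "c"]] * 3
print(default_positions(values))  # [[0, 1, 2], [12, 13, 14], [24, 25, 26]]
print(padded_positions(values))   # [[0, 1, 2], [10, 11, 12], [20, 21, 22]]
```

The padded scheme is what makes span-query tricks predictable: a value's ordinal can be recovered as position // gap, which the fixed positionIncrementGap does not allow once values have different token counts.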


Re: [VOTE] Release Lucene/Solr 5.5.2 RC1

2016-06-21 Thread Steve Rowe
Sure, sounds like a good bug to squash.

--
Steve
www.lucidworks.com

> On Jun 21, 2016, at 8:58 AM, Noble Paul  wrote:
> 
> HI Steve,
> Sorry for the trouble.
> Is it possible to include
> 
> SOLR-9234: srcField works only when all fields are captured in the
> /update/json/docs endpoint
> 
> 
> 
> On Tue, Jun 21, 2016 at 5:48 PM, Steve Rowe  wrote:
>> Please vote for release candidate 1 for Lucene/Solr 5.5.2.
>> 
>> The artifacts can be downloaded from:
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/
>> 
>> You can run the smoke tester directly with this command:
>> 
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/
>> 
>> +1 from me: Docs, changes and javadocs look good, and smoke tester says: 
>> SUCCESS! [0:26:23.825686]
>> 
>> --
>> Steve
>> www.lucidworks.com
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 
> 
> 
> -- 
> -
> Noble Paul
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 





Re: [VOTE] Release Lucene/Solr 5.5.2 RC1

2016-06-21 Thread Noble Paul
HI Steve,
Sorry for the trouble.
Is it possible to include

SOLR-9234: srcField works only when all fields are captured in the
/update/json/docs endpoint



On Tue, Jun 21, 2016 at 5:48 PM, Steve Rowe  wrote:
> Please vote for release candidate 1 for Lucene/Solr 5.5.2.
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/
>
> +1 from me: Docs, changes and javadocs look good, and smoke tester says: 
> SUCCESS! [0:26:23.825686]
>
> --
> Steve
> www.lucidworks.com
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>



-- 
-
Noble Paul




[jira] [Updated] (SOLR-9234) srcField works only when all fields are captured in the /update/json/docs endpoint

2016-06-21 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-9234:
-
Attachment: SOLR-9234.patch

> srcField works only when all fields are captured in the /update/json/docs 
> endpoint
> --
>
> Key: SOLR-9234
> URL: https://issues.apache.org/jira/browse/SOLR-9234
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-9234.patch
>
>
> {code}
> $ cat ~/Desktop/nested.json
> {
>   "id" : "123",
>   "description": "Testing /json/docs srcField",
>   "nested_data" : {
> "nested_inside" : "check check check"
>   }
> }
> $ curl 
> "http://localhost:8983/solr/test/update/json/docs?srcField=original_json_s=/=description_s:/descriptio=id:/id=true=true;
>  -H "Content-type:application/json" -d @/Users/erikhatcher/Desktop/nested.json
> {"responseHeader":{"status":0,"QTime":1},"docs":[{"id":"123","description_s":"Testing
>  /json/docs srcField","original_json_s":"{  \"id\" : \"123\",  
> \"description\": \"Testing /json/docs srcField\",  \"nested_data\" : {\" 
> : \"  }}"}]}
> {code}






[VOTE] Release Lucene/Solr 5.5.2 RC1

2016-06-21 Thread Steve Rowe
Please vote for release candidate 1 for Lucene/Solr 5.5.2.

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/

You can run the smoke tester directly with this command:

python3 -u dev-tools/scripts/smokeTestRelease.py \
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.2-RC1-revcfb792be2c783fc0dfee97a779281e0ef0148006/

+1 from me: Docs, changes and javadocs look good, and smoke tester says: 
SUCCESS! [0:26:23.825686]

--
Steve
www.lucidworks.com





[jira] [Created] (SOLR-9234) srcField works only when all fields are captured in the /update/json/docs endpoint

2016-06-21 Thread Noble Paul (JIRA)
Noble Paul created SOLR-9234:


 Summary: srcField works only when all fields are captured in the 
/update/json/docs endpoint
 Key: SOLR-9234
 URL: https://issues.apache.org/jira/browse/SOLR-9234
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul


{code}
$ cat ~/Desktop/nested.json
{
  "id" : "123",
  "description": "Testing /json/docs srcField",

  "nested_data" : {
"nested_inside" : "check check check"
  }
}

$ curl 
"http://localhost:8983/solr/test/update/json/docs?srcField=original_json_s=/=description_s:/descriptio=id:/id=true=true;
 -H "Content-type:application/json" -d @/Users/erikhatcher/Desktop/nested.json
{"responseHeader":{"status":0,"QTime":1},"docs":[{"id":"123","description_s":"Testing
 /json/docs srcField","original_json_s":"{  \"id\" : \"123\",  \"description\": 
\"Testing /json/docs srcField\",  \"nested_data\" : {\" : \"  }}"}]}
{code}
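The f=field:/path mapping used in the curl request above can be modeled roughly like this (an illustrative sketch of the path-to-field idea, not Solr's implementation; the sample document is the nested.json shown above, and the "/description" path is written out in full here):

```python
def map_fields(doc, mappings):
    """Resolve simple one-level-per-segment paths ('/a/b' -> doc['a']['b'])
    and assign the values to Solr field names, mimicking the f=field:/path
    request parameters of /update/json/docs."""
    out = {}
    for field, path in mappings.items():
        node = doc
        for key in path.strip("/").split("/"):
            node = node[key]
        out[field] = node
    return out

src = {"id": "123",
       "description": "Testing /json/docs srcField",
       "nested_data": {"nested_inside": "check check check"}}

print(map_fields(src, {"id": "/id", "description_s": "/description"}))
# {'id': '123', 'description_s': 'Testing /json/docs srcField'}
```

The bug is that srcField was only stored faithfully when every field in the source document had such a mapping; unmapped parts (here, nested_data) came out mangled in the captured source, as the response above shows.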






[jira] [Comment Edited] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341653#comment-15341653
 ] 

Uwe Schindler edited comment on LUCENE-7287 at 6/21/16 12:10 PM:
-

[~mikemccand]: Can you remove the absolute path here?

{code:java}
return 
Dictionary.read(UkrainianMorfologikAnalyzer.class.getResource("/org/apache/lucene/analysis/uk/ukrainian.dict"));
{code}

The file is in the same package, so just the filename should be fine to 
resolve the URL.


was (Author: thetaphi):
Mike: Can you remove the absolute path here?

{code:java}
return 
Dictionary.read(UkrainianMorfologikAnalyzer.class.getResource("/org/apache/lucene/analysis/uk/ukrainian.dict"));
{code}

The file is in the same package, so just the filename should be fine to 
resolve the URL.

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17030 - Still Failing!

2016-06-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17030/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_A3298308E1B689BB-001/solr-instance-030/./collection1/data,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_A3298308E1B689BB-001/solr-instance-030/./collection1/data/index.20160621114739857,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_A3298308E1B689BB-001/solr-instance-030/./collection1/data/index.20160621114739779]
 expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_A3298308E1B689BB-001/solr-instance-030/./collection1/data,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_A3298308E1B689BB-001/solr-instance-030/./collection1/data/index.20160621114739857,
 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_A3298308E1B689BB-001/solr-instance-030/./collection1/data/index.20160621114739779]
 expected:<2> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([A3298308E1B689BB:545A6D50275E265D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:900)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1332)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-06-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341653#comment-15341653
 ] 

Uwe Schindler commented on LUCENE-7287:
---

Mike: Can you remove the absolute path here?

{code:java}
return 
Dictionary.read(UkrainianMorfologikAnalyzer.class.getResource("/org/apache/lucene/analysis/uk/ukrainian.dict"));
{code}

The file is in the same package, so just the filename should be fine to 
resolve the URL.

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303






[jira] [Resolved] (LUCENE-7350) Let classifiers be constructed from IndexReaders

2016-06-21 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved LUCENE-7350.
-
Resolution: Fixed

> Let classifiers be constructed from IndexReaders
> 
>
> Key: LUCENE-7350
> URL: https://issues.apache.org/jira/browse/LUCENE-7350
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
>
> Current {{Classifier}} implementations are built from {{LeafReaders}}; this 
> is a heritage of using certain Lucene 4.x {{AtomicReader}}-specific APIs. 
> This is no longer required, as current implementations rely only on 
> {{IndexReader}} APIs, so it makes more sense to use {{IndexReader}} as the 
> constructor parameter: the {{LeafReader}} requirement gives no additional 
> benefit while forcing client code to deal with classifiers that are tied to 
> individual segments (which doesn't make much sense).






[jira] [Commented] (LUCENE-7350) Let classifiers be constructed from IndexReaders

2016-06-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341579#comment-15341579
 ] 

ASF subversion and git services commented on LUCENE-7350:
-

Commit fcf4389d82e440d078f61ed9ad8c6dedce10d124 in lucene-solr's branch 
refs/heads/master from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fcf4389 ]

LUCENE-7350 - Let classifiers be constructed from IndexReaders


> Let classifiers be constructed from IndexReaders
> 
>
> Key: LUCENE-7350
> URL: https://issues.apache.org/jira/browse/LUCENE-7350
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
>
> Current {{Classifier}} implementations are built from {{LeafReaders}}; this 
> is a heritage of using certain Lucene 4.x {{AtomicReader}}-specific APIs. 
> This is no longer required, as current implementations rely only on 
> {{IndexReader}} APIs, so it makes more sense to use {{IndexReader}} as the 
> constructor parameter: the {{LeafReader}} requirement gives no additional 
> benefit while forcing client code to deal with classifiers that are tied to 
> individual segments (which doesn't make much sense).





