[jira] [Updated] (SOLR-10719) ADDREPLICA fails if the instanceDir is a symlink

2017-05-25 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10719:
--
Attachment: SOLR-10719.patch

First cut at a patch. It seems to handle the cases in this JIRA, i.e. if the 
dest is a symlink it'll still create the core and write the core.properties. 
Additionally, if the core.properties file cannot be created it throws an error.

Will look more tomorrow but so far this approach looks promising.

> ADDREPLICA fails if the instanceDir is a symlink
> 
>
> Key: SOLR-10719
> URL: https://issues.apache.org/jira/browse/SOLR-10719
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10719.patch
>
>
> Well, it doesn't actually fail until you try to restart the Solr instance. 
> The root cause is that creating core.properties fails.
> This is due to SOLR-8260. CorePropertiesLocator.writePropertiesFile changed 
> from:
> propfile.getParentFile().mkdirs();
> to
> Files.createDirectories(propfile.getParent());
> The former (apparently) thinks it's OK if a symlink points to a directory, 
> but the latter throws an exception.
> So the behavior here is that the call appears to succeed, and the replica is 
> created and functional, until you restart the instance, at which point the 
> replica is not discovered.
> I hacked in a simple check that skips the call to createDirectories when the 
> parent already exists, and with it ADDREPLICA works just fine: restarting 
> Solr finds the replica.
> A "for real" check would probably have to be more robust than this, since we 
> probably want to avoid overwriting an existing replica and the like; I didn't 
> check whether that's already accounted for, though.
> There's another issue here: failing to write the properties file should, 
> IMO, fail the ADDREPLICA call.
> [~romseygeek] I'm guessing that this is an unintended side-effect of 
> SOLR-8260 but wanted to check before diving in deeper.
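
The mkdirs()/createDirectories() difference described above can be illustrated
with a minimal standalone demo. This is not Solr code, and the exception
behavior is JDK-dependent: on JDK 8, Files.createDirectories reportedly
rechecks existence without following links and so throws for a symlink to an
existing directory, while later JDKs follow the link and succeed.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkDirDemo {
    public static void main(String[] args) throws IOException {
        // Set up: a real directory and a symlink pointing at it.
        // (Creating symlinks may require extra privileges on Windows.)
        Path tmp = Files.createTempDirectory("symlink-demo");
        Path realDir = Files.createDirectory(tmp.resolve("real"));
        Path link = Files.createSymbolicLink(tmp.resolve("link"), realDir);

        // Old code path: File.mkdirs() returns false when the path already
        // exists (the symlink resolves to a directory) and never throws.
        // Solr ignored the return value, so a symlinked instanceDir worked.
        boolean created = new File(link.toString()).mkdirs();
        System.out.println("mkdirs() returned: " + created);

        // New code path: Files.createDirectories() may throw
        // FileAlreadyExistsException for a symlink to a directory (JDK 8),
        // or succeed (later JDKs); this demo just reports which happened.
        try {
            Files.createDirectories(link);
            System.out.println("createDirectories() succeeded");
        } catch (FileAlreadyExistsException e) {
            System.out.println("createDirectories() threw "
                    + e.getClass().getSimpleName());
        }
    }
}
```

The key point for this issue is the first half: the old mkdirs() call never
threw, it only returned false, and that return value was ignored.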



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10718) Configuring Basic auth prevents adding a collection

2017-05-25 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-10718:
---
Attachment: repro.sh

Attaching a script to reproduce the issue on Linux boxes.

I can verify that the issue exists on branch_6_5.  I cannot reproduce it on 
master, though; it has likely already been fixed.

> Configuring Basic auth prevents adding a collection
> ---
>
> Key: SOLR-10718
> URL: https://issues.apache.org/jira/browse/SOLR-10718
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.5, 6.5.1
>Reporter: Shawn Feldman
>Priority: Minor
> Attachments: repro.sh
>
>
> Configure Basic auth according to documentation 
> Add basic auth params 
> SOLR_AUTH_TYPE="basic"
> SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"
> Try to add a collection 
> Receive a timeout and error in the logs 
> {code}
> java.lang.IllegalArgumentException: Credentials may not be null
> at org.apache.http.util.Args.notNull(Args.java:54)
> at org.apache.http.auth.AuthState.update(AuthState.java:113)
> at 
> org.apache.solr.client.solrj.impl.PreemptiveAuth.process(PreemptiveAuth.java:56)
> at 
> org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
> at 
> org.apache.http.protocol.HttpRequestExecutor.preProcess(HttpRequestExecutor.java:166)
> at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:485)
> at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:515)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
> {code}
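
The reported steps can be condensed into a shell sketch. This is hedged: it
assumes a stock Solr 6.5.x install with Basic auth already enabled via
security.json, the collection name is arbitrary, and the attached repro.sh is
the authoritative script.

```shell
# The two settings from the documentation; bin/solr reads these
# environment variables (normally set in solr.in.sh).
export SOLR_AUTH_TYPE="basic"
export SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"

# On 6.5.x, collection creation then times out and the
# "Credentials may not be null" stack trace appears in the logs:
#   bin/solr create -c test
echo "auth type: $SOLR_AUTH_TYPE"
```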






[jira] [Commented] (SOLR-10749) Should ref guide asciidoc files' line length be limited?

2017-05-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025764#comment-16025764
 ] 

David Smiley commented on SOLR-10749:
-

Yes, I think we should limit their line length.  Even though I have IDE 
features to wrap them, I need to remember to toggle the setting, which is a 
hassle.  And I practically never (not even once a year) need to use this IDE 
feature otherwise in my work.

If we do reformatting, let's do it in commits dedicated to that purpose, so as 
not to confuse reformatting with actual content edits.

> Should ref guide asciidoc files' line length be limited?
> 
>
> Key: SOLR-10749
> URL: https://issues.apache.org/jira/browse/SOLR-10749
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
>
> From [~dsmiley] and [~janhoy] on SOLR-10290:
> {quote}
> David: Can we auto-linewrap the asciidoc content we've imported somehow? The 
> lines are super-long in my IDE (IntelliJ). I can toggle the active editor's 
> "soft wrap" at least (View menu, then Active Editor menu).
> Jan: Yea, those lines are long
> {quote}
> From a conversation [~ctargett] and I had on SOLR-10379:
> {quote}
> Steve: I updated the ref guide docs. While I was at it, I installed and used 
> the IntelliJ plugin named "Wrap To Column" to wrap at 120 chars (a.k.a. "fill 
> paragraph") in the two .adoc files I edited.
> Cassandra: What is the point of this, or even, the big deal about asking your 
> IDE to do soft wraps instead?
> Steve: Not all editors support soft-wrapping. There is project consensus to 
> wrap code at 120-chars; why make an exception for these doc files?
> {quote}






Re: [VOTE] Release Lucene/Solr 6.6.0 RC2

2017-05-25 Thread David Smiley
Yes definitely -- add descriptions, and make the smoke tester more clear.

BTW I saw you commit javadocs to PeerSync and some other files in which you
placed @lucene.experimental before the actual description instead of last.
I've *never* seen that before... perhaps it's valid but it's very
non-standard to say the least.

Thanks for being the RM.  I'll try it one of these days.

~ David

On Thu, May 25, 2017 at 8:23 AM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> David,
> Regarding SOLR-10004, those warnings are due to missing javadocs
> descriptions. I've added javadocs for a few classes to see if the warnings
> for those files disappear. If that is actually the root cause for this, we
> have three options ahead of us:
>
> 1. Let's add descriptions to all affected classes before the next RC.
> 2. Let's either suppress those warnings or mentally ignore them ourselves.
> 3. Edit the warning message in the smoke tester to make the cause clear.
>
> I suggest we go with the next RC, and try option 3.
>
> Thoughts?
>
> Regards,
> Ishan
>
>
> On Thu, May 25, 2017 at 12:42 PM, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> > -1 on my own new example :-(
>>
>> Okay, let's re-spin.
>>
>> > I still get the javadocs warning I last reported but I'll accept that
>> as the new(?) normal.
>>
>> I couldn't understand why it was happening or how to fix it; see SOLR-10004.
>> The package-info.java files are present, so I can't understand why these
>> files are reported missing. I would really appreciate it if someone
>> knowledgeable could look into this before the next RC.
>>
>>
>> On Thu, May 25, 2017 at 11:02 AM, Alexandre Rafalovitch <
>> arafa...@gmail.com> wrote:
>>
>>> -1 on my own new example :-(
>>>
>>> DIH Atom example suddenly no longer works because StackOverflow has
>>> JUST moved to https and our URL pulling implementation apparently does
>>> not follow the redirect and fails silently.
>>>
>>> https://stackoverflow.blog/2017/05/22/stack-overflow-flipped-switch-https/
>>>
>>> The change is 1 character in the
>>> example/example-DIH/solr/atom/conf/atom-data-config.xml from "http" in
>>> the url to "https".  I've committed it to master, branch_6x and
>>> branch_6_6 just now.
>>>
>>> Regards,
>>>Alex.
>>> P.s. The logs also complain about other libraries not loaded (for
>>> different DIH cores), but I think that's a long standing issue and is
>>> not a blocker.
>>> 
>>> http://www.solr-start.com/ - Resources for Solr users, new and
>>> experienced
>>>
>>>
>>> On 24 May 2017 at 23:50, David Smiley  wrote:
>>> > +1
>>> >
>>> > SUCCESS! [0:54:26.342469]
>>> >
>>> > I still get the javadocs warning I last reported but I'll accept that
>>> as the
>>> > new(?) normal.
>>> > ~ David
>>> >
>>> > On Wed, May 24, 2017 at 2:58 PM Ishan Chattopadhyaya
>>> >  wrote:
>>> >>
>>> >> Please vote for release candidate 2 for Lucene/Solr 6.6.0
>>> >>
>>> >> The artifacts can be downloaded from:
>>> >>
>>> >>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC2-rev87107084c3a90f7ef253c00423b12cc1790f8c2f
>>> >>
>>> >> You can run the smoke tester directly with this command:
>>> >>
>>> >> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>> >>
>>> >>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.6.0-RC2-rev87107084c3a90f7ef253c00423b12cc1790f8c2f
>>> >>
>>> >> Here's my +1
>>> >> SUCCESS! [0:58:58.949598]
>>> >>
>>> > --
>>> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>>> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>>> > http://www.solrenterprisesearchserver.com
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_131) - Build # 6587 - Unstable!

2017-05-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6587/
Java: 32bit/jdk1.8.0_131 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestJmxIntegration.testJmxRegistration

Error Message:
org.apache.lucene.store.AlreadyClosedException: Already closed

Stack Trace:
javax.management.RuntimeMBeanException: 
org.apache.lucene.store.AlreadyClosedException: Already closed
at 
__randomizedtesting.SeedInfo.seed([730C2F25446A667B:FDDD4B1F292B3E1E]:0)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
org.apache.solr.core.TestJmxIntegration.testJmxRegistration(TestJmxIntegration.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: A way to tell DIH (IF id already exist in current index THEN SKIP to next file)

2017-05-25 Thread David Smiley
Hello,
You've reached the wrong list.  This is the dev list; you should use the
solr-user list.
~ David

On Thu, May 25, 2017 at 10:57 PM Alejandro Rivas Martinez <
alex.rivas...@gmail.com> wrote:

> Hello! My name is Alejandro and I need your help ASAP!
> I'm new to Solr and I have a problem with the Data Import Handler.
> Some context:
> I'm using a multi-core Solr setup in which each core indexes a different
> kind of file (one core for audio, another for software, ...). I'm using DIH
> to automatically index thousands of files from FTP servers, extracting
> metadata with the Tika entity processor, and it works just fine
> (absoluteUrlPath as the unique id).
> I'm taking a social-networking approach, so users can modify metadata of
> any kind in a front-end ASP.NET MVC web app, and I'm using SolrNet to
> connect Solr with .NET. When users finish editing metadata I update the
> index with their changes and commit; so far so good.
> THE TROUBLE -> When I run the Data Import Handler again to detect new
> documents on the FTP server, it resets all the values to the DIH config
> defaults, and the users' modifications get lost.
> So, am I doing something wrong? Is there a way to tell the Data Import
> Handler something like: IF the id already exists THEN SKIP to the next
> file? Any suggestion or information that helps me meet that requirement is
> welcome.
> Thank you so much for your time, and sorry about my English (it is not my
> native language).
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+168) - Build # 3593 - Unstable!

2017-05-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3593/
Java: 64bit/jdk-9-ea+168 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at https://127.0.0.1:45699/solr/awhollynewcollection_0: 
Expected mime type application/octet-stream but got text/html.   
 
Error 510HTTP ERROR: 510 Problem 
accessing /solr/awhollynewcollection_0/select. Reason: 
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:6},code=510}
 http://eclipse.org/jetty;>Powered by Jetty:// 
9.3.14.v20161028   

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:45699/solr/awhollynewcollection_0: Expected 
mime type application/octet-stream but got text/html. 


Error 510 


HTTP ERROR: 510
Problem accessing /solr/awhollynewcollection_0/select. Reason:

{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:6},code=510}
http://eclipse.org/jetty;>Powered by Jetty:// 
9.3.14.v20161028



at 
__randomizedtesting.SeedInfo.seed([9929F482A91E43F0:D15C8036AF2D6C65]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:477)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:407)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1383)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1134)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1237)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1237)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1237)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1237)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1237)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1073)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:522)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:563)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 

A way to tell DIH (IF id already exist in current index THEN SKIP to next file)

2017-05-25 Thread Alejandro Rivas Martinez
Hello! My name is Alejandro and I need your help ASAP!
I'm new to Solr and I have a problem with the Data Import Handler.
Some context:
I'm using a multi-core Solr setup in which each core indexes a different
kind of file (one core for audio, another for software, ...). I'm using DIH
to automatically index thousands of files from FTP servers, extracting
metadata with the Tika entity processor, and it works just fine
(absoluteUrlPath as the unique id).
I'm taking a social-networking approach, so users can modify metadata of any
kind in a front-end ASP.NET MVC web app, and I'm using SolrNet to connect
Solr with .NET. When users finish editing metadata I update the index with
their changes and commit; so far so good.
THE TROUBLE -> When I run the Data Import Handler again to detect new
documents on the FTP server, it resets all the values to the DIH config
defaults, and the users' modifications get lost.
So, am I doing something wrong? Is there a way to tell the Data Import
Handler something like: IF the id already exists THEN SKIP to the next file?
Any suggestion or information that helps me meet that requirement is welcome.
Thank you so much for your time, and sorry about my English (it is not my
native language).


[jira] [Updated] (SOLR-10753) Add array Stream Evaluator

2017-05-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10753:
--
Attachment: SOLR-10753.patch

Patch with tests

> Add array Stream Evaluator
> --
>
> Key: SOLR-10753
> URL: https://issues.apache.org/jira/browse/SOLR-10753
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0)
>
> Attachments: SOLR-10753.patch
>
>
> The *array* Stream Evaluator returns an array of numbers. It can contain 
> numbers and evaluators that return numbers.
> Syntax:
> {code}
> a = array(1, 2, 3, 4, 5, 6)
> {code}
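
Following the description, a sketch mixing literals with a nested numeric
evaluator (assuming the add evaluator, which returns a number; the variable
name is arbitrary):

{code}
a = array(1, add(1, 1), 3)
{code}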






[jira] [Created] (SOLR-10753) Add array Stream Evaluator

2017-05-25 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10753:
-

 Summary: Add array Stream Evaluator
 Key: SOLR-10753
 URL: https://issues.apache.org/jira/browse/SOLR-10753
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


The *array* Stream Evaluator returns an array of numbers. It can contain 
numbers and evaluators that return numbers.

Syntax:

{code}
a = array(1, 2, 3, 4, 5, 6)
{code}






[jira] [Assigned] (SOLR-10753) Add array Stream Evaluator

2017-05-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10753:
-

Assignee: Joel Bernstein

> Add array Stream Evaluator
> --
>
> Key: SOLR-10753
> URL: https://issues.apache.org/jira/browse/SOLR-10753
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0)
>
>
> The *array* Stream Evaluator returns an array of numbers. It can contain 
> numbers and evaluators that return numbers.
> Syntax:
> {code}
> a = array(1, 2, 3, 4, 5, 6)
> {code}






[jira] [Updated] (SOLR-10753) Add array Stream Evaluator

2017-05-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10753:
--
Fix Version/s: master (7.0)

> Add array Stream Evaluator
> --
>
> Key: SOLR-10753
> URL: https://issues.apache.org/jira/browse/SOLR-10753
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Fix For: master (7.0)
>
>
> The *array* Stream Evaluator returns an array of numbers. It can contain 
> numbers and evaluators that return numbers.
> Syntax:
> {code}
> a = array(1, 2, 3, 4, 5, 6)
> {code}






Re: dataDir param for collection CREATE command

2017-05-25 Thread Takumi Yoshida
Hi All,

Thank you for replying. I hadn’t thought about the case where multiple cores 
exist on the same server.
I looked at SOLR-6671 and found that it addresses this well! I’m looking 
forward to it being merged and released.

Thank you,

Takumi

On 2017/05/26 7:17, "Erick Erickson"  wrote:

Ahhh, as usual Jan is far ahead of the curve ;)

On Thu, May 25, 2017 at 2:18 PM, Jan Høydahl  wrote:
> Yea, having exact dataDir as a system property is a flawed design dating
> back before distributed Solr…
> See https://issues.apache.org/jira/browse/SOLR-6671 for my proposal to 
solve
> the user requirement
> of placing ALL data dirs on a separate volume. The patch is almost ready 
for
> commit…
>
> Please review and comment :)
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 25. mai 2017 kl. 22.21 skrev Shawn Heisey :
>
> On 5/25/2017 2:05 AM, takumi yoshida wrote:
>
> I wonder if we could add a new dataDir parameter for the collection CREATE
> command. There is already a dataDir parameter for the ADDREPLICA command,
> so if we add dataDir to CREATE too, it would be easier to handle the data
> directory when creating a new collection on a new disk or NFS, etc.
>
>
> I'm with Erick.  The danger of creating multiple cores on the same
> server with exactly the same dataDir is simply too high.  It doesn't
> make sense to add a dataDir parameter to the Collections API CREATE.
>
> You can move a dataDir by editing the core.properties file on an individual
> server after the collection is created to add a dataDir property, and then
> restarting Solr.  The location would be relative to the directory where the
> core.properties file is found.  This could be dangerous, but done correctly,
> it would work.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
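
Shawn's manual procedure can be sketched as a hand-edited core.properties
file. The core name and target path below are hypothetical:

```properties
# core.properties for one core; after adding dataDir and restarting
# Solr, the index data is read from the new location. A relative
# dataDir is resolved against the directory containing this file.
name=mycollection_shard1_replica1
dataDir=/mnt/bigdisk/solr/mycollection_shard1_replica1/data
```

As noted above, two cores pointing at the same dataDir would be dangerous, so
this is only safe when each core gets its own directory.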






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 854 - Still Unstable!

2017-05-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/854/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([7B7DEB998F6E574C:F05A3848CE68FCC8]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:437)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Created] (SOLR-10752) replicationFactor default should be 0 if tlogReplicas is specified when creating a collection

2017-05-25 Thread JIRA
Tomás Fernández Löbbe created SOLR-10752:


 Summary: replicationFactor default should be 0 if tlogReplicas is 
specified when creating a collection
 Key: SOLR-10752
 URL: https://issues.apache.org/jira/browse/SOLR-10752
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9706) fetchIndex blocks incoming queries when issued on a replica in SolrCloud

2017-05-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025590#comment-16025590
 ] 

Tomás Fernández Löbbe commented on SOLR-9706:
-

Took a look at this in the context of replica types (SOLR-9835 and SOLR-10233). 
In TLOG and PULL replicas the searcher is not closed, so there should be no 
blocking. I believe that's fine since, as with Master/Slave, there are no 
commits on those replica types, so they should not be flushing any segments.

> fetchIndex blocks incoming queries when issued on a replica in SolrCloud
> 
>
> Key: SOLR-9706
> URL: https://issues.apache.org/jira/browse/SOLR-9706
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Erick Erickson
>
> This is something of an edge case, but it's perfectly possible to issue a 
> fetchIndex command through the core admin API to a replica in SolrCloud. 
> While the fetch is going on, incoming queries are blocked. Then when the 
> fetch completes, all the queued-up queries execute.
> In the normal case, this is probably the proper behavior as a fetchIndex 
> during "normal" SolrCloud operation indicates that the replica's index is too 
> far out of date and _shouldn't_ serve queries; this, however, is a special case.
> Why would one want to do this? Well, in _extremely_ high indexing throughput 
> situations, the additional time taken for the leader forwarding the query on 
> to a follower is too high. So there is an indexing cluster and a search 
> cluster and an external process that issues a fetchIndex to each replica in 
> the search cluster periodically.
> What do people think about an "expert" option for fetchIndex that would cause 
> a replica to behave like the old master/slave days and continue serving 
> queries while the fetchindex was going on? Or another solution?
> FWIW, here's the stack traces where the blocking is going on (6.3 about). 
> This is not hard to reproduce if you introduce an artificial delay in the 
> fetch command then submit a fetchIndex and try to query.
> Blocked query thread(s)
> DefaultSolrCoreState.lock(159)
> DefaultSolrCoreState.getIndexWriter (104)
> SolrCore.openNewSearcher(1781)
> SolrCore.getSearcher(1931)
> SolrCore.getSearchers(1677)
> SolrCore.getSearcher(1577)
> SolrQueryRequestBase.getSearcher(115)
> QueryComponent.process(308).
> The stack trace that releases this is
> DefaultSolrCoreState.createMainIndexWriter(240)
> DefaultSolrCoreState.changeWriter(203)
> DefaultSolrCoreState.openIndexWriter(228) // LOCK RELEASED 2 lines later
> IndexFetcher.fetchLatestIndex(493) (approx, I have debugging code in there. 
> It's in the "finally" clause anyway.)
> IndexFetcher.fetchLatestIndex(251).
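The queue-then-release behavior those traces describe is essentially a single writer lock held for the entire fetch. A minimal, Solr-free sketch of that pattern (class, thread, and event names below are invented for illustration; the lock stands in for the IndexWriter lock in DefaultSolrCoreState):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class FetchBlockDemo {

    // One lock held for the whole "fetch"; the "query" blocks on it and
    // only runs once the fetch releases it.
    public static List<String> run() throws InterruptedException {
        ReentrantLock writerLock = new ReentrantLock();
        List<String> events = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch fetchStarted = new CountDownLatch(1);

        Thread fetch = new Thread(() -> {
            writerLock.lock();                 // taken on the openIndexWriter path
            fetchStarted.countDown();
            try {
                events.add("fetch-start");
                Thread.sleep(200);             // the artificial delay in the fetch
                events.add("fetch-done");
            } catch (InterruptedException ignored) {
            } finally {
                writerLock.unlock();           // released in the finally clause
            }
        });

        Thread query = new Thread(() -> {
            try {
                fetchStarted.await();          // make sure the fetch is underway
            } catch (InterruptedException ignored) {
            }
            writerLock.lock();                 // getIndexWriter blocks here
            try {
                events.add("query");
            } finally {
                writerLock.unlock();
            }
        });

        fetch.start();
        query.start();
        fetch.join();
        query.join();
        return events;                         // [fetch-start, fetch-done, query]
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

The query thread always finishes last, which is the queued-up behavior described above; an "expert" fetchIndex option would amount to serving queries without contending for this lock.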






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1324 - Still Unstable!

2017-05-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1324/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at __randomizedtesting.SeedInfo.seed([995AB6EB64F2F8D5:2388D993E7DC16C0]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:898)
at org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:353)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound=1]
xml response was: 

00


request was: q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:891)
... 40 more




Build Log:
[...truncated 12011 lines...]
   [junit4] Suite: 

[jira] [Updated] (SOLR-9509) Fix problems in shell scripts reported by "shellcheck"

2017-05-25 Thread KuroSaka TeruHiko (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KuroSaka TeruHiko updated SOLR-9509:

Attachment: SOLR-9509.patch

This patch only includes mods to the bin/solr script. It has been lightly 
tested. Is there a test suite for the bin/solr script that I can run?

> Fix problems in shell scripts reported by "shellcheck"
> --
>
> Key: SOLR-9509
> URL: https://issues.apache.org/jira/browse/SOLR-9509
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>  Labels: newdev
> Attachments: bin_solr_shellcheck.txt, shellcheck_solr_20160915.txt, 
> shellcheck_solr_bin_bash_20160915.txt, shellcheck_solr_bin_sh_20160915.txt, 
> shellcheck_solr_usr_bin_env_bash_20160915.txt, SOLR-9509.patch
>
>
> Running {{shellcheck}} on our shell scripts reveals various improvements we 
> should consider.






[jira] [Comment Edited] (SOLR-9509) Fix problems in shell scripts reported by "shellcheck"

2017-05-25 Thread KuroSaka TeruHiko (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025547#comment-16025547
 ] 

KuroSaka TeruHiko edited comment on SOLR-9509 at 5/25/17 11:17 PM:
---

This patch only includes mods to the bin/solr script. It has been lightly 
tested. Is there a test suite for the bin/solr script that I can run? (Kuro)


was (Author: tkurosaka):
This patch only includes mods to the bin/solr script. It has been lightly 
tested. Is there a test suite for the bin/solr script that I can run?

> Fix problems in shell scripts reported by "shellcheck"
> --
>
> Key: SOLR-9509
> URL: https://issues.apache.org/jira/browse/SOLR-9509
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>  Labels: newdev
> Attachments: bin_solr_shellcheck.txt, shellcheck_solr_20160915.txt, 
> shellcheck_solr_bin_bash_20160915.txt, shellcheck_solr_bin_sh_20160915.txt, 
> shellcheck_solr_usr_bin_env_bash_20160915.txt, SOLR-9509.patch
>
>
> Running {{shellcheck}} on our shell scripts reveals various improvements we 
> should consider.






[jira] [Comment Edited] (SOLR-10719) ADDREPLICA fails if the instanceDir is a symlink

2017-05-25 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025518#comment-16025518
 ] 

Erick Erickson edited comment on SOLR-10719 at 5/25/17 11:00 PM:
-

This is actually a bit weirder. Files.createDirectories _does_ succeed if you 
specify a subdir of the symlink. So if
sym -> dir1

and I ask createDirectories to create sym/eoe1/eoe2/eoe3, all the directories 
are created just fine. But when core.properties is being written, it wants to 
write to sym/core.properties and the createDirectories fails on creating sym as 
it's a symlink.

I see multiple places in the code where we call Files.createDirectories, even 
some tagged with

//note, this will fail if this is a symlink

All in all, symlinks are going to be a problem in several places in the code.

So I'm thinking of providing a method in FileUtils to deal with this kind of 
thing that would then be available for other users as appropriate.

Oh, and I'm _not_ suggesting that we make this a blanket change as I'm not sure 
these other places _should_ be changed.


was (Author: erickerickson):
This is actually a bit weirder. Files.createDirectories _does_ succeed if you 
specify a subdir of the symlink. So if
sym -> dir1

and I ask createDirectories to create sym/eoe1/eoe2/eoe3, all the directories 
are created just fine. But when core.properties is being written, it wants to 
write to sym/core.properties and the createDirectories fails on creating sym as 
it's a symlink.

I see multiple places in the code where we call Files.createDirectories, even 
some tagged with

//note, this will fail if this is a symlink

All in all, symlinks are going to be a problem in several places in the code.

So I'm thinking of providing a method in FileUtils to deal with this kind of 
thing that would then be available for other users as appropriate.

> ADDREPLICA fails if the instanceDir is a symlink
> 
>
> Key: SOLR-10719
> URL: https://issues.apache.org/jira/browse/SOLR-10719
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> Well, it doesn't actually fail until you try to restart the Solr instance. 
> The root is that creating core.properties fails.
> This is due to SOLR-8260. CorePropertiesLocator.writePropertiesFile changed 
> from:
> propfile.getParentFile().mkdirs();
> to
> Files.createDirectories(propfile.getParent());
> The former (apparently) thinks it's OK if a symlink points to a directory, 
> but the latter throws an exception.
> So the behavior here is that the call appears to succeed, the replica is 
> created and is functional. Until you restart the instance when it's not 
> discovered.
> I hacked in a simple test to see if the parent existed already and skip the 
> call to createDirectories if so and ADDREPLICA works just fine. Restarting 
> Solr finds the replica.
> The test "for real" would probably have to be better than this, as we probably 
> really want to keep from overwriting an existing replica and the like; I didn't 
> check whether that's already accounted for, though.
> There's another issue here that failing to write the properties file should 
> fail the ADDREPLICA IMO.
> [~romseygeek] I'm guessing that this is an unintended side-effect of 
> SOLR-8260 but wanted to check before diving in deeper.
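The difference is easy to reproduce outside Solr on a POSIX filesystem. In the sketch below, ensureParentDirs is a hypothetical guard along the lines discussed above (skip Files.createDirectories when the parent already resolves to a directory), not Solr's actual FileUtils method:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkDirDemo {

    // Hypothetical guard: only create the parent chain when it does not
    // already resolve to a directory. Files.isDirectory follows symlinks,
    // so "sym -> dir1" counts as an existing directory.
    static void ensureParentDirs(Path propfile) throws IOException {
        Path parent = propfile.getParent();
        if (parent != null && !Files.isDirectory(parent)) {
            Files.createDirectories(parent);
        }
    }

    static String probe() throws IOException {
        Path dir1 = Files.createTempDirectory("dir1");
        Path sym = dir1.resolveSibling("sym-" + System.nanoTime());
        Files.createSymbolicLink(sym, dir1);   // sym -> dir1 (POSIX only)

        String unguarded;
        try {
            // createDirectories checks the existing path with NOFOLLOW_LINKS,
            // so a symlink-to-directory raises FileAlreadyExistsException.
            Files.createDirectories(sym);
            unguarded = "ok";
        } catch (IOException e) {
            unguarded = e.getClass().getSimpleName();
        }

        // The guarded version sees an existing directory and does nothing.
        ensureParentDirs(sym.resolve("core.properties"));
        return unguarded + ",guarded-ok";
    }

    public static void main(String[] args) throws IOException {
        System.out.println(probe());   // FileAlreadyExistsException,guarded-ok
    }
}
```

The same probe also shows why sym/eoe1/eoe2/eoe3 succeeds: createDirectories only walks up to the first component that exists, and the existence check there follows the link.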






[jira] [Commented] (SOLR-10719) ADDREPLICA fails if the instanceDir is a symlink

2017-05-25 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025518#comment-16025518
 ] 

Erick Erickson commented on SOLR-10719:
---

This is actually a bit weirder. Files.createDirectories _does_ succeed if you 
specify a subdir of the symlink. So if
sym -> dir1

and I ask createDirectories to create sym/eoe1/eoe2/eoe3, all the directories 
are created just fine. But when core.properties is being written, it wants to 
write to sym/core.properties and the createDirectories fails on creating sym as 
it's a symlink.

I see multiple places in the code where we call Files.createDirectories, even 
some tagged with

//note, this will fail if this is a symlink

All in all, symlinks are going to be a problem in several places in the code.

So I'm thinking of providing a method in FileUtils to deal with this kind of 
thing that would then be available for other users as appropriate.

> ADDREPLICA fails if the instanceDir is a symlink
> 
>
> Key: SOLR-10719
> URL: https://issues.apache.org/jira/browse/SOLR-10719
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> Well, it doesn't actually fail until you try to restart the Solr instance. 
> The root is that creating core.properties fails.
> This is due to SOLR-8260. CorePropertiesLocator.writePropertiesFile changed 
> from:
> propfile.getParentFile().mkdirs();
> to
> Files.createDirectories(propfile.getParent());
> The former (apparently) thinks it's OK if a symlink points to a directory, 
> but the latter throws an exception.
> So the behavior here is that the call appears to succeed, the replica is 
> created and is functional. Until you restart the instance when it's not 
> discovered.
> I hacked in a simple test to see if the parent existed already and skip the 
> call to createDirectories if so and ADDREPLICA works just fine. Restarting 
> Solr finds the replica.
> The test "for real" would probably have to be better than this, as we probably 
> really want to keep from overwriting an existing replica and the like; I didn't 
> check whether that's already accounted for, though.
> There's another issue here that failing to write the properties file should 
> fail the ADDREPLICA IMO.
> [~romseygeek] I'm guessing that this is an unintended side-effect of 
> SOLR-8260 but wanted to check before diving in deeper.






[jira] [Commented] (SOLR-10749) Should ref guide asciidoc files' line length be limited?

2017-05-25 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025504#comment-16025504
 ] 

Steve Rowe commented on SOLR-10749:
---

FYI, in IntelliJ 2017.1, which I use, soft wrapping can be toggled in the 
current editing window via:

 View | Active Editor | Use Soft Wraps

(I had thought that IntelliJ might have a way to configure soft wraps by file 
type, but apparently this isn't possible.)

> Should ref guide asciidoc files' line length be limited?
> 
>
> Key: SOLR-10749
> URL: https://issues.apache.org/jira/browse/SOLR-10749
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
>
> From [~dsmiley] and [~janhoy] on SOLR-10290:
> {quote}
> David: Can we auto-linewrap the asciidoc content we've imported somehow? The 
> lines are super-long in my IDE (IntelliJ). I can toggle the active editor's 
> "soft wrap" at least (View menu, then Active Editor menu).
> Jan: Yea, those lines are long
> {quote}
> From a conversation [~ctargett] and I had on SOLR-10379:
> {quote}
> Steve: I updated the ref guide docs. While I was at it, I installed and used 
> the IntelliJ plugin named "Wrap To Column" to wrap at 120 chars (a.k.a. "fill 
> paragraph") in the two .adoc files I edited.
> Cassandra: What is the point of this, or even, the big deal about asking your 
> IDE to do soft wraps instead?
> Steve: Not all editors support soft-wrapping. There is project consensus to 
> wrap code at 120-chars; why make an exception for these doc files?
> {quote}






Re: dataDir param for collection CREATE command

2017-05-25 Thread Erick Erickson
Ahhh, as usual Jan is far ahead of the curve ;)

On Thu, May 25, 2017 at 2:18 PM, Jan Høydahl  wrote:
> Yea, having exact dataDir as a system property is a flawed design dating
> back before distributed Solr…
> See https://issues.apache.org/jira/browse/SOLR-6671 for my proposal to solve
> the user requirement
> of placing ALL data dirs on a separate volume. The patch is almost ready for
> commit…
>
> Please review and comment :)
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 25. mai 2017 kl. 22.21 skrev Shawn Heisey :
>
> On 5/25/2017 2:05 AM, takumi yoshida wrote:
>
I wonder if we should add a new dataDir parameter for the collection CREATE
command. There is already a dataDir parameter for the ADDREPLICA command, so
if we added dataDir to CREATE too, it would be easier to manage the data
directory when creating a new collection on a new disk or NFS, etc.
>
>
> I'm with Erick.  The danger of creating multiple cores on the same
> server with exactly the same dataDir is simply too high.  It doesn't
> make sense to add a dataDir parameter to the Collections API CREATE.
>
> By editing core.properties files on an individual server after the
> collection is created to add the dataDir property, and restarting Solr,
> you can move a dataDir.  The location would be relative to the place the
> core.properties file is found.  This could be dangerous, but done
> correctly, would work.
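For reference, the manual move Shawn describes boils down to adding one line to that core's core.properties while Solr is stopped (the core name and path below are made up for illustration; a relative dataDir is resolved against the directory holding the core.properties file):

```properties
# core.properties for a single replica, edited while Solr is down
name=mycollection_shard1_replica1
# absolute path, or a path relative to this core's instance directory
dataDir=/mnt/bigdisk/mycollection_shard1_replica1/data
```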
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>




[jira] [Commented] (SOLR-10749) Should ref guide asciidoc files' line length be limited?

2017-05-25 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025470#comment-16025470
 ] 

Steve Rowe commented on SOLR-10749:
---

The most recent discussions about code line length:

"Line length in Lucene/Solr code", February 2013: 
[https://lists.apache.org/thread.html/32a568c5772f3d23224b92b8350298062ffb13cc5227f27b94aabbef@1361788759@%3Cdev.lucene.apache.org%3E]

"Change line length setting in eclipse to 120 chars", April 2015: 
[https://lists.apache.org/thread.html/8fea0227dccd362f5457d0b608a8afe31f7aad448a54a8c29d16d057@1429340836@%3Cdev.lucene.apache.org%3E]



> Should ref guide asciidoc files' line length be limited?
> 
>
> Key: SOLR-10749
> URL: https://issues.apache.org/jira/browse/SOLR-10749
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
>
> From [~dsmiley] and [~janhoy] on SOLR-10290:
> {quote}
> David: Can we auto-linewrap the asciidoc content we've imported somehow? The 
> lines are super-long in my IDE (IntelliJ). I can toggle the active editor's 
> "soft wrap" at least (View menu, then Active Editor menu).
> Jan: Yea, those lines are long
> {quote}
> From a conversation [~ctargett] and I had on SOLR-10379:
> {quote}
> Steve: I updated the ref guide docs. While I was at it, I installed and used 
> the IntelliJ plugin named "Wrap To Column" to wrap at 120 chars (a.k.a. "fill 
> paragraph") in the two .adoc files I edited.
> Cassandra: What is the point of this, or even, the big deal about asking your 
> IDE to do soft wraps instead?
> Steve: Not all editors support soft-wrapping. There is project consensus to 
> wrap code at 120-chars; why make an exception for these doc files?
> {quote}






[jira] [Commented] (SOLR-10415) Within solr-core, debug/trace level logging should use parameterized log messages

2017-05-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025457#comment-16025457
 ] 

ASF GitHub Bot commented on SOLR-10415:
---

Github user tflobbe commented on the issue:

https://github.com/apache/lucene-solr/pull/182
  
This PR can be closed, right?


> Within solr-core, debug/trace level logging should use parameterized log 
> messages
> -
>
> Key: SOLR-10415
> URL: https://issues.apache.org/jira/browse/SOLR-10415
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Braun
>Priority: Trivial
>
> Noticed in several samplings of an active Solr that several debug statements 
> were taking decently measurable time because of the time of the .toString 
> even when the log.debug() statement would not output because it was 
> effectively INFO or higher. Using parameterized logging statements, ie 
> 'log.debug("Blah {}", o)' will avoid incurring that cost.
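The cost difference is easy to demonstrate with the JDK's own logger, which checks the level before formatting (java.util.logging uses {0} placeholders where slf4j uses {}; the class below is a made-up illustration, not Solr code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLogDemo {

    static int toStringCalls = 0;

    // A parameter whose toString() is costly enough to show up in a profiler.
    static final class Expensive {
        @Override
        public String toString() {
            toStringCalls++;
            return "expensive";
        }
    }

    static int probe() {
        Logger log = Logger.getLogger("demo.lazy");
        log.setLevel(Level.INFO);              // debug-level (FINE) is disabled
        Expensive o = new Expensive();

        // Concatenation builds the message (and runs toString) at the call
        // site, before the logger can check the level.
        log.fine("Blah " + o);

        // Parameterized form: Logger.log returns right after the isLoggable
        // check, so the argument is never formatted at a disabled level.
        log.log(Level.FINE, "Blah {0}", o);

        return toStringCalls;                  // 1, from the concatenation only
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

Only the concatenated call pays the toString() cost, which is exactly the sampling result described above.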






[GitHub] lucene-solr issue #182: SOLR-10415 - improve debug logging to use parameteri...

2017-05-25 Thread tflobbe
Github user tflobbe commented on the issue:

https://github.com/apache/lucene-solr/pull/182
  
This PR can be closed, right?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Resolved] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-10379.
---
   Resolution: Fixed
 Assignee: Steve Rowe
Fix Version/s: 6.7
   master (7.0)

> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.7
>
> Attachments: SOLR-10379.patch, SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Commented] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025443#comment-16025443
 ] 

ASF subversion and git services commented on SOLR-10379:


Commit f15abbd197d1ea65cec4ad9d30d7cab6e58afbd8 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f15abbd ]

SOLR-10379: Add ManagedSynonymGraphFilterFactory, deprecate 
ManagedSynonymFilterFactory


> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch, SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Commented] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025444#comment-16025444
 ] 

ASF subversion and git services commented on SOLR-10379:


Commit 78e7e1c3072b315c92cbb2934c1874b7978cb99b in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=78e7e1c ]

SOLR-10379: Add ManagedSynonymGraphFilterFactory, deprecate 
ManagedSynonymFilterFactory


> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch, SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Commented] (SOLR-10233) Add support for different replica types in Solr

2017-05-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025438#comment-16025438
 ] 

ASF GitHub Bot commented on SOLR-10233:
---

Github user tflobbe closed the pull request at:

https://github.com/apache/lucene-solr/pull/196


> Add support for different replica types in Solr
> ---
>
> Key: SOLR-10233
> URL: https://issues.apache.org/jira/browse/SOLR-10233
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Fix For: master (7.0)
>
> Attachments: 11431.consoleText.txt, SOLR-10233.patch, 
> SOLR-10233.patch, SOLR-10233.patch, SOLR-10233.patch, SOLR-10233.patch
>
>
> For the majority of cases, current SolrCloud's distributed indexing is 
> great. There is a subset of use cases for which the legacy Master/Slave 
> replication may fit better:
> * Don’t require NRT
> * LIR can become an issue, prefer availability of reads vs consistency or NRT
> * High number of searches (requiring many search nodes)
> SOLR-9835 is adding replicas that don’t do indexing, just update their 
> transaction log. This Jira is to extend that idea and provide the following 
> replica types:
> * *Realtime:* Writes updates to transaction log and indexes locally. Replicas 
> of type “realtime” support NRT (soft commits) and RTG. Any _realtime_ replica 
> can become a leader. This is the only type supported in SolrCloud at this 
> time and will be the default.
> * *Append:* Writes to transaction log, but not to index, uses replication. 
> Any _append_ replica can become leader (by first applying all local 
> transaction log elements). If a replica is of type _append_ but is also the 
> leader, it will behave as a _realtime_. This is exactly what SOLR-9835 is 
> proposing (non-live replicas)
> * *Passive:* Doesn’t index or writes to transaction log. Just replicates from 
> _realtime_ or _append_ replicas. Passive replicas can’t become shard leaders 
> (i.e., if there are only passive replicas in the collection at some point, 
> updates will fail same as if there is no leaders, queries continue to work), 
> so they don’t even participate in elections.
> When the leader replica of the shard receives an update, it will distribute 
> it to all _realtime_ and _append_ replicas, the same as it does today. It 
> won't distribute to _passive_ replicas.
> By using a combination of _append_ and _passive_ replicas, one can achieve an 
> equivalent of the legacy Master/Slave architecture in SolrCloud mode with 
> most of its benefits, including high availability of writes. 
> h2. API (v1 style)
> {{/admin/collections?action=CREATE…&*realtimeReplicas=X&appendReplicas=Y&passiveReplicas=Z*}}
> {{/admin/collections?action=ADDREPLICA…&*type=\[realtime/append/passive\]*}}
> * “replicationFactor=X” will translate to “realtime=X” for back compatibility
> * if _passive_ > 0, _append_ or _realtime_ need to be >= 1 (can’t be all 
> passives)
> h2. Placement Strategies
> By using replica placement rules, one should be able to dedicate nodes to 
> search-only and write-only workloads. For example:
> {code}
> shard:*,replica:*,type:passive,fleet:slaves
> {code}
> where “type” is a new condition supported by the rule engine, and 
> “fleet:slaves” is a regular tag. Note that rules are only applied when the 
> replicas are created, so a later change in tags won't affect existing 
> replicas. Also, rules are per collection, so each collection could contain 
> it's own different rules.
> Note that on the server side Solr also needs to know how to distribute the 
> shard requests (maybe ShardHandler?) if we want to hit only a subset of 
> replicas (i.e. *passive *replicas only, or similar rules)
> h2. SolrJ
> SolrCloud client could be smart to prefer _passive_ replicas for search 
> requests when available (and if configured to do so). _Passive_ replicas 
> can’t respond RTG requests, so those should go to _realtime_ replicas. 
> h2. Cluster/Collection state
> {code}
> {"gettingstarted":{
>   "replicationFactor":"1",
>   "router":{"name":"compositeId"},
>   "maxShardsPerNode":"2",
>   "autoAddReplicas":"false",
>   "shards":{
> "shard1":{
>   "range":"8000-",
>   "state":"active",
>   "replicas":{
> "core_node5":{
>   "core":"gettingstarted_shard1_replica1",
>   "base_url":"http://127.0.0.1:8983/solr",
>   "node_name":"127.0.0.1:8983_solr",
>   "state":"active",
>   "leader":"true",
>   **"type": "realtime"**},
> "core_node10":{
>   "core":"gettingstarted_shard1_replica2",
>   "base_url":"http://127.0.0.1:7574/solr",
>   "node_name":"127.0.0.1:7574_solr",
>   

[GitHub] lucene-solr pull request #196: SOLR-10233: Add support for different replica...

2017-05-25 Thread tflobbe
Github user tflobbe closed the pull request at:

https://github.com/apache/lucene-solr/pull/196


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Unable to enrich UIMA annotated results to Solr fields

2017-05-25 Thread aruninfo100
Hi All, 

I am trying to integrate openNLP-UIMA with Solr. I have installed the pear
package generated by building the opennlp-uima source. 
I have analyzed the text files using the CAS Visual Debugger by loading the 
respective AE, and tokens are annotated as expected. 

Solrconfig:

<updateRequestProcessorChain name="uima">
  <processor class="org.apache.solr.uima.processor.UIMAUpdateRequestProcessorFactory">
    <lst name="uimaConfig">
      <lst name="runtimeParameters">
      </lst>
      <str name="analysisEngine">D:/solr-6.1.0/server/solr/star/conf/AnalyzerEngineMain.xml</str>
      <bool name="ignoreErrors">true</bool>
      <lst name="analyzeFields">
        <bool name="merge">false</bool>
        <arr name="fields">
          <str>content</str>
        </arr>
      </lst>
      <lst name="fieldMappings">
        <lst name="type">
          <str name="name">opennlp.uima.Sentence</str>
          <lst name="mapping">
            <str name="feature">coveredText</str>
            <str name="field">sentence_mxf</str>
          </lst>
        </lst>
        <lst name="type">
          <str name="name">opennlp.uima.Money</str>
          <lst name="mapping">
            <str name="feature">coveredText</str>
            <str name="field">money_mxf</str>
          </lst>
        </lst>
        <lst name="type">
          <str name="name">opennlp.uima.Organization</str>
          <lst name="mapping">
            <str name="feature">coveredText</str>
            <str name="field">organization_mxf</str>
          </lst>
        </lst>
        <lst name="type">
          <str name="name">opennlp.uima.Percentage</str>
          <lst name="mapping">
            <str name="feature">coveredText</str>
            <str name="field">percentage_mxf</str>
          </lst>
        </lst>
        <lst name="type">
          <str name="name">opennlp.uima.Time</str>
          <lst name="mapping">
            <str name="feature">coveredText</str>
            <str name="field">time_mxf</str>
          </lst>
        </lst>
        <lst name="type">
          <str name="name">opennlp.uima.Person</str>
          <lst name="mapping">
            <str name="feature">coveredText</str>
            <str name="field">person_mxf</str>
          </lst>
        </lst>
      </lst>
    </lst>
  </processor>
</updateRequestProcessorChain>

<requestHandler name="/update" class="solr.UpdateRequestHandler">
  <lst name="defaults">
    <str name="update.chain">uima</str>
  </lst>
</requestHandler>


schema:

 

When I index the documents and query them, I get only the
sentence, money, and percentage fields for each document, but NameFinder
annotations like person, location, date, and organization, which were extracted
as expected in the CAS Visual Debugger, are not enriched as Solr fields. 

AnalyzerEngineMain.xml: 

<analysisEngineDescription xmlns="http://uima.apache.org/resourceSpecifier">
   
org.apache.uima.java
false









OpenNlpTextAnalyzer

1.0
Apache Software Foundation




PEAR







en




true
   
false
false







The descriptor files are the same found in
:https://svn.apache.org/repos/asf/opennlp/trunk/opennlp-uima/descriptors/

Thanks and Regards, 
Arun 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Unable-to-enrich-UIMA-annotated-results-to-Solr-fields-tp4337355.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10751) Master/Slave IndexVersion conflict

2017-05-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10751:
-
Attachment: SOLR-10751.patch

> Master/Slave IndexVersion conflict
> --
>
> Key: SOLR-10751
> URL: https://issues.apache.org/jira/browse/SOLR-10751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-10751.patch
>
>
> I’ve been looking at some failures in the replica types tests. One strange 
> failure I noticed is that master and slave share the same version but have a 
> different generation. The IndexFetcher code does more or less this:
> {code}
> masterVersion = fetchMasterVersion()
> masterGeneration = fetchMasterGeneration()
> if (masterVersion == 0 && slaveGeneration != 0 && forceReplication) {
>delete my index
>commit locally
>return
> } 
> if (masterVersion != slaveVersion) {
>   fetchIndexFromMaster(masterGeneration)
> } else {
>   //do nothing, master and slave are in sync.
> }
> {code}
> The problem I see happens with this sequence of events:
> delete index in master (not a DBQ=*:*, I mean a complete removal of the index 
> files and reload of the core)
> replication happens in slave (sees a version 0, deletes local index and 
> commit)
> add document in master and commit
> if the commit in master and in the slave happen at the same millisecond*, 
> they both end up with the same version, but different indices. 
> I think that, in addition to checking for the same version, we should validate 
> that slave and master have the same generation and, if not, consider them not 
> in sync and proceed with the replication.
> True, this is a situation that's unlikely to happen in a real prod 
> environment and is more likely to affect tests, but I think the change 
> makes sense. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9546) There is a lot of unnecessary boxing/unboxing going on in {{SolrParams}} class

2017-05-25 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-9546.
-
   Resolution: Fixed
Fix Version/s: 6.3
   master (7.0)

Marking this issue as resolved.

> There is a lot of unnecessary boxing/unboxing going on in {{SolrParams}} class
> --
>
> Key: SOLR-9546
> URL: https://issues.apache.org/jira/browse/SOLR-9546
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: master (7.0), 6.3
>
> Attachments: SOLR-9546_CloudMLTQParser.patch, SOLR-9546.patch
>
>
> Here is an excerpt 
> {code}
>   public Long getLong(String param, Long def) {
> String val = get(param);
> try {
>   return val == null ? def : Long.parseLong(val);
> }
> catch( Exception ex ) {
>   throw new SolrException( SolrException.ErrorCode.BAD_REQUEST, 
> ex.getMessage(), ex );
> }
>   }
> {code}
> {{Long.parseLong()}} returns a primitive type, but since the method is 
> expected to return a {{Long}}, it needs to be wrapped. There are many more 
> methods like that. We might be creating a lot of unnecessary objects here.
> I am not sure if the JVM catches on and somehow optimizes this if these 
> methods are called enough times (or maybe the compiler does some modifications 
> at compile time).
> Let me know if I am thinking of some premature optimization.
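A hedged sketch of one way to avoid the boxing described above (a standalone toy, not the actual {{SolrParams}} class): adding a primitive overload lets {{Long.parseLong}}'s primitive result flow through without ever allocating a {{Long}} when the caller supplies a primitive default.

```java
import java.util.Map;

public class ParamsSketch {
    private final Map<String, String> params;

    ParamsSketch(Map<String, String> params) {
        this.params = params;
    }

    // Primitive overload: Long.parseLong returns long, and nothing here
    // forces it into a Long object, so no allocation on the hot path.
    public long getLong(String param, long def) {
        String val = params.get(param);
        return val == null ? def : Long.parseLong(val);
    }

    public static void main(String[] args) {
        ParamsSketch p = new ParamsSketch(Map.of("rows", "20"));
        System.out.println(p.getLong("rows", 10));  // 20
        System.out.println(p.getLong("start", 0));  // 0 (default, param absent)
    }
}
```

Callers that genuinely need the nullable {{Long}} can keep the existing boxed overload; the two coexist under Java overload resolution.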



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+168) - Build # 19708 - Unstable!

2017-05-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19708/
Java: 64bit/jdk-9-ea+168 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([D28A202C15C99102:487E5DCE8B530D3E]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:898)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:270)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:563)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:891)
... 39 more




Build Log:
[...truncated 12119 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4029 - Unstable!

2017-05-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4029/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:
Could not find collection : delLiveColl

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : delLiveColl
at 
__randomizedtesting.SeedInfo.seed([22AADF25982A4D99:8FCA6B2E8515E5EC]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:247)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaByCountForAllShards

Error Message:
Expected two shards with two replicas each null Live Nodes: 

Re: dataDir param for collection CREATE command

2017-05-25 Thread Jan Høydahl
Yeah, having an exact dataDir as a system property is a flawed design dating 
back to before distributed Solr…
See https://issues.apache.org/jira/browse/SOLR-6671 for my proposal to solve 
the user requirement of placing ALL data dirs on a separate volume. The patch 
is almost ready for commit…

Please review and comment :)

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 25. mai 2017 kl. 22.21 skrev Shawn Heisey :
> 
> On 5/25/2017 2:05 AM, takumi yoshida wrote:
>> I wonder if we should add a new dataDir parameter for the collection CREATE
>> command. There is already a dataDir parameter for the ADDREPLICA command, so
>> if we add dataDir to CREATE too, it would be easier to handle the data
>> directory when we make a new collection on a new disk or NFS, etc ...
> 
> I'm with Erick.  The danger of creating multiple cores on the same
> server with exactly the same dataDir is simply too high.  It doesn't
> make sense to add a dataDir parameter to the Collections API CREATE.
> 
> By editing core.properties files on an individual server after the
> collection is created to add the dataDir property, and restarting Solr,
> you can move a dataDir.  The location would be relative to the place the
> core.properties file is found.  This could be dangerous, but done
> correctly, would work.
> 
> Thanks,
> Shawn
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
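The core.properties edit Shawn describes might look like the following; all names and paths here are hypothetical, purely for illustration:

```properties
# core.properties on one server, edited after the collection is created.
# dataDir is resolved relative to this file's directory unless absolute.
name=mycollection_shard1_replica1
dataDir=/mnt/bigdisk/solr/mycollection_shard1_replica1/data
```

After saving the edit, restart Solr so core discovery picks up the new location.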



[jira] [Commented] (SOLR-10735) Solr is broken when directory with spaces used on Windows

2017-05-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025409#comment-16025409
 ] 

Jan Høydahl commented on SOLR-10735:


[~thetaphi] Since I was not able to reproduce this with the techproducts example, 
can you give the exact steps to reproduce, including error messages?

> Solr is broken when directory with spaces used on Windows
> -
>
> Key: SOLR-10735
> URL: https://issues.apache.org/jira/browse/SOLR-10735
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5
>Reporter: Ishan Chattopadhyaya
> Attachments: Screenshot from 2017-05-24 21-00-29.png
>
>
> [~thetaphi] mentioned this in the 6.6 RC1 voting thread:
> {code}
> The startup script (Windows at least) again does not work with whitespace in 
> directory names, which is standard on Windows. It gives an error message, not 
> during server startup, but when trying to create the techproducts core. I am 
> about to open an issue.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10751) Master/Slave IndexVersion conflict

2017-05-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025410#comment-16025410
 ] 

Tomás Fernández Löbbe commented on SOLR-10751:
--

[~hossman] and I had a conversation about this on IRC yesterday, and his 
concern was "Why is master creating an index with version 0 and the slave is 
not". After investigating some more, I noticed this code in the 
{{ReplicationHandler}}
{code:java}
if (commitPoint != null && replicationEnabled.get()) {
  //
  // There is a race condition here.  The commit point may be changed /
  // deleted by the time we get around to reserving it.  This is a very
  // small window though, and should not result in a catastrophic failure,
  // but will result in the client getting an empty file list for
  // the CMD_GET_FILE_LIST command.
  //
  core.getDeletionPolicy().setReserveDuration(commitPoint.getGeneration(),
      reserveCommitDuration);
  rsp.add(CMD_INDEX_VERSION,
      IndexDeletionPolicyWrapper.getCommitTimestamp(commitPoint));
  rsp.add(GENERATION, commitPoint.getGeneration());
} else {
  // This happens when replication is not configured to happen after
  // startup and no commit/optimize has happened yet.
  rsp.add(CMD_INDEX_VERSION, 0L);
  rsp.add(GENERATION, 0L);
}
{code}
So "0" is not really the version of the index; it's what the master responds to 
the slaves when there is no replicable index. 
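The proposed fix can be sketched as follows; the names are illustrative, not actual Solr IndexFetcher APIs. A slave is considered in sync only when BOTH version and generation match, since matching versions alone can collide when two commits land in the same millisecond:

```java
public class SyncCheckSketch {
    // In sync only if version AND generation agree. A version-only check
    // misses the case where independent commits on master and slave got
    // the same millisecond timestamp but produced different indices.
    static boolean inSync(long masterVersion, long masterGeneration,
                          long slaveVersion, long slaveGeneration) {
        return masterVersion == slaveVersion
                && masterGeneration == slaveGeneration;
    }

    public static void main(String[] args) {
        // Timestamp collision, different generation: NOT in sync, so the
        // slave would proceed with replication under the proposed check.
        System.out.println(inSync(1495742000000L, 2L, 1495742000000L, 5L)); // false
        System.out.println(inSync(1495742000000L, 2L, 1495742000000L, 2L)); // true
    }
}
```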

> Master/Slave IndexVersion conflict
> --
>
> Key: SOLR-10751
> URL: https://issues.apache.org/jira/browse/SOLR-10751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>
> I’ve been looking at some failures in the replica types tests. One strange 
> failure I noticed is that master and slave share the same version but have a 
> different generation. The IndexFetcher code does more or less this:
> {code}
> masterVersion = fetchMasterVersion()
> masterGeneration = fetchMasterGeneration()
> if (masterVersion == 0 && slaveGeneration != 0 && forceReplication) {
>delete my index
>commit locally
>return
> } 
> if (masterVersion != slaveVersion) {
>   fetchIndexFromMaster(masterGeneration)
> } else {
>   //do nothing, master and slave are in sync.
> }
> {code}
> The problem I see happens with this sequence of events:
> delete index in master (not a DBQ=*:*, I mean a complete removal of the index 
> files and reload of the core)
> replication happens in slave (sees a version 0, deletes local index and 
> commit)
> add document in master and commit
> if the commit in master and in the slave happen at the same millisecond*, 
> they both end up with the same version, but different indices. 
> I think that, in addition to checking for the same version, we should validate 
> that slave and master have the same generation and, if not, consider them not 
> in sync and proceed with the replication.
> True, this is a situation that's unlikely to happen in a real prod 
> environment and is more likely to affect tests, but I think the change 
> makes sense. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10751) Master/Slave IndexVersion conflict

2017-05-25 Thread JIRA
Tomás Fernández Löbbe created SOLR-10751:


 Summary: Master/Slave IndexVersion conflict
 Key: SOLR-10751
 URL: https://issues.apache.org/jira/browse/SOLR-10751
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (7.0)
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe


I’ve been looking at some failures in the replica types tests. One strange 
failure I noticed is that master and slave share the same version but have a 
different generation. The IndexFetcher code does more or less this:
{code}
masterVersion = fetchMasterVersion()
masterGeneration = fetchMasterGeneration()

if (masterVersion == 0 && slaveGeneration != 0 && forceReplication) {
   delete my index
   commit locally
   return
} 
if (masterVersion != slaveVersion) {
  fetchIndexFromMaster(masterGeneration)
} else {
  //do nothing, master and slave are in sync.
}
{code}
The problem I see happens with this sequence of events:

delete index in master (not a DBQ=*:*, I mean a complete removal of the index 
files and reload of the core)
replication happens in slave (sees a version 0, deletes local index and commit)
add document in master and commit

if the commit in master and in the slave happen at the same millisecond*, they 
both end up with the same version, but different indices. 
I think that, in addition to checking for the same version, we should validate 
that slave and master have the same generation and, if not, consider them not in 
sync and proceed with the replication.
True, this is a situation that's unlikely to happen in a real prod environment 
and is more likely to affect tests, but I think the change makes sense. 




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-05-25 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025386#comment-16025386
 ] 

Robert Muir commented on LUCENE-7705:
-

Tests such as this are not effective. If no exception is thrown the test will 
pass.

{noformat}
+try {
+  new LetterTokenizer(newAttributeFactory(), 0);
+} catch (Exception e) {
+  assertEquals("maxTokenLen must be greater than 0 and less than 1048576 
passed: 0", e.getMessage());
+}
{noformat}

I would use expectThrows in this case instead.
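To see why the expectThrows pattern is safer, the stand-in below mimics the behavior of Lucene's test-framework helper (a simplified sketch, not the real LuceneTestCase method): it fails loudly when no exception, or the wrong type of exception, is thrown, which the bare try/catch above silently allows.

```java
public class ExpectThrowsSketch {
    // Minimal stand-in for LuceneTestCase.expectThrows (illustrative only):
    // runs the body, returns the caught exception for further assertions,
    // and fails if no exception (or the wrong type) is thrown.
    static <T extends Throwable> T expectThrows(Class<T> expectedType, Runnable body) {
        try {
            body.run();
        } catch (Throwable t) {
            if (expectedType.isInstance(t)) {
                return expectedType.cast(t);
            }
            throw new AssertionError("unexpected exception type: " + t.getClass(), t);
        }
        // This is the case the try/catch idiom misses: nothing was thrown.
        throw new AssertionError("expected " + expectedType.getSimpleName()
                + " but no exception was thrown");
    }

    public static void main(String[] args) {
        IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
                () -> { throw new IllegalArgumentException("maxTokenLen must be greater than 0"); });
        System.out.println(e.getMessage());
    }
}
```

With this shape, the assertEquals on the message runs on the returned exception, so a silently-passing constructor call can no longer make the test green.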

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-7705, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10379:
--
Attachment: SOLR-10379.patch

bq. I'll back out the formatting changes from the patch here, make a new issue 
to change the maximum .adoc line length, and link to it from SOLR-10290.

Attached new patch without {{.adoc}} formatting changes; I'll commit shortly.

.adoc line length issue here: SOLR-10749

> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch, SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-10750) RulesTest (6.6 branch) failing 100% on my setup

2017-05-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya closed SOLR-10750.
---
Resolution: Cannot Reproduce

Restarting the machine fixed it, and now it passes.

> RulesTest (6.6 branch) failing 100% on my setup
> ---
>
> Key: SOLR-10750
> URL: https://issues.apache.org/jira/browse/SOLR-10750
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: rulestest.log
>
>
> Seems that RulesTest is failing 100% of the time on my laptop. On the same 
> branch, this didn't happen until this morning. Hence, I was able to build the 
> 6.6 RC2 properly, but I am unable to build the 6.6 RC3.
> I'm not yet claiming something is wrong with the test or the feature being 
> tested; I am just looking for pointers as to why it is failing and how to make 
> it work, so that I can build the 6.6 RC3.






[jira] [Updated] (SOLR-10750) RulesTest (6.6 branch) failing 100% on my setup

2017-05-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10750:

Attachment: rulestest.log

Attaching logs for the failures.

> RulesTest (6.6 branch) failing 100% on my setup
> ---
>
> Key: SOLR-10750
> URL: https://issues.apache.org/jira/browse/SOLR-10750
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: rulestest.log
>
>
> Seems that RulesTest is failing 100% of the time on my laptop. On the same 
> branch, this didn't happen until this morning. Hence, I was able to build the 
> 6.6 RC2 properly, but I am unable to build the 6.6 RC3.
> I'm not yet claiming something is wrong with the test or the feature being 
> tested; I am just looking for pointers as to why it is failing and how to make 
> it work, so that I can build the 6.6 RC3.






[jira] [Created] (SOLR-10750) RulesTest (6.6 branch) failing 100% on my setup

2017-05-25 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-10750:
---

 Summary: RulesTest (6.6 branch) failing 100% on my setup
 Key: SOLR-10750
 URL: https://issues.apache.org/jira/browse/SOLR-10750
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya


Seems that RulesTest is failing 100% of the time on my laptop. On the same 
branch, this didn't happen until this morning. Hence, I was able to build the 
6.6 RC2 properly, but I am unable to build the 6.6 RC3.

I'm not yet claiming something is wrong with the test or the feature being 
tested; I am just looking for pointers as to why it is failing and how to make 
it work, so that I can build the 6.6 RC3.






[jira] [Updated] (SOLR-10749) Should ref guide asciidoc files' line length be limited?

2017-05-25 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10749:
--
Issue Type: Sub-task  (was: Improvement)
Parent: SOLR-10290

> Should ref guide asciidoc files' line length be limited?
> 
>
> Key: SOLR-10749
> URL: https://issues.apache.org/jira/browse/SOLR-10749
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
>
> From [~dsmiley] and [~janhoy] on SOLR-10290:
> {quote}
> David: Can we auto-linewrap the asciidoc content we've imported somehow? The 
> lines are super-long in my IDE (IntelliJ). I can toggle the active editor's 
> "soft wrap" at least (View menu, then Active Editor menu).
> Jan: Yea, those lines are long
> {quote}
> From a conversation [~ctargett] and I had on SOLR-10379:
> {quote}
> Steve: I updated the ref guide docs. While I was at it, I installed and used 
> the IntelliJ plugin named "Wrap To Column" to wrap at 120 chars (a.k.a. "fill 
> paragraph") in the two .adoc files I edited.
> Cassandra: What is the point of this, or even, the big deal about asking your 
> IDE to do soft wraps instead?
> Steve: Not all editors support soft-wrapping. There is project consensus to 
> wrap code at 120-chars; why make an exception for these doc files?
> {quote}






Re: dataDir param for collection CREATE command

2017-05-25 Thread Shawn Heisey
On 5/25/2017 2:05 AM, takumi yoshida wrote:
> I wonder if we should add a new dataDir parameter for the collection CREATE
> command. There is already a dataDir parameter for the ADDREPLICA command, so if
> we add dataDir for CREATE too, it would be easier to handle the data directory
> when we create a new collection on a new disk or NFS, etc ...

I'm with Erick.  The danger of creating multiple cores on the same
server with exactly the same dataDir is simply too high.  It doesn't
make sense to add a dataDir parameter to the Collections API CREATE.

You can move a dataDir by editing the core.properties file on an individual
server after the collection is created to add the dataDir property, then
restarting Solr.  The location would be relative to the directory where the
core.properties file is found.  This could be dangerous, but done correctly,
it would work.
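
As a hedged sketch of the edit described above (the core name and path here
are purely illustrative, not from any real deployment), the edited
core.properties might look like:

```
# core.properties for one replica; name and path are illustrative only
name=mycoll_shard1_replica1
# dataDir may be absolute, or relative to the directory containing this file
dataDir=/mnt/bigdisk/mycoll_shard1_data
```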

Thanks,
Shawn





[jira] [Created] (SOLR-10749) Should ref guide asciidoc files' line length be limited?

2017-05-25 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-10749:
-

 Summary: Should ref guide asciidoc files' line length be limited?
 Key: SOLR-10749
 URL: https://issues.apache.org/jira/browse/SOLR-10749
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe
Priority: Minor


From [~dsmiley] and [~janhoy] on SOLR-10290:

{quote}
David: Can we auto-linewrap the asciidoc content we've imported somehow? The 
lines are super-long in my IDE (IntelliJ). I can toggle the active editor's 
"soft wrap" at least (View menu, then Active Editor menu).

Jan: Yea, those lines are long
{quote}

From a conversation [~ctargett] and I had on SOLR-10379:

{quote}
Steve: I updated the ref guide docs. While I was at it, I installed and used 
the IntelliJ plugin named "Wrap To Column" to wrap at 120 chars (a.k.a. "fill 
paragraph") in the two .adoc files I edited.

Cassandra: What is the point of this, or even, the big deal about asking your 
IDE to do soft wraps instead?

Steve: Not all editors support soft-wrapping. There is project consensus to 
wrap code at 120-chars; why make an exception for these doc files?
{quote}






[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_131) - Build # 914 - Still Unstable!

2017-05-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/914/
Java: 64bit/jdk1.8.0_131 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.blocktreeords.TestOrdsBlockTree

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001\index-MMapDirectory-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001\index-MMapDirectory-001

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001\index-MMapDirectory-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001\index-MMapDirectory-001
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001

at __randomizedtesting.SeedInfo.seed([5DAEE6DF99233B7C]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 6269 lines...]
   [junit4] Suite: org.apache.lucene.codecs.blocktreeords.TestOrdsBlockTree
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62), 
sim=RandomSimilarity(queryNorm=true,coord=no): {f_DOCS_AND_FREQS=DFR I(n)1, 
field=IB SPL-L1, f_DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS=DFR I(F)B3(800.0), 
f_DOCS_AND_FREQS_AND_POSITIONS=DFR GL2, body=IB SPL-DZ(0.3), f_DOCS=DFR 
GLZ(0.3)}, locale=ar-TN, timezone=Africa/Libreville
   [junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_131 
(64-bit)/cpus=3,threads=1,free=29108328,total=84934656
   [junit4]   2> NOTE: All tests run in this JVM: 
[TestSimpleTextDocValuesFormat, TestFSTPostingsFormat, 
TestMemoryPostingsFormat, TestVarGapFixedIntervalPostingsFormat, 
TestSimpleTextPostingsFormat, TestBloomPostingsFormat, 
TestVarGapDocFreqIntervalPostingsFormat, TestOrdsBlockTree]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestOrdsBlockTree 
-Dtests.seed=5DAEE6DF99233B7C -Dtests.slow=true -Dtests.locale=ar-TN 
-Dtests.timezone=Africa/Libreville -Dtests.asserts=true 
-Dtests.file.encoding=Cp1252
   [junit4] ERROR   0.00s J1 | TestOrdsBlockTree (suite) <<<
   [junit4]> Throwable #1: java.io.IOException: Could not remove the 
following files (in the order of attempts):
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001\index-MMapDirectory-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.blocktreeords.TestOrdsBlockTree_5DAEE6DF99233B7C-001\index-MMapDirectory-001
   [junit4]>

[jira] [Commented] (SOLR-10446) Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)

2017-05-25 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025284#comment-16025284
 ] 

Cassandra Targett commented on SOLR-10446:
--

bq. please review the documentation changes.

+1 [~ichattopadhyaya], looks good. Thanks.

> Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)
> ---
>
> Key: SOLR-10446
> URL: https://issues.apache.org/jira/browse/SOLR-10446
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10446.doc.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-9057.patch
>
>
> An HTTP based ClusterStateProvider to remove the sole dependency of 
> CloudSolrClient on ZooKeeper, and hence provide an optional way for CSC to 
> access cluster state without requiring ZK.






[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2017-05-25 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025283#comment-16025283
 ] 

Cassandra Targett commented on SOLR-6736:
-

bq. please review the documentation changes for this issue.

Thanks [~ichattopadhyaya]. I only noticed one thing: In AsciiDoc, you need to 
put a blank line between a paragraph and a bulleted list (at L#182). Otherwise 
it will render as one whole paragraph, which isn't what you're going for.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: newzkconf.zip, newzkconf.zip, SOLR-6736.doc.patch, 
> SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> test_private.pem, test_pub.der, zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[jira] [Updated] (SOLR-10446) Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)

2017-05-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10446:

Attachment: SOLR-10446.doc.patch

[~ctargett], please review the documentation changes.

> Http based ClusterStateProvider (CloudSolrClient needn't talk to ZooKeeper)
> ---
>
> Key: SOLR-10446
> URL: https://issues.apache.org/jira/browse/SOLR-10446
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: SOLR-10446.doc.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, SOLR-10446.patch, 
> SOLR-10446.patch, SOLR-10446.patch, SOLR-9057.patch
>
>
> An HTTP based ClusterStateProvider to remove the sole dependency of 
> CloudSolrClient on ZooKeeper, and hence provide an optional way for CSC to 
> access cluster state without requiring ZK.






[jira] [Commented] (SOLR-9623) Disable remote streaming by default

2017-05-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025253#comment-16025253
 ] 

Jan Høydahl commented on SOLR-9623:
---

[~yonik] any thoughts about whether the default limit for 
{{formdataUploadLimitInKB}} should also be raised? As I understand it, it applies 
when you post an HTML form or use curl to post without specifying a content type?

> Disable remote streaming by default
> ---
>
> Key: SOLR-9623
> URL: https://issues.apache.org/jira/browse/SOLR-9623
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Blocker
>  Labels: configset
> Fix For: master (7.0)
>
> Attachments: SOLR-9623.patch, SOLR-9623.patch, SOLR-9623.patch
>
>
> As we set more and more config settings suitable for production use, perhaps 
> it is time to disable remoteStreaming by default, and document how to enable 
> it.
> In all config sets, change {{<requestParsers>}} into
> {code:xml}
> <requestParsers enableRemoteStreaming="false"
>multipartUploadLimitInKB="2048000"
>formdataUploadLimitInKB="2048"
>addHttpRequestToContext="false"/>
> {code}
> And then consider adding support for it in solr.in.xxx






[jira] [Created] (SOLR-10748) Disable stream.body by default

2017-05-25 Thread JIRA
Jan Høydahl created SOLR-10748:
--

 Summary: Disable stream.body by default
 Key: SOLR-10748
 URL: https://issues.apache.org/jira/browse/SOLR-10748
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: search
Reporter: Jan Høydahl
 Fix For: master (7.0)


Spinoff from SOLR-9623

Today you can issue an HTTP request parameter {{stream.body}} which Solr will 
interpret as body content on the request, i.e. act as a POST request. This 
is useful for development and testing but can pose a security risk in 
production, since users/clients with permission to GET on various endpoints 
can also post by using {{stream.body}}. The classic example is 
{{stream.body=<delete><query>*:*</query></delete>}}. And this feature cannot 
be turned off by configuration; it is not controlled by 
{{enableRemoteStreaming}}.
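
To make the mechanics concrete, here is an illustrative sketch of how such a
request is formed. The collection name and URL are hypothetical, and the
dangerous request itself is left commented out; do not run it against a real
instance.

```shell
# Illustrative only: stream.body lets a plain GET carry an effective POST body.
SOLR_URL="http://localhost:8983/solr/techproducts/update"
BODY='<delete><query>*:*</query></delete>'
# The commented request below would delete every document via a plain GET:
#   curl "${SOLR_URL}?commit=true&stream.body=${BODY}"
echo "${SOLR_URL}?commit=true&stream.body=${BODY}"
```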

This jira will add a configuration option 
{{requestDispatcher.requestParsers.enableStreamBody}} to the 
{{<requestParsers>}} tag in solrconfig as well as to the Config API. I propose 
to set the default value to **{{false}}**.
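
A hedged sketch of what the resulting solrconfig.xml fragment might look like
(the {{enableStreamBody}} attribute name is inferred from the option path above
and is an assumption, not a committed API):

{code:xml}
<!-- Hypothetical sketch; enableStreamBody is inferred from the option path -->
<requestDispatcher>
  <requestParsers enableRemoteStreaming="false"
                  enableStreamBody="false"/>
</requestDispatcher>
{code}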

Apart from security concerns, this also aligns well with our v2 API effort, 
which tries to stick to the principle of least surprise in that GET requests 
shall not be able to modify state. Developers should know how to do a POST 
today :)






[jira] [Commented] (SOLR-9623) Disable remote streaming by default

2017-05-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025250#comment-16025250
 ] 

Jan Høydahl commented on SOLR-9623:
---

Created SOLR-10748 for the {{enableStreamBody}} config.

> Disable remote streaming by default
> ---
>
> Key: SOLR-9623
> URL: https://issues.apache.org/jira/browse/SOLR-9623
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Blocker
>  Labels: configset
> Fix For: master (7.0)
>
> Attachments: SOLR-9623.patch, SOLR-9623.patch, SOLR-9623.patch
>
>
> As we set more and more config settings suitable for production use, perhaps 
> it is time to disable remoteStreaming by default, and document how to enable 
> it.
> In all config sets, change {{<requestParsers>}} into
> {code:xml}
> <requestParsers enableRemoteStreaming="false"
>multipartUploadLimitInKB="2048000"
>formdataUploadLimitInKB="2048"
>addHttpRequestToContext="false"/>
> {code}
> And then consider adding support for it in solr.in.xxx






[jira] [Updated] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2017-05-25 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-6736:
---
Attachment: SOLR-6736.doc.patch

[~ctargett], please review the documentation changes for this issue.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Ishan Chattopadhyaya
> Fix For: 6.6, master (7.0)
>
> Attachments: newzkconf.zip, newzkconf.zip, SOLR-6736.doc.patch, 
> SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, SOLR-6736-newapi.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> test_private.pem, test_pub.der, zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf






[jira] [Commented] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025207#comment-16025207
 ] 

Steve Rowe commented on SOLR-10379:
---

bq. But it seems we should have the conversation about it there? You just 
decided to do it here and discuss it here - it's not my call. I don't see the 
point, but if that's what the project wants, that's what we'll do.

You're right about the discussion not belonging here - I'll back out the 
formatting changes from the patch here, make a new issue to change the maximum 
{{.adoc}} line length, and link to it from SOLR-10290.

> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Commented] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025184#comment-16025184
 ] 

Cassandra Targett commented on SOLR-10379:
--

bq. David Smiley and Jan Høydahl mentioned on SOLR-10290 that wrapping long 
lines would be good

OK, that makes sense. But it seems we should have the conversation about it 
there? You just decided to do it here and discuss it here - it's not my call. I 
don't see the point, but if that's what the project wants, that's what we'll do.

> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Resolved] (SOLR-10515) Persist intermediate trigger state in ZK to continue tracking information across overseer restarts

2017-05-25 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-10515.
--
Resolution: Fixed

Merged to feature/autoscaling branch.

> Persist intermediate trigger state in ZK to continue tracking information 
> across overseer restarts
> --
>
> Key: SOLR-10515
> URL: https://issues.apache.org/jira/browse/SOLR-10515
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
>  Labels: autoscaling
> Fix For: master (7.0)
>
> Attachments: SOLR-10515.patch, SOLR-10515.patch
>
>
> The current trigger design is simplistic and keeps all the intermediate state 
> in memory. But this presents two problems when the overseer itself fails:
> # We lose tracking state such as which node was added before the overseer 
> restarted
> # A nodeLost trigger can never really fire for the overseer node itself
> So we need a way, preferably in the trigger API itself to save intermediate 
> state or checkpoints so that it can seamlessly continue on overseer restarts.






[jira] [Commented] (LUCENE-7844) UnifiedHighlighter: simplify "maxPassages" input API

2017-05-25 Thread Timothy M. Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025178#comment-16025178
 ] 

Timothy M. Rodriguez commented on LUCENE-7844:
--

This syntax looks really good!
{code}
unifiedHighlighter.highlight(query, topDocs, 
 unifiedHighlighter.fieldOptionsWhole("title"),
 unifiedHighlighter.fieldOptions("body", 3)
);
{code}

with maybe {code}unifiedHighlighter.fieldOptionsWhole();{code} being a 
specialization of {code}unifiedHighlighter.fieldOptions("title", 3, 
BreakOption.WHOLE);{code} or something to that effect

Fair point on the performance difference being negligible.  For now, 
I'd be in favor of keeping the current parallel-array approach and working 
towards a fieldOptions approach.  I can offer to help on that end!


> UnifiedHighlighter: simplify "maxPassages" input API
> 
>
> Key: LUCENE-7844
> URL: https://issues.apache.org/jira/browse/LUCENE-7844
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE_7844__UH_maxPassages_simplification.patch
>
>
> The "maxPassages" input to the UnifiedHighlighter can be provided as an array 
> to some of the public methods on UnifiedHighlighter.  When it's provided as 
> an array, the index in the array is for the field in a parallel array. I 
> think this is awkward and furthermore it's inconsistent with the way this 
> highlighter customizes things on a by field basis.  Instead, the parameter 
> can be a simple int default (not an array), and then there can be a protected 
> method like {{getMaxPassageCount(String field)}} that returns an Integer 
> which, when non-null, replaces the default value for this field.
> Aside from API simplicity and consistency, this will also remove some 
> annoying parallel array sorting going on.






[jira] [Commented] (SOLR-10137) Configsets created via API should always be mutable

2017-05-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025176#comment-16025176
 ] 

ASF GitHub Bot commented on SOLR-10137:
---

GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/208

[SOLR-10137] Ensure that ConfigSet created via an API is mutable



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr solr10137

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/208.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #208


commit 2e37a0bb5600419591428b9d985f68bfd79cfe06
Author: Hrishikesh Gadre 
Date:   2017-05-25T18:13:02Z

[SOLR-10137] Ensure that ConfigSet created via an API is mutable




> Configsets created via API should always be mutable
> ---
>
> Key: SOLR-10137
> URL: https://issues.apache.org/jira/browse/SOLR-10137
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>
> Please refer to this discussion for details,
> https://marc.info/?l=solr-dev&m=148679049516375&w=4






[GitHub] lucene-solr pull request #208: [SOLR-10137] Ensure that ConfigSet created vi...

2017-05-25 Thread hgadre
GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/208

[SOLR-10137] Ensure that ConfigSet created via an API is mutable



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr solr10137

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/208.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #208


commit 2e37a0bb5600419591428b9d985f68bfd79cfe06
Author: Hrishikesh Gadre 
Date:   2017-05-25T18:13:02Z

[SOLR-10137] Ensure that ConfigSet created via an API is mutable




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Comment Edited] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025168#comment-16025168
 ] 

Steve Rowe edited comment on SOLR-10379 at 5/25/17 6:33 PM:


bq. When I apply your patch, it adds line breaks I assume at every 120 
character point of a line, even in the middle of sentences. I guess that's what 
I'm supposed to be looking at?

Yes, sorry I wasn't clearer.

bq. What is the point of this, or even, the big deal about asking your IDE to 
do soft wraps instead?

[~dsmiley] and [~janhoy] mentioned on SOLR-10290 that wrapping long lines would 
be good: 
[https://issues.apache.org/jira/browse/SOLR-10290?focusedCommentId=16014709=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16014709]
 and 
[https://issues.apache.org/jira/browse/SOLR-10290?focusedCommentId=16015443=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16015443].

Not all editors support soft-wrapping.  There is project consensus to wrap code 
at 120-chars; why make an exception for these doc files?

bq. But, isn't it inconsistent to only do it on the one page you've edited?

Yes, but it would be time-consuming to do in all pages, so I thought maybe a 
fix-as-we-go strategy was more prudent?

bq. Did you add the break ({{\}}) in the middle of the curl examples (such 
as L#239) or did your IDE do that?

I did that.


was (Author: steve_rowe):
bq. When I apply your patch, it adds line breaks I assume at every 120 
character point of a line, even in the middle of sentences. I guess that's what 
I'm supposed to be looking at?

Yes, sorry I wasn't clearer.

bq. What is the point of this, or even, the big deal about asking your IDE to 
do soft wraps instead?

[~dsmiley] and [~janhoy] mentioned on SOLR-10290 that wrapping long lines would 
be good: 
[https://issues.apache.org/jira/browse/SOLR-10290?focusedCommentId=16014709=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16014709]
 and 
[https://issues.apache.org/jira/browse/SOLR-10290?focusedCommentId=16015443=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16015443].

Not all editors support soft-wrapping.  There is project consensus to wrap code 
at 120-chars; why make an exception for these doc files?

bq. But, isn't it inconsistent to only do it on the one page you've edited?

Yes, but it would be time-consuming to do in all pages, so I thought maybe a 
fix-as-we-go strategy was more prudent?

bq. Did you add the break ({{\}}) in the middle of the curl examples (such 
as L#239) or did your IDE do that?

I did that.

> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Commented] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025168#comment-16025168
 ] 

Steve Rowe commented on SOLR-10379:
---

bq. When I apply your patch, it adds line breaks I assume at every 120 
character point of a line, even in the middle of sentences. I guess that's what 
I'm supposed to be looking at?

Yes, sorry I wasn't clearer.

bq. What is the point of this, or even, the big deal about asking your IDE to 
do soft wraps instead?

[~dsmiley] and [~janhoy] mentioned on SOLR-10290 that wrapping long lines would 
be good: 
[https://issues.apache.org/jira/browse/SOLR-10290?focusedCommentId=16014709=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16014709]
 and 
[https://issues.apache.org/jira/browse/SOLR-10290?focusedCommentId=16015443=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16015443].

Not all editors support soft-wrapping.  There is project consensus to wrap code 
at 120-chars; why make an exception for these doc files?
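
For anyone reproducing the hard wrap outside an IDE, a rough equivalent of "fill
paragraph" at 120 columns (a sketch, not the exact "Wrap To Column" plugin
behavior) is:

```python
import textwrap

def fill_paragraph(text, width=120):
    # Re-flow a paragraph into hard line breaks at <= `width` columns,
    # breaking only at whitespace so words stay intact (which is why the
    # breaks can land mid-sentence).
    return textwrap.fill(" ".join(text.split()), width=width)

para = "word " * 60  # a paragraph long enough to need wrapping
wrapped = fill_paragraph(para)
assert all(len(line) <= 120 for line in wrapped.splitlines())
```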

bq. But, isn't it inconsistent to only do it on the one page you've edited?

Yes, but it would be time-consuming to do in all pages, so I thought maybe a 
fix-as-we-go strategy was more prudent?

bq. Did you add the break ({{\}}) in the middle of the curl examples (such 
as L#239) or did your IDE do that?

I did that.

> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






Re: Strange Solr JIRA versions (Lucene too!)

2017-05-25 Thread Cassandra Targett
There is an API in JIRA to create and update versions. Here are the
docs for it for the current version we're using:
https://docs.atlassian.com/jira/REST/6.3.15/#d2e3054.

Scroll down for other endpoints that might be helpful - one of them is
to get the list of unreleased issues for a particular version.

I've been able to use other issue-related API endpoints with my ASF
JIRA login and I assume that would be true here also, but not sure
about it.
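
As a hedged sketch of that API (endpoint and field names taken from the JIRA 6.x
REST docs linked above; authentication omitted, and the project key and version
name below are illustrative), creating a version would POST a JSON body like:

```python
import json

def version_payload(project, name, released=False):
    # Body for POST /rest/api/2/version per the Atlassian REST docs above.
    return json.dumps({"project": project, "name": name, "released": released})

body = version_payload("SOLR", "6.7")
assert json.loads(body)["name"] == "6.7"
```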


On Thu, May 25, 2017 at 11:43 AM, Dawid Weiss  wrote:
> I don't think it can be automated -- it'd require those few manual
> clicks in Jira. I am not a Jira expert though, perhaps it has an API
> that does make it scriptable.
>
> Dawid
>
> On Thu, May 25, 2017 at 6:23 PM, Erick Erickson  
> wrote:
>> Dawid:
>>
>> So can we automate this somehow? It's still extra work for the RM and
>> if it could become a one-liner addition to the release process maybe
>> we can make it easier.
>>
>> On Thu, May 25, 2017 at 8:42 AM, Dawid Weiss  wrote:
>>> It's not really about wanting to tag it 6.x... It's something I got
>>> used to very much and something that helps (me) manage which
>>> branch(es) a given issue has been applied to. The 6.x tag is much
>>> like "next release cut from 6.x". When doing a release 6.[next] I'd
>>> grep for 6.x and bulk-add 6.[next] to all issues currently having 6.x,
>>> then remove 6.x from them (so that they have a constant fix-for, no
>>> branch included anymore).
>>>
>>> This process isn't the only one possible and I've had some discussions
>>> about alternative workflows. I didn't manage to convince my
>>> conversation partners and they failed to convince me, so I think it's
>>> a matter of personal preference.
>>>
>>> The ultimate reference is the changes.txt file anyway (?).
>>>
>>> Dawid
>>>
>>> On Thu, May 25, 2017 at 5:24 PM, Mike Drob  wrote:
 Christine,

 Wow, that's fantastic. You can also pass a --grep argument to git directly.

 Another problem that just occurred to me though, is that we might need to
 make updates to CHANGES files too. I'm not sure how to automate the check
 for that, since the format can be pretty messy.

 Mike

 On Thu, May 25, 2017 at 8:39 AM, Christine Poerschke (BLOOMBERG/ LONDON)
  wrote:
>
> Hi Everyone,
>
> Perhaps a little more context would help get us all on the same page re:
> the "to 6.x or to not 6.x" tag question.
>
> === "to 6.x" tag ===
>
> So, some of us (myself included) for SOLR issues used to tag FixVersion
> 6.x since the commit was to branch_6x and (at least myself) assumed that
> when branch_6_7 is cut from branch_6x then the process would somehow
> magically turn 6.x tags into 6.7 tags, and any subsequently added 6.x tags
> become 6.8 in future etc.
>
> The 6.x to 6.7 transition would be an extra part of the release process
> and if/since it isn't actually a part of the process then it's
> retrospectively really really tedious to resolve 6.x to the correct
> 6.something tag.
>
> === "to not 6.x" tag ===
>
> An alternative is always tag to a specific (future) version i.e. to _not_
> 6.x tag anything and to let the released/unreleased categorisation take 
> care
> of the already-released vs. scheduled-to-be-released difference.
>
> === where we are now ===
>
> There are still some tickets tagged to 6.x and people looking at the
> version dropdown choices will see 6.x as an existing choice. If/When no
> tickets are tagged to 6.x anymore then the 6.x choice could be removed 
> from
> the dropdown choices leaving only specific versions to choose from.
>
> Having said all that, turning existing 6.x tagging into specific versions
> is tedious but not totally impossible, I did a few yesterday using simple
> git grep lookups:
>
> what=LUCENE-
> for version in 0 1 2 3 4 5 6 ; do
> echo branch_6_$version
> git log --decorate --oneline --graph origin/branch_6_$version | grep $what
> done
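
That per-branch grep can feed an "earliest branch containing the fix" lookup; a
small sketch with made-up branch names and issue keys (not real data):

```python
def earliest_fix_branch(issue, branch_logs):
    # branch_logs: {branch_name: issue_keys_in_its_log}, ordered oldest
    # to newest. A fix lands in every branch from its fix version onward,
    # so the first branch containing the key is the fix version --
    # mirroring the manual `git log | grep` pass above.
    for branch, keys in branch_logs.items():
        if issue in keys:
            return branch
    return None

logs = {
    "branch_6_0": {"LUCENE-7000"},
    "branch_6_1": {"LUCENE-7000", "LUCENE-7100"},
    "branch_6_2": {"LUCENE-7000", "LUCENE-7100", "LUCENE-7200"},
}
assert earliest_fix_branch("LUCENE-7100", logs) == "branch_6_1"
```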
>
> Hope that helps? What do people think?
>
> Christine
>
> From: dev@lucene.apache.org At: 05/25/17 14:08:37
> To: dev@lucene.apache.org, dawid.we...@gmail.com, jpou...@apache.org,
> luc...@mikemccandless.com, kwri...@apache.org, u...@thetaphi.de
> Subject: Re: Strange Solr JIRA versions (Lucene too!)
>
> Lucene devs, lets get on the same page about this issue.
>
> Dawid seems to _want_ to use 6.x
>
> https://issues.apache.org/jira/browse/LUCENE-7841?focusedCommentId=16024639=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16024639
> Christine and I are the only ones to have commented about this pertaining
> to LUCENE JIRA issues.  Lets have this conversation here, not on
> 

[jira] [Updated] (SOLR-10747) Allow /stream handler to execute Stream Evaluators directly

2017-05-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10747:
--
Fix Version/s: master (7.0)

> Allow /stream handler to execute Stream Evaluators directly
> ---
>
> Key: SOLR-10747
> URL: https://issues.apache.org/jira/browse/SOLR-10747
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0)
>
> Attachments: SOLR-10747.patch
>
>
> Currently the /stream handler only executes Streaming Expressions that 
> compile to TupleStreams. This ticket will allow the /stream handler to 
> execute Streaming Expressions that compile StreamEvaluators.
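
Once that lands, a client request might look like the following sketch (the
evaluator expression, host, and collection name are assumed for illustration,
not taken from the patch):

```python
from urllib.parse import urlencode

# Build the query string for a hypothetical /stream request that sends a
# Stream Evaluator expression instead of a TupleStream expression.
params = urlencode({"expr": "add(1,4)"})
url = "http://localhost:8983/solr/mycollection/stream?" + params
assert "expr=add%281%2C4%29" in url
```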






[jira] [Commented] (SOLR-10747) Allow /stream handler to execute Stream Evaluators directly

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025129#comment-16025129
 ] 

ASF subversion and git services commented on SOLR-10747:


Commit b3ee2d03dbeecd5ff1197ae548bd2ce26518c0c0 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b3ee2d0 ]

SOLR-10747: Allow /stream handler to execute Stream Evaluators directly


> Allow /stream handler to execute Stream Evaluators directly
> ---
>
> Key: SOLR-10747
> URL: https://issues.apache.org/jira/browse/SOLR-10747
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-10747.patch
>
>
> Currently the /stream handler only executes Streaming Expressions that 
> compile to TupleStreams. This ticket will allow the /stream handler to 
> execute Streaming Expressions that compile StreamEvaluators.






[jira] [Assigned] (SOLR-10747) Allow /stream handler to execute Stream Evaluators directly

2017-05-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10747:
-

Assignee: Joel Bernstein

> Allow /stream handler to execute Stream Evaluators directly
> ---
>
> Key: SOLR-10747
> URL: https://issues.apache.org/jira/browse/SOLR-10747
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-10747.patch
>
>
> Currently the /stream handler only executes Streaming Expressions that 
> compile to TupleStreams. This ticket will allow the /stream handler to 
> execute Streaming Expressions that compile StreamEvaluators.






[jira] [Commented] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025108#comment-16025108
 ] 

Cassandra Targett commented on SOLR-10379:
--

When I apply your patch, it adds line breaks I assume at every 120 character 
point of a line, even in the middle of sentences. I guess that's what I'm 
supposed to be looking at? 

My understanding of how Asciidoctor handles this is that it _should_ be fine 
(http://asciidoctor.org/docs/user-manual/#line-breaks). But, isn't it 
inconsistent to only do it on the one page you've edited? What is the point of 
this, or even, the big deal about asking your IDE to do soft wraps instead?

Did you add the break (\) in the middle of the curl examples (such as L#239) or 
did your IDE do that? 



> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Comment Edited] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025068#comment-16025068
 ] 

Steve Rowe edited comment on SOLR-10379 at 5/25/17 5:46 PM:


Patch, adds ManagedSynonymGraphFilterFactory and deprecates 
ManagedSynonymFilterFactory.

I updated the ref guide docs.  While I was at it, I installed and used the 
IntelliJ plugin named "Wrap To Column" to wrap at 120 chars (a.k.a. "fill 
paragraph") in the two {{.adoc}} files I edited.  (IntelliJ's "Fill Paragraph" 
edit item was inactive for me in {{.adoc}} files, and the "Wrap To Column" 
plugin author says that he wrote it because he couldn't get "Fill Paragraph" to 
work: 
[https://andrewbrookins.com/tech/wrap-comments-and-text-to-column-width-in-intellij-editors/]).
  

[~ctargett] could you take a look and see if there's a problem with this?  
(AFAICT, using the JavaFX renderer in IntelliJ, wrapping long lines didn't 
change the HTML formatting.)

I think it's ready to go.


was (Author: steve_rowe):
Patch, adds ManagedSynonymGraphFilterFactory and deprecates 
ManagedSynonymFilterFactory.

I updated the ref guide docs.  While I was at it, I installed and used the 
IntelliJ plugin named "Wrap To Column" to wrap at 120 chars (a.k.a. "fill 
paragraph") in the two {{.adoc}} files I edited to manually wrap paragraphs at 
120 chars.  (IntelliJ's "Fill Paragraph" edit item was inactive for me in 
{{.adoc}} files, and the "Wrap To Column" plugin author says that he wrote it 
because he couldn't get "Fill Paragraph" to work: 
[https://andrewbrookins.com/tech/wrap-comments-and-text-to-column-width-in-intellij-editors/]).
  

[~ctargett] could you take a look and see if there's a problem with this?  
(AFAICT, using the JavaFX renderer in IntelliJ, wrapping long lines didn't 
change the HTML formatting.)

I think it's ready to go.

> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Updated] (SOLR-10747) Allow /stream handler to execute Stream Evaluators directly

2017-05-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10747:
--
Attachment: SOLR-10747.patch

> Allow /stream handler to execute Stream Evaluators directly
> ---
>
> Key: SOLR-10747
> URL: https://issues.apache.org/jira/browse/SOLR-10747
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10747.patch
>
>
> Currently the /stream handler only executes Streaming Expressions that 
> compile to TupleStreams. This ticket will allow the /stream handler to 
> execute Streaming Expressions that compile StreamEvaluators.






[jira] [Updated] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10379:
--
Attachment: SOLR-10379.patch

Patch, adds ManagedSynonymGraphFilterFactory and deprecates 
ManagedSynonymFilterFactory.

I updated the ref guide docs.  While I was at it, I installed and used the 
IntelliJ plugin named "Wrap To Column" to wrap at 120 chars (a.k.a. "fill 
paragraph") in the two {{.adoc}} files I edited to manually wrap paragraphs at 
120 chars.  (IntelliJ's "Fill Paragraph" edit item was inactive for me in 
{{.adoc}} files, and the "Wrap To Column" plugin author says that he wrote it 
because he couldn't get "Fill Paragraph" to work: 
[https://andrewbrookins.com/tech/wrap-comments-and-text-to-column-width-in-intellij-editors/]).
  

[~ctargett] could you take a look and see if there's a problem with this?  
(AFAICT, using the JavaFX renderer in IntelliJ, wrapping long lines didn't 
change the HTML formatting.)

I think it's ready to go.

> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10379.patch
>
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Updated] (SOLR-10379) Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory

2017-05-25 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10379:
--
Summary: Add ManagedSynonymGraphFilterFactory, deprecate 
ManagedSynonymFilterFactory  (was: ManagedSynonymFilterFactory should switch to 
using SynonymGraphFilterFactory as its delegate)

> Add ManagedSynonymGraphFilterFactory, deprecate ManagedSynonymFilterFactory
> ---
>
> Key: SOLR-10379
> URL: https://issues.apache.org/jira/browse/SOLR-10379
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>
> SynonymFilterFactory was deprecated in LUCENE-6664






[jira] [Commented] (SOLR-7383) DIH: rewrite XPathEntityProcessor/RSS example as the smallest good demo possible

2017-05-25 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025043#comment-16025043
 ] 

Alexandre Rafalovitch commented on SOLR-7383:
-

Gentle reminder accepted. Thank you for stepping in for now.

> DIH: rewrite XPathEntityProcessor/RSS example as the smallest good demo 
> possible
> 
>
> Key: SOLR-7383
> URL: https://issues.apache.org/jira/browse/SOLR-7383
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0, 6.0
>Reporter: Upayavira
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
> Attachments: atom_20170315.tgz, rss-data-config.xml, SOLR-7383.patch
>
>
> The DIH example (solr/example/example-DIH/solr/rss/conf/rss-data-config.xml) 
> is broken again. See associated issues.
> Below is a config that should work.
> This is caused by Slashdot seemingly oscillating between RDF/RSS and pure 
> RSS. Perhaps we should depend upon something more static, rather than an 
> external service that is free to change as it desires.
> {code:xml}
> <dataConfig>
>   <dataSource ... />
>   <document>
>     <entity ...
>             pk="link"
>             url="http://rss.slashdot.org/Slashdot/slashdot"
>             processor="XPathEntityProcessor"
>             forEach="/RDF/item"
>             transformer="DateFormatTransformer">
>       <field ... commonField="true" />
>       <field ... commonField="true" />
>       <field ... commonField="true" />
>       <field ... dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss" />
>     </entity>
>   </document>
> </dataConfig>
> {code}






[jira] [Updated] (SOLR-10747) Allow /stream handler to execute Stream Evaluators directly

2017-05-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10747:
--
Description: Currently the /stream handler only executes Streaming 
Expressions that compile to TupleStreams. This ticket will allow the /stream 
handler to execute Streaming Expressions that compile StreamEvaluators.  (was: 
Currently the /stream handler only executes Streaming Expressions the compile 
to TupleStreams. This ticket will allow the /stream handler to execute 
Streaming Expressions that compile StreamEvaluators.)

> Allow /stream handler to execute Stream Evaluators directly
> ---
>
> Key: SOLR-10747
> URL: https://issues.apache.org/jira/browse/SOLR-10747
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> Currently the /stream handler only executes Streaming Expressions that 
> compile to TupleStreams. This ticket will allow the /stream handler to 
> execute Streaming Expressions that compile StreamEvaluators.






[jira] [Updated] (SOLR-10747) Allow /stream handler to execute Stream Evaluators directly

2017-05-25 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10747:
--
Description: Currently the /stream handler only executes Streaming 
Expressions the compile to TupleStreams. This ticket will allow the /stream 
handler to execute Streaming Expressions that compile StreamEvaluators.

> Allow /stream handler to execute Stream Evaluators directly
> ---
>
> Key: SOLR-10747
> URL: https://issues.apache.org/jira/browse/SOLR-10747
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> Currently the /stream handler only executes Streaming Expressions the compile 
> to TupleStreams. This ticket will allow the /stream handler to execute 
> Streaming Expressions that compile StreamEvaluators.






[jira] [Created] (SOLR-10747) Allow /stream handler to execute Stream Evaluators directly

2017-05-25 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10747:
-

 Summary: Allow /stream handler to execute Stream Evaluators 
directly
 Key: SOLR-10747
 URL: https://issues.apache.org/jira/browse/SOLR-10747
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein









Re: dataDir param for collection CREATE command

2017-05-25 Thread Erick Erickson
How would you specify it? Say you have two replicas on the same Solr
instance (a common occurrence). Now say at collection creation time
you specified an absolute path (or even a relative one that goes "up"
a few levels).

Now you'd have both replicas pointing to the same data dir. Somehow
you'd have to pass a different dataDir to each replica that was
created, which seems difficult.
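
The collision is easy to see with a toy path computation (paths and core names
here are made up, not Solr's actual layout logic):

```python
from pathlib import PurePosixPath

def data_dir(core_name, shared_data_dir=None):
    # Default: each core keeps data under its own instanceDir, so two
    # replicas on one node never collide. A single collection-level
    # dataDir would point every replica at the same directory.
    if shared_data_dir is not None:
        return PurePosixPath(shared_data_dir)
    return PurePosixPath("/var/solr") / core_name / "data"

r1 = data_dir("coll_shard1_replica1")
r2 = data_dir("coll_shard1_replica2")
assert r1 != r2  # per-core default: no collision

s1 = data_dir("coll_shard1_replica1", "/mnt/nfs/data")
s2 = data_dir("coll_shard1_replica2", "/mnt/nfs/data")
assert s1 == s2  # one CREATE-level dataDir: both replicas share the dir
```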

Best,
Erick

On Thu, May 25, 2017 at 1:05 AM, takumi yoshida
 wrote:
> Hi,
>
> I wonder if we should add a new dataDir parameter for the collection CREATE command.
> There is already a dataDir parameter for the ADDREPLICA command. So, if we add dataDir
> for CREATE too, it would be easier to handle the data directory when we make a new
> collection on a new disk or NFS, etc ...
>
> What do you think?
>
> Thanks,
> Takumi




[jira] [Commented] (SOLR-10479) support HttpShardHandlerFactory.loadBalancerRequests(MinimumAbsolute|MaximumFraction) options

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025011#comment-16025011
 ] 

ASF subversion and git services commented on SOLR-10479:


Commit 3b527f8a395450e926bebc3de9146d2e39aa0972 in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3b527f8 ]

SOLR-10479: Adds support for 
HttpShardHandlerFactory.loadBalancerRequests(MinimumAbsolute|MaximumFraction) 
configuration. (Ramsey Haddad, Daniel Collins, Christine Poerschke)


> support 
> HttpShardHandlerFactory.loadBalancerRequests(MinimumAbsolute|MaximumFraction) 
> options
> -
>
> Key: SOLR-10479
> URL: https://issues.apache.org/jira/browse/SOLR-10479
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10479.patch, SOLR-10479.patch
>
>
> If a request sends no {{timeAllowed}} threshold (or if it sends a very 
> generous threshold) then that request can potentially be retried on 'very 
> many' servers in the cloud.
> Via the 
> {{HttpShardHandlerFactory.loadBalancerRequests(MinimumAbsolute|MaximumFraction)}}
>  options the number of servers tried can be restricted via configuration e.g.
> {code}
>  class="solr.HttpShardHandlerFactory">
>   2
>   0.50
> 
> {code}
> would on a six-replica-and-all-replicas-active collection/shard restrict 
> sending to three replicas i.e. max(2, 0.50 x 6) and if the collection/shard 
> temporarily becomes 
> three-replicas-active-and-three-replicas-recovering-or-down then sending is 
> restricted to two replicas i.e. max(2, 0.50 x 3) temporarily.
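
The arithmetic in that example can be checked directly (a sketch of the rule as
described, not the actual Solr code; the rounding of the fractional part is
assumed):

```python
import math

def replicas_to_try(active_replicas, minimum_absolute=2, maximum_fraction=0.50):
    # Number of servers a request may be sent to: the configured fraction
    # of currently active replicas, but never fewer than the absolute minimum.
    return max(minimum_absolute, math.floor(maximum_fraction * active_replicas))

assert replicas_to_try(6) == 3  # max(2, 0.50 x 6)
assert replicas_to_try(3) == 2  # max(2, 0.50 x 3)
```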






[jira] [Commented] (SOLR-10659) remove ResponseBuilder.getSortSpec use in SearchGroupShardResponseProcessor

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025012#comment-16025012
 ] 

ASF subversion and git services commented on SOLR-10659:


Commit 7452622e86f124c1f7a1affcb4c374ee046392de in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7452622 ]

SOLR-10659: Remove ResponseBuilder.getSortSpec use in 
SearchGroupShardResponseProcessor. (Judith Silverman via Christine Poerschke)


> remove ResponseBuilder.getSortSpec use in SearchGroupShardResponseProcessor
> ---
>
> Key: SOLR-10659
> URL: https://issues.apache.org/jira/browse/SOLR-10659
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10659.patch
>
>
> For clarity splitting this short but very subtle refactor out from the 
> SOLR-6203 bug fix effort.






[jira] [Updated] (LUCENE-7852) out-of-date Copyright year(s) on NOTICE.txt files?

2017-05-25 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7852:

Attachment: LUCENE-7852.patch

Attaching draft patch based on my understanding of the 
http://www.apache.org/dev/licensing-howto.html information.

Specific open questions (perhaps to be asked/answered here and/or perhaps more 
suitable for the legal-discuss 
[list|http://www.apache.org/foundation/mailinglists.html#foundation-legal] or 
[jira|https://issues.apache.org/jira/browse/LEGAL])
* How to determine the start year for lucene/NOTICE.txt and/or is just an end 
year sufficient?
* If X.0 is released in (say) 2017 and X.Y is released in (say) 2018, 
presumably the end year gets bumped up to 2018. What about X.Y.1 in (say) 2019, 
is the end year bumped up again to 2019 or does it stay at 2018 since it is 
only a bugfix release?


> out-of-date Copyright year(s) on NOTICE.txt files?
> --
>
> Key: LUCENE-7852
> URL: https://issues.apache.org/jira/browse/LUCENE-7852
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: LUCENE-7852.patch
>
>







[jira] [Created] (LUCENE-7852) out-of-date Copyright year(s) on NOTICE.txt files?

2017-05-25 Thread Christine Poerschke (JIRA)
Christine Poerschke created LUCENE-7852:
---

 Summary: out-of-date Copyright year(s) on NOTICE.txt files?
 Key: LUCENE-7852
 URL: https://issues.apache.org/jira/browse/LUCENE-7852
 Project: Lucene - Core
  Issue Type: Task
Reporter: Christine Poerschke
Priority: Blocker
 Fix For: master (7.0)









[jira] [Commented] (SOLR-10710) LTR contrib failures

2017-05-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024993#comment-16024993
 ] 

ASF GitHub Bot commented on SOLR-10710:
---

Github user diegoceccarelli closed the pull request at:

https://github.com/apache/lucene-solr/pull/204


> LTR contrib failures
> 
>
> Key: SOLR-10710
> URL: https://issues.apache.org/jira/browse/SOLR-10710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Steve Rowe
>Priority: Blocker
> Fix For: master (7.0)
>
>
> Reproducing failures 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1304/] - {{git 
> bisect}} says {{06a6034d9}}, the commit on LUCENE-7730, is where the 
> {{TestFieldLengthFeature.testRanking()}} failure started:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFieldLengthFeature -Dtests.method=testRanking 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=ja-JP 
> -Dtests.timezone=America/Port_of_Spain -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J1 | TestFieldLengthFeature.testRanking <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '8'!='1' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:EB385C1332233915]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.feature.TestFieldLengthFeature.testRanking(TestFieldLengthFeature.java:117)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestParallelWeightCreation 
> -Dtests.method=testLTRScoringQueryParallelWeightCreationResultOrder 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=ar-SD 
> -Dtests.timezone=Europe/Skopje -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   1.59s J1 | 
> TestParallelWeightCreation.testLTRScoringQueryParallelWeightCreationResultOrder
>  <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '3'!='4' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:1142D5ED603B4132]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestParallelWeightCreation.testLTRScoringQueryParallelWeightCreationResultOrder(TestParallelWeightCreation.java:45)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSelectiveWeightCreation 
> -Dtests.method=testSelectiveWeightsRequestFeaturesFromDifferentStore 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=hr-HR 
> -Dtests.timezone=Australia/Victoria -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.03s J1 | 
> TestSelectiveWeightCreation.testSelectiveWeightsRequestFeaturesFromDifferentStore
>  <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '3'!='4' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:293FE248276551B1]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestSelectiveWeightCreation.testSelectiveWeightsRequestFeaturesFromDifferentStore(TestSelectiveWeightCreation.java:230)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLTRQParserPlugin -Dtests.method=ltrMoreResultsThanReRankedTest 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=es-NI 
> -Dtests.timezone=Africa/Mogadishu -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.03s J1 | 
> TestLTRQParserPlugin.ltrMoreResultsThanReRankedTest <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: 
> '0.09271725'!='0.105360515' @ response/docs/[3]/score
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:BD7644EA7596711B]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestLTRQParserPlugin.ltrMoreResultsThanReRankedTest(TestLTRQParserPlugin.java:94)
> {noformat}




[GitHub] lucene-solr pull request #204: SOLR-10710: Fix LTR contrib failures

2017-05-25 Thread diegoceccarelli
Github user diegoceccarelli closed the pull request at:

https://github.com/apache/lucene-solr/pull/204


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-10710) LTR contrib failures

2017-05-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024989#comment-16024989
 ] 

Tomás Fernández Löbbe commented on SOLR-10710:
--

Thanks [~diegoceccarelli]. I forgot to mention in the commit message that it 
would close your PR; would you mind closing it yourself?

> LTR contrib failures
> 
>
> Key: SOLR-10710
> URL: https://issues.apache.org/jira/browse/SOLR-10710
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Steve Rowe
>Priority: Blocker
> Fix For: master (7.0)
>
>
> Reproducing failures 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1304/] - {{git 
> bisect}} says {{06a6034d9}}, the commit on LUCENE-7730, is where the 
> {{TestFieldLengthFeature.testRanking()}} failure started:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestFieldLengthFeature -Dtests.method=testRanking 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=ja-JP 
> -Dtests.timezone=America/Port_of_Spain -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J1 | TestFieldLengthFeature.testRanking <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '8'!='1' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:EB385C1332233915]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.feature.TestFieldLengthFeature.testRanking(TestFieldLengthFeature.java:117)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestParallelWeightCreation 
> -Dtests.method=testLTRScoringQueryParallelWeightCreationResultOrder 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=ar-SD 
> -Dtests.timezone=Europe/Skopje -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   1.59s J1 | 
> TestParallelWeightCreation.testLTRScoringQueryParallelWeightCreationResultOrder
>  <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '3'!='4' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:1142D5ED603B4132]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestParallelWeightCreation.testLTRScoringQueryParallelWeightCreationResultOrder(TestParallelWeightCreation.java:45)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSelectiveWeightCreation 
> -Dtests.method=testSelectiveWeightsRequestFeaturesFromDifferentStore 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=hr-HR 
> -Dtests.timezone=Australia/Victoria -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.03s J1 | 
> TestSelectiveWeightCreation.testSelectiveWeightsRequestFeaturesFromDifferentStore
>  <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '3'!='4' 
> @ response/docs/[0]/id
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:293FE248276551B1]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestSelectiveWeightCreation.testSelectiveWeightsRequestFeaturesFromDifferentStore(TestSelectiveWeightCreation.java:230)
> {noformat}
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLTRQParserPlugin -Dtests.method=ltrMoreResultsThanReRankedTest 
> -Dtests.seed=740EF58DAA5926DA -Dtests.slow=true -Dtests.locale=es-NI 
> -Dtests.timezone=Africa/Mogadishu -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.03s J1 | 
> TestLTRQParserPlugin.ltrMoreResultsThanReRankedTest <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: 
> '0.09271725'!='0.105360515' @ response/docs/[3]/score
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([740EF58DAA5926DA:BD7644EA7596711B]:0)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:248)
>[junit4]>  at 
> org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
>[junit4]>  at 
> org.apache.solr.ltr.TestLTRQParserPlugin.ltrMoreResultsThanReRankedTest(TestLTRQParserPlugin.java:94)
> 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 853 - Unstable!

2017-05-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/853/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {   "responseHeader":{ 
"status":0, "QTime":0},   "overlay":{ "znodeVersion":0, 
"runtimeLib":{"colltest":{ "name":"colltest", "version":1,  
from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "overlay":{
"znodeVersion":0,
"runtimeLib":{"colltest":{
"name":"colltest",
"version":1,  from server:  null
at 
__randomizedtesting.SeedInfo.seed([57C1FD675B7AD593:8F8CD030ACA77033]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

Re: Strange Solr JIRA versions (Lucene too!)

2017-05-25 Thread Dawid Weiss
I don't think it can be automated -- it'd require those few manual
clicks in Jira. I am not a Jira expert though, perhaps it has an API
that does make it scriptable.

Dawid
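(For what it's worth, JIRA does expose a REST API that makes this scriptable. As a hedged sketch only: the `update`/`fixVersions` PUT syntax below follows Atlassian's documented JIRA REST API, but the base URL usage, issue selection, and version names here are illustrative, not a tested release-process step.)

```python
import json

def retag_payload(old_version, new_version):
    """Body for PUT /rest/api/2/issue/{issueKey}: swap one fixVersion for
    another using JIRA's 'update' syntax (remove the branch tag, add the
    concrete release version)."""
    return {"update": {"fixVersions": [
        {"remove": {"name": old_version}},
        {"add": {"name": new_version}},
    ]}}

# The issues to retag would come from a JQL search such as
#   /rest/api/2/search?jql=project = LUCENE AND fixVersion = 6.x
body = json.dumps(retag_payload("6.x", "6.7"))
print(body)
```

Each matching issue would get one such PUT, so the whole 6.x-to-6.[next] bulk retag could in principle become a short script in the release process.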

On Thu, May 25, 2017 at 6:23 PM, Erick Erickson  wrote:
> Dawid:
>
> So can we automate this somehow? It's still extra work for the RM and
> if it could become a one-liner addition to the release process maybe
> we can make it easier.
>
> On Thu, May 25, 2017 at 8:42 AM, Dawid Weiss  wrote:
>> It's not really about wanting to tag it 6.x... It's something I got
>> used to very much and something that helps (me) manage which
>> branch(es) a given issue has been applied to. When 6.x tag is much
>> like "next release cut from 6.x". When doing a release 6.[next] I'd
>> grep for 6.x and bulk-add 6.[next] to all issues currently having 6.x,
>> then remove 6.x from them (so that they have a constant fix-for, no
>> branch included anymore).
>>
>> This process isn't the only one possible and I've had some discussions
>> about alternative workflows. I didn't manage to convince my
>> conversation partners and they failed to convince me, so I think it's
>> a matter of personal preference.
>>
>> The ultimate reference is the changes.txt file anyway (?).
>>
>> Dawid
>>
>> On Thu, May 25, 2017 at 5:24 PM, Mike Drob  wrote:
>>> Christine,
>>>
>>> Wow, that's fantastic. You can also pass a --grep argument to git directly.
>>>
>>> Another problem that just occurred to me though, is that we might need to
>>> make updates to CHANGES files too. I'm not sure how to automate the check
>>> for that, since the format can be pretty messy.
>>>
>>> Mike
>>>
>>> On Thu, May 25, 2017 at 8:39 AM, Christine Poerschke (BLOOMBERG/ LONDON)
>>>  wrote:

 Hi Everyone,

 Perhaps a little more context would help get us all on the same page re:
 the "to 6.x or to not 6.x" tag question.

 === "to 6.x" tag ===

 So, some of us (myself included) for SOLR issues used to tag FixVersion
 6.x since the commit was to branch_6x and (at least myself) assumed that
 when branch_6_7 is cut from branch_6x then the process would somehow
 magically turn 6.x tags into 6.7 tags, and any subsequently added 6.x tags
 become 6.8 in future etc.

 The 6.x to 6.7 transition would be an extra part of the release process
 and if/since it isn't actually a part of the process then it's
 retrospectively really really tedious to resolve 6.x to the correct
 6.something tag.

 === "to not 6.x" tag ===

 An alternative is always tag to a specific (future) version i.e. to _not_
 6.x tag anything and to let the released/unreleased categorisation take 
 care
 of the already-released vs. scheduled-to-be-released difference.

 === where we are now ===

 There are still some tickets tagged to 6.x and people looking at the
 version dropdown choices will see 6.x as an existing choice. If/When no
 tickets are tagged to 6.x anymore then the 6.x choice could be removed from
 the dropdown choices leaving only specific versions to choose from.

 Having said all that, turning existing 6.x tagging into specific versions
 is tedious but not totally impossible, I did a few yesterday using simple
 git grep lookups:

 what=LUCENE-
 for version in 0 1 2 3 4 5 6 ; do
 echo branch_6_$version
 git log --decorate --oneline --graph origin/branch_6_$version | grep $what
 done

 Hope that helps? What do people think?

 Christine

 From: dev@lucene.apache.org At: 05/25/17 14:08:37
 To: dev@lucene.apache.org, dawid.we...@gmail.com, jpou...@apache.org,
 luc...@mikemccandless.com, kwri...@apache.org, u...@thetaphi.de
 Subject: Re: Strange Solr JIRA versions (Lucene too!)

 Lucene devs, let's get on the same page about this issue.

 Dawid seems to _want_ to use 6.x

 https://issues.apache.org/jira/browse/LUCENE-7841?focusedCommentId=16024639&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16024639
 Christine and I are the only ones to have commented about this pertaining
 to LUCENE JIRA issues.  Let's have this conversation here, not on
 LUCENE-7841.

 ~ David

 On Thu, May 25, 2017 at 1:28 AM David Smiley 
 wrote:
>
> Aha; this problem is a little more than a nuisance... it seems to be why
> most of these issues are marked Resolved and not Closed as well.  The RM's
> release process is to search for JIRA issues with a fix version of the
> release version (i.e. 6.6 NOT 6.x).  Issues that do not have a real 
> version
> then fall through the cracks and remain in a "Resolved" limbo/ambiguity:
>
> 

[jira] [Reopened] (SOLR-10689) migrate in collections api not working

2017-05-25 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-10689:
---

> migrate in collections api not working
> --
>
> Key: SOLR-10689
> URL: https://issues.apache.org/jira/browse/SOLR-10689
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, clients - java
>Affects Versions: 6.5.1
>Reporter: chandru
>
> When migrating with the same query that was given in the collections API, no 
> docs were migrated from A -> B. Please help me to proceed.






[jira] [Resolved] (SOLR-10689) migrate in collections api not working

2017-05-25 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-10689.
---
Resolution: Cannot Reproduce

"Fixed" was misleading, my mistake.

> migrate in collections api not working
> --
>
> Key: SOLR-10689
> URL: https://issues.apache.org/jira/browse/SOLR-10689
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, clients - java
>Affects Versions: 6.5.1
>Reporter: chandru
>
> When migrating with the same query that was given in the collections API, no 
> docs were migrated from A -> B. Please help me to proceed.






[jira] [Commented] (SOLR-10682) Add variance Stream Evaluator

2017-05-25 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024980#comment-16024980
 ] 

Joel Bernstein commented on SOLR-10682:
---

The problem I see with overloading the function names is the following scenario:

{code}
select(timeseries(collection, q="*:*", start="...", end="...", gap="...", 
var(fieldx)),
  add(var(fieldx), 1) as outField)
{code}

In this scenario, is var(fieldx) referring to the aggregation result or to the 
var Stream Evaluator? I don't think there is an easy way to resolve this 
ambiguity, so I think we should separate the Stream Evaluator function names 
from the aggregation function names to avoid this situation.

[~dpgove], any thoughts on the example?

> Add variance Stream Evaluator
> -
>
> Key: SOLR-10682
> URL: https://issues.apache.org/jira/browse/SOLR-10682
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> The variance Stream Evaluator will calculate the variance of a vector of 
> numbers.
> {code}
> v = var(colA)
> {code}
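For reference, the arithmetic such an evaluator would perform can be sketched as below. This is an illustrative sketch only, assuming the usual bias-corrected sample-variance convention (n - 1 denominator); it is not taken from the Solr implementation:

```python
def variance(xs):
    """Sample variance of a vector of numbers (n - 1 denominator)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

# e.g. the vector behind a hypothetical var(colA) call
print(variance([1.0, 2.0, 3.0, 4.0]))
```

Whether the shipped evaluator uses the sample or population denominator would be an implementation detail to confirm against the code.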






[jira] [Commented] (SOLR-10233) Add support for different replica types in Solr

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16024978#comment-16024978
 ] 

ASF subversion and git services commented on SOLR-10233:


Commit 1e4d2052e6ce10b4012eda8802a8d32ccadeeba3 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1e4d205 ]

SOLR-10233: ChaosMonkeySafeLeaderWithPullReplicasTest - Catch SolrException 
while waiting for the cluster to be ready


> Add support for different replica types in Solr
> ---
>
> Key: SOLR-10233
> URL: https://issues.apache.org/jira/browse/SOLR-10233
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Fix For: master (7.0)
>
> Attachments: 11431.consoleText.txt, SOLR-10233.patch, 
> SOLR-10233.patch, SOLR-10233.patch, SOLR-10233.patch, SOLR-10233.patch
>
>
> For the majority of the cases, current SolrCloud's  distributed indexing is 
> great. There is a subset of use cases for which the legacy Master/Slave 
> replication may fit better:
> * Don’t require NRT
> * LIR can become an issue, prefer availability of reads vs consistency or NRT
> * High number of searches (requiring many search nodes)
> SOLR-9835 is adding replicas that don’t do indexing, just update their 
> transaction log. This Jira is to extend that idea and provide the following 
> replica types:
> * *Realtime:* Writes updates to transaction log and indexes locally. Replicas 
> of type “realtime” support NRT (soft commits) and RTG. Any _realtime_ replica 
> can become a leader. This is the only type supported in SolrCloud at this 
> time and will be the default.
> * *Append:* Writes to transaction log, but not to index, uses replication. 
> Any _append_ replica can become leader (by first applying all local 
> transaction log elements). If a replica is of type _append_ but is also the 
> leader, it will behave as a _realtime_. This is exactly what SOLR-9835 is 
> proposing (non-live replicas)
> * *Passive:* Doesn’t index or writes to transaction log. Just replicates from 
> _realtime_ or _append_ replicas. Passive replicas can’t become shard leaders 
> (i.e., if there are only passive replicas in the collection at some point, 
> updates will fail same as if there is no leaders, queries continue to work), 
> so they don’t even participate in elections.
> When the leader replica of the shard receives an update, it will distribute 
> it to all _realtime_ and _append_ replicas, the same as it does today. It 
> won't distribute to _passive_ replicas.
> By using a combination of _append_ and _passive_ replicas, one can achieve an 
> equivalent of the legacy Master/Slave architecture in SolrCloud mode with 
> most of its benefits, including high availability of writes. 
> h2. API (v1 style)
> {{/admin/collections?action=CREATE…&*realtimeReplicas=X&appendReplicas=Y&passiveReplicas=Z*}}
> {{/admin/collections?action=ADDREPLICA…&*type=\[realtime/append/passive\]*}}
> * “replicationFactor=” will translate to “realtime=“ for back compatibility
> * if _passive_ > 0, _append_ or _realtime_ need to be >= 1 (can’t be all 
> passives)
> h2. Placement Strategies
> By using replica placement rules, one should be able to dedicate nodes to 
> search-only and write-only workloads. For example:
> {code}
> shard:*,replica:*,type:passive,fleet:slaves
> {code}
> where “type” is a new condition supported by the rule engine, and 
> “fleet:slaves” is a regular tag. Note that rules are only applied when the 
> replicas are created, so a later change in tags won't affect existing 
> replicas. Also, rules are per collection, so each collection could contain 
> it's own different rules.
> Note that on the server side Solr also needs to know how to distribute the 
> shard requests (maybe ShardHandler?) if we want to hit only a subset of 
> replicas (i.e. *passive *replicas only, or similar rules)
> h2. SolrJ
> SolrCloud client could be smart to prefer _passive_ replicas for search 
> requests when available (and if configured to do so). _Passive_ replicas 
> can’t respond RTG requests, so those should go to _realtime_ replicas. 
> h2. Cluster/Collection state
> {code}
> {"gettingstarted":{
>   "replicationFactor":"1",
>   "router":{"name":"compositeId"},
>   "maxShardsPerNode":"2",
>   "autoAddReplicas":"false",
>   "shards":{
> "shard1":{
>   "range":"8000-",
>   "state":"active",
>   "replicas":{
> "core_node5":{
>   "core":"gettingstarted_shard1_replica1",
>   "base_url":"http://127.0.0.1:8983/solr;,
>   "node_name":"127.0.0.1:8983_solr",
>   "state":"active",
>   

Re: Strange Solr JIRA versions (Lucene too!)

2017-05-25 Thread Erick Erickson
Dawid:

So can we automate this somehow? It's still extra work for the RM and
if it could become a one-liner addition to the release process maybe
we can make it easier.

On Thu, May 25, 2017 at 8:42 AM, Dawid Weiss  wrote:
> It's not really about wanting to tag it 6.x... It's something I got
> used to very much and something that helps (me) manage which
> branch(es) a given issue has been applied to. When 6.x tag is much
> like "next release cut from 6.x". When doing a release 6.[next] I'd
> grep for 6.x and bulk-add 6.[next] to all issues currently having 6.x,
> then remove 6.x from them (so that they have a constant fix-for, no
> branch included anymore).
>
> This process isn't the only one possible and I've had some discussions
> about alternative workflows. I didn't manage to convince my
> conversation partners and they failed to convince me, so I think it's
> a matter of personal preference.
>
> The ultimate reference is the changes.txt file anyway (?).
>
> Dawid
>
> On Thu, May 25, 2017 at 5:24 PM, Mike Drob  wrote:
>> Christine,
>>
>> Wow, that's fantastic. You can also pass a --grep argument to git directly.
>>
>> Another problem that just occurred to me though, is that we might need to
>> make updates to CHANGES files too. I'm not sure how to automate the check
>> for that, since the format can be pretty messy.
>>
>> Mike
>>
>> On Thu, May 25, 2017 at 8:39 AM, Christine Poerschke (BLOOMBERG/ LONDON)
>>  wrote:
>>>
>>> Hi Everyone,
>>>
>>> Perhaps a little more context would help get us all on the same page re:
>>> the "to 6.x or to not 6.x" tag question.
>>>
>>> === "to 6.x" tag ===
>>>
>>> So, some of us (myself included) for SOLR issues used to tag FixVersion
>>> 6.x since the commit was to branch_6x and (at least myself) assumed that
>>> when branch_6_7 is cut from branch_6x then the process would somehow
>>> magically turn 6.x tags into 6.7 tags, and any subsequently added 6.x tags
>>> become 6.8 in future etc.
>>>
>>> The 6.x to 6.7 transition would be an extra part of the release process
>>> and if/since it isn't actually a part of the process then it's
>>> retrospectively really really tedious to resolve 6.x to the correct
>>> 6.something tag.
>>>
>>> === "to not 6.x" tag ===
>>>
>>> An alternative is always tag to a specific (future) version i.e. to _not_
>>> 6.x tag anything and to let the released/unreleased categorisation take care
>>> of the already-released vs. scheduled-to-be-released difference.
>>>
>>> === where we are now ===
>>>
>>> There are still some tickets tagged to 6.x and people looking at the
>>> version dropdown choices will see 6.x as an existing choice. If/When no
>>> tickets are tagged to 6.x anymore then the 6.x choice could be removed from
>>> the dropdown choices leaving only specific versions to choose from.
>>>
>>> Having said all that, turning existing 6.x tagging into specific versions
>>> is tedious but not totally impossible, I did a few yesterday using simple
>>> git grep lookups:
>>>
>>> what=LUCENE-
for version in 0 1 2 3 4 5 6 ; do
  echo branch_6_$version
  git log --decorate --oneline --graph origin/branch_6_$version | grep $what
done
>>>
>>> Hope that helps? What do people think?
>>>
>>> Christine
>>>
>>> From: dev@lucene.apache.org At: 05/25/17 14:08:37
>>> To: dev@lucene.apache.org, dawid.we...@gmail.com, jpou...@apache.org,
>>> luc...@mikemccandless.com, kwri...@apache.org, u...@thetaphi.de
>>> Subject: Re: Strange Solr JIRA versions (Lucene too!)
>>>
>>> Lucene devs, let's get on the same page about this issue.
>>>
>>> Dawid seems to _want_ to use 6.x
>>>
>>> https://issues.apache.org/jira/browse/LUCENE-7841?focusedCommentId=16024639=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16024639
>>> Christine and I are the only ones to have commented about this pertaining
>>> to LUCENE JIRA issues.  Let's have this conversation here, not on
>>> LUCENE-7841.
>>>
>>> ~ David
>>>
>>> On Thu, May 25, 2017 at 1:28 AM David Smiley 
>>> wrote:

 Aha; this problem is a little more than a nuisance... it seems to be why
 most of these issues are marked Resolved and not Closed as well.  The RM's
 release process is to search for JIRA issues with a fix version of the
 release version (i.e. 6.6 NOT 6.x).  Issues that do not have a real version
 then fall through the cracks and remain in a "Resolved" limbo/ambiguity:

 https://issues.apache.org/jira/issues/?jql=project%20%3D%20LUCENE%20AND%20status%20in%20(Resolved)%20AND%20fixVersion%20%3D%206.x%20ORDER%20BY%20fixVersion%20ASC%2C%20assignee%20ASC
 And thus it's unclear to users browsing these issues in JIRA which
 version the issue was released in.

 ~ David


 On Wed, May 24, 2017 at 11:16 AM David Smiley 
 wrote:
>
> It seems this issue applies to Lucene too, and it's more widespread 

[jira] [Commented] (SOLR-10515) Persist intermediate trigger state in ZK to continue tracking information across overseer restarts

2017-05-25 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024954#comment-16024954
 ] 

Andrzej Bialecki  commented on SOLR-10515:
--

Keeping a map would require a deep clone to avoid in-place modifications, but I 
agree it would be less fragile. I'll look into it.
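The deep-clone concern above can be illustrated with a minimal stand-alone sketch (the names here are illustrative, not Solr's actual trigger API): copying only the outer map still shares the nested per-node state, so a checkpoint persisted to ZK could be mutated in place afterwards unless each nested map is cloned too.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, not Solr code: trigger state modeled as a map of
// per-node property maps. deepClone copies the nested maps as well, so a
// snapshot taken for persistence is isolated from later in-place updates.
class TriggerStateSketch {
  static Map<String, Map<String, Object>> deepClone(Map<String, Map<String, Object>> state) {
    Map<String, Map<String, Object>> copy = new HashMap<>();
    for (Map.Entry<String, Map<String, Object>> e : state.entrySet()) {
      copy.put(e.getKey(), new HashMap<>(e.getValue()));  // clone nested map too
    }
    return copy;
  }
}
```
A shallow `new HashMap<>(state)` would reuse the inner maps, which is exactly the fragility being discussed.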

> Persist intermediate trigger state in ZK to continue tracking information 
> across overseer restarts
> --
>
> Key: SOLR-10515
> URL: https://issues.apache.org/jira/browse/SOLR-10515
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
>  Labels: autoscaling
> Fix For: master (7.0)
>
> Attachments: SOLR-10515.patch, SOLR-10515.patch
>
>
> The current trigger design is simplistic and keeps all the intermediate state 
> in memory. But this presents two problems when the overseer itself fails:
> # We lose tracking state such as which node was added before the overseer 
> restarted
> # A nodeLost trigger can never really fire for the overseer node itself
> So we need a way, preferably in the trigger API itself to save intermediate 
> state or checkpoints so that it can seamlessly continue on overseer restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release planning for 7.0

2017-05-25 Thread Erick Erickson
I think people are missing my point. I am _not_ advocating having "two
major feature branches developed at once". I'm pointing out that
between now and the 7.0 release (and possibly for a bit thereafter),
there will be a number of JIRAs that _could_ be backported to a
(future) 6.7 with virtually no effort. Some of these will be quite
helpful to clients. Trust me on this.

There's no good reason to _artificially_ refuse to backport a JIRA if
it's easy just because "we're releasing 7.0". It always takes a while
for the next major version to burn in so there's this gray period
between when 7.0 is cut and when 7.0.x will have enough mileage on it
to be the go-to version for production systems. The key here is "if
it's easy".

I'm _not_ advocating that every new commit to 7x has to be backported.
If it doesn't backport cleanly with near-zero effort, don't bother. If
you're working on a nifty new feature that would require extra work to
back-port to 6x, don't bother.

If you can back-port it in 5 minutes with a simple merge between now
and when 7.0.x becomes our go-to, please consider it.

I also suspect that many of the Lucene-level changes are far more
difficult to back-port than many of the Solr changes, since the Lucene
code has to deal with gnarly back-compat issues a lot more. So I'd rather
expect that many Lucene changes can't get back-ported because it's not
easy, especially when all the deprecated code is removed, which is
fine.

Just let's not automatically exclude the idea of back-porting a JIRA
to 6x if it can be done with minimal effort just because 7.0 is being
planned.

We've always had JIRAs back-ported to newest_release-1x to be picked
up _if_ there's another release along that branch, so I don't
understand why this is at all controversial.

Best,
Erick

On Thu, May 25, 2017 at 7:18 AM, Michael McCandless
 wrote:
> On Thu, May 25, 2017 at 9:16 AM, David Smiley 
> wrote:
>>
>> On Thu, May 25, 2017 at 9:06 AM Shawn Heisey  wrote:
>>>
>>> > To me the best trade-off is to stop doing 6.x minor releases once 7.0
>>> > is out.
>>>
>>> I did say it would be relatively safe to do bugfixes and backport
>>> self-contained features in 6.x after 7.0 comes out as long as care is
>>> taken to not change the index format or analysis component behavior.
>>>
>>> Despite saying that, I actually agree with you that new minor releases
>>> (and therefore new features) should be avoided in the previous major
>>> version unless there is a VERY compelling reason.  It doesn't seem very
>>> likely that a compelling reason will be encountered.
>>
>>
>> Why?  If someone (not you, obviously), is willing to be the RM, then
>> what's it to you?
>
>
> It's more than just an RM volunteer to do another 6.x feature release; it's
> also our collective effort to back-port features to 6.x, to spend limited CI
> resources running 6.x tests, etc.
>
>> I think there's nothing wrong with a 6.whatever release following a 7.0.
>
>
> I don't think that makes much sense.
>
> Why would we choose to have two major feature branches developed at once?
> Once 7.0 is out, we should work hard towards the next (7.1) feature release,
> and leave 6.6.x open only for bug fixes.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>




[jira] [Commented] (LUCENE-7844) UnifiedHighlighter: simplify "maxPassages" input API

2017-05-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024934#comment-16024934
 ] 

David Smiley commented on LUCENE-7844:
--

bq.  For example, a user may want to highlight a title fully (one passage) ...

For that case, the user _should_ be using WholeBreakIterator for that field, 
and thus they already need to subclass.
Does that make you feel any better?  If not, I'm not sure where this all leaves 
us right now.

---
I do like a FieldOptions (per-field object options) design over subclassing; 
again -- longer term.  I could imagine something like this:
{code:java}
unifiedHighlighter.highlight(query, topDocs, 
 unifiedHighlighter.fieldOptionsWhole("title"),
 unifiedHighlighter.fieldOptions("body", 3)
);
{code}
Indeed, WholeBreakIterator almost suggests a different FieldHighlighter that is 
simpler (no BI, Scorer)... yet the outcome will be a bunch more code for likely 
immeasurable performance win and it's all internal code so the user's perceived 
complexity doesn't change.

> UnifiedHighlighter: simplify "maxPassages" input API
> 
>
> Key: LUCENE-7844
> URL: https://issues.apache.org/jira/browse/LUCENE-7844
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: David Smiley
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE_7844__UH_maxPassages_simplification.patch
>
>
> The "maxPassages" input to the UnifiedHighlighter can be provided as an array 
> to some of the public methods on UnifiedHighlighter.  When it's provided as 
> an array, the index in the array is for the field in a parallel array. I 
> think this is awkward and furthermore it's inconsistent with the way this 
> highlighter customizes things on a by field basis.  Instead, the parameter 
> can be a simple int default (not an array), and then there can be a protected 
> method like {{getMaxPassageCount(String field)}} that returns an Integer 
> which, when non-null, replaces the default value for this field.
> Aside from API simplicity and consistency, this will also remove some 
> annoying parallel array sorting going on.
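The proposal above can be sketched as a small stand-alone class; this is illustrative only (the class and everything except the `getMaxPassageCount` hook are hypothetical, not the actual UnifiedHighlighter API): a single int default replaces the parallel array, and subclasses override the per-field hook.

```java
// Illustrative sketch of the proposed API shape, not Lucene code:
// one int default plus a protected per-field override hook.
class HighlighterSketch {
  private final int defaultMaxPassages;

  HighlighterSketch(int defaultMaxPassages) {
    this.defaultMaxPassages = defaultMaxPassages;
  }

  // Subclasses return a non-null value to override the default for a field.
  protected Integer getMaxPassageCount(String field) {
    return null;
  }

  int maxPassagesFor(String field) {
    Integer override = getMaxPassageCount(field);
    return override != null ? override : defaultMaxPassages;
  }
}
```
A caller wanting the title highlighted as one passage would subclass and return 1 for "title", instead of maintaining a maxPassages array parallel to the field array.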






[jira] [Commented] (SOLR-7383) DIH: rewrite XPathEntityProcessor/RSS example as the smallest good demo possible

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024910#comment-16024910
 ] 

ASF subversion and git services commented on SOLR-7383:
---

Commit 2bc88b3df20f3367b13055aafe64da42e467790b in lucene-solr's branch 
refs/heads/branch_6_6 from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2bc88b3 ]

Ref Guide: fix atom example for SOLR-7383


> DIH: rewrite XPathEntityProcessor/RSS example as the smallest good demo 
> possible
> 
>
> Key: SOLR-7383
> URL: https://issues.apache.org/jira/browse/SOLR-7383
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0, 6.0
>Reporter: Upayavira
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
> Attachments: atom_20170315.tgz, rss-data-config.xml, SOLR-7383.patch
>
>
> The DIH example (solr/example/example-DIH/solr/rss/conf/rss-data-config.xml) 
> is broken again. See associated issues.
> Below is a config that should work.
> This is caused by Slashdot seemingly oscillating between RDF/RSS and pure 
> RSS. Perhaps we should depend upon something more static, rather than an 
> external service that is free to change as it desires.
> {code:xml}
> <dataConfig>
>   <dataSource type="URLDataSource" />
>   <document>
>     <entity name="slashdot"
>             pk="link"
>             url="http://rss.slashdot.org/Slashdot/slashdot"
>             processor="XPathEntityProcessor"
>             forEach="/RDF/item"
>             transformer="DateFormatTransformer">
>       <field column="source" xpath="/RDF/channel/title" commonField="true" />
>       <field column="source-link" xpath="/RDF/channel/link" commonField="true" />
>       <field column="subject" xpath="/RDF/channel/subject" commonField="true" />
>       <field column="title" xpath="/RDF/item/title" />
>       <field column="link" xpath="/RDF/item/link" />
>       <field column="description" xpath="/RDF/item/description" />
>       <field column="creator" xpath="/RDF/item/creator" />
>       <field column="item-subject" xpath="/RDF/item/subject" />
>       <field column="date" xpath="/RDF/item/date" dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss" />
>       <field column="slash-department" xpath="/RDF/item/department" />
>       <field column="slash-section" xpath="/RDF/item/section" />
>       <field column="slash-comments" xpath="/RDF/item/comments" />
>     </entity>
>   </document>
> </dataConfig>
> {code}






[jira] [Commented] (SOLR-7383) DIH: rewrite XPathEntityProcessor/RSS example as the smallest good demo possible

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024909#comment-16024909
 ] 

ASF subversion and git services commented on SOLR-7383:
---

Commit 17f565c71af875d95a47c81894a816159ba5a981 in lucene-solr's branch 
refs/heads/branch_6x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=17f565c ]

Ref Guide: fix atom example for SOLR-7383


> DIH: rewrite XPathEntityProcessor/RSS example as the smallest good demo 
> possible
> 
>
> Key: SOLR-7383
> URL: https://issues.apache.org/jira/browse/SOLR-7383
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0, 6.0
>Reporter: Upayavira
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
> Attachments: atom_20170315.tgz, rss-data-config.xml, SOLR-7383.patch
>
>
> The DIH example (solr/example/example-DIH/solr/rss/conf/rss-data-config.xml) 
> is broken again. See associated issues.
> Below is a config that should work.
> This is caused by Slashdot seemingly oscillating between RDF/RSS and pure 
> RSS. Perhaps we should depend upon something more static, rather than an 
> external service that is free to change as it desires.
> {code:xml}
> <dataConfig>
>   <dataSource type="URLDataSource" />
>   <document>
>     <entity name="slashdot"
>             pk="link"
>             url="http://rss.slashdot.org/Slashdot/slashdot"
>             processor="XPathEntityProcessor"
>             forEach="/RDF/item"
>             transformer="DateFormatTransformer">
>       <field column="source" xpath="/RDF/channel/title" commonField="true" />
>       <field column="source-link" xpath="/RDF/channel/link" commonField="true" />
>       <field column="subject" xpath="/RDF/channel/subject" commonField="true" />
>       <field column="title" xpath="/RDF/item/title" />
>       <field column="link" xpath="/RDF/item/link" />
>       <field column="description" xpath="/RDF/item/description" />
>       <field column="creator" xpath="/RDF/item/creator" />
>       <field column="item-subject" xpath="/RDF/item/subject" />
>       <field column="date" xpath="/RDF/item/date" dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss" />
>       <field column="slash-department" xpath="/RDF/item/department" />
>       <field column="slash-section" xpath="/RDF/item/section" />
>       <field column="slash-comments" xpath="/RDF/item/comments" />
>     </entity>
>   </document>
> </dataConfig>
> {code}






[jira] [Commented] (SOLR-7383) DIH: rewrite XPathEntityProcessor/RSS example as the smallest good demo possible

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024908#comment-16024908
 ] 

ASF subversion and git services commented on SOLR-7383:
---

Commit b3024d67cae0f2c9bbfb9bdf897c9b43d6ab8926 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b3024d6 ]

Ref Guide: fix atom example for SOLR-7383


> DIH: rewrite XPathEntityProcessor/RSS example as the smallest good demo 
> possible
> 
>
> Key: SOLR-7383
> URL: https://issues.apache.org/jira/browse/SOLR-7383
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.0, 6.0
>Reporter: Upayavira
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 6.6, master (7.0)
>
> Attachments: atom_20170315.tgz, rss-data-config.xml, SOLR-7383.patch
>
>
> The DIH example (solr/example/example-DIH/solr/rss/conf/rss-data-config.xml) 
> is broken again. See associated issues.
> Below is a config that should work.
> This is caused by Slashdot seemingly oscillating between RDF/RSS and pure 
> RSS. Perhaps we should depend upon something more static, rather than an 
> external service that is free to change as it desires.
> {code:xml}
> <dataConfig>
>   <dataSource type="URLDataSource" />
>   <document>
>     <entity name="slashdot"
>             pk="link"
>             url="http://rss.slashdot.org/Slashdot/slashdot"
>             processor="XPathEntityProcessor"
>             forEach="/RDF/item"
>             transformer="DateFormatTransformer">
>       <field column="source" xpath="/RDF/channel/title" commonField="true" />
>       <field column="source-link" xpath="/RDF/channel/link" commonField="true" />
>       <field column="subject" xpath="/RDF/channel/subject" commonField="true" />
>       <field column="title" xpath="/RDF/item/title" />
>       <field column="link" xpath="/RDF/item/link" />
>       <field column="description" xpath="/RDF/item/description" />
>       <field column="creator" xpath="/RDF/item/creator" />
>       <field column="item-subject" xpath="/RDF/item/subject" />
>       <field column="date" xpath="/RDF/item/date" dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss" />
>       <field column="slash-department" xpath="/RDF/item/department" />
>       <field column="slash-section" xpath="/RDF/item/section" />
>       <field column="slash-comments" xpath="/RDF/item/comments" />
>     </entity>
>   </document>
> </dataConfig>
> {code}






[jira] [Commented] (SOLR-10479) support HttpShardHandlerFactory.loadBalancerRequests(MinimumAbsolute|MaximumFraction) options

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024892#comment-16024892
 ] 

ASF subversion and git services commented on SOLR-10479:


Commit 2bb6e2cacabdcea6c7534595dfc23cd17973a68d in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2bb6e2c ]

SOLR-10479: Adds support for 
HttpShardHandlerFactory.loadBalancerRequests(MinimumAbsolute|MaximumFraction) 
configuration. (Ramsey Haddad, Daniel Collins, Christine Poerschke)


> support 
> HttpShardHandlerFactory.loadBalancerRequests(MinimumAbsolute|MaximumFraction) 
> options
> -
>
> Key: SOLR-10479
> URL: https://issues.apache.org/jira/browse/SOLR-10479
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10479.patch, SOLR-10479.patch
>
>
> If a request sends no {{timeAllowed}} threshold (or if it sends a very 
> generous threshold) then that request can potentially be retried on 'very 
> many' servers in the cloud.
> Via the 
> {{HttpShardHandlerFactory.loadBalancerRequests(MinimumAbsolute|MaximumFraction)}}
>  options the number of servers tried can be restricted via configuration e.g.
> {code:xml}
> <shardHandlerFactory class="solr.HttpShardHandlerFactory">
>   <int name="loadBalancerRequestsMinimumAbsolute">2</int>
>   <double name="loadBalancerRequestsMaximumFraction">0.50</double>
> </shardHandlerFactory>
> {code}
> would on a six-replica-and-all-replicas-active collection/shard restrict 
> sending to three replicas i.e. max(2, 0.50 x 6) and if the collection/shard 
> temporarily becomes 
> three-replicas-active-and-three-replicas-recovering-or-down then sending is 
> restricted to two replicas i.e. max(2, 0.50 x 3) temporarily.
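The arithmetic described above can be sketched in a few lines; the method and parameter names mirror the configuration options but this is an illustrative sketch, not Solr's actual implementation (which may round the fraction differently):

```java
// Illustrative sketch of the server-limit computation: try at most
// max(minimumAbsolute, maximumFraction * activeServers) replicas.
class LoadBalancerLimitSketch {
  static int maxServersToTry(int minimumAbsolute, double maximumFraction, int activeServers) {
    // e.g. max(2, 0.50 * 6) = 3 when all six replicas are active,
    //      max(2, 0.50 * 3) = 2 when only three are active.
    return Math.max(minimumAbsolute, (int) (maximumFraction * activeServers));
  }
}
```
Both worked examples from the description come out the same under floor or ceiling rounding, so the sketch matches the stated behaviour either way.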






[jira] [Commented] (SOLR-10659) remove ResponseBuilder.getSortSpec use in SearchGroupShardResponseProcessor

2017-05-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16024893#comment-16024893
 ] 

ASF subversion and git services commented on SOLR-10659:


Commit 6ba1834bc35d5cf322e7ba30dbc86e4d273eebb7 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6ba1834 ]

SOLR-10659: Remove ResponseBuilder.getSortSpec use in 
SearchGroupShardResponseProcessor. (Judith Silverman via Christine Poerschke)


> remove ResponseBuilder.getSortSpec use in SearchGroupShardResponseProcessor
> ---
>
> Key: SOLR-10659
> URL: https://issues.apache.org/jira/browse/SOLR-10659
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10659.patch
>
>
> For clarity splitting this short but very subtle refactor out from the 
> SOLR-6203 bug fix effort.





