[jira] [Commented] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-22 Thread Hendrik Haddorp (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880065#comment-15880065
 ] 

Hendrik Haddorp commented on SOLR-10092:


For a setup using a local filesystem I did not see this code triggered at all. 
But when I just tried to reproduce this on an unpatched installation, for some 
reason it looked like it worked there as well, so I am going to recheck. From 
what I saw in the code, it requires the shard id/name to be set, which is also 
what the exception says, but OverseerAutoReplicaFailoverThread does not set it.

Regarding the instance dir: I'm seeing this in the logs:
2017-02-23 06:43:13.968 INFO  (qtp1224347463-12) [c:test.test s:shard1 
r:core_node3 x:test.test_shard1_replica1] o.a.s.c.SolrCore 
[[test.test_shard1_replica1] ] Opening new SolrCore at 
[/var/opt/solr/test.test_shard1_replica1], 
dataDir=[hdfs://my-hdfs-namenode:8000/solr/test.test/core_node3/data/]
So even for HDFS there is local information. The folder only contains a 
core.properties file, which seems to contain everything required to determine 
the replica. I'm not sure why this is not taken from ZooKeeper, though.
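For reference, the local core.properties for such a replica typically holds only the core's identifiers. The sketch below is inferred from the log excerpt above, not copied from a real file, so treat the exact keys and values as illustrative:

```properties
# illustrative core.properties; values inferred from the log excerpt above
name=test.test_shard1_replica1
collection=test.test
shard=shard1
coreNodeName=core_node3
```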

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1150 - Unstable!

2017-02-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1150/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([9831B99AB83FE420:13166A4BF9394FA4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:865)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:436)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  

[jira] [Commented] (SOLR-9450) Link to online Javadocs instead of distributing with binary download

2017-02-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880034#comment-15880034
 ] 

Jan Høydahl commented on SOLR-9450:
---

[~arafalov], works here
{noformat}
[master2] ~/git/lucene-solr-2$ patch -p1 -i ~/Desktop/SOLR-9450.patch 
patching file solr/CHANGES.txt
Hunk #1 succeeded at 246 with fuzz 2 (offset 22 lines).
Hunk #2 succeeded at 700 (offset 24 lines).
Hunk #3 succeeded at 977 (offset 24 lines).
patching file solr/README.txt
patching file solr/build.xml
patching file solr/common-build.xml
patching file solr/site/online-link.xsl
{noformat}

Also pushed it to my GitHub fork if you want to try a merge: 
https://github.com/cominvent/lucene-solr/tree/solr9450

> Link to online Javadocs instead of distributing with binary download
> 
>
> Key: SOLR-9450
> URL: https://issues.apache.org/jira/browse/SOLR-9450
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9450.patch, SOLR-9450.patch, SOLR-9450.patch, 
> SOLR-9450.patch
>
>
> Spinoff from SOLR-6806. This sub task will replace the contents of {{docs}} 
> in the binary download with a link to the online JavaDocs. The build should 
> make sure to generate a link to the correct version. I believe this is the 
> correct template: http://lucene.apache.org/solr/6_2_0/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9530) Add an Atomic Update Processor

2017-02-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880016#comment-15880016
 ] 

Noble Paul edited comment on SOLR-9530 at 2/23/17 7:07 AM:
---

Let's get rid of any URP configuration from {{solrconfig.xml}} and move 
everything to parameters, defining what is required. The problem is that the 
config API does not support URP chains and does not plan to, so let's keep it 
as simple parameters.

Accept params as follows and nuke all the configuration otherwise required:
{code}
processor=Atomic_newfield=add=set_i=inc
{code}


was (Author: noble.paul):
Let's get rid of any URP configuration from {{solrconfig.xml}}. Let's move 
everything to parameters and define what is required.  The problem is , the 
config API does not support URP chain and it does not plan to do so. So, let's 
keep it is simple parameters

accept params as follows and nuke all the configuration required
{code}
Atomic.my_newfield=add=set_i=inc
{code}

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch, SOLR-9530.patch, SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9530) Add an Atomic Update Processor

2017-02-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880016#comment-15880016
 ] 

Noble Paul commented on SOLR-9530:
--

Let's get rid of any URP configuration from {{solrconfig.xml}} and move 
everything to parameters, defining what is required. The problem is that the 
config API does not support URP chains and does not plan to, so let's keep it 
as simple parameters.

Accept params as follows and nuke all the configuration otherwise required:
{code}
Atomic.my_newfield=add=set_i=inc
{code}

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch, SOLR-9530.patch, SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9530) Add an Atomic Update Processor

2017-02-22 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879998#comment-15879998
 ] 

Ishan Chattopadhyaya edited comment on SOLR-9530 at 2/23/17 7:04 AM:
-

bq. In case of multiple threads try to update the incoming doc to atomic-type 
update doc, all the threads will end up forming same atomic-type update doc (as 
same set of operations will be performed by 'SET' field).
The problem is with "inc" operations. When two clients both see the value as, 
say, 100, and each wants to increase it by 50, they can supply the document 
version along with "inc":50. One of them is executed first, and the second is 
rejected because the document version is no longer what that client saw. 
Without optimistic concurrency the value would end up at 200, while the 
intended value was 150.

Also, do consider the cases when one client is indexing without this URP, but 
another client is using this URP, both in parallel.
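The "inc" scenario above can be simulated outside Solr. The toy class below is not Solr code; it only mirrors the optimistic-concurrency idea by conditioning the increment on the version the client read:

```java
// Toy simulation of optimistic concurrency for "inc" (illustrative, not Solr code):
// two clients read version 1 / value 100, each sends inc=50 conditioned on that version.
class VersionedCounter {
    long version = 1, value = 100;

    // Applies the increment only if the caller's version is still current.
    synchronized boolean inc(long seenVersion, long delta) {
        if (seenVersion != version) return false; // reject: doc changed since the read
        value += delta;
        version++;
        return true;
    }
}

public class OptimisticIncDemo {
    public static void main(String[] args) {
        VersionedCounter doc = new VersionedCounter();
        long seen = doc.version;             // both clients read version 1, value 100
        boolean first = doc.inc(seen, 50);   // applied: value becomes 150, version 2
        boolean second = doc.inc(seen, 50);  // rejected: version is no longer 1
        System.out.println(first + " " + second + " " + doc.value); // prints: true false 150
    }
}
```

Without the version check, both increments would apply and the value would land at 200, which is exactly the hazard described above.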


was (Author: ichattopadhyaya):
bq. In case of multiple threads try to update the incoming doc to atomic-type 
update doc, all the threads will end up forming same atomic-type update doc (as 
same set of operations will be performed by 'SET' field).
The problem is with "inc" operations. When two clients see the value to be, say 
100, and want to increase by 50, they can supply the document version along 
with "inc":50. One of them would be executed first, and the second one would be 
rejected since the document version is no longer the same as what this client 
saw.

Also, do consider the cases when one client is indexing without this URP, but 
another client is using this URP, both in parallel. 

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch, SOLR-9530.patch, SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9530) Add an Atomic Update Processor

2017-02-22 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879998#comment-15879998
 ] 

Ishan Chattopadhyaya commented on SOLR-9530:


bq. In case of multiple threads try to update the incoming doc to atomic-type 
update doc, all the threads will end up forming same atomic-type update doc (as 
same set of operations will be performed by 'SET' field).
The problem is with "inc" operations. When two clients both see the value as, 
say, 100, and each wants to increase it by 50, they can supply the document 
version along with "inc":50. One of them is executed first, and the second is 
rejected because the document version is no longer what that client saw.

Also, do consider the cases when one client is indexing without this URP, but 
another client is using this URP, both in parallel. 

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch, SOLR-9530.patch, SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated LUCENE-7705:
-
Description: 
SOLR-10186

[~erickerickson]: Is there a good reason that we hard-code a 256 character 
limit for the CharTokenizer? In order to change this limit it requires that 
people copy/paste the incrementToken into some new class since incrementToken 
is final.
KeywordTokenizer can easily change the default (which is also 256 bytes), but 
to do so requires code rather than being able to configure it in the schema.
For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
(WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
(Factories) it would take adding a c'tor to the base class in Lucene and using 
it in the factory.
Any objections?
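For illustration, the schema-level configuration being proposed might look like the sketch below. The {{maxCharLen}} attribute name is taken from the SOLR-10186 patch discussion and is an assumption, not a released API:

```xml
<!-- hypothetical sketch: maxCharLen is the proposed, not-yet-released attribute -->
<fieldType name="text_ws_long" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory" maxCharLen="1024"/>
  </analyzer>
</fieldType>
```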

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Priority: Minor
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created LUCENE-7705:


 Summary: Allow CharTokenizer-derived tokenizers and 
KeywordTokenizer to configure the max token length
 Key: LUCENE-7705
 URL: https://issues.apache.org/jira/browse/LUCENE-7705
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Amrit Sarkar
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879971#comment-15879971
 ] 

Amrit Sarkar edited comment on SOLR-10186 at 2/23/17 6:35 AM:
--

Erick,

If we specify the correct tag in the schema, get(..) and getInt(..) will remove 
the desired tuple from the arguments and the _(!args.isEmpty())_ check is for 
the unknown parameters only.

{code:java}
maxCharLen = getInt(args, "maxCharLen", KeywordTokenizer.DEFAULT_BUFFER_SIZE);

protected final int getInt(Map<String,String> args, String name, int defaultVal) {
  String s = args.remove(name);
  return s == null ? defaultVal : Integer.parseInt(s);
}
{code}

I will write tests for this too. I am opening a JIRA under Lucene; let me know 
in which of the two issues the discussion should continue.
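The removal-based consumption above can be checked standalone. The sketch below uses illustrative names (a plain class and a literal 256 standing in for {{KeywordTokenizer.DEFAULT_BUFFER_SIZE}}) and is not Solr code:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone check of the consumption pattern (illustrative, not Solr code):
// getInt(..) removes the entry it reads, so a trailing !args.isEmpty() check
// can only trigger for parameters the factory did not recognize.
public class ArgsDemo {
    static int getInt(Map<String, String> args, String name, int defaultVal) {
        String s = args.remove(name); // consume the tuple
        return s == null ? defaultVal : Integer.parseInt(s);
    }

    public static void main(String[] argv) {
        Map<String, String> args = new HashMap<>();
        args.put("maxCharLen", "1024");
        // 256 stands in for KeywordTokenizer.DEFAULT_BUFFER_SIZE
        int maxCharLen = getInt(args, "maxCharLen", 256);
        System.out.println(maxCharLen + " " + args.isEmpty()); // prints: 1024 true
    }
}
```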


was (Author: sarkaramr...@gmail.com):
Erick,

If we specify the correct tag in the schema, get(..) and getInt(..) will remove 
the desired tuple from the arguments and the _(!args.isEmpty())_ check if for 
the unknown parameters only.

{code:java}
maxCharLen = getInt(args, "maxCharLen", KeywordTokenizer.DEFAULT_BUFFER_SIZE);

protected final int getInt(Map<String,String> args, String name, int defaultVal) {
  String s = args.remove(name);
  return s == null ? defaultVal : Integer.parseInt(s);
}
{code}

I will write tests for this too. Opening JIRA under Lucene, and let me know 
where to have the discussion from the either two.

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
> Attachments: SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879971#comment-15879971
 ] 

Amrit Sarkar commented on SOLR-10186:
-

Erick,

If we specify the correct tag in the schema, get(..) and getInt(..) will remove 
the desired tuple from the arguments, and the _(!args.isEmpty())_ check is for 
the unknown parameters only.

{code:java}
maxCharLen = getInt(args, "maxCharLen", KeywordTokenizer.DEFAULT_BUFFER_SIZE);

protected final int getInt(Map<String,String> args, String name, int defaultVal) {
  String s = args.remove(name);
  return s == null ? defaultVal : Integer.parseInt(s);
}
{code}

I will write tests for this too. I am opening a JIRA under Lucene; let me know 
in which of the two issues the discussion should continue.

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
> Attachments: SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879971#comment-15879971
 ] 

Amrit Sarkar edited comment on SOLR-10186 at 2/23/17 6:35 AM:
--

Erick,

If we specify the correct tag in the schema, get(..) and getInt(..) will remove 
the desired tuple from the arguments, and the _(!args.isEmpty())_ check is for 
the unknown parameters only.

{code:java}
maxCharLen = getInt(args, "maxCharLen", KeywordTokenizer.DEFAULT_BUFFER_SIZE);

protected final int getInt(Map<String,String> args, String name, int defaultVal) {
  String s = args.remove(name);
  return s == null ? defaultVal : Integer.parseInt(s);
}
{code}

I will write tests for this too. I am opening a JIRA under Lucene; let me know 
in which of the two issues the discussion should continue.


was (Author: sarkaramr...@gmail.com):
Erick,

If we specify the correct tag in the schema, get(..) and getInt(..) will remove 
the desired the tuple from the argument and the _(!args.isEmpty())_ check if 
for the unknown parameters only.

{code:java}
maxCharLen = getInt(args, "maxCharLen", KeywordTokenizer.DEFAULT_BUFFER_SIZE);

protected final int getInt(Map<String,String> args, String name, int defaultVal) {
  String s = args.remove(name);
  return s == null ? defaultVal : Integer.parseInt(s);
}
{code}

I will write tests for this too. Opening JIRA under Lucene, and let me know 
where to have the discussion from the either two.

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
> Attachments: SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 715 - Still Unstable!

2017-02-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/715/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI

Error Message:
Could not find collection : implicitcoll

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : implicitcoll
at 
__randomizedtesting.SeedInfo.seed([D1E37C890235A44:67FFB9A3ADB9EC3C]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:245)
at 
org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI(CustomCollectionTest.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.cloud.TestSegmentSorting.testAtomicUpdateOfSegmentSortField

Error Message:
Could not find collection:testAtomicUpdateOfSegmentSortField


[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1245 - Unstable

2017-02-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1245/

2 tests failed.
FAILED:  
org.apache.solr.core.HdfsDirectoryFactoryTest.testInitArgsOrSysPropConfig

Error Message:
The max direct memory is likely too low.  Either increase it (by adding 
-XX:MaxDirectMemorySize=g -XX:+UseLargePages to your containers startup 
args) or disable direct allocation using 
solr.hdfs.blockcache.direct.memory.allocation=false in solrconfig.xml. If you 
are putting the block cache on the heap, your java heap size might not be large 
enough. Failed allocating ~134.217728 MB.

Stack Trace:
java.lang.RuntimeException: The max direct memory is likely too low.  Either 
increase it (by adding -XX:MaxDirectMemorySize=g -XX:+UseLargePages to 
your containers startup args) or disable direct allocation using 
solr.hdfs.blockcache.direct.memory.allocation=false in solrconfig.xml. If you 
are putting the block cache on the heap, your java heap size might not be large 
enough. Failed allocating ~134.217728 MB.
at 
__randomizedtesting.SeedInfo.seed([BFA67DA57E6DA6E2:4809B48E83E44CC9]:0)
at 
org.apache.solr.core.HdfsDirectoryFactory.createBlockCache(HdfsDirectoryFactory.java:310)
at 
org.apache.solr.core.HdfsDirectoryFactory.getBlockDirectoryCache(HdfsDirectoryFactory.java:286)
at 
org.apache.solr.core.HdfsDirectoryFactory.create(HdfsDirectoryFactory.java:226)
at 
org.apache.solr.core.HdfsDirectoryFactoryTest.testInitArgsOrSysPropConfig(HdfsDirectoryFactoryTest.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

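The allocation that fails in the HdfsDirectoryFactoryTest report above (in HdfsDirectoryFactory.createBlockCache) can be reproduced outside Solr with a plain direct-buffer allocation. A minimal sketch, not Solr's code — the class name and method are illustrative:

```java
import java.nio.ByteBuffer;

// Reproduces the failure mode from the log: the HDFS block cache tries to
// grab ~134.217728 MB (128 MiB) of direct memory in one allocation, which
// throws OutOfMemoryError when -XX:MaxDirectMemorySize is too low.
class DirectMemoryCheck {

    /** Attempts a direct allocation of the given size; returns false on OOM. */
    static boolean tryAllocateDirect(int bytes) {
        try {
            return ByteBuffer.allocateDirect(bytes).capacity() == bytes;
        } catch (OutOfMemoryError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        int blockCacheBytes = 128 * 1024 * 1024; // 134217728 bytes, as in the log
        if (!tryAllocateDirect(blockCacheBytes)) {
            System.out.println("direct memory too low: raise -XX:MaxDirectMemorySize"
                + " or set solr.hdfs.blockcache.direct.memory.allocation=false");
        } else {
            System.out.println("allocation succeeded");
        }
    }
}
```

Running this with a deliberately small `-XX:MaxDirectMemorySize` (e.g. 64m) prints the advice line, matching the condition the test failure reports.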
[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-02-22 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879841#comment-15879841
 ] 

Ishan Chattopadhyaya commented on SOLR-9835:


Also, let's add a simple test to ensure that in-place updates work on a replica:
# Index a few documents to the leader, including a full document with id=0.
# Commit
# Index a few more documents
# Commit
# Update the id=0 document in-place
# Commit
# Assert that the document with id=0 has the same updated value in the leader 
and the replica.
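The steps above can be sketched in plain Java. A real test would drive SolrJ clients against a leader and a replica; here two maps stand in for their indexes (an assumption for illustration only), so just the shape of the steps and the final assertion is shown:

```java
import java.util.HashMap;
import java.util.Map;

// Skeleton of the proposed test flow: two maps play the leader and replica
// indexes; "commit" also replicates, mimicking the segment polling in the
// new replication mode. Not SolrJ code.
class InPlaceUpdateSketch {
    final Map<String, Integer> leader = new HashMap<>();
    final Map<String, Integer> replica = new HashMap<>();

    // the replica pulls the leader's current state on each commit
    void commitAndReplicate() {
        replica.clear();
        replica.putAll(leader);
    }

    boolean run() {
        leader.put("0", 1);   // full document with id=0
        leader.put("1", 1);   // a few more documents
        commitAndReplicate();
        leader.put("2", 1);
        commitAndReplicate();
        leader.put("0", 42);  // in-place update of the id=0 document
        commitAndReplicate();
        // the assertion the proposed test makes: leader and replica agree on id=0
        return leader.get("0").equals(replica.get("0")) && replica.get("0") == 42;
    }

    public static void main(String[] args) {
        System.out.println(new InPlaceUpdateSketch().run()); // prints true
    }
}
```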

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas 
> start in the same initial state and each input is distributed across all 
> replicas, so all replicas end up in the same next state. This type of 
> replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas
> - Slow recovery: if a replica misses more than N updates during its 
> downtime, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to the IndexWriter; the other replicas just store 
> the update in the UpdateLog (as in classic replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs commits and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> From a CAP point of view, this ticket tries to promise end users a 
> distributed system with:
> - Partition tolerance
> - Weak consistency for normal queries: clusters can serve stale data. This 
> happens when the leader has finished a commit but a slave is still fetching 
> the latest segments. This period is at most {{pollInterval + time to fetch 
> latest segment}}.
> - Consistency for RTG: just like the original SolrCloud mode
> - Weak availability: just like the original SolrCloud mode. If a leader goes 
> down, clients must wait until a new leader is elected.
> To use this new replication mode, a new collection must be created with the 
> additional parameter {{liveReplicas=1}}
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}
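The staleness bound in the description ({{pollInterval + time to fetch latest segment}}) comes from the worst case where the leader commits immediately after a poll. A trivial sketch with hypothetical timings:

```java
// Worst-case staleness for the polling replica described above: a commit
// lands just after a poll, so the new segments sit unseen for a full
// pollInterval and then take fetchTime to download.
class StalenessBound {

    static long maxStalenessMillis(long pollIntervalMs, long fetchTimeMs) {
        return pollIntervalMs + fetchTimeMs;
    }

    public static void main(String[] args) {
        // hypothetical numbers: 3s poll interval, 500ms segment fetch
        System.out.println(maxStalenessMillis(3000, 500)); // prints 3500
    }
}
```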



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr Zookeeper error while connecting from Spark context

2017-02-22 Thread Manjunath N S (mans3)
Hello,
I have been trying to read Hive tables by invoking HiveContext within 
SparkContext. The data is read from the Hive tables fine. However, when I try 
to connect to the Solr external ZooKeeper server from the Hadoop edge node, I 
am facing an issue.

The same code works when I push a sample JSON document from SparkContext to 
Solr using the same ZooKeeper server. I am using the Lucidworks 
spark-solr:2.1.0 package to do this. The end goal is to push Parquet files to 
Solr directly.

Please find the error below.

17/02/22 20:14:17 ERROR ZooKeeperSaslClient: SASL authentication failed using 
login context 'Client'.
17/02/22 20:14:17 WARN ConnectionManager: zkClient received AuthFailed
17/02/22 20:14:17 WARN SolrQuerySupport: Can't get uniqueKey for testspark due 
to: com.google.common.util.concurrent.UncheckedExecutionException: 
org.apache.solr.common.cloud.ZooKeeperException:
17/02/22 20:14:17 ERROR ZooKeeperSaslClient: SASL authentication failed using 
login context 'Client'.
17/02/22 20:14:17 WARN ConnectionManager: zkClient received AuthFailed
Exception in thread "main" 
com.google.common.util.concurrent.UncheckedExecutionException: 
org.apache.solr.common.cloud.ZooKeeperException:
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2263)
at com.google.common.cache.LocalCache.get(LocalCache.java:4000)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4004)
at 
com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
at 
com.lucidworks.spark.util.SolrSupport$.getCachedCloudClient(SolrSupport.scala:93)
at 
com.lucidworks.spark.util.SolrSupport$.getSolrBaseUrl(SolrSupport.scala:97)
at 
com.lucidworks.spark.util.SolrRelationUtil$.getBaseSchema(SolrRelationUtil.scala:34)
at com.lucidworks.spark.SolrRelation.(SolrRelation.scala:83)
at solr.DefaultSource.createRelation(DefaultSource.scala:26)
at 
org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
at com.cisco.SparkSolr.main(SparkSolr.java:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:742)
at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.solr.common.cloud.ZooKeeperException:
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:475)
at 
com.lucidworks.spark.util.SolrSupport$.getSolrCloudClient(SolrSupport.scala:83)
at 
com.lucidworks.spark.util.SolrSupport$.getNewSolrCloudClient(SolrSupport.scala:89)
at 
com.lucidworks.spark.util.CacheSolrClient$$anon$1.load(SolrSupport.scala:38)
at 
com.lucidworks.spark.util.CacheSolrClient$$anon$1.load(SolrSupport.scala:36)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2257)
... 20 more
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: 
KeeperErrorCode = AuthFailed for /clusterstate.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
at 
org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
at 
org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at 
org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
at 
org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:289)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:467)
... 28 more


[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3850 - Still Unstable!

2017-02-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3850/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

6 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([69266822E406B4B2:E17257F84AFAD94A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:159)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:865)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:620)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+155) - Build # 19030 - Unstable!

2017-02-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19030/
Java: 32bit/jdk-9-ea+155 -server -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
ObjectTracker found 5 object(s) that were not released!!! [TransactionLog, 
MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.TransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.update.TransactionLog.(TransactionLog.java:188)  at 
org.apache.solr.update.UpdateLog.newTransactionLog(UpdateLog.java:443)  at 
org.apache.solr.update.UpdateLog.ensureLog(UpdateLog.java:1102)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:529)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:514)  at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:294)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:213)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:168)
  at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:987)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1200)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:749)
  at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at 
org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:336)
  at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:97)  
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:179)
  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:135)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:306) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:251)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:271) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:251)  at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:173)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:186)
  at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:107)
  at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:54)  
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
  at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:171)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2413)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:722)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:528)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:347)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:298)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)  
at 

[jira] [Commented] (LUCENE-7686) NRT suggester should have option to filter out duplicates

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879747#comment-15879747
 ] 

ASF subversion and git services commented on LUCENE-7686:
-

Commit 0d5a61b3df04593691796867ae3b32d05e66a0c0 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0d5a61b ]

LUCENE-7686: add efficient de-duping to the NRT document suggester


> NRT suggester should have option to filter out duplicates
> -
>
> Key: LUCENE-7686
> URL: https://issues.apache.org/jira/browse/LUCENE-7686
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7686.patch, LUCENE-7686.patch, LUCENE-7686.patch
>
>
> Some of the other suggesters have this ability, and it's quite simple to add 
> it to the NRT suggester as long as the thing we are filtering on is the 
> suggest key itself, not e.g. another stored field from the document.






[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_121) - Build # 745 - Still unstable!

2017-02-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/745/
Java: 64bit/jdk1.8.0_121 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerRolesTest.testOverseerRole

Error Message:
Timed out waiting for overseer state change

Stack Trace:
java.lang.AssertionError: Timed out waiting for overseer state change
at 
__randomizedtesting.SeedInfo.seed([E8A1FB5DF8BF7844:96A06C9C30C4E95]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.OverseerRolesTest.waitForNewOverseer(OverseerRolesTest.java:62)
at 
org.apache.solr.cloud.OverseerRolesTest.testOverseerRole(OverseerRolesTest.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12558 lines...]
   [junit4] Suite: org.apache.solr.cloud.OverseerRolesTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-02-22 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879721#comment-15879721
 ] 

Cao Manh Dat commented on SOLR-9835:


Thanks [~shalinmangar]!
bq. LeaderInitiatedRecoveryThread – What is the reason behind adding 
SocketTimeoutException in the list of communication errors on which no more 
retries are made?
This change comes from a Jepsen test. The bug also affects the current mode; 
I created another issue for it, SOLR-9913. We can skip this change for this 
ticket.
bq.ZkController.register method – The condition for !isLeader && 
onlyLeaderIndexes can be replaced by the isReplicaInOnlyLeaderIndexes variable.
Yeah, that's right
bq. Since there is no log replay on startup on replicas anymore, what if the 
replica is killed (which keeps its state as 'active' in ZK) and then the 
cluster is restarted and the replica becomes leader candidate? If we do not 
replay the discarded log then it could lead to data loss?
Very good catch, I will try to resolve this problem.
bq. UpdateLog – Can you please add javadocs outlining the motivation/purpose of 
the new methods such as copyOverBufferingUpdates and switchToNewTlog e.g. why 
does switchToNewTlog require copying over some updates from the old tlog?
Sure!
bq.It seems that any commits that might be triggered explicitly by the user can 
interfere with the index replication. Suppose that a replication is in progress 
and a user explicitly calls commit which is distributed to all replicas, in 
such a case the tlogs will be rolled over and then when the ReplicateFromLeader 
calls switchToNewTlog(), the previous tlog may not have all the updates that 
should have been copied over. We should have a way to either disable explicit 
commits or protect against them on the replicas.
I don't think so; switchToNewTlog() is based on the commit version at the Lucene 
index level ({{commit.getUserData().get(SolrIndexWriter.COMMIT_COMMAND_VERSION)}}), 
so we will always roll over updates the right way.
bq. UpdateLog – why does copyOverBufferUpdates block updates while calling 
switchToNewTlog but ReplicateFromLeader doesn't? How are they both safe?
Good catch, I think we should blockUpdates in switchToNewTlog as well.
bq.Can we add tests for testing CDCR and backup/restore with this new 
replication scheme?
CDCR is very complex; I don't think we should support it in this new 
replication mode for now.
bq. ZkController.startReplicationFromLeader – Using a ConcurrentHashMap is not 
enough to prevent two simultaneous replications from happening concurrently. 
You should use the atomic putIfAbsent to put a core to the map before starting 
replication.
Yeah, that's sounds a good idea.
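The putIfAbsent guard suggested above can be sketched as follows. This is an illustrative, self-contained example; the class and method names (ReplicationGuard, tryStartReplication) are hypothetical and not Solr's actual ZkController API:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of guarding against two concurrent replications for the same core.
// Class and method names are illustrative only.
class ReplicationGuard {
    private final ConcurrentHashMap<String, Boolean> replicating = new ConcurrentHashMap<>();

    /** Returns true only for the first caller; later callers must not start replication. */
    boolean tryStartReplication(String coreName) {
        // putIfAbsent is atomic: exactly one thread sees null and wins the race.
        return replicating.putIfAbsent(coreName, Boolean.TRUE) == null;
    }

    void finishReplication(String coreName) {
        replicating.remove(coreName);
    }

    public static void main(String[] args) {
        ReplicationGuard guard = new ReplicationGuard();
        System.out.println(guard.tryStartReplication("core1")); // first caller wins
        System.out.println(guard.tryStartReplication("core1")); // second caller is rejected
        guard.finishReplication("core1");
        System.out.println(guard.tryStartReplication("core1")); // allowed again after finish
    }
}
```

A plain `map.containsKey()` check followed by `map.put()` would be racy; the single atomic `putIfAbsent` call is what makes the guard safe.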
bq. Aren't some of the guarantees of real-time-get relaxed in this new mode, 
especially around delete-by-queries, which no longer apply on replicas? Can you 
please document them as a comment on the issue that we can transfer to the ref 
guide in future?
I will update the ticket description now. Basically, RTG is not consistent for 
DBQs.

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is called state machine: 
> replicas start in the same initial state, and each input is 
> distributed across replicas so all replicas end up in the same next state. 
> But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas
> - Slow recovery, because if a replica misses more than N updates during its down 
> time, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state 
> transfer, which acts like master/slave replication. Basically:
> - The leader distributes the update to the other replicas, but only the leader applies 
> the update to the IndexWriter; the other replicas just store the update in the UpdateLog (acting like 
> replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> From a CAP point of view, this ticket tries to promise end users a 
> distributed system with:
> - Partition tolerance
> - Weak consistency for normal queries: clusters can serve stale data. This 
> happens when the leader finishes a 

[jira] [Resolved] (SOLR-10021) Cannot reload a core if it fails initialization.

2017-02-22 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-10021.
---
   Resolution: Fixed
Fix Version/s: 6.5
   trunk

Thanks Mike!

> Cannot reload a core if it fails initialization.
> 
>
> Key: SOLR-10021
> URL: https://issues.apache.org/jira/browse/SOLR-10021
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: trunk, 6.5
>
> Attachments: SOLR-10021.patch, SOLR-10021.patch
>
>
> Once a core initialization fails, all calls to CoreContainer.getCore() throw 
> an error forever, including the core admin RELOAD command.
> I think that RELOAD (and only RELOAD) should go ahead even after 
> initialization failure since it is, after all, reloading everything. For any 
> other core ops since you don't know why the core load failed in the first 
> place you couldn't rely on the state of the core to try to do anything so 
> failing is appropriate.
> However, the current structure of the code needs a SolrCore to get the 
> CoreDescriptor which you need to have to, well, reload the core. The work on 
> SOLR-10007 and associated JIRAs _should_ make it possible to get the 
> CoreDescriptor without having to have a core already. Once that's possible, 
> RELOAD will have to distinguish between having a SolrCore already and using 
> the present reload() method or creating a new core.
> We could also consider a new core admin API command. It's always bugged me 
> that there's an UNLOAD but no LOAD, we've kinda, sorta, maybe been able to 
> use CREATE.
> I think I like making RELOAD smarter though. Consider the scenario where you 
> make a config change that you mess up. You'd have to change to LOAD when 
> RELOAD failed. I can be convinced otherwise though.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10021) Cannot reload a core if it fails initialization.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879706#comment-15879706
 ] 

ASF subversion and git services commented on SOLR-10021:


Commit 04bcba77c824125c2ef2feb4c64dfcfc37b48211 in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=04bcba7 ]

SOLR-10021: Cannot reload a core if it fails initialization.

(cherry picked from commit 8367e15)


> Cannot reload a core if it fails initialization.
> 
>
> Key: SOLR-10021
> URL: https://issues.apache.org/jira/browse/SOLR-10021
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10021.patch, SOLR-10021.patch
>
>






[jira] [Updated] (SOLR-10021) Cannot reload a core if it fails initialization.

2017-02-22 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10021:
--
Attachment: SOLR-10021.patch

Final patch with CHANGES annotation. Plus there were a couple of tests that 
failed because the error messages had changed.

> Cannot reload a core if it fails initialization.
> 
>
> Key: SOLR-10021
> URL: https://issues.apache.org/jira/browse/SOLR-10021
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10021.patch, SOLR-10021.patch
>
>






[jira] [Commented] (SOLR-10021) Cannot reload a core if it fails initialization.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879690#comment-15879690
 ] 

ASF subversion and git services commented on SOLR-10021:


Commit 8367e159e4a287a34adf6552a5aecfe3b8073d8e in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8367e15 ]

SOLR-10021: Cannot reload a core if it fails initialization.


> Cannot reload a core if it fails initialization.
> 
>
> Key: SOLR-10021
> URL: https://issues.apache.org/jira/browse/SOLR-10021
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-10021.patch
>
>






[jira] [Reopened] (SOLR-10125) CollectionsAPIDistributedZkTest is too fragile.

2017-02-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-10125:


> CollectionsAPIDistributedZkTest is too fragile.
> ---
>
> Key: SOLR-10125
> URL: https://issues.apache.org/jira/browse/SOLR-10125
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
> Attachments: stdout
>
>







[jira] [Created] (SOLR-10196) ElectionContext#runLeaderProcess can hit NPE on core close.

2017-02-22 Thread Mark Miller (JIRA)
Mark Miller created SOLR-10196:
--

 Summary: ElectionContext#runLeaderProcess can hit NPE on core 
close.
 Key: SOLR-10196
 URL: https://issues.apache.org/jira/browse/SOLR-10196
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor


{noformat}
   [junit4]   2> 191445 INFO  
(zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
c:solrj_collection2 s:shard2 r:core_node3 x:solrj_collection2_shard2_replica1] 
o.a.s.m.SolrMetricManager Closing metric reporters for: 
solr.core.solrj_collection2.shard2.replica1
   [junit4]   2> 191445 INFO  
(zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
c:solrj_collection2 s:shard2 r:core_node3 x:solrj_collection2_shard2_replica1] 
o.a.s.s.h.HdfsDirectory Closing hdfs directory 
hdfs://localhost:34043/solr_hdfs_home/solrj_collection2/core_node3/data
   [junit4]   2> 191476 INFO  
(zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
c:solrj_collection2 s:shard2 r:core_node3 x:solrj_collection2_shard2_replica1] 
o.a.s.s.h.HdfsDirectory Closing hdfs directory 
hdfs://localhost:34043/solr_hdfs_home/solrj_collection2/core_node3/data/index
   [junit4]   2> 191484 INFO  
(zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
c:solrj_collection2 s:shard2 r:core_node3 x:solrj_collection2_shard2_replica1] 
o.a.s.s.h.HdfsDirectory Closing hdfs directory 
hdfs://localhost:34043/solr_hdfs_home/solrj_collection2/core_node3/data/snapshot_metadata
   [junit4]   2> 191523 INFO  (coreCloseExecutor-172-thread-6) 
[n:127.0.0.1:45055_ c:solrj_collection4 s:shard5 r:core_node4 
x:solrj_collection4_shard5_replica1] o.a.s.m.SolrMetricManager Closing metric 
reporters for: solr.core.solrj_collection4.shard5.replica1
   [junit4]   2> 191530 INFO  
(zkCallback-7-thread-9-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/solrj_collection2/state.json] for collection 
[solrj_collection2] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 191554 INFO  (coreCloseExecutor-172-thread-6) 
[n:127.0.0.1:45055_ c:solrj_collection4 s:shard5 r:core_node4 
x:solrj_collection4_shard5_replica1] o.a.s.s.h.HdfsDirectory Closing hdfs 
directory 
hdfs://localhost:34043/solr_hdfs_home/solrj_collection4/core_node4/data/index
   [junit4]   2> 191555 ERROR 
(zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
c:solrj_collection2 s:shard2 r:core_node3 x:solrj_collection2_shard2_replica1] 
o.a.s.c.ShardLeaderElectionContext There was a problem trying to register as 
the leader:java.lang.NullPointerException
   [junit4]   2>at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:426)
   [junit4]   2>at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
   [junit4]   2>at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
   [junit4]   2>at 
org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:56)
   [junit4]   2>at 
org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:348)
   [junit4]   2>at 
org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$0(SolrZkClient.java:268)
   [junit4]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]   2>at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   [junit4]   2>at java.lang.Thread.run(Thread.java:745)
{noformat}






[jira] [Commented] (SOLR-10195) Harden AbstractSolrMorphlineZkTestBase based tests.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879612#comment-15879612
 ] 

ASF subversion and git services commented on SOLR-10195:


Commit c53b7c33b03aad3880b57a85d4402a31f3e0ea36 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c53b7c3 ]

SOLR-10195: Harden AbstractSolrMorphlineZkTestBase based tests.


> Harden AbstractSolrMorphlineZkTestBase based tests.
> ---
>
> Key: SOLR-10195
> URL: https://issues.apache.org/jira/browse/SOLR-10195
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>







[jira] [Created] (SOLR-10195) Harden AbstractSolrMorphlineZkTestBase based tests.

2017-02-22 Thread Mark Miller (JIRA)
Mark Miller created SOLR-10195:
--

 Summary: Harden AbstractSolrMorphlineZkTestBase based tests.
 Key: SOLR-10195
 URL: https://issues.apache.org/jira/browse/SOLR-10195
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller









[jira] [Commented] (SOLR-9450) Link to online Javadocs instead of distributing with binary download

2017-02-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879547#comment-15879547
 ] 

Jan Høydahl commented on SOLR-9450:
---

Yes, it is just another ant target; the {{package}} target depends on both 
{{documentation}} (full javadocs) and {{documentation-online}}, which creates 
the link.
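As a rough sketch, the version-to-URL mapping the issue description calls for (e.g. 6.2.0 mapped to a `6_2_0` path) is just a dot-to-underscore substitution. This snippet is illustrative only, not the actual build-script logic:

```java
// Illustrative sketch: derive the versioned online-javadocs link from a
// release version string, as a documentation-online build target would need to.
class JavadocLink {
    static String onlineDocsUrl(String version) {
        // Replace every '.' with '_' and append to the base site URL.
        return "http://lucene.apache.org/solr/" + version.replace('.', '_') + "/";
    }

    public static void main(String[] args) {
        System.out.println(onlineDocsUrl("6.2.0")); // matches the template in the description
    }
}
```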

> Link to online Javadocs instead of distributing with binary download
> 
>
> Key: SOLR-9450
> URL: https://issues.apache.org/jira/browse/SOLR-9450
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9450.patch, SOLR-9450.patch, SOLR-9450.patch, 
> SOLR-9450.patch
>
>
> Spinoff from SOLR-6806. This sub task will replace the contents of {{docs}} 
> in the binary download with a link to the online JavaDocs. The build should 
> make sure to generate a link to the correct version. I believe this is the 
> correct template: http://lucene.apache.org/solr/6_2_0/






[jira] [Commented] (SOLR-9450) Link to online Javadocs instead of distributing with binary download

2017-02-22 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879542#comment-15879542
 ] 

Uwe Schindler commented on SOLR-9450:
-

I will try this out tomorrow and think about how to configure Jenkins. For 
Jenkins it would need to build the Javadocs. Is it possible to zip only the 
link, but still build the full javadocs so it can be copied to Jenkins' 
Javadocs folder? I mean executing both documentation modes.

> Link to online Javadocs instead of distributing with binary download
> 
>
> Key: SOLR-9450
> URL: https://issues.apache.org/jira/browse/SOLR-9450
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9450.patch, SOLR-9450.patch, SOLR-9450.patch, 
> SOLR-9450.patch
>
>






[jira] [Created] (SOLR-10194) Unable to use the UninvertedField implementation with legacy facets

2017-02-22 Thread Victor Igumnov (JIRA)
Victor Igumnov created SOLR-10194:
-

 Summary: Unable to use the UninvertedField implementation with 
legacy facets
 Key: SOLR-10194
 URL: https://issues.apache.org/jira/browse/SOLR-10194
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.4.1, 6.3, 6.2
 Environment: Linux
Reporter: Victor Igumnov
Priority: Minor


FacetComponent's modifyRequestForFieldFacets method modifies the distributed 
facet request and sets the mincount to zero, which prevents the SimpleFacets 
implementation from reaching the UIF code block when facet.method=uif 
is applied. The workaround I found is to use facet.distrib.mco=true, which 
sets the mincount to one instead of zero. 

Working:

http://somehost:9100/solr/collection/select?facet.method=uif=attribute=*:*=true=true=true
 

Non-working:

http://somehost:9100/solr/collection/select?facet.method=uif=attribute=*:*=true=true=false

Semi-working when it isn't a distributed call:

http://somehost:9100/solr/collection/select?facet.method=uif=attribute=*:*=true=true=false=false

Just make sure to run it on a multi-shard setup. 
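A request using the facet.distrib.mco=true workaround can be assembled as below. Since the query parameters in the URLs above were partly stripped, the host, collection, and field name here are placeholders, not a reconstruction of the original request:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch: build a facet query string that includes the
// facet.distrib.mco=true workaround described above.
// Host, collection, and field name are placeholders.
class UifFacetRequest {
    static String buildQueryString(Map<String, String> params) {
        return params.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("q", "*:*");
        params.put("facet", "true");
        params.put("facet.field", "attribute");   // placeholder field name
        params.put("facet.method", "uif");
        params.put("facet.distrib.mco", "true");  // workaround: keeps shard mincount at 1
        System.out.println("http://somehost:9100/solr/collection/select?"
                + buildQueryString(params));
    }
}
```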







[JENKINS] Lucene-Solr-Tests-master - Build # 1688 - Unstable

2017-02-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1688/

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
expected:<3> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([4520B761189B179D:D55C3D51EA83808]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:523)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11988 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879447#comment-15879447
 ] 

Kevin Risden commented on SOLR-10092:
-

Ah good point [~markrmil...@gmail.com].

[~HendrikH] - Is this collection actually on HDFS? The stack trace shows that 
the instance dir is local, not on HDFS:

{quote}
instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1
{quote}

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 683 - Unstable!

2017-02-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/683/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelCommitStream

Error Message:
expected:<5> but was:<3>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([D1263D4B5F39571E:F1CC5F4BC378BA52]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelCommitStream(StreamExpressionTest.java:3972)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  

[jira] [Commented] (SOLR-9450) Link to online Javadocs instead of distributing with binary download

2017-02-22 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879427#comment-15879427
 ] 

Alexandre Rafalovitch commented on SOLR-9450:
-

It refuses to apply for me on trunk due to trailing whitespace. I tried git 
apply with various whitespace-ignoring options, but it does not seem to make a 
difference. Perhaps that is because the CHANGES.txt diff does not apply cleanly 
either.

{quote}
/Users/arafalov/Downloads/SOLR-9450.patch:53: trailing whitespace.

/Users/arafalov/Downloads/SOLR-9450.patch:73: trailing whitespace.

/Users/arafalov/Downloads/SOLR-9450.patch:150: trailing whitespace.

error: patch failed: solr/CHANGES.txt:224
error: solr/CHANGES.txt: patch does not apply
{quote}

> Link to online Javadocs instead of distributing with binary download
> 
>
> Key: SOLR-9450
> URL: https://issues.apache.org/jira/browse/SOLR-9450
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9450.patch, SOLR-9450.patch, SOLR-9450.patch, 
> SOLR-9450.patch
>
>
> Spinoff from SOLR-6806. This sub task will replace the contents of {{docs}} 
> in the binary download with a link to the online JavaDocs. The build should 
> make sure to generate a link to the correct version. I believe this is the 
> correct template: http://lucene.apache.org/solr/6_2_0/






Re: Getting totalTermFreq and docFreq for terms

2017-02-22 Thread Joel Bernstein
The idea of adding a terms.ttf parameter sounds fine to me. And it would be
good to get terms.list better integrated into the TermsComponent. In general,
I think it's time for more attention to be paid to the TermsComponent.

Joel Bernstein
http://joelsolr.blogspot.com/

On Wed, Feb 22, 2017 at 4:12 PM, Shai Erera  wrote:

> Hmm .. so if I want to add totalTermFreq to the response, it will break
> the current output format of TermsComponent, which returns for each term
> only the docFreq. What's our BWC policy for such API and is there a way to
> handle it?
>
> I can add a new terms.ttf parameter, and so if you set it to true, the
> response will look different (each term will have both docFreq and
> totalTermFreq elements), but if you didn't, you will get the same response.
> Is that acceptable?
>
> Somewhat related, but can be handled separately, I noticed that if you
> specify terms.list and multiple terms.fl parameters, you only receive stats
> for the first field (the rest are ignored), but if you don't specify
> terms.list, you get results for all fields. I don't see any reason not to
> support multiple fields with terms list, what do you think?
>
> On Wed, Feb 22, 2017 at 10:08 PM Shai Erera  wrote:
>
>> Looks like this could be a very easy addition to TermsComponent? From
>> what I read in the code, it uses TermContext to compute/hold the stats, and
>> the latter already has docFreq and totalTermFreq (!!). It's just that
>> TermsComponent does not output TTF (only computes it...):
>>
>> for (int i = 0; i < terms.length; i++) {
>>   if (termContexts[i] != null) {
>>     String outTerm =
>>         fieldType.indexedToReadable(terms[i].bytes().utf8ToString());
>>     int docFreq = termContexts[i].docFreq();
>>     termsMap.add(outTerm, docFreq);
>>   }
>> }
>>
>>
>> On Wed, Feb 22, 2017 at 5:34 PM Joel Bernstein 
>> wrote:
>>
>> Yeah, I think expanding the functionality of the terms component looks
>> like the right place to add these stats.
>>
>> I plan on exposing these types of terms stats as Streaming Expression
>> functions but I would likely use the terms component under the covers.
>>
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Wed, Feb 22, 2017 at 8:56 AM, Shai Erera  wrote:
>>
>> No, they are not global distributed stats. I am willing to live with
>> approximated stats though (unless again, there's an API which can give me
>> both). I wonder why the Terms component doesn't return ttf in addition to
>> docfreq. The API (at the Lucene level) is right there already.
>>
>> On Wed, Feb 22, 2017 at 3:49 PM Joel Bernstein 
>> wrote:
>>
>> Hi Shai,
>>
>> Do ttf and docfreq return global stats in distributed mode? I wasn't
>> aware that there was a mechanism for aggregating values in the field list.
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Wed, Feb 22, 2017 at 7:18 AM, Shai Erera  wrote:
>>
>> Hi
>>
>> I am currently using function queries to obtain these two statistics, as
>> I didn't see a better or more explicit API and the Terms component only
>> returns docFreq, but not totalTermFreq.
>>
>> The way I use the API is submit requests as follows:
>>
>> curl "http://localhost:8983/solr/mycollection/select?q=*:*&rows=1&fl=ttf(text,'t1'),docfreq(text,'t1')"
>>
>> Today I noticed that it sometimes returns 0 for these stats for existing
>> terms. After debugging and going through the code, I noticed that it
>> performs analysis on the value that's given. So if I provide an already
>> stemmed value, it analyzes the value further and in some cases it results
>> in a non-existing term (and in other cases I get stats for a term I didn't
>> ask for).
>>
>> I want to get the stats of the indexed version of the terms, and that's
>> why I send the already stemmed one. In my case I tried to get the stats for
>> the term 'disguis' which is the stem of 'disguise' and 'disguised', however
>> it further analyzed the value to 'disgui' (per the analysis chain) and that
>> term does not exist in the index.
>>
>> So first question is -- is this the right API to retrieve such
>> statistics? I didn't find another one, but could be I missed it.
>>
>> If it is, why does it analyze the value? I tried to wrap the value with
>> single and double quotes, but of course that does not affect the analysis
>> ... is analysis an intended behavior or a bug?
>>
>> Shai
>>
>>
>>
>>
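The TermsComponent loop quoted in this thread computes only docFreq per term. As a hedged, self-contained sketch of the response-shape change being discussed (stub types, not Solr's real TermContext or NamedList classes), each term would map to both stats instead of a bare docFreq int, which is exactly the backward-compatibility concern a terms.ttf flag would gate:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TermsWithTtf {
    // Stand-in for the per-term stats Lucene's TermContext accumulates;
    // the real class also carries postings state.
    record TermStats(String term, int docFreq, long totalTermFreq) {}

    // Mirrors the quoted loop, but emits both stats per term (a NamedList
    // in Solr; a Map here) instead of a bare docFreq int.
    static Map<String, Map<String, Number>> collect(TermStats[] termContexts) {
        Map<String, Map<String, Number>> termsMap = new LinkedHashMap<>();
        for (TermStats tc : termContexts) {
            if (tc == null) continue;              // term absent in the index
            Map<String, Number> stats = new LinkedHashMap<>();
            stats.put("docFreq", tc.docFreq());
            stats.put("totalTermFreq", tc.totalTermFreq());
            termsMap.put(tc.term(), stats);
        }
        return termsMap;
    }

    public static void main(String[] args) {
        TermStats[] stats = { new TermStats("disguis", 2, 5L), null };
        System.out.println(collect(stats));
    }
}
```

Gating the richer per-term structure behind terms.ttf=true would preserve the current docFreq-only response for existing clients.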


[jira] [Commented] (SOLR-10055) Manual bin/solr start causes crash due to resolving wrong solr.in.sh

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879408#comment-15879408
 ] 

ASF subversion and git services commented on SOLR-10055:


Commit 11a7313cecb2f16f272ed4658ccb0f8d723d9029 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=11a7313 ]

SOLR-10055: Linux installer now renames existing bin/solr.in.* as 
bin/solr.in.*.orig to avoid wrong resolving.

(cherry picked from commit 1e206d8)


> Manual bin/solr start causes crash due to resolving wrong solr.in.sh
> 
>
> Key: SOLR-10055
> URL: https://issues.apache.org/jira/browse/SOLR-10055
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10055.patch
>
>
> The install script installs {{solr.in.sh}} in {{/etc/defaults/}}. However, if 
> the user manually runs {{solr start}}, the script will use the {{solr.in.sh}} 
> file from {{bin/}} since that is first in the search path. And it will fail 
> since {{/opt/solr}} is write protected. But if user starts with {{service 
> solr start}} then the file from installation is used and all is fine.
> Since the default {{/opt/solr/server/solr}} is not writable by solr user, 
> this creates a bad user experience and classifies as a bug.
> My proposal is that the installer renames {{bin/solr.in.sh -> 
> bin/solr.in.sh.orig}} and the same with {{solr.in.cmd}}, so that the 
> resolution logic will end up finding the one from the install. User can still 
> override this by creating a {{$HOME/.solr.in.sh}}.
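As a rough sketch of the proposed rename (scratch directory and variable names are illustrative, not the actual install script's logic), the fix amounts to:

```shell
#!/bin/sh
# Demo in a scratch directory; the real installer works against its
# chosen install dir (e.g. /opt/solr).
SOLR_DIR="$(mktemp -d)"
mkdir -p "$SOLR_DIR/bin"
touch "$SOLR_DIR/bin/solr.in.sh" "$SOLR_DIR/bin/solr.in.cmd"

# The fix: rename the bundled include scripts so bin/ no longer shadows
# the installed solr.in.sh in bin/solr's lookup order.
for f in "$SOLR_DIR/bin/solr.in.sh" "$SOLR_DIR/bin/solr.in.cmd"; do
  if [ -f "$f" ]; then
    mv "$f" "$f.orig"
  fi
done

ls "$SOLR_DIR/bin"  # both files now end in .orig
```

With the bundled copies out of the resolution path, both manual `solr start` and `service solr start` pick up the installed include file.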






[jira] [Resolved] (SOLR-10055) Manual bin/solr start causes crash due to resolving wrong solr.in.sh

2017-02-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-10055.

Resolution: Fixed

> Manual bin/solr start causes crash due to resolving wrong solr.in.sh
> 
>
> Key: SOLR-10055
> URL: https://issues.apache.org/jira/browse/SOLR-10055
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10055.patch
>
>
> The install script installs {{solr.in.sh}} in {{/etc/defaults/}}. However, if 
> the user manually runs {{solr start}}, the script will use the {{solr.in.sh}} 
> file from {{bin/}} since that is first in the search path. And it will fail 
> since {{/opt/solr}} is write protected. But if user starts with {{service 
> solr start}} then the file from installation is used and all is fine.
> Since the default {{/opt/solr/server/solr}} is not writable by solr user, 
> this creates a bad user experience and classifies as a bug.
> My proposal is that the installer renames {{bin/solr.in.sh -> 
> bin/solr.in.sh.orig}} and the same with {{solr.in.cmd}}, so that the 
> resolution logic will end up finding the one from the install. User can still 
> override this by creating a {{$HOME/.solr.in.sh}}.






[jira] [Commented] (SOLR-10055) Manual bin/solr start causes crash due to resolving wrong solr.in.sh

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879396#comment-15879396
 ] 

ASF subversion and git services commented on SOLR-10055:


Commit 1e206d820ab0a3c080562e056970c77ef5c99f04 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1e206d8 ]

SOLR-10055: Linux installer now renames existing bin/solr.in.* as 
bin/solr.in.*.orig to avoid wrong resolving.


> Manual bin/solr start causes crash due to resolving wrong solr.in.sh
> 
>
> Key: SOLR-10055
> URL: https://issues.apache.org/jira/browse/SOLR-10055
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10055.patch
>
>
> The install script installs {{solr.in.sh}} in {{/etc/defaults/}}. However, if 
> the user manually runs {{solr start}}, the script will use the {{solr.in.sh}} 
> file from {{bin/}} since that is first in the search path. And it will fail 
> since {{/opt/solr}} is write protected. But if user starts with {{service 
> solr start}} then the file from installation is used and all is fine.
> Since the default {{/opt/solr/server/solr}} is not writable by solr user, 
> this creates a bad user experience and classifies as a bug.
> My proposal is that the installer renames {{bin/solr.in.sh -> 
> bin/solr.in.sh.orig}} and the same with {{solr.in.cmd}}, so that the 
> resolution logic will end up finding the one from the install. User can still 
> override this by creating a {{$HOME/.solr.in.sh}}.






[jira] [Commented] (SOLR-9450) Link to online Javadocs instead of distributing with binary download

2017-02-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879388#comment-15879388
 ] 

Jan Høydahl commented on SOLR-9450:
---

[~arafalov] can you give it another try? 

[~thetaphi], do you see the need for more actions than the list below?
* Commit to master, 
* Modify jenkins config to add {{solr.javadoc.url}} for local builds
* Update website {{quickstart.mdtext}} to suggest indexing something other than 
local javadocs

> Link to online Javadocs instead of distributing with binary download
> 
>
> Key: SOLR-9450
> URL: https://issues.apache.org/jira/browse/SOLR-9450
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9450.patch, SOLR-9450.patch, SOLR-9450.patch, 
> SOLR-9450.patch
>
>
> Spinoff from SOLR-6806. This sub task will replace the contents of {{docs}} 
> in the binary download with a link to the online JavaDocs. The build should 
> make sure to generate a link to the correct version. I believe this is the 
> correct template: http://lucene.apache.org/solr/6_2_0/






[jira] [Commented] (SOLR-9640) Support PKI authentication in standalone mode

2017-02-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879376#comment-15879376
 ] 

Jan Høydahl commented on SOLR-9640:
---

One thing to improve could be use of {{System.setProperty}} in 
{{TestPKIAuthenticationPlugin.testResolveUrlScheme}}, do we have a safe, 
non-global way to test use of java opts?
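One non-global option, sketched below with an illustrative property name (not Solr's actual test utility; RandomizedTesting's SystemPropertiesRestoreRule, visible in the stack traces elsewhere in this digest, solves the same problem at the rule level), is to scope the property with save and restore in a finally block:

```java
import java.util.function.Supplier;

public class ScopedSysProp {
    // Run body with key temporarily set to value, then restore the previous
    // state (including "was unset") so other tests see no global side effect.
    static <T> T withProperty(String key, String value, Supplier<T> body) {
        String old = System.getProperty(key);
        System.setProperty(key, value);
        try {
            return body.get();
        } finally {
            if (old == null) {
                System.clearProperty(key);
            } else {
                System.setProperty(key, old);
            }
        }
    }

    public static void main(String[] args) {
        // "solr.test.flag" is an illustrative key, not a real Solr property.
        String seen = withProperty("solr.test.flag", "on",
                () -> System.getProperty("solr.test.flag"));
        System.out.println(seen);                                  // on
        System.out.println(System.getProperty("solr.test.flag"));  // null
    }
}
```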

> Support PKI authentication in standalone mode
> -
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get 401 error. This issue will fix PKI auth to work for standalone, which 
> should automatically make both sharding and master/slave index replication 
> work.






[jira] [Updated] (SOLR-9640) Support PKI authentication in standalone mode

2017-02-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9640:
--
Fix Version/s: (was: 6.x)
   6.5

> Support PKI authentication in standalone mode
> -
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get 401 error. This issue will fix PKI auth to work for standalone, which 
> should automatically make both sharding and master/slave index replication 
> work.






[jira] [Updated] (SOLR-9640) Support PKI authentication in standalone mode

2017-02-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9640:
--
Attachment: SOLR-9640.patch

New patch that applies to master. Moved changes entry to 6.5.
Comments still welcome. Plan to commit on Friday.

> Support PKI authentication in standalone mode
> -
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get 401 error. This issue will fix PKI auth to work for standalone, which 
> should automatically make both sharding and master/slave index replication 
> work.






[jira] [Updated] (SOLR-10134) EmbeddedSolrServer does not support SchemaAPI

2017-02-22 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-10134:

Attachment: SOLR-10134.patch

> EmbeddedSolrServer does not support SchemaAPI
> -
>
> Key: SOLR-10134
> URL: https://issues.apache.org/jira/browse/SOLR-10134
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server, SolrJ
>Affects Versions: 6.4.1
>Reporter: Robert Alexandersson
>  Labels: test-driven
> Attachments: SOLR-10134.patch, SOLR-10134.patch, SOLR-10134.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The EmbeddedSolrServer does not support calls to the POST methods of 
> SchemaAPI using the SolrJ API. The reason is that the httpMethod param is never 
> set by the EmbeddedSolrServer#request(SolrRequest, String) and this is later 
> required by the SchemaHandler class that actually performs the call at 
> SchemaHandler#handleRequestBody(SolrQueryRequest, SolrQueryResponse). 
> Proposal is to enhance the EmbeddedSolrServer to forward the httpMethod at 
> approx. line 174 with the following: "req.getContext().put("httpMethod", 
> request.getMethod().name());". This change requires the factory methods of 
> SolrJ to set the intended method; for example, new SchemaRequest.AddField() 
> should append the POST method, similar to how SchemaRequest.Field appends 
> the GET method.
> I have written a separate EmbeddedSolrServer that replaces the one in Solr. 
> It works for now, and fields can be created on the fly using the SchemaAPI of 
> the SolrJ client, but I would like to be able to remove this workaround.
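A self-contained illustration of the proposed one-liner (stub Request/Method types; the real change would go in EmbeddedSolrServer#request, and SchemaHandler would read the "httpMethod" context entry):

```java
import java.util.HashMap;
import java.util.Map;

public class HttpMethodForwarding {
    enum Method { GET, POST, PUT, DELETE }

    // Stand-in for SolrRequest, which exposes the intended HTTP method.
    record Request(Method method) {}

    // The proposed fix: copy the request's method into the context map that
    // handlers such as SchemaHandler consult under "httpMethod".
    static Map<String, Object> buildContext(Request request) {
        Map<String, Object> context = new HashMap<>();
        context.put("httpMethod", request.method().name());
        return context;
    }

    public static void main(String[] args) {
        // A POST-style SchemaRequest would now be visible to the handler.
        System.out.println(
                buildContext(new Request(Method.POST)).get("httpMethod")); // POST
    }
}
```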






[jira] [Commented] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879325#comment-15879325
 ] 

Mark Miller commented on SOLR-10092:


I don't think local filesystem support was ever actually done for this feature? 
Originally it still had to be a shared filesystem.

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Resolved] (SOLR-9481) BasicAuthPlugin should support standalone mode

2017-02-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-9481.
---
Resolution: Fixed
  Assignee: Jan Høydahl

OK, I went ahead and pushed this to 6x since the code has baked for so long in 
master, and I want to give it some time in 6x before someone announces a 6.5 
RC. Added a small "experimental" notice to the CHANGES entry since it will 
still not work fully with SSL.

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9481-6x.patch, SOLR-9481-6x.patch, SOLR-9481.patch, 
> SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Resolved] (LUCENE-7686) NRT suggester should have option to filter out duplicates

2017-02-22 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7686.

Resolution: Fixed

> NRT suggester should have option to filter out duplicates
> -
>
> Key: LUCENE-7686
> URL: https://issues.apache.org/jira/browse/LUCENE-7686
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7686.patch, LUCENE-7686.patch, LUCENE-7686.patch
>
>
> Some of the other suggesters have this ability, and it's quite simple to add 
> it to the NRT suggester as long as the thing we are filtering on is the 
> suggest key itself, not e.g. another stored field from the document.






[jira] [Updated] (SOLR-10134) EmbeddedSolrServer does not support SchemaAPI

2017-02-22 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-10134:

Attachment: SOLR-10134.patch

feedback on [^SOLR-10134.patch] is very welcome!

> EmbeddedSolrServer does not support SchemaAPI
> -
>
> Key: SOLR-10134
> URL: https://issues.apache.org/jira/browse/SOLR-10134
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server, SolrJ
>Affects Versions: 6.4.1
>Reporter: Robert Alexandersson
>  Labels: test-driven
> Attachments: SOLR-10134.patch, SOLR-10134.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The EmbeddedSolrServer does not support calls to the POST methods of 
> SchemaAPI using the SolrJ API. The reason is that the httpMethod param is never 
> set by the EmbeddedSolrServer#request(SolrRequest, String) and this is later 
> required by the SchemaHandler class that actually performs the call at 
> SchemaHandler#handleRequestBody(SolrQueryRequest, SolrQueryResponse). 
> Proposal is to enhance the EmbeddedSolrServer to forward the httpMethod at 
> approx. line 174 with the following: "req.getContext().put("httpMethod", 
> request.getMethod().name());". This change requires the factory methods of 
> SolrJ to set the intended method; for example, new SchemaRequest.AddField() 
> should append the POST method, similar to how SchemaRequest.Field appends 
> the GET method.
> I have written a separate EmbeddedSolrServer that replaces the one in Solr. 
> It works for now, and fields can be created on the fly using the SchemaAPI of 
> the SolrJ client, but I would like to be able to remove this workaround.






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879285#comment-15879285
 ] 

ASF subversion and git services commented on SOLR-9481:
---

Commit b1ac6ddcf2f1027806f04a6af0e5a51f01334113 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b1ac6dd ]

SOLR-9481: Authentication and Authorization plugins now support standalone mode


> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>  Labels: authentication
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9481-6x.patch, SOLR-9481-6x.patch, SOLR-9481.patch, 
> SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Updated] (SOLR-9481) BasicAuthPlugin should support standalone mode

2017-02-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9481:
--
Attachment: SOLR-9481-6x.patch

The CHANGES entry was set to 6.x when master was committed; I'm still targeting 
6.5 :)

Attaching an updated 6x backport patch. Tests and precommit pass.

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>  Labels: authentication
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9481-6x.patch, SOLR-9481-6x.patch, SOLR-9481.patch, 
> SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Commented] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879264#comment-15879264
 ] 

Kevin Risden commented on SOLR-10092:
-

[~mdrob] or [~hgadre] - maybe you have thoughts as well, since it's HDFS related?

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Commented] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879262#comment-15879262
 ] 

Kevin Risden commented on SOLR-10092:
-

[~markrmil...@gmail.com] - Thoughts on this? I see you were last in here.

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






Re: Getting totalTermFreq and docFreq for terms

2017-02-22 Thread Shai Erera
Hmm .. so if I want to add totalTermFreq to the response, it will break the
current output format of TermsComponent, which returns only the docFreq for
each term. What's our back-compat policy for such an API change, and is there
a way to handle it?

I can add a new terms.ttf parameter: if you set it to true, the response will
look different (each term will have both docFreq and totalTermFreq elements),
but if you don't, you will get the same response as before. Is that acceptable?

Somewhat related, but it can be handled separately: I noticed that if you
specify terms.list and multiple terms.fl parameters, you only receive stats
for the first field (the rest are ignored), but if you don't specify
terms.list, you get results for all fields. I don't see any reason not to
support multiple fields with terms.list; what do you think?
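A minimal self-contained sketch of what a terms.ttf=true response shape could look like (TermStats below stands in for Lucene's TermContext, which already carries both values; the names and response layout are assumptions, not the committed API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class TermsTtfSketch {

    // Stand-in for Lucene's TermContext, which already tracks both stats.
    static final class TermStats {
        final int docFreq;
        final long totalTermFreq;
        TermStats(int docFreq, long totalTermFreq) {
            this.docFreq = docFreq;
            this.totalTermFreq = totalTermFreq;
        }
    }

    // With includeTtf=false the output keeps today's term -> docFreq shape;
    // with includeTtf=true each term maps to both df and ttf, which is the
    // back-compat trade-off discussed above.
    static Map<String, Object> render(Map<String, TermStats> terms, boolean includeTtf) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, TermStats> e : terms.entrySet()) {
            if (includeTtf) {
                Map<String, Long> stats = new LinkedHashMap<>();
                stats.put("df", (long) e.getValue().docFreq);
                stats.put("ttf", e.getValue().totalTermFreq);
                out.put(e.getKey(), stats);
            } else {
                out.put(e.getKey(), e.getValue().docFreq);
            }
        }
        return out;
    }
}
```

Clients that never send the new parameter would keep parsing the flat term-to-docFreq map unchanged, which is the back-compat property asked about above.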

On Wed, Feb 22, 2017 at 10:08 PM Shai Erera  wrote:

> Looks like this could be a very easy addition to TermsComponent? From what
> I read in the code, it uses TermContext to compute/hold the stats, and the
> latter already has docFreq and totalTermFreq (!!). It's just that
> TermsComponent does not output TTF (only computes it...):
>
> for (int i = 0; i < terms.length; i++) {
>   if (termContexts[i] != null) {
>     String outTerm =
>         fieldType.indexedToReadable(terms[i].bytes().utf8ToString());
>     int docFreq = termContexts[i].docFreq();
>     termsMap.add(outTerm, docFreq);
>   }
> }
>
>
> On Wed, Feb 22, 2017 at 5:34 PM Joel Bernstein  wrote:
>
> Yeah, I think expanding the functionality of the terms component looks
> like the right place to add these stats.
>
> I plan on exposing these types of terms stats as Streaming Expression
> functions but I would likely use the terms component under the covers.
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Wed, Feb 22, 2017 at 8:56 AM, Shai Erera  wrote:
>
> No, they are not global distributed stats. I am willing to live with
> approximated stats though (unless, again, there's an API which can give me
> both). I wonder why the Terms component doesn't return ttf in addition to
> docfreq. The API (at the Lucene level) is right there already.
>
> On Wed, Feb 22, 2017 at 3:49 PM Joel Bernstein  wrote:
>
> Hi Shai,
>
> Do ttf and docfreq return global stats in distributed mode? I wasn't aware
> that there was a mechanism for aggregating values in the field list.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Wed, Feb 22, 2017 at 7:18 AM, Shai Erera  wrote:
>
> Hi
>
> I am currently using function queries to obtain these two statistics, as I
> didn't see a better or more explicit API and the Terms component only
> returns docFreq, but not totalTermFreq.
>
> The way I use the API is submit requests as follows:
>
> curl "http://localhost:8983/solr/mycollection/select?q=*:*&rows=1&fl=ttf(text,'t1'),docfreq(text,'t1')"
>
> Today I noticed that it sometimes returns 0 for these stats for existing
> terms. After debugging and going through the code, I noticed that it
> performs analysis on the value that's given. So if I provide an already
> stemmed value, it analyzes the value further and in some cases it results
> in a non-existing term (and in other cases I get stats for a term I didn't
> ask for).
>
> I want to get the stats of the indexed version of the terms, and that's
> why I send the already stemmed one. In my case I tried to get the stats for
> the term 'disguis' which is the stem of 'disguise' and 'disguised', however
> it further analyzed the value to 'disgui' (per the analysis chain) and that
> term does not exist in the index.
>
> So first question is -- is this the right API to retrieve such statistics?
> I didn't find another one, but could be I missed it.
>
> If it is, why does it analyze the value? I tried to wrap the value with
> single and double quotes, but of course that does not affect the analysis
> ... is analysis an intended behavior or a bug?
>
> Shai
>
>
>
>


[jira] [Commented] (LUCENE-7686) NRT suggester should have option to filter out duplicates

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879224#comment-15879224
 ] 

ASF subversion and git services commented on LUCENE-7686:
-

Commit 4e2cf61ac76db33f35d3aceacaf1563a9bd5edb2 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4e2cf61 ]

LUCENE-7686: add efficient de-duping to the NRT document suggester


> NRT suggester should have option to filter out duplicates
> -
>
> Key: LUCENE-7686
> URL: https://issues.apache.org/jira/browse/LUCENE-7686
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7686.patch, LUCENE-7686.patch, LUCENE-7686.patch
>
>
> Some of the other suggesters have this ability, and it's quite simple to add 
> it to the NRT suggester as long as the thing we are filtering on is the 
> suggest key itself, not e.g. another stored field from the document.






[jira] [Resolved] (SOLR-10193) Improve MiniSolrCloudCluster#shutdown.

2017-02-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-10193.

   Resolution: Fixed
Fix Version/s: master (7.0)
   6.5

> Improve MiniSolrCloudCluster#shutdown.
> --
>
> Key: SOLR-10193
> URL: https://issues.apache.org/jira/browse/SOLR-10193
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
>







[jira] [Commented] (SOLR-10193) Improve MiniSolrCloudCluster#shutdown.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879221#comment-15879221
 ] 

ASF subversion and git services commented on SOLR-10193:


Commit 5d76917cf53a0db7a39ab6ca92eb53d9fe54a412 in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5d76917 ]

SOLR-10193: Improve MiniSolrCloudCluster#shutdown.

# Conflicts:
#   
solr/test-framework/src/java/org/apache/solr/cloud/MiniSolrCloudCluster.java


> Improve MiniSolrCloudCluster#shutdown.
> --
>
> Key: SOLR-10193
> URL: https://issues.apache.org/jira/browse/SOLR-10193
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
>







[jira] [Updated] (LUCENE-7696) Remove ancient projects from the dist area

2017-02-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-7696:

Description: 
In http://www.apache.org/dist/lucene/ we have these folders:
{noformat}
[DIR] java/   2017-02-14 08:33-   
[DIR] mahout/ 2015-02-17 20:27-   
[DIR] nutch/  2015-02-17 20:29-   
[DIR] pylucene/   2017-02-13 22:00-   
[DIR] solr/   2017-02-14 08:33-   
[DIR] tika/   2015-02-17 20:29-   
[   ] KEYS2016-08-30 09:59  148K  
{noformat}

Nobody will expect to find mahout, nutch and tika here anymore, and they are 
only redirect links, so why not clean up?

Regarding the archive, we'll keep all historic releases as-is but ask Nutch if 
they want to either copy the oldest releases to their archive or provide a link 
to the lucene/nutch archive for the oldest releases. Tika already has such a 
link, and Hadoop already has a complete set of artifacts in its main repo.

  was:
In https://archive.apache.org/dist/lucene/ we have these folders:
{noformat}
[DIR] hadoop/ 2008-01-22 23:40-   
[DIR] java/   2017-02-14 08:33-   
[DIR] mahout/ 2015-02-17 20:27-   
[DIR] nutch/  2015-02-17 20:29-   
[DIR] pylucene/   2017-02-13 22:00-   
[DIR] solr/   2017-02-14 08:33-   
[DIR] tika/   2015-02-17 20:29-   
[   ] KEYS2016-08-30 09:59  148K  
{noformat}

Nobody will expect to find hadoop, mahout, nutch and tika here anymore, so why 
not clean up?

I double checked, and both https://archive.apache.org/dist/hadoop/core/ and 
https://archive.apache.org/dist/mahout/ have a full copy of all releases, so we 
lose nothing. 

For https://archive.apache.org/dist/nutch/, they do not have 0.6-0.8 releases 
that we have under lucene, and https://archive.apache.org/dist/tika/ do not 
have v0.2-0.7 that only exists with us. For these two projects we could ask 
their PMC to copy over the early versions and then we nuk'em?

Any other reason to keep these in the lucene area?


> Remove ancient projects from the dist area
> --
>
> Key: LUCENE-7696
> URL: https://issues.apache.org/jira/browse/LUCENE-7696
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/website
>Reporter: Jan Høydahl
>  Labels: archive, dist, download
>
> In http://www.apache.org/dist/lucene/ we have these folders:
> {noformat}
> [DIR] java/   2017-02-14 08:33-   
> [DIR] mahout/ 2015-02-17 20:27-   
> [DIR] nutch/  2015-02-17 20:29-   
> [DIR] pylucene/   2017-02-13 22:00-   
> [DIR] solr/   2017-02-14 08:33-   
> [DIR] tika/   2015-02-17 20:29-   
> [   ] KEYS2016-08-30 09:59  148K  
> {noformat}
> Nobody will expect to find mahout, nutch and tika here anymore, and they are 
> only redirect links, so why not clean up?
> Regarding the archive, we'll keep all historic releases as-is but ask Nutch 
> if they want to either copy the oldest releases to their archive or provide a 
> link to the lucene/nutch archive for the oldest releases. Tika already has 
> such a link, and Hadoop already has a complete set of artifacts in its main 
> repo.






[jira] [Commented] (SOLR-10193) Improve MiniSolrCloudCluster#shutdown.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879217#comment-15879217
 ] 

ASF subversion and git services commented on SOLR-10193:


Commit 29a5ea44a7f010e27a8c8951d697fc0fbb8d5403 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=29a5ea4 ]

SOLR-10193: Improve MiniSolrCloudCluster#shutdown.


> Improve MiniSolrCloudCluster#shutdown.
> --
>
> Key: SOLR-10193
> URL: https://issues.apache.org/jira/browse/SOLR-10193
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
>







[jira] [Commented] (LUCENE-7696) Remove ancient projects from the dist area

2017-02-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879215#comment-15879215
 ] 

Jan Høydahl commented on LUCENE-7696:
-

Yea, looks like the archive normally stays as-is, and that's fine I guess. 
People only go there if they explicitly look for old versions. For the archive 
I'll follow up with Nutch to ask if they want to write a few words on their 
download site about the oldest releases being found in the lucene area.

I'll rewrite the issue description to focus on the dist area and the mirrors 
that people normally see, e.g. http://www.apache.org/dist/lucene/
Here, hadoop is no longer present, but the mahout, nutch and tika folders are 
{{.htaccess}} redirects. Assuming these are no longer needed, I plan to remove 
them.

> Remove ancient projects from the dist area
> --
>
> Key: LUCENE-7696
> URL: https://issues.apache.org/jira/browse/LUCENE-7696
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/website
>Reporter: Jan Høydahl
>  Labels: archive, dist, download
>
> In https://archive.apache.org/dist/lucene/ we have these folders:
> {noformat}
> [DIR] hadoop/ 2008-01-22 23:40-   
> [DIR] java/   2017-02-14 08:33-   
> [DIR] mahout/ 2015-02-17 20:27-   
> [DIR] nutch/  2015-02-17 20:29-   
> [DIR] pylucene/   2017-02-13 22:00-   
> [DIR] solr/   2017-02-14 08:33-   
> [DIR] tika/   2015-02-17 20:29-   
> [   ] KEYS2016-08-30 09:59  148K  
> {noformat}
> Nobody will expect to find hadoop, mahout, nutch and tika here anymore, so 
> why not clean up?
> I double checked, and both https://archive.apache.org/dist/hadoop/core/ and 
> https://archive.apache.org/dist/mahout/ have a full copy of all releases, so 
> we lose nothing. 
> For https://archive.apache.org/dist/nutch/, they do not have 0.6-0.8 releases 
> that we have under lucene, and https://archive.apache.org/dist/tika/ do not 
> have v0.2-0.7 that only exists with us. For these two projects we could ask 
> their PMC to copy over the early versions and then we nuk'em?
> Any other reason to keep these in the lucene area?






[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2017-02-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879203#comment-15879203
 ] 

Steve Rowe commented on SOLR-9481:
--

Jan, if you don't have time to do it short term, please move the CHANGES entry 
back out of the 6.5 section.

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>  Labels: authentication
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9481-6x.patch, SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Resolved] (SOLR-9855) DynamicInterceptor in HttpClientUtils use synchronization that can deadlock and puts a global mutex around per request process calls.

2017-02-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-9855.
---
   Resolution: Fixed
Fix Version/s: master (7.0)

> DynamicInterceptor in HttpClientUtils use synchronization that can deadlock 
> and puts a global mutex around per request process calls.
> -
>
> Key: SOLR-9855
> URL: https://issues.apache.org/jira/browse/SOLR-9855
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0)
>
>
> Only affects trunk.






[jira] [Commented] (SOLR-10193) Improve MiniSolrCloudCluster#shutdown.

2017-02-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879181#comment-15879181
 ] 

Mark Miller commented on SOLR-10193:


I've seen some test failures that look related to this.

> Improve MiniSolrCloudCluster#shutdown.
> --
>
> Key: SOLR-10193
> URL: https://issues.apache.org/jira/browse/SOLR-10193
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
>







[jira] [Created] (SOLR-10193) Improve MiniSolrCloudCluster#shutdown.

2017-02-22 Thread Mark Miller (JIRA)
Mark Miller created SOLR-10193:
--

 Summary: Improve MiniSolrCloudCluster#shutdown.
 Key: SOLR-10193
 URL: https://issues.apache.org/jira/browse/SOLR-10193
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor









[jira] [Commented] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879134#comment-15879134
 ] 

Erick Erickson commented on SOLR-10186:
---

Gah, any class found in IntelliJ by the cmd-o key sequence MUST be in Solr, 
right? My mistake.

Yes, let's open the JIRA in LUCENE, if for no other reason than to have the 
Lucene guys notice and comment if they don't like the idea.

Erick

P.S. On a quick glance I notice these lines still in the code:

if (!args.isEmpty()) {
  throw new IllegalArgumentException("Unknown parameters: " + args);
}

So I think if you specify a tag in the schema file it'll throw an error here. 
It'd be good to have a test for that, I should think.
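Erick's point can be illustrated with a self-contained sketch of the analysis-factory pattern (the class name is illustrative, and maxTokenLen is the proposed, not yet committed, attribute): every recognized attribute has to be removed from args before the final isEmpty() check, otherwise the new attribute in the schema trips the exception.

```java
import java.util.HashMap;
import java.util.Map;

class TokenizerFactorySketch {
    final int maxTokenLen;

    TokenizerFactorySketch(Map<String, String> args) {
        // Consume the (hypothetical) new attribute before the leftover check.
        String v = args.remove("maxTokenLen");
        this.maxTokenLen = (v == null) ? 256 : Integer.parseInt(v);

        // The guard quoted above: anything left over is an unknown parameter.
        if (!args.isEmpty()) {
            throw new IllegalArgumentException("Unknown parameters: " + args);
        }
    }
}
```

If maxTokenLen were not consumed by args.remove(...) first, a schema attribute like maxTokenLen="512" would reach the isEmpty() guard and throw, which is exactly the failure mode a test should cover.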

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
> Attachments: SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 714 - Still unstable!

2017-02-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/714/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

6 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
expected:<0> but was:<4>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<4>
at 
__randomizedtesting.SeedInfo.seed([144CBF0BFE38B622:9C1880D150C4DBDA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: Getting totalTermFreq and docFreq for terms

2017-02-22 Thread Shai Erera
Looks like this could be a very easy addition to TermsComponent? From what
I read in the code, it uses TermContext to compute/hold the stats, and the
latter already has docFreq and totalTermFreq (!!). It's just that
TermsComponent does not output TTF (only computes it...):

for(int i=0; i wrote:
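The shape of the change Shai is describing can be sketched in a few lines of self-contained Python (illustrative names only — this is not Solr's TermsComponent code): the handler already computes both statistics per term, so emitting TTF next to docFreq is just one more field in the response.

```python
from collections import Counter

def term_stats(index, term):
    """index maps doc id -> list of tokens; returns (docFreq, totalTermFreq)."""
    doc_freq = sum(1 for tokens in index.values() if term in tokens)
    total_term_freq = sum(Counter(tokens)[term] for tokens in index.values())
    return doc_freq, total_term_freq

index = {1: ["solr", "search", "solr"], 2: ["lucene"], 3: ["solr"]}
df, ttf = term_stats(index, "solr")
# docFreq counts documents containing the term; totalTermFreq counts every
# occurrence, so ttf >= df always holds.
print(df, ttf)  # 2 3
```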

> Yeah, I think expanding the functionality of the terms component looks
> like the right place to add these stats.
>
> I plan on exposing these types of terms stats as Streaming Expression
> functions but I would likely use the terms component under the covers.
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Wed, Feb 22, 2017 at 8:56 AM, Shai Erera  wrote:
>
> No, they are not global distributed stats. I am willing to live with
> approximated stats though (unless again, there's an API which can give me
> both). I wonder why the Terms component doesn't return ttf in addition to
> docfreq. The API (at the Lucene level) is right there already.
>
> On Wed, Feb 22, 2017 at 3:49 PM Joel Bernstein  wrote:
>
> Hi Shai,
>
> Do ttf and docfreq return global stats in distributed mode? I wasn't aware
> that there was a mechanism for aggregating values in the field list.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Wed, Feb 22, 2017 at 7:18 AM, Shai Erera  wrote:
>
> Hi
>
> I am currently using function queries to obtain these two statistics, as I
> didn't see a better or more explicit API and the Terms component only
> returns docFreq, but not totalTermFreq.
>
> The way I use the API is submit requests as follows:
>
> curl "
> http://localhost:8983/solr/mycollection/select?q=*:*=1=ttf(text,'t1'),docfreq(text,'t1
> ')"
>
> Today I noticed that it sometimes returns 0 for these stats for existing
> terms. After debugging and going through the code, I noticed that it
> performs analysis on the value that's given. So if I provide an already
> stemmed value, it analyzes the value further and in some cases it results
> in a non-existing term (and in other cases I get stats for a term I didn't
> ask for).
>
> I want to get the stats of the indexed version of the terms, and that's
> why I send the already stemmed one. In my case I tried to get the stats for
> the term 'disguis' which is the stem of 'disguise' and 'disguised', however
> it further analyzed the value to 'disgui' (per the analysis chain) and that
> term does not exist in the index.
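The double-analysis pitfall is easy to reproduce with a toy stemmer. The suffix rules below are invented purely for illustration and have nothing to do with Solr's actual analysis chain; the point is only that stemming is not idempotent, so analyzing an already-stemmed term can produce a term that was never indexed.

```python
def toy_stem(token):
    # One invented suffix-stripping pass per call (illustration only).
    for suffix in ("ed", "e", "s"):
        if token.endswith(suffix) and len(token) - len(suffix) >= 4:
            return token[: -len(suffix)]
    return token

# First analysis, at index time:
stemmed = toy_stem("disguised")   # 'disguis'
# Passing the already-stemmed form through analysis again:
restemmed = toy_stem(stemmed)     # 'disgui' -- a term that was never indexed
print(stemmed, restemmed)
```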
>
> So first question is -- is this the right API to retrieve such statistics?
> I didn't find another one, but could be I missed it.
>
> If it is, why does it analyze the value? I tried to wrap the value with
> single and double quotes, but of course that does not affect the analysis
> ... is analysis an intended behavior or a bug?
>
> Shai
>
>
>
>


[jira] [Commented] (SOLR-9481) BasicAuthPlugin should support standalone mode

2017-02-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879116#comment-15879116
 ] 

Jan Høydahl commented on SOLR-9481:
---

Absolutely :) This patch is pure open source idealism, not paid work. Will try 
to get around to it again though.

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>  Labels: authentication
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9481-6x.patch, SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_121) - Build # 2919 - Unstable!

2017-02-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2919/
Java: 32bit/jdk1.8.0_121 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([96F7B2C8F6450563:1DD06119B743AEE7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:865)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:437)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-9824) Documents indexed in bulk are replicated using too many HTTP requests

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879075#comment-15879075
 ] 

ASF subversion and git services commented on SOLR-9824:
---

Commit d6337ac3e566c504766d69499ab470bd26744a29 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d6337ac ]

SOLR-9824: Some bulk update paths could be very slow due to CUSC polling.


> Documents indexed in bulk are replicated using too many HTTP requests
> -
>
> Key: SOLR-9824
> URL: https://issues.apache.org/jira/browse/SOLR-9824
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.3
>Reporter: David Smiley
>Assignee: Mark Miller
> Attachments: SOLR-9824.patch, SOLR-9824.patch, SOLR-9824.patch, 
> SOLR-9824.patch, SOLR-9824.patch, SOLR-9824.patch, SOLR-9824.patch
>
>
> This takes awhile to explain; bear with me. While working on bulk indexing 
> small documents, I looked at the logs of my SolrCloud nodes.  I noticed that 
> shards would see an /update log message every ~6ms which is *way* too much.  
> These are requests from one shard (that isn't a leader/replica for these docs 
> but the recipient from my client) to the target shard leader (no additional 
> replicas).  One might ask why I'm not sending docs to the right shard in the 
> first place; I have a reason but it's beside the point -- there's a real 
> Solr perf problem here and this probably applies equally to 
> replicationFactor>1 situations too.  I could turn off the logs but that would 
> hide useful stuff, and it's disconcerting to me that so many short-lived HTTP 
> requests are happening, somehow at the behest of DistributedUpdateProcessor. 
>  After lots of analysis and debugging and hair pulling, I finally figured it 
> out.  
> In SOLR-7333 ([~tpot]) introduced an optimization called 
> {{UpdateRequest.isLastDocInBatch()}} in which ConcurrentUpdateSolrClient will 
> poll with a '0' timeout to the internal queue, so that it can close the 
> connection without it hanging around any longer than needed.  This part makes 
> sense to me.  Currently the only spot that has the smarts to set this flag is 
> {{JavaBinUpdateRequestCodec.unmarshal.readOuterMostDocIterator()}} at the 
> last document.  So if a shard received docs in a javabin stream (but not 
> other formats) one would expect the _last_ document to have this flag.  
> There's even a test.  Docs without this flag get the default poll time; for 
> javabin it's 25ms.  Okay.
> I _suspect_ that if someone used CloudSolrClient or HttpSolrClient to send 
> javabin data in a batch, the intended efficiencies of SOLR-7333 would apply.  
> I didn't try. In my case, I'm using ConcurrentUpdateSolrClient (and BTW 
> DistributedUpdateProcessor uses CUSC too).  CUSC uses the RequestWriter 
> (defaulting to javabin) to send each document separately without any leading 
> marker or trailing marker.  For the XML format by comparison, there is a 
> leading and trailing marker (<add> ... </add>).  Since there's no outer 
> container for the javabin unmarshalling to detect the last document, it marks 
> _every_ document as {{req.lastDocInBatch()}}!  Ouch!
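The polling behavior described above can be sketched with a plain queue consumer (illustrative Python, not ConcurrentUpdateSolrClient's actual code): docs without the flag are waited for with the default poll timeout, while a doc flagged last-in-batch switches to a zero-timeout poll so the sender stops as soon as the queue drains — which is exactly why flagging *every* doc tears the connection down after each one.

```python
import queue

def drain(q, default_poll=0.025):
    """Consume (doc, last_in_batch) pairs the way the sender thread would."""
    sent = []
    while True:
        try:
            doc, last_in_batch = q.get(timeout=default_poll)
        except queue.Empty:
            break  # nothing arrived within the default poll window
        sent.append(doc)
        if last_in_batch:
            # zero-timeout poll: take whatever is already queued, then stop
            try:
                while True:
                    doc, _ = q.get_nowait()
                    sent.append(doc)
            except queue.Empty:
                break
    return sent

q = queue.Queue()
for item in [("doc1", False), ("doc2", False), ("doc3", True)]:
    q.put(item)
print(drain(q))  # ['doc1', 'doc2', 'doc3']
```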






[jira] [Commented] (SOLR-10126) PeerSyncReplicationTest is a flakey test.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879073#comment-15879073
 ] 

ASF subversion and git services commented on SOLR-10126:


Commit be64c26c270fc9663609492de77c1dec5574afda in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be64c26 ]

SOLR-10126: Improve test a bit.


> PeerSyncReplicationTest is a flakey test.
> -
>
> Key: SOLR-10126
> URL: https://issues.apache.org/jira/browse/SOLR-10126
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Attachments: faillogs.tar.gz
>
>
> Could be related to SOLR-9555, but I will see what else pops up under 
> beasting.






[jira] [Commented] (SOLR-9855) DynamicInterceptor in HttpClientUtils use synchronization that can deadlock and puts a global mutex around per request process calls.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879074#comment-15879074
 ] 

ASF subversion and git services commented on SOLR-9855:
---

Commit 2f82409e5b3a90363941caa3767c3de2abecdaf0 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2f82409 ]

SOLR-9855: DynamicInterceptor in HttpClientUtils use synchronization that can 
deadlock and puts a global mutex around per request process calls.


> DynamicInterceptor in HttpClientUtils use synchronization that can deadlock 
> and puts a global mutex around per request process calls.
> -
>
> Key: SOLR-9855
> URL: https://issues.apache.org/jira/browse/SOLR-9855
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> Only affects trunk.






[JENKINS] Solr-Artifacts-6.x - Build # 249 - Failure

2017-02-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-6.x/249/

No tests ran.

Build Log:
[...truncated 18 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:806)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1066)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1278)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1728)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:404)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress git://git.apache.org/lucene-solr.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: read error: Connection reset by peer

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1784)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1513)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:64)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:315)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:336)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to lucene(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1537)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:822)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor465.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy96.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:804)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:806)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1066)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1278)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1728)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:404)

[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-02-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879022#comment-15879022
 ] 

Shalin Shekhar Mangar commented on SOLR-9835:
-

Thanks Dat. Sorry it took me a while to finish reviewing. A few 
questions/comments:

# LeaderInitiatedRecoveryThread -- What is the reason behind adding 
SocketTimeoutException in the list of communication errors on which no more 
retries are made?
# ZkController.register method -- The condition for !isLeader && 
onlyLeaderIndexes can be replaced by the isReplicaInOnlyLeaderIndexes variable.
# Since there is no log replay on startup on replicas anymore, what if the 
replica is killed (which keeps its state as 'active' in ZK) and then the 
cluster is restarted and the replica becomes leader candidate? If we do not 
replay the discarded log then it could lead to data loss?
# UpdateLog -- Can you please add javadocs outlining the motivation/purpose of 
the new methods such as copyOverBufferingUpdates and switchToNewTlog e.g. why 
does switchToNewTlog require copying over some updates from the old tlog?
# It seems that any commits that might be triggered explicitly by the user can 
interfere with the index replication. Suppose that a replication is in progress 
and a user explicitly calls commit which is distributed to all replicas, in 
such a case the tlogs will be rolled over and then when the ReplicateFromLeader 
calls switchToNewTlog(), the previous tlog may not have all the updates that 
should have been copied over. We should have a way to either disable explicit 
commits or protect against them on the replicas.
# UpdateLog -- why does copyOverBufferUpdates block updates while calling 
switchToNewTlog but ReplicateFromLeader doesn't? How are they both safe?
# Can we add tests for testing CDCR and backup/restore with this new 
replication scheme?
# ZkController.startReplicationFromLeader -- Using a ConcurrentHashMap is not 
enough to prevent two simultaneous replications from happening concurrently. 
You should use the atomic putIfAbsent to put a core into the map before starting 
replication.
# Aren't some of the guarantees of real-time-get relaxed in this new mode 
especially around delete-by-queries which no longer apply on replicas? Can you 
please document them as a comment on the issue that we can transfer to the ref 
guide in future?
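Point 8 above is a classic check-then-act race; a minimal sketch of the put-if-absent guard, with Python's dict.setdefault standing in for ConcurrentHashMap.putIfAbsent (illustrative only, not the ZkController code):

```python
def try_start_replication(core, registry):
    """Return True iff this caller won the right to replicate `core`."""
    sentinel = object()
    # setdefault is the put-if-absent: only one caller's sentinel sticks,
    # so checking identity tells us whether we won the race.
    if registry.setdefault(core, sentinel) is not sentinel:
        return False  # a replication for this core is already running
    # ... replication would run here; real code removes the entry when done
    return True

registry = {}
print(try_start_replication("core_node3", registry))  # True
print(try_start_replication("core_node3", registry))  # False
```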

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas 
> start in the same initial state and, for each input, the input is 
> distributed across replicas so that all replicas end up in the same next 
> state. But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas
> - Slow recovery: if a replica misses more than N updates during its down 
> time, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes the update to the other replicas, but only the 
> leader applies the update to the IndexWriter; the other replicas just store 
> the update in the UpdateLog (like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> From a CAP point of view, this ticket tries to promise end users a 
> distributed system with:
> - Partition tolerance
> - Weak consistency for normal queries: clusters can serve stale data. This 
> happens when the leader has finished a commit and a slave is still fetching 
> the latest segments. This period is at most {{pollInterval + time to fetch 
> latest segment}}.
> - Consistency for RTG: just like the original SolrCloud mode
> - Weak availability: just like the original SolrCloud mode. If a leader 
> goes down, clients must wait until a new leader is elected.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE=newCollection=2=1=1
> {code}





[jira] [Updated] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10186:

Attachment: SOLR-10186.patch

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
> Attachments: SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Updated] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10186:

Attachment: (was: SOLR-10186.patch)

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Updated] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10186:

Attachment: SOLR-10186.patch

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
> Attachments: SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Updated] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10186:

Attachment: (was: SOLR-10186.patch)

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Updated] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10186:

Attachment: SOLR-10186.patch

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
> Attachments: SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Commented] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-22 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878984#comment-15878984
 ] 

Amrit Sarkar commented on SOLR-10186:
-

Erick,

The first draft, SOLR-10186.patch, is uploaded; it allows CharTokenizer-derived 
tokenizers and KeywordTokenizer to configure the max token length in their 
schema definitions.

{code:xml}
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/KeywordTokenizerFactory.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LetterTokenizer.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LetterTokenizerFactory.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/UnicodeWhitespaceTokenizer.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/WhitespaceTokenizer.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/WhitespaceTokenizerFactory.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/util/CharTokenizer.java
{code}

I am currently finishing up the comments for the new arguments and for the 
modified and new constructors in the respective classes, along with thorough 
tests.

As all of these classes/tokenizers are part of Lucene core, I agree with Mr. 
Smiley about opening a JIRA under the Lucene project and probably linking this 
JIRA there.

Let me know your thoughts.
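For illustration only (a minimal sketch, not code from the patch; `tokenize` and `maxTokenLen` are hypothetical names), this is the CharTokenizer-style behavior the limit controls: characters are buffered until a delimiter is hit or the buffer reaches the limit, at which point the token is emitted and a new one starts, so over-long tokens get split rather than kept whole.

```java
import java.util.ArrayList;
import java.util.List;

public class MaxTokenLenSketch {
    // Simplified whitespace tokenizer mimicking CharTokenizer's limit handling:
    // when the buffer reaches maxTokenLen, the token is emitted and a fresh
    // buffer is started, splitting any longer run of characters.
    static List<String> tokenize(String text, int maxTokenLen) {
        List<String> tokens = new ArrayList<>();
        StringBuilder buf = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (Character.isWhitespace(c)) {
                if (buf.length() > 0) {
                    tokens.add(buf.toString());
                    buf.setLength(0);
                }
            } else {
                buf.append(c);
                if (buf.length() == maxTokenLen) { // hard limit reached: split here
                    tokens.add(buf.toString());
                    buf.setLength(0);
                }
            }
        }
        if (buf.length() > 0) {
            tokens.add(buf.toString());
        }
        return tokens;
    }
}
```

With a hard-coded limit of 256, any token longer than that is silently split; making the limit configurable per field type is what the patch is after.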

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Minor
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Commented] (SOLR-10143) Create IndexOrDocValuesQuery for PointFields when possible

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878962#comment-15878962
 ] 

ASF subversion and git services commented on SOLR-10143:


Commit ed609013871121a3ccf281007fb1b8ca9ae3c7ad in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ed60901 ]

SOLR-10143: Added CHANGES entry


> Create IndexOrDocValuesQuery for PointFields when possible
> --
>
> Key: SOLR-10143
> URL: https://issues.apache.org/jira/browse/SOLR-10143
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10143.patch, SOLR-10143.patch
>
>
> IndexOrDocValuesQuery was recently added in Lucene as an optimization for 
> queries on fields that have DV and Points. See LUCENE-7055 and LUCENE-7643






[jira] [Resolved] (SOLR-10143) Create IndexOrDocValuesQuery for PointFields when possible

2017-02-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-10143.
--
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.5

> Create IndexOrDocValuesQuery for PointFields when possible
> --
>
> Key: SOLR-10143
> URL: https://issues.apache.org/jira/browse/SOLR-10143
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10143.patch, SOLR-10143.patch
>
>
> IndexOrDocValuesQuery was recently added in Lucene as an optimization for 
> queries on fields that have DV and Points. See LUCENE-7055 and LUCENE-7643






[jira] [Commented] (SOLR-10143) Create IndexOrDocValuesQuery for PointFields when possible

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878961#comment-15878961
 ] 

ASF subversion and git services commented on SOLR-10143:


Commit 784d03f7bf26771f1c53b5e9db5e609d37a4b4f8 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=784d03f ]

SOLR-10143: PointFields will create IndexOrDocValuesQuery when a field is both, 
indexed=true and docValues=true
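As background, a simplified sketch of the idea (not the committed code; the names below are illustrative): IndexOrDocValuesQuery wraps two equivalent queries and picks the cheaper execution at scoring time. Roughly, when the clause is selective enough relative to the lead iterator's cost, the points-based iterator is built up front; otherwise candidates are verified one by one against doc values.

```java
public class IndexOrDocValuesSketch {
    // Illustrative labels for the two execution strategies.
    static final String POINTS = "points";        // build the BKD-based iterator up front
    static final String DOC_VALUES = "docValues"; // verify candidates via random access

    // Simplified cost heuristic: prefer the points iterator only when this
    // clause would visit no more documents than the lead clause supplies.
    static String choose(long matchCost, long leadCost) {
        return matchCost <= leadCost ? POINTS : DOC_VALUES;
    }
}
```

This is why the optimization only applies when a field has both `indexed=true` and `docValues=true`: both execution strategies must be available for the query to choose between.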


> Create IndexOrDocValuesQuery for PointFields when possible
> --
>
> Key: SOLR-10143
> URL: https://issues.apache.org/jira/browse/SOLR-10143
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10143.patch, SOLR-10143.patch
>
>
> IndexOrDocValuesQuery was recently added in Lucene as an optimization for 
> queries on fields that have DV and Points. See LUCENE-7055 and LUCENE-7643






[jira] [Commented] (SOLR-10143) Create IndexOrDocValuesQuery for PointFields when possible

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878932#comment-15878932
 ] 

ASF subversion and git services commented on SOLR-10143:


Commit 21690f5e126e1be0baf70cd3af2d570a18cd712d in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=21690f5 ]

SOLR-10143: PointFields will create IndexOrDocValuesQuery when a field is both, 
indexed=true and docValues=true


> Create IndexOrDocValuesQuery for PointFields when possible
> --
>
> Key: SOLR-10143
> URL: https://issues.apache.org/jira/browse/SOLR-10143
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10143.patch, SOLR-10143.patch
>
>
> IndexOrDocValuesQuery was recently added in Lucene as an optimization for 
> queries on fields that have DV and Points. See LUCENE-7055 and LUCENE-7643






[jira] [Commented] (SOLR-10143) Create IndexOrDocValuesQuery for PointFields when possible

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878933#comment-15878933
 ] 

ASF subversion and git services commented on SOLR-10143:


Commit 55ef713eb281178a10ae9d34fce4d7a91a7d3733 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=55ef713 ]

SOLR-10143: Added CHANGES entry


> Create IndexOrDocValuesQuery for PointFields when possible
> --
>
> Key: SOLR-10143
> URL: https://issues.apache.org/jira/browse/SOLR-10143
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10143.patch, SOLR-10143.patch
>
>
> IndexOrDocValuesQuery was recently added in Lucene as an optimization for 
> queries on fields that have DV and Points. See LUCENE-7055 and LUCENE-7643






[jira] [Created] (LUCENE-7704) SynonymGraphFilter doesn't respect ignoreCase parameter

2017-02-22 Thread Sebastian Yonekura Baeza (JIRA)
Sebastian Yonekura Baeza created LUCENE-7704:


 Summary: SynonymGraphFilter doesn't respect ignoreCase parameter
 Key: LUCENE-7704
 URL: https://issues.apache.org/jira/browse/LUCENE-7704
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 6.4.1
Reporter: Sebastian Yonekura Baeza
Priority: Minor


Hi, it seems that SynonymGraphFilter doesn't respect the ignoreCase parameter. 
In particular, this test doesn't pass:

{code:title=UppercaseSynonymMapTest.java|borderStyle=solid}
package com.mapcity.suggest.lucene;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.util.CharsRef;
import org.apache.lucene.util.CharsRefBuilder;
import org.junit.Test;

import java.io.IOException;

import static org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents;

/**
 * @author Sebastian Yonekura
 * Created on 22-02-17
 */
public class UppercaseSynonymMapTest {

    @Test
    public void analyzerTest01() throws IOException {
        // This passes
        testAssertMapping("word", "synonym");
        // This one does not
        testAssertMapping("word".toUpperCase(), "synonym");
    }

    private void testAssertMapping(String inputString, String outputString) throws IOException {
        SynonymMap.Builder builder = new SynonymMap.Builder(false);
        CharsRef input = SynonymMap.Builder.join(inputString.split(" "), new CharsRefBuilder());
        CharsRef output = SynonymMap.Builder.join(outputString.split(" "), new CharsRefBuilder());
        builder.add(input, output, true);
        Analyzer analyzer = new CustomAnalyzer(builder.build());
        TokenStream tokenStream = analyzer.tokenStream("field", inputString);
        assertTokenStreamContents(tokenStream, new String[]{
                outputString, inputString
        });
    }

    static class CustomAnalyzer extends Analyzer {
        private final SynonymMap synonymMap;

        CustomAnalyzer(SynonymMap synonymMap) {
            this.synonymMap = synonymMap;
        }

        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer tokenizer = new WhitespaceTokenizer();
            TokenStream tokenStream = new SynonymGraphFilter(tokenizer,
                    synonymMap, true); // ignoreCase = true
            return new TokenStreamComponents(tokenizer, tokenStream);
        }
    }
}

{code}
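One plausible reading of the failure above (an assumption on my part, not a confirmed diagnosis): with ignoreCase=true the filter normalizes only the incoming token before the map lookup, so entries added to the SynonymMap in uppercase can never match, and the map itself has to be built with lowercased keys. A self-contained sketch of that asymmetry, using a plain HashMap as a stand-in for the synonym map:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class IgnoreCaseSketch {
    // Stand-in for the synonym-map lookup: keys are stored exactly as added,
    // while the probe (the incoming token) is lowercased, mirroring an
    // ignoreCase setting that normalizes only the input side.
    static String lookup(Map<String, String> map, String token) {
        return map.get(token.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        Map<String, String> synonyms = new HashMap<>();
        synonyms.put("WORD", "synonym"); // added uppercase, as in the test above
        System.out.println(lookup(synonyms, "WORD")); // no match: the key was never lowercased

        synonyms.put("WORD".toLowerCase(Locale.ROOT), "synonym"); // normalize at build time
        System.out.println(lookup(synonyms, "WORD")); // matches
    }
}
```

If that reading is right, the fix on the user side is to lowercase entries when building the map (or use a parser that does so when ignoreCase is set), though it arguably still deserves clarification in the filter's documentation.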






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3849 - Still Unstable!

2017-02-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3849/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Collection not found: withShardField

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: withShardField
at 
__randomizedtesting.SeedInfo.seed([76B07DF8AA96FDF0:23E0956A066F3200]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1376)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1072)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:232)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 268 - Failure

2017-02-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/268/

No tests ran.

Build Log:
[...truncated 10572 lines...]
package-src-tgz:
   [delete] Deleting directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/solr/build/solr/src-export/lucene/tools/javadoc/java8
   [delete] Deleting directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/solr/build/solr/src-export/lucene/tools/clover
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/solr/build/solr/src-export/solr/docs/changes
 [exec] Section 'Optimizations' appears more than once under release 
'6.5.0' at 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/site/changes/changes2html.pl
 line 136.

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/build.xml:555: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/solr/build.xml:489:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/common-build.xml:2520:
 exec returned: 25

Total time: 3 minutes 50 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any





Re: 6.4.2 release?

2017-02-22 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Thanks Steve for the guidance!

Since LUCENE-7676 and SOLR-10083 were both (unreleased) 6.5.0 to 6.4.2 
backports I have gone ahead and moved the CHANGES.txt entries on both branch_6x 
and master (after the branch_6_4 commit itself).

My plan for https://issues.apache.org/jira/browse/SOLR-10192 is to have it in 
the 6.4.2 section from the outset (if it goes into the 6.4.2 release that is).

Christine

- Original Message -
From: dev@lucene.apache.org
To: dev@lucene.apache.org
At: 02/22/17 13:42:54

Hi Christine,

> On Feb 22, 2017, at 5:10 AM, Christine Poerschke (BLOOMBERG/ LONDON) 
>  wrote:
> 
> What process do people typically follow w.r.t. updating CHANGES.txt on 
> branch_6x and master in those circumstances e.g. do the entries move from the 
> 6.5 to the 6.4.2 section or are they duplicated in the 6.4.2 section or is it 
> taken care of somehow overall (for master and branch_6x but not branch_6_4) 
> as part of the RC process?

It’s typically a mess: some people move CHANGES entries on the unstable and 
stable branches when they backport to point releases, some don't.  There’s a 
TODO item for the release manager to sync CHANGES post-release: 
.

It’s complicated by the fact that entries should never be removed from 
*released* versions, so issues backported to a point release for an older 
branch typically don’t ever trigger *removal* of duplicate entries elsewhere, 
just copy/paste of the point release’s section into the stable & unstable 
branches’ CHANGES.

--
Steve
www.lucidworks.com




[jira] [Comment Edited] (SOLR-9481) BasicAuthPlugin should support standalone mode

2017-02-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878816#comment-15878816
 ] 

Steve Rowe edited comment on SOLR-9481 at 2/22/17 6:01 PM:
---

bq. SOLR-9481: Moving changes entry to 6.5 and targeting that release instead

[~janhoy], looks like you forgot to commit this to branch_6x?


was (Author: steve_rowe):
bq. SOLR-9481: Moving changes entry to 6.5 and targeting that release instead

@janhoy, looks like you forgot to commit this to branch_6x?

> BasicAuthPlugin should support standalone mode
> --
>
> Key: SOLR-9481
> URL: https://issues.apache.org/jira/browse/SOLR-9481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>  Labels: authentication
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9481-6x.patch, SOLR-9481.patch, SOLR-9481.patch
>
>
> The BasicAuthPlugin currently only supports SolrCloud, and reads users and 
> credentials from ZK /security.json
> Add support for standalone mode operation






[jira] [Updated] (LUCENE-7676) FilterCodecReader to override more super-class methods

2017-02-22 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7676:

Fix Version/s: 6.4.2

> FilterCodecReader to override more super-class methods
> --
>
> Key: LUCENE-7676
> URL: https://issues.apache.org/jira/browse/LUCENE-7676
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0), 6.4.2
>
> Attachments: LUCENE-7676.patch
>
>







[jira] [Commented] (LUCENE-7676) FilterCodecReader to override more super-class methods

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878876#comment-15878876
 ] 

ASF subversion and git services commented on LUCENE-7676:
-

Commit b9b699fbebe9f3a0bb8397d0de9cc7f31faac98a in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b9b699f ]

LUCENE-7676: move CHANGES.txt entry from 6.5.0 to (newly created) 6.4.2 section.


> FilterCodecReader to override more super-class methods
> --
>
> Key: LUCENE-7676
> URL: https://issues.apache.org/jira/browse/LUCENE-7676
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7676.patch
>
>







[jira] [Commented] (LUCENE-7676) FilterCodecReader to override more super-class methods

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878854#comment-15878854
 ] 

ASF subversion and git services commented on LUCENE-7676:
-

Commit 8c12c19548b63c961ab0318c135500104580f869 in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8c12c19 ]

LUCENE-7676: move CHANGES.txt entry from 6.5.0 to (newly created) 6.4.2 section.


> FilterCodecReader to override more super-class methods
> --
>
> Key: LUCENE-7676
> URL: https://issues.apache.org/jira/browse/LUCENE-7676
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7676.patch
>
>







[jira] [Updated] (SOLR-10083) Fix instanceof check in ConstDoubleSource.equals

2017-02-22 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-10083:
---
Fix Version/s: 6.4.2

> Fix instanceof check in ConstDoubleSource.equals
> 
>
> Key: SOLR-10083
> URL: https://issues.apache.org/jira/browse/SOLR-10083
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0), 6.4.2
>
> Attachments: SOLR-10083.patch
>
>
> Splitting this out from the parent task for potential inclusion in 6.4.1 
> (though it might have just missed the train looks like, sorry).






[jira] [Commented] (SOLR-10083) Fix instanceof check in ConstDoubleSource.equals

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878832#comment-15878832
 ] 

ASF subversion and git services commented on SOLR-10083:


Commit 18a2509ae33fcba6d4037c6441c73b317206195a in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=18a2509 ]

SOLR-10083: move CHANGES.txt entry from 6.5.0 to 6.4.2 section.


> Fix instanceof check in ConstDoubleSource.equals
> 
>
> Key: SOLR-10083
> URL: https://issues.apache.org/jira/browse/SOLR-10083
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-10083.patch
>
>
> Splitting this out from the parent task for potential inclusion in 6.4.1 
> (though it might have just missed the train looks like, sorry).






[jira] [Resolved] (SOLR-10064) The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too fragile.

2017-02-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-10064.

   Resolution: Fixed
Fix Version/s: master (7.0)
   6.5

> The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too 
> fragile.
> ---
>
> Key: SOLR-10064
> URL: https://issues.apache.org/jira/browse/SOLR-10064
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
>
> HdfsCollectionsAPIDistributedZkTest 73.00% half–cracked 30.00 282.56 @Nightly






[jira] [Commented] (SOLR-10064) The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too fragile.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878829#comment-15878829
 ] 

ASF subversion and git services commented on SOLR-10064:


Commit 887d39ffd8ffbad5f04dafd41a86b55a79468502 in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=887d39f ]

SOLR-10064: The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be 
too fragile.


> The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too 
> fragile.
> ---
>
> Key: SOLR-10064
> URL: https://issues.apache.org/jira/browse/SOLR-10064
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
>
> HdfsCollectionsAPIDistributedZkTest 73.00% half–cracked 30.00 282.56 @Nightly






[jira] [Commented] (SOLR-10064) The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too fragile.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878826#comment-15878826
 ] 

ASF subversion and git services commented on SOLR-10064:


Commit 3357aab84221081a5460e62dd41a56c2008cd9a4 in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3357aab ]

SOLR-10064: The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be 
too fragile.

# Conflicts:
#   
solr/test-framework/src/java/org/apache/solr/cloud/SolrCloudTestCase.java


> The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too 
> fragile.
> ---
>
> Key: SOLR-10064
> URL: https://issues.apache.org/jira/browse/SOLR-10064
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> HdfsCollectionsAPIDistributedZkTest 73.00% half–cracked 30.00 282.56 @Nightly






[jira] [Commented] (SOLR-10064) The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too fragile.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878828#comment-15878828
 ] 

ASF subversion and git services commented on SOLR-10064:


Commit c9027adee85b73a368f6697e0799871fe5fe4385 in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c9027ad ]

SOLR-10064: The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be 
too fragile.


> The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too 
> fragile.
> ---
>
> Key: SOLR-10064
> URL: https://issues.apache.org/jira/browse/SOLR-10064
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> HdfsCollectionsAPIDistributedZkTest 73.00% half–cracked 30.00 282.56 @Nightly






[jira] [Commented] (SOLR-10064) The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too fragile.

2017-02-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15878827#comment-15878827
 ] 

ASF subversion and git services commented on SOLR-10064:


Commit 141ed719753fd603beeb7329071b9544110fb7ff in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=141ed71 ]

SOLR-10064: Lower block cache size to fit within default limits.


> The Nightly test HdfsCollectionsAPIDistributedZkTest appears to be too 
> fragile.
> ---
>
> Key: SOLR-10064
> URL: https://issues.apache.org/jira/browse/SOLR-10064
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> HdfsCollectionsAPIDistributedZkTest 73.00% half–cracked 30.00 282.56 @Nightly





