[jira] [Resolved] (SOLR-11069) CDCR bootstrapping can get into an infinite loop when a core is reloaded

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11069.
---
Resolution: Fixed

> CDCR bootstrapping can get into an infinite loop when a core is reloaded
> 
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.2, 6.3, 6.4, 6.5, 6.6, 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> The {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down due to 
> a poorly initialised and maintained buffer log on the core nodes of either the 
> source or the target cluster.
> If buffering is enabled for cores of either cluster, the action returns 
> {{-1}}, *irrespective of the number of tlog entries read by the {{leader}}* 
> node of each shard of the respective collection. Once buffering is disabled, 
> it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work as 
> expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is intended behavior for 
> sync, but it surely doesn't feel right.
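> For reference, a quick way to observe the behavior described above is to query 
> the CDCR endpoint directly (host, port, and collection name here are just 
> examples):
> {noformat}
> curl "http://localhost:8983/solr/<collection>/cdcr?action=LASTPROCESSEDVERSION"
> {noformat}
> With buffering enabled this reports {{-1}}; after {{action=DISABLEBUFFER}} it 
> reports the actual last processed version.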



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+181) - Build # 20331 - Unstable!

2017-08-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20331/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseG1GC --illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Doc with id=1 not found in 
http://127.0.0.1:46031/as_d/j/forceleader_test_collection due to: Path not 
found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:46031/as_d/j/forceleader_test_collection due to: Path not 
found: /id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([94EE9B8794E9B4E:EFD9DD7840CC622F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:556)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:142)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
 

[jira] [Updated] (SOLR-11249) Update Jetty to 9.3.20

2017-08-16 Thread Michael Braun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Braun updated SOLR-11249:
-
Attachment: SOLR-11249.patch

> Update Jetty to 9.3.20
> --
>
> Key: SOLR-11249
> URL: https://issues.apache.org/jira/browse/SOLR-11249
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Michael Braun
> Attachments: SOLR-11249.patch
>
>







[jira] [Updated] (SOLR-11249) Update Jetty to 9.3.20

2017-08-16 Thread Michael Braun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Braun updated SOLR-11249:
-
Flags: Patch

> Update Jetty to 9.3.20
> --
>
> Key: SOLR-11249
> URL: https://issues.apache.org/jira/browse/SOLR-11249
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Michael Braun
> Attachments: SOLR-11249.patch
>
>







[jira] [Created] (SOLR-11249) Update Jetty to 9.3.20

2017-08-16 Thread Michael Braun (JIRA)
Michael Braun created SOLR-11249:


 Summary: Update Jetty to 9.3.20
 Key: SOLR-11249
 URL: https://issues.apache.org/jira/browse/SOLR-11249
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (8.0)
Reporter: Michael Braun









[jira] [Comment Edited] (SOLR-10628) Less verbose output from bin/solr create

2017-08-16 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129800#comment-16129800
 ] 

Jason Gerlowski edited comment on SOLR-10628 at 8/17/17 2:58 AM:
-

Hey [~janhoy], will you have a chance to take a look at this anytime coming up?  
(I'm fine with letting this sit; I just wanted to make sure it didn't get lost 
by accident in the bustle)


was (Author: gerlowskija):
Hey [~janhoy], will you have a chance to take a look at this anytime coming up?  
(Just wanted to make sure it didn't get lost in the shuffle)

> Less verbose output from bin/solr create
> 
>
> Key: SOLR-10628
> URL: https://issues.apache.org/jira/browse/SOLR-10628
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Attachments: SOLR-10628.patch, SOLR-10628.patch, SOLR-10628.patch, 
> SOLR-10628.patch, solr_script_outputs.txt, updated_command_output.txt
>
>
> Creating a collection with {{bin/solr create}} today is too verbose:
> {noformat}
> $ bin/solr create -c foo
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2017-05-08 09:06:54.409; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Uploading 
> /Users/janhoy/git/lucene-solr/solr/server/solr/configsets/data_driven_schema_configs/conf
>  for config foo to ZooKeeper at localhost:9983
> Creating new collection 'foo' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE=foo=1=1=1=foo
> {
>   "responseHeader":{
> "status":0,
> "QTime":4178},
>   "success":{"192.168.127.248:8983_solr":{
>   "responseHeader":{
> "status":0,
> "QTime":2959},
>   "core":"foo_shard1_replica1"}}}
> {noformat}
> A normal user doesn't need all this info. I propose moving all the details to 
> verbose mode ({{-V}}) and letting the default be the following instead:
> {noformat}
> $ bin/solr create -c foo
> Connecting to ZooKeeper at localhost:9983 ...
> Created collection 'foo' with 1 shard(s), 1 replica(s) using config-set 
> 'data_driven_schema_configs'
> {noformat}
> Error messages must of course still be verbose.






[jira] [Commented] (SOLR-10628) Less verbose output from bin/solr create

2017-08-16 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129800#comment-16129800
 ] 

Jason Gerlowski commented on SOLR-10628:


Hey [~janhoy], will you have a chance to take a look at this anytime coming up?  
(Just wanted to make sure it didn't get lost in the shuffle)

> Less verbose output from bin/solr create
> 
>
> Key: SOLR-10628
> URL: https://issues.apache.org/jira/browse/SOLR-10628
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Attachments: SOLR-10628.patch, SOLR-10628.patch, SOLR-10628.patch, 
> SOLR-10628.patch, solr_script_outputs.txt, updated_command_output.txt
>
>
> Creating a collection with {{bin/solr create}} today is too verbose:
> {noformat}
> $ bin/solr create -c foo
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2017-05-08 09:06:54.409; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Uploading 
> /Users/janhoy/git/lucene-solr/solr/server/solr/configsets/data_driven_schema_configs/conf
>  for config foo to ZooKeeper at localhost:9983
> Creating new collection 'foo' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE=foo=1=1=1=foo
> {
>   "responseHeader":{
> "status":0,
> "QTime":4178},
>   "success":{"192.168.127.248:8983_solr":{
>   "responseHeader":{
> "status":0,
> "QTime":2959},
>   "core":"foo_shard1_replica1"}}}
> {noformat}
> A normal user doesn't need all this info. I propose moving all the details to 
> verbose mode ({{-V}}) and letting the default be the following instead:
> {noformat}
> $ bin/solr create -c foo
> Connecting to ZooKeeper at localhost:9983 ...
> Created collection 'foo' with 1 shard(s), 1 replica(s) using config-set 
> 'data_driven_schema_configs'
> {noformat}
> Error messages must of course still be verbose.






[jira] [Comment Edited] (SOLR-11206) Migrate logic from bin/solr scripts to SolrCLI

2017-08-16 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129789#comment-16129789
 ] 

Jason Gerlowski edited comment on SOLR-11206 at 8/17/17 2:40 AM:
-

My access to Windows is spotty, and so I haven't been able to get an 
output-benchmark from a Windows machine yet, though I should have access and 
time to do so tomorrow.

In the meantime, I'm uploading a proof-of-concept patch for one of the commands 
supported by the control-scripts ("create").

Notes/caveats on the patch:
- I chose "create" because it had enough arguments to demonstrate the value in 
the change.
- As I mentioned above, I haven't had Windows access recently, so there might 
be issues with the {{bin/solr.cmd}} changes, though they are accurate enough 
to show the approach.
- As-is, the patch matches command output on success, but error messages about 
missing/invalid arguments don't line up exactly with the pre-patch code.  The 
argument parsing in Java-land uses the commons-cli library, which makes the 
parsing concise/convenient, but ties us to the error-message format dictated by 
the library.  I'm curious what the backward-compatibility expectations are 
around the output of the bin/solr scripts.  I've heard guidelines for the Java 
code, and for API output, but not for the control scripts.  We can match all 
stdout output if we eschew commons-cli, but the library is so standard and 
makes the code so maintainable that I'd like to lobby for using it if it 
doesn't stretch/break our backward-compatibility promises/expectations.  Could 
use some guidance here.


was (Author: gerlowskija):
My access to Windows is spotty, and so I haven't been able to get an 
output-benchmark from a Windows machine yet, though I should have access and 
time to do so tomorrow.

In the meantime, I'm uploading a proof-of-concept patch for one of the commands 
supported by the control-scripts ("create").

Notes/caveats on the patch:
- I chose "create" because it had enough arguments to demonstrate the value in 
the change.
- As I mentioned above, I haven't had Windows access recently, so there might 
be issues with the {{bin/solr.cmd}} changes, though they are accurate enough 
to show the approach.
- As-is, the patch matches command output on success, but error messages about 
missing/invalid arguments don't line up exactly with the pre-patch code.  The 
argument parsing in Java-land uses the commons-cli library, which makes the 
parsing concise/convenient, but ties us to the error-message format dictated by 
the library.  I'm curious what the backward-compatibility expectations are 
around the output of the bin/solr scripts.  I've heard guidelines for the Java 
code, and for API output, but not for the control scripts.  We can match all 
stdout output if we eschew commons-cli, but the library is so standard and 
makes the code so maintainable that I'd like to lobby for using it if it 
doesn't stretch/break our backward-compatibility promises/expectations.

> Migrate logic from bin/solr scripts to SolrCLI
> --
>
> Key: SOLR-11206
> URL: https://issues.apache.org/jira/browse/SOLR-11206
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jason Gerlowski
> Fix For: master (8.0)
>
> Attachments: ctrl-script-output-benchmark.sh, 
> linux-initial-output.txt, SOLR-11206.patch
>
>
> The {{bin/solr}} and {{bin/solr.cmd}} scripts have taken on a lot of logic 
> that would be easier to maintain if it were instead written in Java, for a 
> handful of reasons:
> * Any logic in the control scripts is duplicated in two places by definition.
> * Increasing test coverage of this logic would be much easier if it was 
> written in Java.
> * Few developers are conversant in both bash and Windows-shell, making 
> editing difficult.
> Some sections in these scripts make good candidates for migration to Java.  
> This issue should examine any of these that are brought up.  However the 
> biggest and most obvious candidate for migration is the argument parsing, 
> validation, usage/help text, etc. for the commands that don't directly 
> start/stop Solr processes (i.e. the "create", "delete", "zk", "auth", 
> "assert" commands).






[jira] [Updated] (SOLR-11206) Migrate logic from bin/solr scripts to SolrCLI

2017-08-16 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-11206:
---
Attachment: SOLR-11206.patch

My access to Windows is spotty, and so I haven't been able to get an 
output-benchmark from a Windows machine yet, though I should have access and 
time to do so tomorrow.

In the meantime, I'm uploading a proof-of-concept patch for one of the commands 
supported by the control-scripts ("create").

Notes/caveats on the patch:
- I chose "create" because it had enough arguments to demonstrate the value in 
the change.
- As I mentioned above, I haven't had Windows access recently, so there might 
be issues with the {{bin/solr.cmd}} changes, though they are accurate enough 
to show the approach.
- As-is, the patch matches command output on success, but error messages about 
missing/invalid arguments don't line up exactly with the pre-patch code.  The 
argument parsing in Java-land uses the commons-cli library, which makes the 
parsing concise/convenient, but ties us to the error-message format dictated by 
the library.  I'm curious what the backward-compatibility expectations are 
around the output of the bin/solr scripts.  I've heard guidelines for the Java 
code, and for API output, but not for the control scripts.  We can match all 
stdout output if we eschew commons-cli, but the library is so standard and 
makes the code so maintainable that I'd like to lobby for using it if it 
doesn't stretch/break our backward-compatibility promises/expectations.
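To make the trade-off concrete, here is a minimal sketch of what commons-cli-based parsing for "create" could look like. The option names and defaults below are illustrative only, not the actual SolrCLI arguments:

```java
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class CreateArgsSketch {
    public static void main(String[] args) throws ParseException {
        Options options = new Options();
        options.addOption(Option.builder("c").longOpt("name").hasArg()
                .desc("collection name").required().build());
        options.addOption(Option.builder("s").longOpt("shards").hasArg()
                .desc("number of shards (default 1)").build());

        // commons-cli handles tokenizing, required-argument checks, and
        // defaults; a missing -c throws ParseException with the library's
        // own message format -- the backward-compat concern raised above.
        CommandLine cli = new DefaultParser().parse(options,
                new String[]{"-c", "foo", "-s", "2"});
        System.out.println(cli.getOptionValue("name"));        // foo
        System.out.println(cli.getOptionValue("shards", "1")); // 2
    }
}
```

The upside is that the option table doubles as usage/help text via {{HelpFormatter}}; the downside, as noted, is that error wording is dictated by the library.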

> Migrate logic from bin/solr scripts to SolrCLI
> --
>
> Key: SOLR-11206
> URL: https://issues.apache.org/jira/browse/SOLR-11206
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jason Gerlowski
> Fix For: master (8.0)
>
> Attachments: ctrl-script-output-benchmark.sh, 
> linux-initial-output.txt, SOLR-11206.patch
>
>
> The {{bin/solr}} and {{bin/solr.cmd}} scripts have taken on a lot of logic 
> that would be easier to maintain if it were instead written in Java, for a 
> handful of reasons:
> * Any logic in the control scripts is duplicated in two places by definition.
> * Increasing test coverage of this logic would be much easier if it was 
> written in Java.
> * Few developers are conversant in both bash and Windows-shell, making 
> editing difficult.
> Some sections in these scripts make good candidates for migration to Java.  
> This issue should examine any of these that are brought up.  However the 
> biggest and most obvious candidate for migration is the argument parsing, 
> validation, usage/help text, etc. for the commands that don't directly 
> start/stop Solr processes (i.e. the "create", "delete", "zk", "auth", 
> "assert" commands).






[jira] [Commented] (SOLR-11069) CDCR bootstrapping can get into an infinite loop when a core is reloaded

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129772#comment-16129772
 ] 

ASF subversion and git services commented on SOLR-11069:


Commit c7f9fcea4b4455c921987e4447b68cdbe046e2f6 in lucene-solr's branch 
refs/heads/branch_6_6 from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c7f9fce ]

SOLR-11069: CDCR bootstrapping can get into an infinite loop when a core is 
reloaded

(cherry picked from commit ac97931c7e5800b2e314545f54c4d524eb69b73b)


> CDCR bootstrapping can get into an infinite loop when a core is reloaded
> 
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.2, 6.3, 6.4, 6.5, 6.6, 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> The {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down due to 
> a poorly initialised and maintained buffer log on the core nodes of either the 
> source or the target cluster.
> If buffering is enabled for cores of either cluster, the action returns 
> {{-1}}, *irrespective of the number of tlog entries read by the {{leader}}* 
> node of each shard of the respective collection. Once buffering is disabled, 
> it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work as 
> expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is intended behavior for 
> sync, but it surely doesn't feel right.






[jira] [Updated] (SOLR-11069) CDCR bootstrapping can get into an infinite loop when a core is reloaded

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11069:
--
Fix Version/s: 7.1
   master (8.0)
   6.6.1
   6.7
   7.0

> CDCR bootstrapping can get into an infinite loop when a core is reloaded
> 
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.2, 6.3, 6.4, 6.5, 6.6, 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> The {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down due to 
> a poorly initialised and maintained buffer log on the core nodes of either the 
> source or the target cluster.
> If buffering is enabled for cores of either cluster, the action returns 
> {{-1}}, *irrespective of the number of tlog entries read by the {{leader}}* 
> node of each shard of the respective collection. Once buffering is disabled, 
> it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work as 
> expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is intended behavior for 
> sync, but it surely doesn't feel right.






[jira] [Commented] (SOLR-11069) CDCR bootstrapping can get into an infinite loop when a core is reloaded

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129767#comment-16129767
 ] 

ASF subversion and git services commented on SOLR-11069:


Commit e2477ecce2503f7c4f69ac1966c49691a3c977b8 in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e2477ec ]

SOLR-11069: CDCR bootstrapping can get into an infinite loop when a core is 
reloaded

(cherry picked from commit ac97931c7e5800b2e314545f54c4d524eb69b73b)


> CDCR bootstrapping can get into an infinite loop when a core is reloaded
> 
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.2, 6.3, 6.4, 6.5, 6.6, 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> The {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down due to 
> a poorly initialised and maintained buffer log on the core nodes of either the 
> source or the target cluster.
> If buffering is enabled for cores of either cluster, the action returns 
> {{-1}}, *irrespective of the number of tlog entries read by the {{leader}}* 
> node of each shard of the respective collection. Once buffering is disabled, 
> it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work as 
> expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is intended behavior for 
> sync, but it surely doesn't feel right.






[jira] [Commented] (SOLR-11069) CDCR bootstrapping can get into an infinite loop when a core is reloaded

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129614#comment-16129614
 ] 

ASF subversion and git services commented on SOLR-11069:


Commit 34139f7deb698611046263503272267179c0d315 in lucene-solr's branch 
refs/heads/branch_7_0 from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=34139f7 ]

SOLR-11069: CDCR bootstrapping can get into an infinite loop when a core is 
reloaded

(cherry picked from commit ac97931c7e5800b2e314545f54c4d524eb69b73b)


> CDCR bootstrapping can get into an infinite loop when a core is reloaded
> 
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.2, 6.3, 6.4, 6.5, 6.6, 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> The {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down due to 
> a poorly initialised and maintained buffer log on the core nodes of either the 
> source or the target cluster.
> If buffering is enabled for cores of either cluster, the action returns 
> {{-1}}, *irrespective of the number of tlog entries read by the {{leader}}* 
> node of each shard of the respective collection. Once buffering is disabled, 
> it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work as 
> expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is intended behavior for 
> sync, but it surely doesn't feel right.






[jira] [Commented] (SOLR-11069) CDCR bootstrapping can get into an infinite loop when a core is reloaded

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129591#comment-16129591
 ] 

ASF subversion and git services commented on SOLR-11069:


Commit a850749ab32e57d0bd96a8517798febeaad9dec1 in lucene-solr's branch 
refs/heads/branch_7x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a850749 ]

SOLR-11069: CDCR bootstrapping can get into an infinite loop when a core is 
reloaded

(cherry picked from commit ac97931)


> CDCR bootstrapping can get into an infinite loop when a core is reloaded
> 
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.2, 6.3, 6.4, 6.5, 6.6, 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> The {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down due to 
> a poorly initialised and maintained buffer log on the core nodes of either the 
> source or the target cluster.
> If buffering is enabled for cores of either cluster, the action returns 
> {{-1}}, *irrespective of the number of tlog entries read by the {{leader}}* 
> node of each shard of the respective collection. Once buffering is disabled, 
> it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work as 
> expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is intended behavior for 
> sync, but it surely doesn't feel right.






[jira] [Commented] (SOLR-11069) CDCR bootstrapping can get into an infinite loop when a core is reloaded

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129578#comment-16129578
 ] 

ASF subversion and git services commented on SOLR-11069:


Commit ac97931c7e5800b2e314545f54c4d524eb69b73b in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ac97931 ]

SOLR-11069: CDCR bootstrapping can get into an infinite loop when a core is 
reloaded


> CDCR bootstrapping can get into an infinite loop when a core is reloaded
> 
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.2, 6.3, 6.4, 6.5, 6.6, 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down due to 
> a poorly initialised and maintained buffer log for either source or target 
> cluster core nodes.
> If buffering is enabled for cores of either the source or target cluster, it 
> returns {{-1}}, *irrespective of the number of entries in the tlog read by the 
> {{leader}}* node of each shard of the respective collection. Once buffering is 
> disabled, it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work as 
> expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is the intended behavior for 
> sync, but it surely doesn't feel right.






[jira] [Updated] (SOLR-11069) CDCR bootstrapping can get into an infinite loop when a core is reloaded

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11069:
--
Affects Version/s: 6.2
   6.3
   6.4
   6.5
   6.6

> CDCR bootstrapping can get into an infinite loop when a core is reloaded
> 
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.2, 6.3, 6.4, 6.5, 6.6, 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down due to 
> a poorly initialised and maintained buffer log for either source or target 
> cluster core nodes.
> If buffering is enabled for cores of either the source or target cluster, it 
> returns {{-1}}, *irrespective of the number of entries in the tlog read by the 
> {{leader}}* node of each shard of the respective collection. Once buffering is 
> disabled, it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work as 
> expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is the intended behavior for 
> sync, but it surely doesn't feel right.






[jira] [Updated] (SOLR-11069) CDCR bootstrapping can get into an infinite loop when a core is reloaded

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11069:
--
Summary: CDCR bootstrapping can get into an infinite loop when a core is 
reloaded  (was: LASTPROCESSEDVERSION for CDCR is flawed when buffering is 
enabled)

> CDCR bootstrapping can get into an infinite loop when a core is reloaded
> 
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down due to 
> a poorly initialised and maintained buffer log for either source or target 
> cluster core nodes.
> If buffering is enabled for cores of either the source or target cluster, it 
> returns {{-1}}, *irrespective of the number of entries in the tlog read by the 
> {{leader}}* node of each shard of the respective collection. Once buffering is 
> disabled, it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work as 
> expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is the intended behavior for 
> sync, but it surely doesn't feel right.






[jira] [Closed] (SOLR-11248) Spatial query returning everything for pt(0,0) d=0

2017-08-16 Thread Vaibhav Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Patel closed SOLR-11248.

Resolution: Fixed

> Spatial query returning everything for pt(0,0) d=0
> --
>
> Key: SOLR-11248
> URL: https://issues.apache.org/jira/browse/SOLR-11248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
> Environment: solr-spec
> 6.6.0
> solr-impl
> 6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:32:53
> lucene-spec
> 6.6.0
> lucene-impl
> 6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46
>Reporter: Vaibhav Patel
>Priority: Blocker
>
> There is an edge case: when I specify pt=0,0 and d=0, it seems to return 
> everything. The query looks like this: 
> http://localhost:8983/solr/person_core_420_us/select?d=0&fq={!geofilt}&indent=on&pt=0,0&q=*:*&sfield=home_location&wt=json
> Other distance queries work fine. Can someone confirm this please?
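
For reference, a well-formed version of the geofilt request from the report can be assembled as below. This is a sketch: the parameter names are the standard Solr spatial parameters inferred from the values in the (garbled) URL above, and the host/core are taken from the report:

```python
from urllib.parse import urlencode

# Core name as it appears in the original report.
base = "http://localhost:8983/solr/person_core_420_us/select"

params = {
    "q": "*:*",
    "fq": "{!geofilt}",         # spatial filter-query parser
    "sfield": "home_location",  # location field to filter on
    "pt": "0,0",                # centre point (lat,lon)
    "d": "0",                   # distance in km
    "wt": "json",
    "indent": "on",
}
url = f"{base}?{urlencode(params)}"
print(url)
```

As the follow-up comment notes, documents whose latitude/longitude values are null may effectively sit at (0,0), which is why pt=0,0 with d=0 appeared to match everything.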






[jira] [Commented] (SOLR-11248) Spatial query returning everything for pt(0,0) d=0

2017-08-16 Thread Vaibhav Patel (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129480#comment-16129480
 ] 

Vaibhav Patel commented on SOLR-11248:
--

Actually that was happening because all of my longitude/latitude values were 
null.


Found this:
Using distance projection on non-@Spatial-enabled entities and/or with a 
non-spatial Query will have unexpected results, as entities not spatially 
indexed and/or having null values for latitude or longitude will be considered 
to be at (0,0)/(lat,0)/(0,long).

> Spatial query returning everything for pt(0,0) d=0
> --
>
> Key: SOLR-11248
> URL: https://issues.apache.org/jira/browse/SOLR-11248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
> Environment: solr-spec
> 6.6.0
> solr-impl
> 6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:32:53
> lucene-spec
> 6.6.0
> lucene-impl
> 6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46
>Reporter: Vaibhav Patel
>Priority: Blocker
>
> There is an edge case: when I specify pt=0,0 and d=0, it seems to return 
> everything. The query looks like this: 
> http://localhost:8983/solr/person_core_420_us/select?d=0&fq={!geofilt}&indent=on&pt=0,0&q=*:*&sfield=home_location&wt=json
> Other distance queries work fine. Can someone confirm this please?






[jira] [Commented] (SOLR-11177) CoreContainer.load needs to send lazily loaded core descriptors to the proper list rather than send them all to the transient lists.

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129465#comment-16129465
 ] 

ASF subversion and git services commented on SOLR-11177:


Commit 2c1f3fd1ed83066cf60240b87a6193392bbc2a9e in lucene-solr's branch 
refs/heads/branch_6_6 from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2c1f3fd ]

SOLR-11177: CoreContainer.load needs to send lazily loaded core descriptors to 
the proper list rather than send them all to the transient lists.

(cherry picked from commit bf168ad37e4326be28950ede8f958b6c3f1330fa)


> CoreContainer.load needs to send lazily loaded core descriptors to the proper 
> list rather than send them all to the transient lists.
> 
>
> Key: SOLR-11177
> URL: https://issues.apache.org/jira/browse/SOLR-11177
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11177.patch
>
>
> I suspect this is a minor issue (at least nobody has reported it) but I'm 
> trying to put a bow around transient core handling so I want to examine this 
> more closely.






[jira] [Commented] (SOLR-11177) CoreContainer.load needs to send lazily loaded core descriptors to the proper list rather than send them all to the transient lists.

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129463#comment-16129463
 ] 

ASF subversion and git services commented on SOLR-11177:


Commit c73b5429b722b09b9353ec82627a35e2b864b823 in lucene-solr's branch 
refs/heads/branch_7_0 from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c73b542 ]

SOLR-11177: CoreContainer.load needs to send lazily loaded core descriptors to 
the proper list rather than send them all to the transient lists.

(cherry picked from commit bf168ad37e4326be28950ede8f958b6c3f1330fa)


> CoreContainer.load needs to send lazily loaded core descriptors to the proper 
> list rather than send them all to the transient lists.
> 
>
> Key: SOLR-11177
> URL: https://issues.apache.org/jira/browse/SOLR-11177
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11177.patch
>
>
> I suspect this is a minor issue (at least nobody has reported it) but I'm 
> trying to put a bow around transient core handling so I want to examine this 
> more closely.






[jira] [Updated] (SOLR-11177) CoreContainer.load needs to send lazily loaded core descriptors to the proper list rather than send them all to the transient lists.

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11177:
--
Fix Version/s: 6.6.1
   7.0

> CoreContainer.load needs to send lazily loaded core descriptors to the proper 
> list rather than send them all to the transient lists.
> 
>
> Key: SOLR-11177
> URL: https://issues.apache.org/jira/browse/SOLR-11177
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11177.patch
>
>
> I suspect this is a minor issue (at least nobody has reported it) but I'm 
> trying to put a bow around transient core handling so I want to examine this 
> more closely.






[jira] [Commented] (SOLR-11122) Creating a core should write a core.properties file first and clean up on failure

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129444#comment-16129444
 ] 

ASF subversion and git services commented on SOLR-11122:


Commit ea427f1ac5014d593712a62113add43fe1e28cbb in lucene-solr's branch 
refs/heads/branch_6_6 from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ea427f1 ]

SOLR-11122: Creating a core should write a core.properties file first and clean 
up on failure

(cherry picked from commit 4041f8a1c97d9703b5d38b65e842e57cb359da64)


> Creating a core should write a core.properties file first and clean up on 
> failure
> -
>
> Key: SOLR-11122
> URL: https://issues.apache.org/jira/browse/SOLR-11122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11122.patch, SOLR-11122.patch
>
>
> I've made the handling of core.properties more consistent as part of the 
> pluggable transient core work. However, a new inconsistency came to light. 
> Most of the code assumes that a core.properties file exists, but it wasn't 
> being persisted until the very end of the coreContainer.create process. So 
> any steps part way through core creation that would manipulate the 
> core.properties file wouldn't find it. And if those steps did make a mistake 
> and call persist on core.properties, create would fail because the 
> core.properties file would already exist. Worse, the transient cache handler had 
> no way of knowing whether the core descriptors being added were from create 
> (where the core.properties file hadn't been created yet) or 
> reload/swap/rename. By moving persisting the core.properties earlier in the 
> create process this would be less trappy.
> Any core.properties file created during this process will be removed if the 
> create fails.
> Cores that are simply being _loaded_ on the other hand do _not_ have their 
> core.properties files removed.






[jira] [Commented] (SOLR-11122) Creating a core should write a core.properties file first and clean up on failure

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129442#comment-16129442
 ] 

ASF subversion and git services commented on SOLR-11122:


Commit 2ae77e297a54451a39a407179672123f98024d12 in lucene-solr's branch 
refs/heads/branch_7_0 from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2ae77e2 ]

SOLR-11122: Creating a core should write a core.properties file first and clean 
up on failure

(cherry picked from commit 4041f8a1c97d9703b5d38b65e842e57cb359da64)


> Creating a core should write a core.properties file first and clean up on 
> failure
> -
>
> Key: SOLR-11122
> URL: https://issues.apache.org/jira/browse/SOLR-11122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11122.patch, SOLR-11122.patch
>
>
> I've made the handling of core.properties more consistent as part of the 
> pluggable transient core work. However, a new inconsistency came to light. 
> Most of the code assumes that a core.properties file exists, but it wasn't 
> being persisted until the very end of the coreContainer.create process. So 
> any steps part way through core creation that would manipulate the 
> core.properties file wouldn't find it. And if those steps did make a mistake 
> and call persist on core.properties, create would fail because the 
> core.properties file would already exist. Worse, the transient cache handler had 
> no way of knowing whether the core descriptors being added were from create 
> (where the core.properties file hadn't been created yet) or 
> reload/swap/rename. By moving persisting the core.properties earlier in the 
> create process this would be less trappy.
> Any core.properties file created during this process will be removed if the 
> create fails.
> Cores that are simply being _loaded_ on the other hand do _not_ have their 
> core.properties files removed.






[jira] [Updated] (SOLR-11122) Creating a core should write a core.properties file first and clean up on failure

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11122:
--
Fix Version/s: 6.6.1
   7.0

> Creating a core should write a core.properties file first and clean up on 
> failure
> -
>
> Key: SOLR-11122
> URL: https://issues.apache.org/jira/browse/SOLR-11122
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11122.patch, SOLR-11122.patch
>
>
> I've made the handling of core.properties more consistent as part of the 
> pluggable transient core work. However, a new inconsistency came to light. 
> Most of the code assumes that a core.properties file exists, but it wasn't 
> being persisted until the very end of the coreContainer.create process. So 
> any steps part way through core creation that would manipulate the 
> core.properties file wouldn't find it. And if those steps did make a mistake 
> and call persist on core.properties, create would fail because the 
> core.properties file would already exist. Worse, the transient cache handler had 
> no way of knowing whether the core descriptors being added were from create 
> (where the core.properties file hadn't been created yet) or 
> reload/swap/rename. By moving persisting the core.properties earlier in the 
> create process this would be less trappy.
> Any core.properties file created during this process will be removed if the 
> create fails.
> Cores that are simply being _loaded_ on the other hand do _not_ have their 
> core.properties files removed.






[jira] [Commented] (SOLR-10842) Move quickstart.html to Ref Guide

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129431#comment-16129431
 ] 

ASF subversion and git services commented on SOLR-10842:


Commit 85678d84ec774d0c02ccab874a69305f710d1272 in lucene-solr's branch 
refs/heads/branch_7_0 from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=85678d8 ]

SOLR-10842: Move Tutorial to Ref Guide


> Move quickstart.html to Ref Guide
> -
>
> Key: SOLR-10842
> URL: https://issues.apache.org/jira/browse/SOLR-10842
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.0
>
> Attachments: SOLR-10842.patch
>
>
> The Solr Quick Start at https://lucene.apache.org/solr/quickstart.html has 
> been problematic to keep up to date - until Ishan just updated it yesterday 
> for 6.6, it said "6.2.1", so it hadn't been updated for several releases.
> Now that the Ref Guide is in AsciiDoc format, we can easily use variables for 
> package versions, and it could be released as part of the Ref Guide and kept 
> up to date. It could also integrate links to more information on topics, and 
> users would already be IN the docs, so they would not need to wonder where 
> the docs are.
> There are a few places on the site that will need to be updated to point to 
> the new location, but I can also put a redirect rule into .htaccess so people 
> are redirected to the new location if there are other links "in the wild" 
> that we cannot control. This allows it to be versioned also, if that becomes 
> necessary.
> As part of this, I would like to also update the entire "Getting Started" 
> section of the Ref Guide, which is effectively identical to what was in the 
> first release of the Ref Guide back in 2009 for Solr 1.4 and is in serious 
> need of reconsideration.
> My thought is that moving the page + redoing the Getting Started section 
> would be for 7.0, but if folks are excited about this idea I could move the 
> page for 6.6 and hold off redoing the larger section until 7.0.






[jira] [Commented] (SOLR-10842) Move quickstart.html to Ref Guide

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129426#comment-16129426
 ] 

ASF subversion and git services commented on SOLR-10842:


Commit 7eea63b45455a76e48c59c912853e6d4a0419ce4 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7eea63b ]

SOLR-10842: Move Tutorial to Ref Guide


> Move quickstart.html to Ref Guide
> -
>
> Key: SOLR-10842
> URL: https://issues.apache.org/jira/browse/SOLR-10842
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.0
>
> Attachments: SOLR-10842.patch
>
>
> The Solr Quick Start at https://lucene.apache.org/solr/quickstart.html has 
> been problematic to keep up to date - until Ishan just updated it yesterday 
> for 6.6, it said "6.2.1", so it hadn't been updated for several releases.
> Now that the Ref Guide is in AsciiDoc format, we can easily use variables for 
> package versions, and it could be released as part of the Ref Guide and kept 
> up to date. It could also integrate links to more information on topics, and 
> users would already be IN the docs, so they would not need to wonder where 
> the docs are.
> There are a few places on the site that will need to be updated to point to 
> the new location, but I can also put a redirect rule into .htaccess so people 
> are redirected to the new location if there are other links "in the wild" 
> that we cannot control. This allows it to be versioned also, if that becomes 
> necessary.
> As part of this, I would like to also update the entire "Getting Started" 
> section of the Ref Guide, which is effectively identical to what was in the 
> first release of the Ref Guide back in 2009 for Solr 1.4 and is in serious 
> need of reconsideration.
> My thought is that moving the page + redoing the Getting Started section 
> would be for 7.0, but if folks are excited about this idea I could move the 
> page for 6.6 and hold off redoing the larger section until 7.0.






[jira] [Commented] (SOLR-10721) Provide a way to know when Core Discovery is finished and when all async cores are done loading

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129422#comment-16129422
 ] 

ASF subversion and git services commented on SOLR-10721:


Commit 02c1b75d44ecd4d17a71fe48978b79bc04d872be in lucene-solr's branch 
refs/heads/branch_6_6 from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=02c1b75 ]

SOLR-10721: Provide a way to know when Core Discovery is finished and when all 
async cores are done loading


> Provide a way to know when Core Discovery is finished and when all async 
> cores are done loading
> ---
>
> Key: SOLR-10721
> URL: https://issues.apache.org/jira/browse/SOLR-10721
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.0, 6.7, 6.6.1
>
> Attachments: SOLR-10721.patch, SOLR-10721.patch
>
>
> Custom transient core implementations could benefit from knowing two things:
> 1> that core discovery is over
> 2> that all cores that are going to be loaded have been loaded, i.e. all 
> loadOnStartup cores are done.
> It should be trivial to add a method to CoreContainer like "isLoaded" that 
> would answer the first question since you can't get past the load() method 
> without all the cores being discovered. I think this is a more generally 
> useful bit of information than just core discovery is done.
> As for the second, that too seems trivial, just add a method to CoreContainer 
> that returns the number of entries in SolrCores.currentlyLoadingCores.
> I'll add this in a few days unless there are objections.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 272 - Still Unstable!

2017-08-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/272/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderElectionContextKeyTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([F4988EC953D2F9BD:7CCCB113FD2E9445]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.LeaderElectionContextKeyTest.test(LeaderElectionContextKeyTest.java:88)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:42243/collMinRf_1x3 due to: Path 
not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:42243/collMinRf_1x3 due to: Path not 

[jira] [Updated] (SOLR-11248) Spatial query returning everything for pt(0,0) d=0

2017-08-16 Thread Vaibhav Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Patel updated SOLR-11248:
-
Priority: Blocker  (was: Minor)

> Spatial query returning everything for pt(0,0) d=0
> --
>
> Key: SOLR-11248
> URL: https://issues.apache.org/jira/browse/SOLR-11248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
> Environment: solr-spec
> 6.6.0
> solr-impl
> 6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:32:53
> lucene-spec
> 6.6.0
> lucene-impl
> 6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46
>Reporter: Vaibhav Patel
>Priority: Blocker
>
> There is an edge case: when I specify pt=0,0 and d=0, it seems to return
> everything. It looks like this
> (http://localhost:8983/solr/person_core_420_us/select?d=0={!geofilt}=on=0,0=*:*=home_location=json).
> Other distance queries work fine. Can someone confirm this please?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10721) Provide a way to know when Core Discovery is finished and when all async cores are done loading

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10721:
--
Fix Version/s: 6.6.1

> Provide a way to know when Core Discovery is finished and when all async 
> cores are done loading
> ---
>
> Key: SOLR-10721
> URL: https://issues.apache.org/jira/browse/SOLR-10721
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.0, 6.7, 6.6.1
>
> Attachments: SOLR-10721.patch, SOLR-10721.patch
>
>
> Custom transient core implementations could benefit from knowing two things:
> 1> that core discovery is over
> 2> that all cores that are going to be loaded have been loaded, i.e. all 
> loadOnStartup cores are done.
> It should be trivial to add a method to CoreContainer like "isLoaded" that 
> would answer the first question since you can't get past the load() method 
> without all the cores being discovered. I think this is a more generally 
> useful bit of information than just core discovery is done.
> As for the second, that too seems trivial, just add a method to CoreContainer 
> that returns the number of entries in SolrCores.currentlyLoadingCores.
> I'll add this in a few days unless there are objections.
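A minimal sketch of the two accessors proposed above. This is an illustrative model only: the class name, the counter fields, and every method name other than "isLoaded" are assumptions, not the actual CoreContainer/SolrCores API.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical container tracking the two states described above:
// 1> core discovery finished, 2> async loadOnStartup cores still loading.
public class CoreLoadStatusSketch {
  private final AtomicBoolean discoveryDone = new AtomicBoolean(false);
  private final AtomicInteger loading = new AtomicInteger(0);

  void startLoadingCore() { loading.incrementAndGet(); }
  void finishLoadingCore() { loading.decrementAndGet(); }
  void finishDiscovery() { discoveryDone.set(true); }

  // 1> true once load() has scanned all core.properties files
  boolean isLoaded() { return discoveryDone.get(); }

  // 2> number of loadOnStartup cores still loading asynchronously,
  // i.e. the size of something like SolrCores.currentlyLoadingCores
  int numCoresLoading() { return loading.get(); }

  public static void main(String[] args) {
    CoreLoadStatusSketch c = new CoreLoadStatusSketch();
    c.startLoadingCore();
    c.finishDiscovery();
    System.out.println(c.isLoaded() + " " + c.numCoresLoading()); // true 1
    c.finishLoadingCore();
    System.out.println(c.numCoresLoading()); // 0
  }
}
```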






[jira] [Updated] (SOLR-10908) CloudSolrStream.toExpression incorrectly handles fq clauses

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10908:
--
Fix Version/s: 7.1
   6.6.1

> CloudSolrStream.toExpression incorrectly handles fq clauses
> ---
>
> Key: SOLR-10908
> URL: https://issues.apache.org/jira/browse/SOLR-10908
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-10229.patch, SOLR-10908.patch, SOLR-10908.patch
>
>
> toExpression in at least CloudSolrStream concatenates parameters in a 
> comma-separated list. This is fine for things like sorting but incorrect for 
> fq clauses. If my input is something like
> fq=condition1
> fq=condition2
> it winds up being something like
> fq=condition1,condition2
> I've seen it in this class for this parameter, other classes and other 
> parameters might have the same problem.
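A simplified sketch of the serialization bug described above (illustrative only; this is not the actual CloudSolrStream.toExpression code): joining multi-valued parameters with commas is harmless for something like "sort" but corrupts "fq", which must remain one clause per parameter.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FqExpressionSketch {

  // Buggy style: all values collapsed into one comma-separated parameter.
  static String joined(String name, List<String> values) {
    return name + "=" + String.join(",", values);
  }

  // Correct style for fq: emit one name=value pair per clause.
  static String repeated(String name, List<String> values) {
    return values.stream()
        .map(v -> name + "=" + v)
        .collect(Collectors.joining("&"));
  }

  public static void main(String[] args) {
    List<String> fqs = Arrays.asList("condition1", "condition2");
    System.out.println(joined("fq", fqs));   // fq=condition1,condition2 (wrong: one merged clause)
    System.out.println(repeated("fq", fqs)); // fq=condition1&fq=condition2
  }
}
```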






[jira] [Commented] (SOLR-10908) CloudSolrStream.toExpression incorrectly handles fq clauses

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129356#comment-16129356
 ] 

ASF subversion and git services commented on SOLR-10908:


Commit 8547474c8ad815baf352cb86d4b1618d7dc5ac8b in lucene-solr's branch 
refs/heads/branch_6_6 from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8547474 ]

SOLR-10908: CloudSolrStream.toExpression incorrectly handles fq clauses


> CloudSolrStream.toExpression incorrectly handles fq clauses
> ---
>
> Key: SOLR-10908
> URL: https://issues.apache.org/jira/browse/SOLR-10908
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, master (8.0)
>
> Attachments: SOLR-10229.patch, SOLR-10908.patch, SOLR-10908.patch
>
>
> toExpression in at least CloudSolrStream concatenates parameters in a 
> comma-separated list. This is fine for things like sorting but incorrect for 
> fq clauses. If my input is something like
> fq=condition1
> fq=condition2
> it winds up being something like
> fq=condition1,condition2
> I've seen it in this class for this parameter, other classes and other 
> parameters might have the same problem.






[jira] [Updated] (SOLR-11024) ParallelStream should set the StreamContext when constructing SolrStreams

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11024:
--
Fix Version/s: 6.6.1

> ParallelStream should set the StreamContext when constructing SolrStreams
> -
>
> Key: SOLR-11024
> URL: https://issues.apache.org/jira/browse/SOLR-11024
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.7, 6.6.1, master (8.0), 7.1
>
> Attachments: SOLR-11024.patch
>
>







[jira] [Updated] (SOLR-10910) Clean up a few details left over from pluggable transient core and untangling CoreDescriptor/CoreContainer references

2017-08-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10910:
--
Fix Version/s: 6.6.1

> Clean up a few details left over from pluggable transient core and untangling 
> CoreDescriptor/CoreContainer references
> -
>
> Key: SOLR-10910
> URL: https://issues.apache.org/jira/browse/SOLR-10910
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7, 6.6.1
>
> Attachments: SOLR-10910.patch, SOLR-10910.patch
>
>
> There are a few bits of the code from SOLR-10007, SOLR-8906 that could stand 
> some cleanup. For instance, the TransientSolrCoreCache is rather awkwardly 
> hanging around in CoreContainer and would fit more naturally in SolrCores.
> What I've seen so far shouldn't result in incorrect behavior, just cleaning 
> up for the future.






[jira] [Commented] (SOLR-11240) Raise UnInvertedField internal limit

2017-08-16 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129336#comment-16129336
 ] 

Yonik Seeley commented on SOLR-11240:
-

bq. Solution: Due to the values being packed as vInts, bit 31 (the last bit) of
the integer will never be 1

Ah, clever, an extra bit!

bq. Somewhere in the above bitmasks the highest bit should be set 

Right, looks like it should be:
{code}
+ *   A single entry is thus either 0b0___ 
holding 0-4 vInts (low byte first) or
+ *   0b1___ holding a 31-bit pointer.
{code}

> Raise UnInvertedField internal limit
> 
>
> Key: SOLR-11240
> URL: https://issues.apache.org/jira/browse/SOLR-11240
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.5.4, 6.6
>Reporter: Toke Eskildsen
>Assignee: Toke Eskildsen
>Priority: Minor
>  Labels: easyfix
> Fix For: master (8.0)
>
> Attachments: SOLR-11240.patch
>
>
> {{UnInvertedField}} has via {{DocTermOrds}} an internal limitation of 2^24 
> bytes for byte-arrays holding term ordinals. For String faceting on 
> high-cardinality Text fields, this can trigger the exception with "Too many 
> values for UnInvertedField". A search for that phrase shows that the 
> exception is encountered in the wild.
> The limitation is due to the packing being a combination of values and 
> pointers: If the values (term ordinals) for a given document-ID can fit in an 
> integer, they are stored directly. If the value of the first 8 bits in the 
> integer is 1, it signals that the following 3 bytes (24 bits) is a pointer 
> into a byte-array, limiting the array-size to 16M (2^24).
> Solution: Due to the values being packed as vInts, bit 31 (the last bit) of 
> the integer will never be 1 if the integer contains values. This means that 
> this bit can be used for signalling whether or not the preceding bits 
> should be parsed as values or a pointer. The effective pointer size is thus 
> 2^31, which matches the array-length limit in Java. Changing the signalling 
> mechanism does not affect space requirements and should not affect 
> performance.
> Note that this is only a 100-fold increase over the 2^24 limit, not an 
> elimination: Performing uninverted Text field faceting on 100M documents with 
> 5K terms each will still raise an exception.
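The signalling change described above can be modeled in a few lines. This is an assumption-level sketch of the scheme, not the actual DocTermOrds code: bit 31 of an int entry marks whether the remaining 31 bits are inline vInt payload or a pointer into a byte array.

```java
public class PackedEntrySketch {
  static final int POINTER_FLAG = 0x80000000; // bit 31

  // The offset may use the full 31 remaining bits, matching Java's
  // array-length limit (the 2^31 pointer range mentioned above).
  static int makePointer(int offset) {
    return POINTER_FLAG | offset;
  }

  static boolean isPointer(int entry) {
    return (entry & POINTER_FLAG) != 0;
  }

  static int pointerOffset(int entry) {
    return entry & ~POINTER_FLAG;
  }

  public static void main(String[] args) {
    int inline = 0x1234; // vInt payload: bit 31 is 0, so read as values
    int ptr = makePointer(Integer.MAX_VALUE); // largest 31-bit offset
    System.out.println(isPointer(inline)); // false
    System.out.println(isPointer(ptr));    // true
    System.out.println(pointerOffset(ptr) == Integer.MAX_VALUE); // true
  }
}
```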






[jira] [Created] (SOLR-11248) Spatial query returning everything for pt(0,0) d=0

2017-08-16 Thread Vaibhav Patel (JIRA)
Vaibhav Patel created SOLR-11248:


 Summary: Spatial query returning everything for pt(0,0) d=0
 Key: SOLR-11248
 URL: https://issues.apache.org/jira/browse/SOLR-11248
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
 Environment: solr-spec
6.6.0
solr-impl
6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:32:53
lucene-spec
6.6.0
lucene-impl
6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46
Reporter: Vaibhav Patel
Priority: Minor


There is an edge case: when I specify pt=0,0 and d=0, it seems to return
everything. It looks like this
(http://localhost:8983/solr/person_core_420_us/select?d=0={!geofilt}=on=0,0=*:*=home_location=json).
Other distance queries work fine. Can someone confirm this please?
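For reference, the expected {!geofilt} semantics can be modeled with a plain great-circle check (an illustration of the intended behavior, not Solr's implementation): with d=0, a document should match only when its point is exactly at pt, never "everything".

```java
public class GeofiltSketch {
  // Haversine great-circle distance in kilometers.
  static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
    double r = 6371.0087714; // mean earth radius, km
    double dLat = Math.toRadians(lat2 - lat1);
    double dLon = Math.toRadians(lon2 - lon1);
    double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
        + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
          * Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * r * Math.asin(Math.sqrt(a));
  }

  // A document matches {!geofilt} when its distance from pt is <= d.
  static boolean matches(double ptLat, double ptLon,
                         double docLat, double docLon, double d) {
    return haversineKm(ptLat, ptLon, docLat, docLon) <= d;
  }

  public static void main(String[] args) {
    System.out.println(matches(0, 0, 0, 0, 0)); // true: the exact point
    System.out.println(matches(0, 0, 1, 1, 0)); // false: ~157 km away
  }
}
```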






[jira] [Commented] (SOLR-11024) ParallelStream should set the StreamContext when constructing SolrStreams

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129316#comment-16129316
 ] 

ASF subversion and git services commented on SOLR-11024:


Commit 12b591659e7b40f62c61e175706e564d6145d011 in lucene-solr's branch 
refs/heads/branch_6_6 from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=12b5916 ]

SOLR-11024: ParallelStream should set the StreamContext when constructing 
SolrStreams

(cherry picked from commit 7051a79)


> ParallelStream should set the StreamContext when constructing SolrStreams
> -
>
> Key: SOLR-11024
> URL: https://issues.apache.org/jira/browse/SOLR-11024
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: SOLR-11024.patch
>
>







[jira] [Commented] (SOLR-11024) ParallelStream should set the StreamContext when constructing SolrStreams

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129314#comment-16129314
 ] 

ASF subversion and git services commented on SOLR-11024:


Commit 73c83031067bf993fe448252e2c29ecfb8d55d41 in lucene-solr's branch 
refs/heads/branch_7_0 from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=73c8303 ]

SOLR-11024: ParallelStream should set the StreamContext when constructing 
SolrStreams


> ParallelStream should set the StreamContext when constructing SolrStreams
> -
>
> Key: SOLR-11024
> URL: https://issues.apache.org/jira/browse/SOLR-11024
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: SOLR-11024.patch
>
>







[jira] [Resolved] (SOLR-11076) New /autoscaling/history API to return past cluster events and actions

2017-08-16 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-11076.
--
Resolution: Fixed

> New /autoscaling/history API to return past cluster events and actions
> --
>
> Key: SOLR-11076
> URL: https://issues.apache.org/jira/browse/SOLR-11076
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
> Fix For: master (8.0), 7.1
>
> Attachments: SOLR-11076.patch
>
>
> SOLR-11031 stores events and actions performed in response to those events in 
> the `.system` collection. We should expose this historical data through a new 
> API accessible at {{/autoscaling/history}}






Re: Release a 6.6.1

2017-08-16 Thread Anshum Gupta
+1 on getting the fixes into 7.0 if you are confident with those, and if
they are a part of 6.6.1.

Thanks for taking care of this Erick.

On Wed, Aug 16, 2017 at 12:24 PM Erick Erickson 
wrote:

> FYI:
>
> I'll be backporting the following to SOLR 7.0 today:
>
> SOLR-11024: ParallelStream should set the StreamContext when
> constructing SolrStreams:
> SOLR-11177: CoreContainer.load needs to send lazily loaded core
> descriptors to the proper list rather than send them all to the
> transient lists.
> SOLR-11122: Creating a core should write a core.properties file first
> and clean up on failure
>
> and those as well as several others to 6.6.1.
>
> Since some of these depend on others, I need to add them in a specific
> order. I intend to run minimal tests for each JIRA before pushing,
> then when they are all in place go through the full test cycle,
> precommit and all that. I doubt that other than the flurry of commit
> messages anyone will notice, at least if this stuff is as safe to
> backport as I believe, people _better_ not.
>
> If these cause any serious problems for the 7.0 code line, feel free
> to back them out any time after today.
>
> Why am I bothering with the 7.0 code line? Well, it's awkward to have a fix
> in 6.6.1, skip 7.0 and have it show up again in 7.1. We can live with
> something not being in 7.0 if any of this causes disruptions though.
>
> Erick
>
> On Wed, Aug 16, 2017 at 11:39 AM, Erik Hatcher 
> wrote:
> > Yes, I’m confident in that patch and its safety, thanks Varun!
> >
> > Erik
> >
> > On Aug 16, 2017, at 2:22 PM, Varun Thacker  wrote:
> >
> > @Erik -If you are confident with the patch and if you think it's safe
> then
> > please go ahead and commit it. I'll try having a look at it tomorrow as
> > well.
> >
> > I will start back-porting issues which I am comfortable with. If others
> can
> > chime in as well it will be great.
> >
> > I'll aim to cut an RC on Monday 21st August evening PST time so everyone
> > gets time to get the fixes in.
> >
> > Any objections in the approach?
> >
> > On Tue, Aug 15, 2017 at 4:56 AM, Erik Hatcher 
> > wrote:
> >>
> >> I’d like to get https://issues.apache.org/jira/browse/SOLR-10874 in
> soon
> >> as well.   Varun, if you’d like to take this one over that’d be fine by
> me
> >> too ;) otherwise prod me back channel and I’ll get to it.  Ideally I
> should
> >> have applied this one ages ago for 7.0, so maybe it can make it in time
> for
> >> 6.6.1 and 7.0.
> >>
> >> Erik
> >>
> >> On Aug 14, 2017, at 5:51 PM, Varun Thacker  wrote:
> >>
> >> From the change log of 6.7 / 7.0 and 7.1 the total count of bug fixes
> >> under lucene and solr are
> >>
> >> Lucene : 7
> >> Solr : 61
> >>
> >>
> >> I'd be happy to volunteer as a release manager for a bug fix 6.6.1
> release
> >> if others in the community think it's a good idea
> >>
> >>
> >
> >
>
>
>


[jira] [Updated] (SOLR-11215) Make a metric accessible through a single param

2017-08-16 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11215:
-
Component/s: metrics

> Make a metric accessible through a single param
> ---
>
> Key: SOLR-11215
> URL: https://issues.apache.org/jira/browse/SOLR-11215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
> Fix For: 7.1
>
> Attachments: SOLR-11215.diff
>
>
> example
> {code}
> /admin/metrics?key=solr.jvm:classes.loaded=solr.jvm:system.properties:java.specification.version
> {code}
> The above request must return just the two items in their corresponding path






Re: Release a 6.6.1

2017-08-16 Thread Erick Erickson
FYI:

I'll be backporting the following to SOLR 7.0 today:

SOLR-11024: ParallelStream should set the StreamContext when
constructing SolrStreams:
SOLR-11177: CoreContainer.load needs to send lazily loaded core
descriptors to the proper list rather than send them all to the
transient lists.
SOLR-11122: Creating a core should write a core.properties file first
and clean up on failure

and those as well as several others to 6.6.1.

Since some of these depend on others, I need to add them in a specific
order. I intend to run minimal tests for each JIRA before pushing,
then when they are all in place go through the full test cycle,
precommit and all that. I doubt that other than the flurry of commit
messages anyone will notice, at least if this stuff is as safe to
backport as I believe, people _better_ not.

If these cause any serious problems for the 7.0 code line, feel free
to back them out any time after today.

Why am I bothering with the 7.0 code line? Well, it's awkward to have a fix
in 6.6.1, skip 7.0 and have it show up again in 7.1. We can live with
something not being in 7.0 if any of this causes disruptions though.

Erick

On Wed, Aug 16, 2017 at 11:39 AM, Erik Hatcher  wrote:
> Yes, I’m confident in that patch and its safety, thanks Varun!
>
> Erik
>
> On Aug 16, 2017, at 2:22 PM, Varun Thacker  wrote:
>
> @Erik -If you are confident with the patch and if you think it's safe then
> please go ahead and commit it. I'll try having a look at it tomorrow as
> well.
>
> I will start back-porting issues which I am comfortable with. If others can
> chime in as well it will be great.
>
> I'll aim to cut an RC on Monday 21st August evening PST time so everyone
> gets time to get the fixes in.
>
> Any objections in the approach?
>
> On Tue, Aug 15, 2017 at 4:56 AM, Erik Hatcher 
> wrote:
>>
>> I’d like to get https://issues.apache.org/jira/browse/SOLR-10874 in soon
>> as well.   Varun, if you’d like to take this one over that’d be fine by me
>> too ;) otherwise prod me back channel and I’ll get to it.  Ideally I should
>> have applied this one ages ago for 7.0, so maybe it can make it in time for
>> 6.6.1 and 7.0.
>>
>> Erik
>>
>> On Aug 14, 2017, at 5:51 PM, Varun Thacker  wrote:
>>
>> From the change log of 6.7 / 7.0 and 7.1 the total count of bug fixes
>> under lucene and solr are
>>
>> Lucene : 7
>> Solr : 61
>>
>>
>> I'd be happy to volunteer as a release manager for a bug fix 6.6.1 release
>> if others in the community think it's a good idea
>>
>>
>
>




[jira] [Commented] (SOLR-10910) Clean up a few details left over from pluggable transient core and untangling CoreDescriptor/CoreContainer references

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129282#comment-16129282
 ] 

ASF subversion and git services commented on SOLR-10910:


Commit b146c91e366093a0b6577e112c0aeb06d6a6898b in lucene-solr's branch 
refs/heads/branch_6_6 from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b146c91 ]

SOLR-10910: Clean up a few details left over from pluggable transient core and 
untangling CoreDescriptor/CoreContainer references, backport to 6.6.1


> Clean up a few details left over from pluggable transient core and untangling 
> CoreDescriptor/CoreContainer references
> -
>
> Key: SOLR-10910
> URL: https://issues.apache.org/jira/browse/SOLR-10910
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.0, 6.7
>
> Attachments: SOLR-10910.patch, SOLR-10910.patch
>
>
> There are a few bits of the code from SOLR-10007, SOLR-8906 that could stand 
> some cleanup. For instance, the TransientSolrCoreCache is rather awkwardly 
> hanging around in CoreContainer and would fit more naturally in SolrCores.
> What I've seen so far shouldn't result in incorrect behavior, just cleaning 
> up for the future.






[jira] [Comment Edited] (SOLR-11240) Raise UnInvertedField internal limit

2017-08-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129240#comment-16129240
 ] 

Dawid Weiss edited comment on SOLR-11240 at 8/16/17 6:45 PM:
-

Just looking around casually, not verifying in-depth.

{code}
+ *   A single entry is thus either 0b0___ 
holding 0-4 vInts or
+ *   0b0___ holding a 31-bit pointer.
{code}
Somewhere in the above bitmasks the highest bit should be set :)

{code}
+  // TODO: Why is indexedTermsArray not part of this?
   /** Returns total bytes used. */
   public long ramBytesUsed() {
{code}

I'd piggyback that in and correct it in this issue.

{code}
+  @Slow
+  public void testTriggerUnInvertLimit() throws IOException {
{code}

Make it Nightly instead of Slow if it's such a resource-hog?



was (Author: dweiss):
Just looking around casually, not verifying in-depth.

{code}
+ *   A single entry is thus either 0b0___ 
holding 0-4 vInts or
+ *   0b0___ holding a 31-bit pointer.
{code}
Somewhere in the above bitmasks the highest bit should be set :)

{code}
+  // TODO: Why is indexedTermsArray not part of this?
   /** Returns total bytes used. */
   public long ramBytesUsed() {
{code}

I'd piggyback that in and correct it in this issue.

{code}
+  @Slow
+  public void testTriggerUnInvertLimit() throws IOException {
{code}

Make it Nightly instead of Slow if it's such a resource-hog?


> Raise UnInvertedField internal limit
> 
>
> Key: SOLR-11240
> URL: https://issues.apache.org/jira/browse/SOLR-11240
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.5.4, 6.6
>Reporter: Toke Eskildsen
>Assignee: Toke Eskildsen
>Priority: Minor
>  Labels: easyfix
> Fix For: master (8.0)
>
> Attachments: SOLR-11240.patch
>
>
> {{UnInvertedField}} has via {{DocTermOrds}} an internal limitation of 2^24 
> bytes for byte-arrays holding term ordinals. For String faceting on 
> high-cardinality Text fields, this can trigger the exception with "Too many 
> values for UnInvertedField". A search for that phrase shows that the 
> exception is encountered in the wild.
> The limitation is due to the packing being a combination of values and 
> pointers: If the values (term ordinals) for a given document-ID can fit in an 
> integer, they are stored directly. If the value of the first 8 bits in the 
> integer is 1, it signals that the following 3 bytes (24 bits) is a pointer 
> into a byte-array, limiting the array-size to 16M (2^24).
> Solution: Due to the values being packed as vInts, bit 31 (the last bit) of 
> the integer will never be 1 if the integer contains values. This means that 
> this bit can be used for signalling whether or not the preceding bits 
> should be parsed as values or a pointer. The effective pointer size is thus 
> 2^31, which matches the array-length limit in Java. Changing the signalling 
> mechanism does not affect space requirements and should not affect 
> performance.
> Note that this is only a 100-fold increase over the 2^24 limit, not an 
> elimination: Performing uninverted Text field faceting on 100M documents with 
> 5K terms each will still raise an exception.






[jira] [Commented] (SOLR-11240) Raise UnInvertedField internal limit

2017-08-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129240#comment-16129240
 ] 

Dawid Weiss commented on SOLR-11240:


Just looking around casually, not verifying in-depth.

{code}
+ *   A single entry is thus either 0b0___ 
holding 0-4 vInts or
+ *   0b0___ holding a 31-bit pointer.
{code}
Somewhere in the above bitmasks the highest bit should be set :)

{code}
+  // TODO: Why is indexedTermsArray not part of this?
   /** Returns total bytes used. */
   public long ramBytesUsed() {
{code}

I'd piggyback that in and correct it in this issue.

{code}
+  @@Slow
+  public void testTriggerUnInvertLimit() throws IOException {
{code}


> Raise UnInvertedField internal limit
> 
>
> Key: SOLR-11240
> URL: https://issues.apache.org/jira/browse/SOLR-11240
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.5.4, 6.6
>Reporter: Toke Eskildsen
>Assignee: Toke Eskildsen
>Priority: Minor
>  Labels: easyfix
> Fix For: master (8.0)
>
> Attachments: SOLR-11240.patch
>
>
> {{UnInvertedField}} has via {{DocTermOrds}} an internal limitation of 2^24 
> bytes for byte-arrays holding term ordinals. For String faceting on 
> high-cardinality Text fields, this can trigger the exception with "Too many 
> values for UnInvertedField". A search for that phrase shows that the 
> exception is encountered in the wild.
> The limitation is due to the packing being a combination of values and 
> pointers: If the values (term ordinals) for a given document-ID can fit in an 
> integer, they are stored directly. If the value of the first 8 bits in the 
> integer is 1, it signals that the following 3 bytes (24 bits) is a pointer 
> into a byte-array, limiting the array-size to 16M (2^24).
> Solution: Due to the values being packed as vInts, bit 31 (the last bit) of 
> the integer will never be 1 if the integer contains values. This means that 
> this bit can be used for signalling whether or not the preceding bits 
> should be parsed as values or a pointer. The effective pointer size is thus 
> 2^31, which matches the array-length limit in Java. Changing the signalling 
> mechanism does not affect space requirements and should not affect 
> performance.
> Note that this is only a 100-fold increase over the 2^24 limit, not an 
> elimination: Performing uninverted Text field faceting on 100M documents with 
> 5K terms each will still raise an exception.






[jira] [Comment Edited] (SOLR-11240) Raise UnInvertedField internal limit

2017-08-16 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129240#comment-16129240
 ] 

Dawid Weiss edited comment on SOLR-11240 at 8/16/17 6:43 PM:
-

Just looking around casually, not verifying in-depth.

{code}
+ *   A single entry is thus either 0b0___ 
holding 0-4 vInts or
+ *   0b0___ holding a 31-bit pointer.
{code}
Somewhere in the above bitmasks the highest bit should be set :)

{code}
+  // TODO: Why is indexedTermsArray not part of this?
   /** Returns total bytes used. */
   public long ramBytesUsed() {
{code}

I'd piggyback that in and correct it in this issue.

{code}
+  @Slow
+  public void testTriggerUnInvertLimit() throws IOException {
{code}

Make it Nightly instead of Slow if it's such a resource-hog?



was (Author: dweiss):
Just looking around casually, not verifying in-depth.

{code}
+ *   A single entry is thus either 0b0___ 
holding 0-4 vInts or
+ *   0b0___ holding a 31-bit pointer.
{code}
Somewhere in the above bitmasks the highest bit should be set :)

{code}
+  // TODO: Why is indexedTermsArray not part of this?
   /** Returns total bytes used. */
   public long ramBytesUsed() {
{code}

I'd piggyback that in and correct it in this issue.

{code}
+  @@Slow
+  public void testTriggerUnInvertLimit() throws IOException {
{code}


> Raise UnInvertedField internal limit
> 
>
> Key: SOLR-11240
> URL: https://issues.apache.org/jira/browse/SOLR-11240
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.5.4, 6.6
>Reporter: Toke Eskildsen
>Assignee: Toke Eskildsen
>Priority: Minor
>  Labels: easyfix
> Fix For: master (8.0)
>
> Attachments: SOLR-11240.patch
>
>
> {{UnInvertedField}} has via {{DocTermOrds}} an internal limitation of 2^24 
> bytes for byte-arrays holding term ordinals. For String faceting on 
> high-cardinality Text fields, this can trigger the exception with "Too many 
> values for UnInvertedField". A search for that phrase shows that the 
> exception is encountered in the wild.
> The limitation is due to the packing being a combination of values and 
> pointers: If the values (term ordinals) for a given document-ID can fit in an 
> integer, they are stored directly. If the value of the first 8 bits in the 
> integer is 1, it signals that the following 3 bytes (24 bits) are a pointer 
> into a byte-array, limiting the array size to 16M (2^24).
> Solution: Due to the values being packed as vInts, bit 31 (the last bit) of 
> the integer will never be 1 if the integer contains values. This means that 
> this bit can be used for signalling whether or not the preceding bits 
> should be parsed as values or a pointer. The effective pointer size is thus 
> 2^31, which matches the array-length limit in Java. Changing the signalling 
> mechanism does not affect space requirements and should not affect 
> performance.
> Note that this is only a 100-fold increase over the 2^24 limit, not an 
> elimination: Performing uninverted Text field faceting on 100M documents with 
> 5K terms each will still raise an exception.
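A tiny sketch of the proposed signalling mechanism may help; the helper names below are hypothetical, not the actual {{DocTermOrds}} code. An entry whose bit 31 is clear carries vInt-packed values; an entry whose bit 31 is set carries a 31-bit offset into the byte-array:

```java
// Illustrative sketch of the bit-31 signalling scheme described above.
// These helpers are NOT the actual DocTermOrds code.
public class OrdPacking {
  private static final int POINTER_FLAG = 0x80000000; // bit 31

  // Flag a byte-array offset as a pointer entry (offset must fit in 31 bits).
  public static int encodePointer(int offset) {
    if (offset < 0) throw new IllegalArgumentException("offset must be >= 0");
    return offset | POINTER_FLAG;
  }

  // An entry is a pointer iff bit 31 is set; vInt-packed values never set it.
  public static boolean isPointer(int entry) {
    return (entry & POINTER_FLAG) != 0;
  }

  // Recover the 31-bit offset from a pointer entry.
  public static int decodePointer(int entry) {
    return entry & ~POINTER_FLAG;
  }

  public static void main(String[] args) {
    int p = encodePointer(Integer.MAX_VALUE); // offsets up to 2^31 - 1
    System.out.println(isPointer(p) + " " + decodePointer(p)); // true 2147483647
  }
}
```

The old scheme spent 8 bits on the flag and left only 24 for the offset; moving the flag to the otherwise-unused bit 31 costs nothing and extends addressable offsets to Java's array-length limit.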



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release a 6.6.1

2017-08-16 Thread Erik Hatcher
Yes, I’m confident in that patch and its safety, thanks Varun!

Erik

> On Aug 16, 2017, at 2:22 PM, Varun Thacker  wrote:
> 
> @Erik - If you are confident with the patch and if you think it's safe then 
> please go ahead and commit it. I'll try having a look at it tomorrow as well.
> 
> I will start back-porting issues which I am comfortable with. If others can 
> chime in as well it will be great.
> 
> I'll aim to cut an RC on Monday 21st August evening PST time so everyone 
> gets time to get the fixes in.
> 
> Any objections in the approach?
> 
> On Tue, Aug 15, 2017 at 4:56 AM, Erik Hatcher  > wrote:
> I’d like to get https://issues.apache.org/jira/browse/SOLR-10874 
>  in soon as well.   Varun, 
> if you’d like to take this one over that’d be fine by me too ;) otherwise 
> prod me back channel and I’ll get to it.  Ideally I should have applied this 
> one ages ago for 7.0, so maybe it can make it in time for 6.6.1 and 7.0.
> 
>   Erik
> 
>> On Aug 14, 2017, at 5:51 PM, Varun Thacker wrote:
>> 
>> From the change log of 6.7 / 7.0 and 7.1 the total count of bug fixes under 
>> lucene and solr are
>> 
>> Lucene : 7
>> Solr : 61
>> 
>> 
>> I'd be happy to volunteer as a release manager for a bug fix 6.6.1 release 
>> if others in the community think it's a good idea
> 
> 



[jira] [Comment Edited] (SOLR-9458) DocumentDictionaryFactory StackOverflowError on many documents

2017-08-16 Thread Walter Underwood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129210#comment-16129210
 ] 

Walter Underwood edited comment on SOLR-9458 at 8/16/17 6:29 PM:
-

I'm getting the same failure using FileDictionaryFactory with 6.5.1.


{code:xml}
   
  concepts_fuzzy
  FuzzyLookupFactory
  true
  suggest-concepts.txt
  suggest_subjects_infix
  text_lower
  1
  true
  false
  false
  true

{code}



was (Author: wunder):
I'm getting the same failure using FileDictionaryFactory.


{code:xml}
   
  concepts_fuzzy
  FuzzyLookupFactory
  true
  suggest-concepts.txt
  suggest_subjects_infix
  text_lower
  1
  true
  false
  false
  true

{code}


> DocumentDictionaryFactory StackOverflowError on many documents
> --
>
> Key: SOLR-9458
> URL: https://issues.apache.org/jira/browse/SOLR-9458
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Suggester
>Affects Versions: 6.1, 6.2
>Reporter: Chris de Kok
>
> When using the FuzzyLookupFactory in combination with the 
> DocumentDictionaryFactory it will throw a stack overflow trying to build the 
> dictionary.
> Using the HighFrequencyDictionaryFactory works ok but behaves very differently.
> ```
> 
> 
> suggest
> suggestions
> suggestions
> FuzzyLookupFactory
> DocumentDictionaryFactory
> suggest_fuzzy
> true
> false
> false
> true
> 0
> 
> 
> null:java.lang.StackOverflowError
>   at 
> org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1311)
>   at 
> org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1311)
>   at 
> org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1311)
>   at 
> org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1311)
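The repeated {{topoSortStatesRecurse}} frames show recursion as deep as the automaton's state graph. A general remedy for this class of failure — a sketch of the technique only, not the actual Lucene fix — is to drive the traversal with an explicit stack so depth is bounded by heap, not the thread stack:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Generic sketch: depth-first traversal with an explicit stack instead of
// recursion, so graph depth cannot overflow the JVM thread stack.
public class IterativeDfs {
  // adj.get(v) lists the successor states of v; returns states in visit order.
  public static List<Integer> visitOrder(List<int[]> adj, int start) {
    boolean[] seen = new boolean[adj.size()];
    List<Integer> order = new ArrayList<>();
    Deque<Integer> stack = new ArrayDeque<>();
    stack.push(start);
    while (!stack.isEmpty()) {
      int v = stack.pop();
      if (seen[v]) continue;
      seen[v] = true;
      order.add(v);
      for (int w : adj.get(v)) if (!seen[w]) stack.push(w);
    }
    return order;
  }

  public static void main(String[] args) {
    // A chain 0 -> 1 -> ... -> 199999: deep enough to overflow a naive
    // recursive traversal, handled comfortably by the explicit stack.
    int n = 200000;
    List<int[]> adj = new ArrayList<>();
    for (int i = 0; i < n; i++) adj.add(i + 1 < n ? new int[]{i + 1} : new int[0]);
    System.out.println(visitOrder(adj, 0).size()); // 200000
  }
}
```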



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9458) DocumentDictionaryFactory StackOverflowError on many documents

2017-08-16 Thread Walter Underwood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129210#comment-16129210
 ] 

Walter Underwood commented on SOLR-9458:


I'm getting the same failure using FileDictionaryFactory.


{code:xml}
   
  concepts_fuzzy
  FuzzyLookupFactory
  true
  suggest-concepts.txt
  suggest_subjects_infix
  text_lower
  1
  true
  false
  false
  true

{code}


> DocumentDictionaryFactory StackOverflowError on many documents
> --
>
> Key: SOLR-9458
> URL: https://issues.apache.org/jira/browse/SOLR-9458
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Suggester
>Affects Versions: 6.1, 6.2
>Reporter: Chris de Kok
>
> When using the FuzzyLookupFactory in combination with the 
> DocumentDictionaryFactory it will throw a stack overflow trying to build the 
> dictionary.
> Using the HighFrequencyDictionaryFactory works ok but behaves very differently.
> ```
> 
> 
> suggest
> suggestions
> suggestions
> FuzzyLookupFactory
> DocumentDictionaryFactory
> suggest_fuzzy
> true
> false
> false
> true
> 0
> 
> 
> null:java.lang.StackOverflowError
>   at 
> org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1311)
>   at 
> org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1311)
>   at 
> org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1311)
>   at 
> org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1311)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release a 6.6.1

2017-08-16 Thread Varun Thacker
@Erik - If you are confident with the patch and if you think it's safe then
please go ahead and commit it. I'll try having a look at it tomorrow as
well.

I will start back-porting issues which I am comfortable with. If others can
chime in as well it will be great.

I'll aim to cut an RC on Monday 21st August evening PST time so everyone
gets time to get the fixes in.

Any objections in the approach?

On Tue, Aug 15, 2017 at 4:56 AM, Erik Hatcher 
wrote:

> I’d like to get https://issues.apache.org/jira/browse/SOLR-10874 in soon
> as well.   Varun, if you’d like to take this one over that’d be fine by me
> too ;) otherwise prod me back channel and I’ll get to it.  Ideally I should
> have applied this one ages ago for 7.0, so maybe it can make it in time for
> 6.6.1 and 7.0.
>
> Erik
>
> On Aug 14, 2017, at 5:51 PM, Varun Thacker  wrote:
>
> From the change log of 6.7 / 7.0 and 7.1 the total count of bug fixes
> under lucene and solr are
>
> Lucene : 7
> Solr : 61
>
>
> I'd be happy to volunteer as a release manager for a bug fix 6.6.1 release
> if others in the community think it's a good idea
>
>
>


Re: Release a 6.6.1

2017-08-16 Thread Varun Thacker
Hi Anshum,


On Mon, Aug 14, 2017 at 5:42 PM, Anshum Gupta 
wrote:

> +1 for a bug fix release. Do you plan to back port bug fixes from 7.1 too?
> If so, I wouldn't want to release 7.0 without those fixes.
>

From Lucene's change logs

LUCENE-7916 is the only Jira which seems safe enough to backport to both
6.6.1 and 7.0. I'd be happy to backport it to both branches if others agree.

From Solr's change logs there are 9 bug fixes in 7.1.

These two issues look like large changes which would be tough to backport
to 6.6.1: SOLR-11011 and SOLR-6086. Shalin and Dat worked on those, so
unless they are comfortable back-porting them we can leave them out.

The remaining 7 issues could be back-ported for 6.6.1 and 7.0, I'd imagine.


>
>
> On Mon, Aug 14, 2017 at 2:51 PM Varun Thacker  wrote:
>
>> From the change log of 6.7 / 7.0 and 7.1 the total count of bug fixes
>> under lucene and solr are
>>
>> Lucene : 7
>> Solr : 61
>>
>>
>> I'd be happy to volunteer as a release manager for a bug fix 6.6.1
>> release if others in the community think it's a good idea
>>
>


Re: 7.0 docs

2017-08-16 Thread Cassandra Targett
Thanks Joel!

On Tue, Aug 15, 2017 at 2:10 PM, Joel Bernstein  wrote:
> I wanted to give an update on the 7.0 docs I'm working on.
>
> I've just made the first commit for the new Stream Evaluators in 7.0. I plan
> to finish the function documentation by Aug 18th.
>
> I also plan to add a new documentation page explaining how the new
> statistical programming syntax works. I plan to have this committed by the
> 18th as well.
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+181) - Build # 271 - Unstable!

2017-08-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/271/
Java: 64bit/jdk-9-ea+181 -XX:-UseCompressedOops -XX:+UseParallelGC 
--illegal-access=deny

2 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:39863/x_i/q/collMinRf_1x3 due to: 
Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:39863/x_i/q/collMinRf_1x3 due to: Path not found: /id; 
rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([41F6BD555C30D319:C9A2828FF2CCBEE1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
at 
org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

Re: Baby steps as new committer

2017-08-16 Thread Erick Erickson
Where to put things in CHANGES is..er...complicated, I've got to go
fix some things up myself. What I _try_ for is to only put an entry in
the earliest version. We're releasing 6.6.1, and some of the stuff I
put in 6.7 will be moved to 6.6.1, so I'll move the CHANGES to 6.6.1
too. By implication, any future version is presumed to have that
change, so having an entry in 6.6.1 means it will be in 6.6.1, 6.7, 7.0
and 7.1 as a policy. As always there can be exceptions, but I try to
minimize same.

On Wed, Aug 16, 2017 at 2:51 AM, Andrzej Białecki
 wrote:
>
> On 16 Aug 2017, at 11:14, Toke Eskildsen  wrote:
>
> On Tue, 2017-08-15 at 13:10 -0700, Erick Erickson wrote:
>
> [Git pull vs. patch]
>
> It seems like a patch is the simplest path for a simple fix, so I'll
> start there.
>
> It's confusing to fill in the "Fix version" until you "Resolve" the
> issue, i.e. commit a fix. I leave it blank when raising a JIRA, only
> filling it in when I commit the fixes.
>
>
> That makes sense. I'll adjust when I upload a patch. I'll forget about
> version 5 & 6 and just go for master. When 7.0 has been released, I
> would like to port to 7.1 (we need the fix ourselves for 7.x, so I
> might as well port it for everyone). Or can I port to 7.1 already? I am
> not sure about the state of branches when a release is in progress.
>
>
> There’s a branch_7x for future 7.x releases, including 7.1 - you can go
> ahead and port & commit to that branch. Commits to branch_7_0, which is the
> release branch, should be approved by release manager (for 7.0 this is
> Anshum).
>
> There’s a complicated issue of where to put the CHANGES.txt entry … my take
> on this is that if you know in advance what is the oldest version where the
> fix will be applied, you should add the entry to that section, no matter if
> you first commit to master or some other branch - eg. add it to the section
> on 7.0 if that’s where it first appeared.
>
>
> I read the "Strange Solr JIRA versions"-thread  and what I got from it
> is that I should never type in the version field in JIRA (only use the
> drop-down) and that I to stay clear of any 'x'-versions, should they be
> created by others.
>
>
>
> Thank you,
> Toke Eskildsen, Royal Danish Library
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11238) Solr authorization plugin is not able to pass additional params downstream

2017-08-16 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-11238:

Attachment: SOLR-11238.patch

[~ichattopadhyaya] Can you please take a look? The unit test that I have added 
is very similar to the Sentry-based document-level security logic.

> Solr authorization plugin is not able to pass additional params downstream
> --
>
> Key: SOLR-11238
> URL: https://issues.apache.org/jira/browse/SOLR-11238
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-11238.patch
>
>
> Authorization checks in Solr are implemented by invoking configured 
> authorization plugin with AuthorizationContext object. The plugin is expected 
> to return an AuthorizationResponse object which provides the result (which 
> can be OK/FORBIDDEN/PROMPT).
> In some cases (e.g. document level security implemented in Apache Sentry), it 
> is useful for the authorization plugin to add (or override) the request 
> parameters sent by the user (which are represented as SolrParams in 
> [AuthorizationContext| 
> https://github.com/apache/lucene-solr/blob/3cbbecca026eb2a9491fa4a24ecc2c43c26e58bd/solr/core/src/java/org/apache/solr/security/AuthorizationContext.java#L38]).
>  This jira is to introduce an ability to customize the parameters by the 
> authorization plugin.
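The add-or-override behavior described above can be pictured as an overlay in which plugin-supplied parameters win on conflict. A generic sketch with plain maps — not Solr's actual SolrParams API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: overlay params produced by an authorization plugin on top of the
// user's request params; plugin-supplied values add to or override user values.
public class ParamOverlay {
  public static Map<String, String> overlay(Map<String, String> user,
                                            Map<String, String> plugin) {
    Map<String, String> merged = new HashMap<>(user);
    merged.putAll(plugin); // plugin entries take precedence on key conflicts
    return merged;
  }

  public static void main(String[] args) {
    Map<String, String> user = new HashMap<>();
    user.put("q", "*:*");
    user.put("fq", "type:public");
    Map<String, String> plugin = new HashMap<>();
    // e.g. a document-level security filter injected by the plugin
    plugin.put("fq", "acl:(group1 OR group2)");
    System.out.println(overlay(user, plugin)); // fq now carries the ACL filter
  }
}
```

The filter-query value here is purely illustrative; the point is only the precedence rule, which is what the jira asks AuthorizationContext to support.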



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5129) If zookeeper is down, SolrCloud nodes will not start correctly, even if zookeeper is started later

2017-08-16 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128911#comment-16128911
 ] 

Varun Thacker commented on SOLR-5129:
-

You can also read solr.xml from zookeeper if you start solr with 
{{-Dsolr.solr.home=zk}} 

> If zookeeper is down, SolrCloud nodes will not start correctly, even if 
> zookeeper is started later
> --
>
> Key: SOLR-5129
> URL: https://issues.apache.org/jira/browse/SOLR-5129
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5129.patch
>
>
> Summary of report from user on mailing list:
> If zookeeper is down or doesn't have quorum when you start Solr nodes, they 
> will not function correctly, even if you later start zookeeper.  While 
> zookeeper is down, the log shows connection failures as expected.  When 
> zookeeper comes back, the log shows:
> INFO  - 2013-08-09 15:48:41.528; 
> org.apache.solr.common.cloud.ConnectionManager; Client->ZooKeeper status 
> change trigger but we are already closed
> At that point, Solr (admin UI and all other functions) does not work, and 
> won't work until it is restarted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11215) Make a metric accessible through a single param

2017-08-16 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128872#comment-16128872
 ] 

Andrzej Bialecki  commented on SOLR-11215:
--

Example:
{code}
http://localhost:8983/solr/admin/metrics?key=solr.jvm:system.properties:user.name&key=solr.node:CONTAINER.fs.totalSpace
{code}
{code}
{
"responseHeader": {
"status": 0,
"QTime": 0
},
"metrics": {
"solr.jvm:system.properties:user.name": "ab",
"solr.node:CONTAINER.fs.totalSpace": 499046809600
}
}
{code}

> Make a metric accessible through a single param
> ---
>
> Key: SOLR-11215
> URL: https://issues.apache.org/jira/browse/SOLR-11215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
> Fix For: 7.1
>
> Attachments: SOLR-11215.diff
>
>
> example
> {code}
> /admin/metrics?key=solr.jvm:classes.loaded&key=solr.jvm:system.properties:java.specification.version
> {code}
> The above request must return just the two items in their corresponding path



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11247) TestCollectionStateWatchers failures on branch_6x and branch_6_6

2017-08-16 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-11247:


 Summary: TestCollectionStateWatchers failures on branch_6x and 
branch_6_6
 Key: SOLR-11247
 URL: https://issues.apache.org/jira/browse/SOLR-11247
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Affects Versions: 6.6
Reporter: Shalin Shekhar Mangar


TestCollectionStateWatchers fails frequently on branch_6x and branch_6_6. I 
tried two runs and it failed both times. One of the reproducing seeds is:
{code}
ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testWaitForStateWatcherIsRetainedOnPredicateFailure 
-Dtests.seed=6005F39B5FC69114 -Dtests.slow=true -Dtests.locale=es-GT 
-Dtests.timezone=Antarctica/Vostok -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11215) Make a metric accessible through a single param

2017-08-16 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11215:
-
Fix Version/s: (was: 7.2)
   7.1

> Make a metric accessible through a single param
> ---
>
> Key: SOLR-11215
> URL: https://issues.apache.org/jira/browse/SOLR-11215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
> Fix For: 7.1
>
> Attachments: SOLR-11215.diff
>
>
> example
> {code}
> /admin/metrics?key=solr.jvm:classes.loaded&key=solr.jvm:system.properties:java.specification.version
> {code}
> The above request must return just the two items in their corresponding path



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11246) MoveReplica API does not preserve replica type information

2017-08-16 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-11246:


 Summary: MoveReplica API does not preserve replica type information
 Key: SOLR-11246
 URL: https://issues.apache.org/jira/browse/SOLR-11246
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.6, 7.0, 6.6.1
Reporter: Shalin Shekhar Mangar
 Fix For: master (8.0), 7.1


The MoveReplica API does not preserve replica type information during the move. 
This means that a tlog or a pull replica will be re-created after the move as 
an NRT replica.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11215) Make a metric accessible through a single param

2017-08-16 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11215:
-
Attachment: SOLR-11215.diff

Initial patch:
* I decided that when the "key" param is present we ignore all other parameters. 
It's very difficult to figure out what the correct behavior should be 
otherwise, if you take into account all possible combinations of group, 
registry, prefix, etc.
* The handler's response doesn't build a nested hierarchy of registry / metric / 
property; instead it uses flat keys as they were passed to the handler.
* Keys support escaping the colon separator in registry and metric names, using 
a backslash (e.g. {{fooRegistry:bar\:Metric:with\:property}}).
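Escaped-separator keys like {{fooRegistry:bar\:Metric:with\:property}} can be split with a small scanner. A minimal sketch of the idea — hypothetical helper, not the patch code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a metrics key on ':' unless the colon is escaped with '\'.
public class KeySplit {
  public static List<String> split(String key) {
    List<String> parts = new ArrayList<>();
    StringBuilder cur = new StringBuilder();
    for (int i = 0; i < key.length(); i++) {
      char c = key.charAt(i);
      if (c == '\\' && i + 1 < key.length() && key.charAt(i + 1) == ':') {
        cur.append(':'); // escaped separator: keep the literal colon
        i++;             // and skip over it
      } else if (c == ':') {
        parts.add(cur.toString()); // unescaped separator: end this segment
        cur.setLength(0);
      } else {
        cur.append(c);
      }
    }
    parts.add(cur.toString());
    return parts;
  }

  public static void main(String[] args) {
    // Prints [fooRegistry, bar:Metric, with:property]
    System.out.println(split("fooRegistry:bar\\:Metric:with\\:property"));
  }
}
```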

> Make a metric accessible through a single param
> ---
>
> Key: SOLR-11215
> URL: https://issues.apache.org/jira/browse/SOLR-11215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
> Fix For: 7.2
>
> Attachments: SOLR-11215.diff
>
>
> example
> {code}
> /admin/metrics?key=solr.jvm:classes.loaded&key=solr.jvm:system.properties:java.specification.version
> {code}
> The above request must return just the two items in their corresponding path



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11245) Cloud native Dockerfile

2017-08-16 Thread jay vyas (JIRA)
jay vyas created SOLR-11245:
---

 Summary: Cloud native Dockerfile
 Key: SOLR-11245
 URL: https://issues.apache.org/jira/browse/SOLR-11245
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Build
Affects Versions: 6.6
Reporter: jay vyas
 Fix For: master (8.0)


SOLR should have its own Dockerfile, ideally one that is cloud native (i.e. 
doesn't expect anything special from the operating system in terms of user IDs, 
etc.), for deployment, that we can curate and submit changes to as part of the 
official ASF process, rather than externally.  The idea here is that testing 
SOLR regression, as a microservice, is something we should be doing as part of 
our continuous integration, rather than something done externally.

We have a team here that would be more than happy to do the work to port 
whatever existing SOLR Dockerfiles are out there into something that is ASF 
maintainable, cloud native, and easily testable as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10651) Streaming Expressions statistical functions library

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128814#comment-16128814
 ] 

ASF subversion and git services commented on SOLR-10651:


Commit b406b43dbc385a392fc4d5e7ed16f803bde18582 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b406b43 ]

SOLR-10651: Statistical function docs for 7.0 Part 2


> Streaming Expressions statistical functions library
> ---
>
> Key: SOLR-10651
> URL: https://issues.apache.org/jira/browse/SOLR-10651
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>
> This is a ticket for organizing the new statistical programming features of 
> Streaming Expressions. It's also a place for the community to discuss what 
> functions are needed to support statistical programming. 
> Basic Syntax:
> {code}
> let(a = timeseries(...),
> b = timeseries(...),
> c = col(a, count(*)),
> d = col(b, count(*)),
> r = regress(c, d),
> tuple(p = predict(r, 50)))
> {code}
> The expression above is doing the following:
> 1) The let expression is setting variables (a, b, c, d, r).
> 2) Variables *a* and *b* are the output of timeseries() Streaming 
> Expressions. These will be stored in memory as lists of Tuples containing the 
> time series results.
> 3) Variables *c* and *d* are set using the *col* evaluator. The col evaluator 
> extracts a column of numbers from a list of tuples. In the example *col* is 
> extracting the count\(*\) field from the two time series result sets.
> 4) Variable *r* is the output from the *regress* evaluator. The regress 
> evaluator performs a simple regression analysis on two columns of numbers.
> 5) Once the variables are set, a single Streaming Expression is run by the 
> *let* expression. In the example the *tuple* expression is run. The tuple 
> expression outputs a single Tuple with name/value pairs. Any Streaming 
> Expression can be run by the *let* expression so this can be a complex 
> program. The streaming expression run by *let* has access to all the 
> variables defined earlier.
> 6) The tuple expression in the example has one name / value pair. The name 
> *p* is set to the output of the *predict* evaluator. The predict evaluator is 
> predicting the value of a dependent variable based on the independent 
> variable 50. The regression result stored in variable *r* is used to make the 
> prediction.
> 7) The output of this expression will be a single tuple with the value of the 
> predict function in the *p* field.
> The growing list of issues linked to this ticket covers the array manipulation 
> and statistical functions that will form the basis of the stats library. The 
> vast majority of these functions are backed by algorithms in Apache Commons 
> Math. Other machine learning and math libraries will follow.
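The {{regress}}/{{predict}} pair in the example amounts to ordinary least-squares on two columns of numbers. A plain-Java sketch of that underlying math — illustrative only, not the Streaming Expressions implementation (which the description says is backed by Apache Commons Math):

```java
// Sketch of what regress(c, d) followed by predict(r, x) computes:
// a least-squares line d = slope * c + intercept, evaluated at x.
public class SimpleRegress {
  final double slope, intercept;

  SimpleRegress(double[] x, double[] y) {
    double mx = 0, my = 0;
    for (int i = 0; i < x.length; i++) { mx += x[i]; my += y[i]; }
    mx /= x.length; my /= y.length;
    // Slope = covariance(x, y) / variance(x); intercept fits the means.
    double sxy = 0, sxx = 0;
    for (int i = 0; i < x.length; i++) {
      sxy += (x[i] - mx) * (y[i] - my);
      sxx += (x[i] - mx) * (x[i] - mx);
    }
    slope = sxy / sxx;
    intercept = my - slope * mx;
  }

  double predict(double x) { return slope * x + intercept; }

  public static void main(String[] args) {
    double[] c = {10, 20, 30, 40};  // e.g. the count(*) column from series a
    double[] d = {22, 42, 62, 82};  // e.g. the count(*) column from series b
    // The data lie exactly on d = 2c + 2, so predicting at 50 gives 102.0.
    System.out.println(new SimpleRegress(c, d).predict(50)); // 102.0
  }
}
```

The column values are made up for the illustration; in the expression they would come from the {{col}} evaluator over the timeseries results.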



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11240) Raise UnInvertedField internal limit

2017-08-16 Thread Toke Eskildsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toke Eskildsen updated SOLR-11240:
--
Attachment: SOLR-11240.patch

Patch for master. Running {{ant test}} reported failing unit-tests for Cdcr & 
Cloud, but those areas are pretty far from the patch and the tests also fail 
when running without the patch.

Note the addition of the slow {{testTriggerUnInvertLimit}} in 
{{TestDocTermOrds}}. It takes about 10-15 seconds to run on a modern machine 
with SSD. I find that to be problematic, but I don't know any way to very 
quickly build an index with high enough term-cardinality to reach the old limit.

Barring errors, the fix should be complete and potential back-porting to 7x (or 
6) seems trivial. I invite anyone to review the patch.

> Raise UnInvertedField internal limit
> 
>
> Key: SOLR-11240
> URL: https://issues.apache.org/jira/browse/SOLR-11240
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.5.4, 6.6
>Reporter: Toke Eskildsen
>Assignee: Toke Eskildsen
>Priority: Minor
>  Labels: easyfix
> Fix For: master (8.0)
>
> Attachments: SOLR-11240.patch
>
>
> {{UnInvertedField}} has via {{DocTermOrds}} an internal limitation of 2^24 
> bytes for byte-arrays holding term ordinals. For String faceting on 
> high-cardinality Text fields, this can trigger the exception with "Too many 
> values for UnInvertedField". A search for that phrase shows that the 
> exception is encountered in the wild.
> The limitation is due to the packing being a combination of values and 
> pointers: If the values (term ordinals) for a given document-ID can fit in an 
> integer, they are stored directly. If the value of the first 8 bits in the 
> integer is 1, it signals that the following 3 bytes (24 bits) is a pointer 
> into a byte-array, limiting the array-size to 16M (2^24).
> Solution: Due to the values being packed as vInts, bit 31 (the last bit) of 
> the integer will never be 1 if the integer contains values. This means that 
> this bit can be used for signalling whether or not the preceding bits 
> should be parsed as values or a pointer. The effective pointer size is thus 
> 2^31, which matches the array-length limit in Java. Changing the signalling 
> mechanism does not affect space requirements and should not affect 
> performance.
> Note that this is only a 128-fold increase over the 2^24 limit, not an 
> elimination: Performing uninverted Text field faceting on 100M documents with 
> 5K terms each will still raise an exception.
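The bit-31 signalling scheme described above can be sketched in plain Java. This is a minimal illustration of the proposed encoding only; the class and method names are hypothetical and this is not Lucene's actual {{DocTermOrds}} code:

```java
// One int per document either carries term ordinals inline (as packed vInts,
// which never set bit 31) or, with bit 31 set, a 31-bit offset into a shared
// byte[]. The old scheme used a leading 0x01 byte plus a 24-bit offset, which
// capped the array at 16M (2^24) bytes.
public class PackingSketch {
    static final int POINTER_FLAG = 0x80000000; // bit 31

    static int encodePointer(int offset) {
        return offset | POINTER_FLAG;           // offset may use all 31 lower bits
    }

    static boolean isPointer(int packed) {
        return (packed & POINTER_FLAG) != 0;    // inline vInt payloads leave bit 31 clear
    }

    static int pointerOffset(int packed) {
        return packed & ~POINTER_FLAG;
    }

    public static void main(String[] args) {
        int p = encodePointer(20_000_000);      // beyond the old 16M (2^24) limit
        System.out.println(isPointer(p));       // true
        System.out.println(pointerOffset(p));   // 20000000
        System.out.println(isPointer(0x7F));    // false: small inline payload
    }
}
```

Because the flag occupies a bit that packed values can never set, the change costs no extra space over the old encoding.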



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11240) Raise UnInvertedField internal limit

2017-08-16 Thread Toke Eskildsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toke Eskildsen updated SOLR-11240:
--
Affects Version/s: (was: master (8.0))

> Raise UnInvertedField internal limit
> 
>
> Key: SOLR-11240
> URL: https://issues.apache.org/jira/browse/SOLR-11240
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.5.4, 6.6
>Reporter: Toke Eskildsen
>Assignee: Toke Eskildsen
>Priority: Minor
>  Labels: easyfix
> Fix For: master (8.0)
>
>
> {{UnInvertedField}} has via {{DocTermOrds}} an internal limitation of 2^24 
> bytes for byte-arrays holding term ordinals. For String faceting on 
> high-cardinality Text fields, this can trigger the exception with "Too many 
> values for UnInvertedField". A search for that phrase shows that the 
> exception is encountered in the wild.
> The limitation is due to the packing being a combination of values and 
> pointers: If the values (term ordinals) for a given document-ID can fit in an 
> integer, they are stored directly. If the value of the first 8 bits in the 
> integer is 1, it signals that the following 3 bytes (24 bits) is a pointer 
> into a byte-array, limiting the array-size to 16M (2^24).
> Solution: Due to the values being packed as vInts, bit 31 (the last bit) of 
> the integer will never be 1 if the integer contains values. This means that 
> this bit can be used for signalling whether or not the preceding bits 
> should be parsed as values or a pointer. The effective pointer size is thus 
> 2^31, which matches the array-length limit in Java. Changing the signalling 
> mechanism does not affect space requirements and should not affect 
> performance.
> Note that this is only a 128-fold increase over the 2^24 limit, not an 
> elimination: Performing uninverted Text field faceting on 100M documents with 
> 5K terms each will still raise an exception.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11240) Raise UnInvertedField internal limit

2017-08-16 Thread Toke Eskildsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toke Eskildsen updated SOLR-11240:
--
Fix Version/s: (was: 6.6)
   (was: 5.5.4)

> Raise UnInvertedField internal limit
> 
>
> Key: SOLR-11240
> URL: https://issues.apache.org/jira/browse/SOLR-11240
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: 5.5.4, 6.6
>Reporter: Toke Eskildsen
>Assignee: Toke Eskildsen
>Priority: Minor
>  Labels: easyfix
> Fix For: master (8.0)
>
>
> {{UnInvertedField}} has via {{DocTermOrds}} an internal limitation of 2^24 
> bytes for byte-arrays holding term ordinals. For String faceting on 
> high-cardinality Text fields, this can trigger the exception with "Too many 
> values for UnInvertedField". A search for that phrase shows that the 
> exception is encountered in the wild.
> The limitation is due to the packing being a combination of values and 
> pointers: If the values (term ordinals) for a given document-ID can fit in an 
> integer, they are stored directly. If the value of the first 8 bits in the 
> integer is 1, it signals that the following 3 bytes (24 bits) is a pointer 
> into a byte-array, limiting the array-size to 16M (2^24).
> Solution: Due to the values being packed as vInts, bit 31 (the last bit) of 
> the integer will never be 1 if the integer contains values. This means that 
> this bit can be used for signalling whether or not the preceding bits 
> should be parsed as values or a pointer. The effective pointer size is thus 
> 2^31, which matches the array-length limit in Java. Changing the signalling 
> mechanism does not affect space requirements and should not affect 
> performance.
> Note that this is only a 128-fold increase over the 2^24 limit, not an 
> elimination: Performing uninverted Text field faceting on 100M documents with 
> 5K terms each will still raise an exception.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11069) LASTPROCESSEDVERSION for CDCR is flawed when buffering is enabled

2017-08-16 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128716#comment-16128716
 ] 

Shalin Shekhar Mangar commented on SOLR-11069:
--

Looks good to me, Erick! Thanks for fixing this.

> LASTPROCESSEDVERSION for CDCR is flawed when buffering is enabled
> -
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
> Attachments: SOLR-11069.patch, SOLR-11069.patch, SOLR-11069.patch
>
>
> {{LASTPROCESSEDVERSION}} (abbr. LPV) action for CDCR breaks down due to a 
> poorly initialised and maintained buffer log for either source or target 
> cluster core nodes.
> If buffer is enabled for cores of either the source or target cluster, it returns 
> {{-1}}, *irrespective of the number of entries in the tlog read by the {{leader}}* 
> node of each shard of the respective collection of the respective cluster. Once 
> disabled, it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchroniser may not work 
> as expected, i.e. it provides an incorrect seek position for the {{non-leader}} 
> nodes to advance to. I am not sure whether this is the intended behavior for 
> sync, but it surely doesn't feel right.
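The flaw described above can be illustrated with a small sketch. This is a hypothetical plain-Java rendering of the broken behavior only, not Solr's actual CDCR code; names are illustrative:

```java
// With buffering enabled, the LASTPROCESSEDVERSION action answers -1 no matter
// how far the leader has actually read into the tlog; with buffering disabled
// it answers the leader's real last processed version.
public class LpvFlawSketch {
    static long lastProcessedVersion(boolean bufferEnabled, long leaderTlogVersion) {
        if (bufferEnabled) {
            return -1; // flawed: hides the leader's actual progress
        }
        return leaderTlogVersion;
    }

    public static void main(String[] args) {
        System.out.println(lastProcessedVersion(true, 1630281234L));  // -1
        System.out.println(lastProcessedVersion(false, 1630281234L)); // 1630281234
    }
}
```

Any consumer that seeks non-leader replicas to this value (as the Update Log Synchroniser does) is therefore steered to the wrong position whenever buffering is on.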



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-11005) inconsistency when maxShardsPerNode used along with policies

2017-08-16 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reopened SOLR-11005:
---

This is not required anymore because of SOLR-11239

> inconsistency when maxShardsPerNode used along with policies
> 
>
> Key: SOLR-11005
> URL: https://issues.apache.org/jira/browse/SOLR-11005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.1
>
>
> The attribute maxShardsPerNode conflicts with the conditions in the new 
> Policy framework.
> For example, I can say maxShardsPerNode=5 and I can have a policy 
> {code}
> { replica:"<3" , shard: "#ANY", node:"#ANY"}
> {code}
> So, it makes no sense to persist this attribute in the collection state.json. 
> Ideally, we would like to keep this as a part of the policy and policy only.
> h3. Proposed new behavior
> If the new policy framework is being used, {{maxShardsPerNode}} should result in 
> creating a new collection-specific policy with the correct condition. For 
> example, if a collection "x" is created with the parameter 
> {{maxShardsPerNode=2}}, we will create a new policy in autoscaling.json:
> {code}
> {
> "policies":{
> "x_COLL_POLICY" : [{replica:"<3", shard:"#ANY", node:"#ANY"}]
> }
> }
> {code}
> This policy will be referred to in the state.json. There will be no attribute 
> called {{maxShardsPerNode}} persisted to the state.json.
> If there is already a policy specified for the collection, Solr should 
> throw an error asking the user to edit the policy directly.
> h3. The name is bad
> We must rename the attribute {{maxShardsPerNode}} to {{maxReplicasPerNode}}. 
> This should be a backward-compatible change: the old name will continue to 
> work and the API will give a friendly warning if the old name is used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11239) Deprecate maxShardsPerNode when autoscaling policies are used

2017-08-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-11239.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.0

Thanks Noble!

> Deprecate maxShardsPerNode when autoscaling policies are used
> -
>
> Key: SOLR-11239
> URL: https://issues.apache.org/jira/browse/SOLR-11239
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11239.patch, SOLR-11239.patch
>
>
> We have found that {{maxShardsPerNode}} is not compatible with the new 
> auto scaling policies, so we need to deprecate that parameter when the 
> autoscaling policies are used.
> The {{bin/solr}} script passes that parameter all the time, irrespective of 
> whether the user needs it or not. 
> We need to fix it for 7.0 itself.
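The semantics adopted for the fix, as described elsewhere in this thread, can be sketched briefly: -1 becomes a sentinel meaning "no per-node limit", and the parameter is rejected outright when a policy is in effect. This is a hypothetical plain-Java illustration, not Solr's actual collection-creation code:

```java
// Resolve the effective per-node replica limit for a CREATE request.
public class MaxShardsSketch {
    static int effectiveLimit(int maxShardsPerNode, boolean policyInEffect) {
        if (policyInEffect && maxShardsPerNode != -1) {
            // maxShardsPerNode conflicts with policy conditions, so refuse it.
            throw new IllegalArgumentException(
                "maxShardsPerNode is not supported when a policy is in effect");
        }
        return maxShardsPerNode == -1 ? Integer.MAX_VALUE : maxShardsPerNode;
    }

    public static void main(String[] args) {
        System.out.println(effectiveLimit(-1, true)); // sentinel: no limit
        System.out.println(effectiveLimit(5, false)); // legacy behavior kept
    }
}
```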



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11239) Deprecate maxShardsPerNode when autoscaling policies are used

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128677#comment-16128677
 ] 

ASF subversion and git services commented on SOLR-11239:


Commit 057f9a63f62901fea73e03021ab319c26354b508 in lucene-solr's branch 
refs/heads/branch_7_0 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=057f9a6 ]

SOLR-11239: A special value of -1 can be specified for 'maxShardsPerNode' to 
denote that there is no limit. The bin/solr script sends maxShardsPerNode=-1 
when creating collections. The use of maxShardsPerNode is not supported when a 
cluster policy is in effect or when a collection-specific policy is specified 
during collection creation

(cherry picked from commit 7a576ffa1b1f4b9632ff2767686fe203949c2aaf)

# Conflicts:
#   solr/CHANGES.txt

(cherry picked from commit 73015a6)


> Deprecate maxShardsPerNode when autoscaling policies are used
> -
>
> Key: SOLR-11239
> URL: https://issues.apache.org/jira/browse/SOLR-11239
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Attachments: SOLR-11239.patch, SOLR-11239.patch
>
>
> We have found that {{maxShardsPerNode}} is not compatible with the new 
> auto scaling policies, so we need to deprecate that parameter when the 
> autoscaling policies are used.
> The {{bin/solr}} script passes that parameter all the time, irrespective of 
> whether the user needs it or not. 
> We need to fix it for 7.0 itself.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11239) Deprecate maxShardsPerNode when autoscaling policies are used

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128675#comment-16128675
 ] 

ASF subversion and git services commented on SOLR-11239:


Commit 73015a676733e743e45c2d467cf13cd173a0 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=73015a6 ]

SOLR-11239: A special value of -1 can be specified for 'maxShardsPerNode' to 
denote that there is no limit. The bin/solr script sends maxShardsPerNode=-1 
when creating collections. The use of maxShardsPerNode is not supported when a 
cluster policy is in effect or when a collection-specific policy is specified 
during collection creation

(cherry picked from commit 7a576ffa1b1f4b9632ff2767686fe203949c2aaf)

# Conflicts:
#   solr/CHANGES.txt


> Deprecate maxShardsPerNode when autoscaling policies are used
> -
>
> Key: SOLR-11239
> URL: https://issues.apache.org/jira/browse/SOLR-11239
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Attachments: SOLR-11239.patch, SOLR-11239.patch
>
>
> We have found that {{maxShardsPerNode}} is not compatible with the new 
> auto scaling policies, so we need to deprecate that parameter when the 
> autoscaling policies are used.
> The {{bin/solr}} script passes that parameter all the time, irrespective of 
> whether the user needs it or not. 
> We need to fix it for 7.0 itself.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11239) Deprecate maxShardsPerNode when autoscaling policies are used

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128672#comment-16128672
 ] 

ASF subversion and git services commented on SOLR-11239:


Commit 7a576ffa1b1f4b9632ff2767686fe203949c2aaf in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a576ff ]

SOLR-11239: A special value of -1 can be specified for 'maxShardsPerNode' to 
denote that there is no limit. The bin/solr script sends maxShardsPerNode=-1 
when creating collections. The use of maxShardsPerNode is not supported when a 
cluster policy is in effect or when a collection-specific policy is specified 
during collection creation


> Deprecate maxShardsPerNode when autoscaling policies are used
> -
>
> Key: SOLR-11239
> URL: https://issues.apache.org/jira/browse/SOLR-11239
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Attachments: SOLR-11239.patch, SOLR-11239.patch
>
>
> We have found that {{maxShardsPerNode}} is not compatible with the new 
> auto scaling policies, so we need to deprecate that parameter when the 
> autoscaling policies are used.
> The {{bin/solr}} script passes that parameter all the time, irrespective of 
> whether the user needs it or not. 
> We need to fix it for 7.0 itself.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+181) - Build # 20327 - Still unstable!

2017-08-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20327/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseParallelGC --illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:38323/collMinRf_1x3 due to: Path 
not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:38323/collMinRf_1x3 due to: Path not found: /id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([262B93D78B5E421F:AE7FAC0D25A22FE7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
at 
org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Resolved] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-11243.
--
Resolution: Fixed

> Replica Placement rules are ignored if a cluster policy exists
> --
>
> Key: SOLR-11243
> URL: https://issues.apache.org/jira/browse/SOLR-11243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11243.patch, SOLR-11243_test_fix.patch
>
>
> Due to a bug introduced with policy framework, if a cluster policy exists 
> then the policy framework is used regardless of whether the collection is 
> configured with replica placement rules.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128662#comment-16128662
 ] 

ASF subversion and git services commented on SOLR-11243:


Commit 12e46d4a8179d2c1d2f2d450d29d8bfc0a95d01d in lucene-solr's branch 
refs/heads/branch_7_0 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=12e46d4 ]

SOLR-11243: Replica Placement rules are ignored if a cluster policy exists

(cherry picked from commit ae43ffe354ff1d0389b8586d628f391ca4d85915)

(cherry picked from commit 687ebe8)


> Replica Placement rules are ignored if a cluster policy exists
> --
>
> Key: SOLR-11243
> URL: https://issues.apache.org/jira/browse/SOLR-11243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11243.patch, SOLR-11243_test_fix.patch
>
>
> Due to a bug introduced with policy framework, if a cluster policy exists 
> then the policy framework is used regardless of whether the collection is 
> configured with replica placement rules.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128663#comment-16128663
 ] 

ASF subversion and git services commented on SOLR-11243:


Commit 07aa8e42f7425c7615d14df5e9c0294d47a7d70a in lucene-solr's branch 
refs/heads/branch_7_0 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=07aa8e4 ]

SOLR-11243: Fix for the AutoScalingHandlerTest.testReadApi

(cherry picked from commit 8d8c629)

(cherry picked from commit f3bffb8)


> Replica Placement rules are ignored if a cluster policy exists
> --
>
> Key: SOLR-11243
> URL: https://issues.apache.org/jira/browse/SOLR-11243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11243.patch, SOLR-11243_test_fix.patch
>
>
> Due to a bug introduced with policy framework, if a cluster policy exists 
> then the policy framework is used regardless of whether the collection is 
> configured with replica placement rules.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128656#comment-16128656
 ] 

ASF subversion and git services commented on SOLR-11243:


Commit 687ebe83d7c1dff96aa351ecc4fdea25b8db48a5 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=687ebe8 ]

SOLR-11243: Replica Placement rules are ignored if a cluster policy exists

(cherry picked from commit ae43ffe354ff1d0389b8586d628f391ca4d85915)


> Replica Placement rules are ignored if a cluster policy exists
> --
>
> Key: SOLR-11243
> URL: https://issues.apache.org/jira/browse/SOLR-11243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11243.patch, SOLR-11243_test_fix.patch
>
>
> Due to a bug introduced with policy framework, if a cluster policy exists 
> then the policy framework is used regardless of whether the collection is 
> configured with replica placement rules.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128657#comment-16128657
 ] 

ASF subversion and git services commented on SOLR-11243:


Commit f3bffb85997ee7a8e0a5d67ad55ab263bd0ecf6c in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f3bffb8 ]

SOLR-11243: Fix for the AutoScalingHandlerTest.testReadApi

(cherry picked from commit 8d8c629)


> Replica Placement rules are ignored if a cluster policy exists
> --
>
> Key: SOLR-11243
> URL: https://issues.apache.org/jira/browse/SOLR-11243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11243.patch, SOLR-11243_test_fix.patch
>
>
> Due to a bug introduced with policy framework, if a cluster policy exists 
> then the policy framework is used regardless of whether the collection is 
> configured with replica placement rules.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11244) Query DSL for Solr

2017-08-16 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11244:

Description: 
It would be great if Solr had a powerful query DSL. This ticket is an extension 
of [http://yonik.com/solr-json-request-api/].

Here are several examples of the Query DSL:
{code}
curl -XGET http://localhost:8983/solr/query -d '
{
    "query" : {
        "lucene" : {
            "df" : "content",
            "query" : "solr lucene"
        }
    }
}'
{code}
the above example can be rewritten as (because lucene is the default qparser)
{code}
curl -XGET http://localhost:8983/solr/query -d '
{
    "query" : "content:(solr lucene)"
}'
{code}
more complex example:
{code}
curl -XGET http://localhost:8983/solr/query -d '
{
    "query" : {
        "boost" : {
            "query" : {
                "lucene" : {
                    "q.op" : "AND",
                    "df" : "cat_s",
                    "query" : "A"
                }
            },
            "b" : "log(popularity)"
        }
    }
}'
{code}

I call it Json Query Object (JQO), and it is defined as:
- It can be a valid query string for the Lucene query parser, for example: 
"title:solr"
- It can be a valid local parameters string, for example: "{!dismax 
qf=myfield}solr rocks"
- It can be a json object with a structure like this:
{code}
{
  "query-parser-name" : {
 "param1" : "value1",
 "param2" : "value2",
 "query" : ,
 "another-param" : 
  }
}
{code}
Therefore the above dismax query can be rewritten as (note that the query 
argument from the local parameters string is put as the value of the {{query}} 
field):
{code}
{
    "dismax" : {
        "qf" : "myfield",
        "query" : "solr rocks"
    }
}
{code}

I will attach an HTML file containing more examples of the Query DSL.
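The three JQO forms above can be sketched with a small dispatcher. This is a hypothetical plain-Java illustration of the classification rules only, not Solr's actual request parsing; names are illustrative:

```java
import java.util.Map;

// Classify a Json Query Object: a plain string is a Lucene query or a
// local-params string, and a one-key object selects a query parser by name.
public class JqoSketch {
    static String describe(Object jqo) {
        if (jqo instanceof String) {
            String s = (String) jqo;
            return s.startsWith("{!") ? "local-params: " + s : "lucene: " + s;
        }
        if (jqo instanceof Map) {
            // the single key names the qparser; its value holds the params,
            // including a nested "query" field that is itself a JQO
            Map<?, ?> m = (Map<?, ?>) jqo;
            return "qparser: " + m.keySet().iterator().next();
        }
        throw new IllegalArgumentException("not a valid JQO");
    }

    public static void main(String[] args) {
        System.out.println(describe("title:solr"));
        System.out.println(describe("{!dismax qf=myfield}solr rocks"));
        System.out.println(describe(Map.of("dismax", Map.of("qf", "myfield"))));
    }
}
```

Because the object form nests JQOs under its {{query}} field, the same dispatch applies recursively, which is what makes compositions like the boost example above possible.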

  was:
It will be great if Solr has a powerful query DSL. This ticket is an extension 
of [http://yonik.com/solr-json-request-api/].

Here are several examples of Query DSL
{code}
curl -XGET http://localhost:8983/solr/query -d '
{
"query" : {
"lucene" : {
"df" : "content",
"query" : "solr lucene"
}
}
}
{code}
the above example can be rewritten as (because lucene is the default qparser)
{code}
curl -XGET http://localhost:8983/solr/query -d '
{
"query" : "content:(solr lucene)"
}
{code}
more complex example:
{code}
curl -XGET http://localhost:8983/solr/query -d '
{ 
"query" : {
"boost" : {
"query" : {
"lucene" : {
"q.op" : "AND",
"df" : "cat_s",
"query" : "A"
}
}
"b" : "log(popularity)"
}
}
}
{code}

I call it Json Query Object (JQO) and It defined as :
- It can be a valid query string for Lucene query parser, for example : 
"title:solr"
- It can be a valid local parameters string, for example : "{!dismax 
qf=myfield}solr rocks"
- It can be a json object with structure like this 
{code}
{
  "query-parser-name" : {
 "param1" : "value1",
 "param2" : "value2",
 "query" : ,
 "another-param" : 
  }
}
{code}
Therefore the above dismax query can be rewritten as ( be noticed that the 
query argument in local parameters, is put as value of {{query}} field )
{
  "dismax" : {
 "qf" : "myfield"
 "query" : "solr rocks"
  }
}

I will attach an HTML, contain more examples of Query DSL.


> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11244.patch, Solr Query DSL - examples.html
>
>
> It will be great if Solr has a powerful query DSL. This ticket is an 
> extension of [http://yonik.com/solr-json-request-api/].
> Here are several examples of Query DSL
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : {
> "lucene" : {
> "df" : "content",
> "query" : "solr lucene"
> }
> }
> }
> {code}
> the above example can be rewritten as (because lucene is the default qparser)
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : "content:(solr lucene)"
> }
> {code}
> more complex example:
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> { 
> "query" : {
> "boost" : {
> "query" : {
> "lucene" : {
> "q.op" : "AND",
> "df" : "cat_s",
> "query" : "A"
> }
> }
> "b" : "log(popularity)"
> }
> }
> }
> {code}
> I call it Json Query Object (JQO) and It defined as :
> - It can be a valid query string for Lucene query parser, for 

[jira] [Updated] (SOLR-11244) Query DSL for Solr

2017-08-16 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11244:

Description: 
It would be great if Solr had a powerful query DSL. This ticket is an extension 
of [http://yonik.com/solr-json-request-api/].

Here are several examples of Query DSL
{code}
curl -XGET http://localhost:8983/solr/query -d '
{
"query" : {
"lucene" : {
"df" : "content",
"query" : "solr lucene"
}
}
}
{code}
The above example can be rewritten as follows (because lucene is the default qparser):
{code}
curl -XGET http://localhost:8983/solr/query -d '
{
"query" : "content:(solr lucene)"
}
{code}
A more complex example:
{code}
curl -XGET http://localhost:8983/solr/query -d '
{ 
"query" : {
"boost" : {
"query" : {
"lucene" : {
"q.op" : "AND",
"df" : "cat_s",
"query" : "A"
}
},
"b" : "log(popularity)"
}
}
}
{code}

I call this a Json Query Object (JQO), defined as follows:
- It can be a valid query string for Lucene query parser, for example : 
"title:solr"
- It can be a valid local parameters string, for example : "{!dismax 
qf=myfield}solr rocks"
- It can be a json object with structure like this 
{code}
{
  "query-parser-name" : {
 "param1" : "value1",
 "param2" : "value2",
 "query" : ,
 "another-param" : 
  }
}
{code}
Therefore the above dismax query can be rewritten as follows (note that the query 
argument of the local-parameters form becomes the value of the {{query}} field):
{code}
{
  "dismax" : {
     "qf" : "myfield",
     "query" : "solr rocks"
  }
}
{code}

I will attach an HTML file containing more examples of the Query DSL.

  was:
It will be great if Solr has a powerful query DSL. This ticket is an extension 
of [http://yonik.com/solr-json-request-api/].

Here are definition of Json Query Object (JQO) :
- It can be a valid query string for Lucene query parser, for example : 
"title:solr"
- It can be a valid local parameters string, for example : "{!dismax 
qf=myfield}solr rocks"
- It can be a json object with structure like this 
{code}
{
  "query-parser-name" : {
 "param1" : "value1",
 "param2" : "value2",
 "query" : ,
 "another-param" : 
  }
}
{code}
Therefore the above dismax query can be rewritten as ( be noticed that the 
query argument in local parameters, is put as value of {{query}} field )
{
  "dismax" : {
 "qf" : "myfield"
 "query" : "solr rocks"
  }
}

I will attach an HTML, contain more examples of Query DSL.


> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11244.patch, Solr Query DSL - examples.html
>
>
> It will be great if Solr has a powerful query DSL. This ticket is an 
> extension of [http://yonik.com/solr-json-request-api/].
> Here are several examples of Query DSL
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : {
> "lucene" : {
> "df" : "content",
> "query" : "solr lucene"
> }
> }
> }
> {code}
> the above example can be rewritten as (because lucene is the default qparser)
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : "content:(solr lucene)"
> }
> {code}
> more complex example:
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> { 
> "query" : {
> "boost" : {
> "query" : {
> "lucene" : {
> "q.op" : "AND",
> "df" : "cat_s",
> "query" : "A"
> }
> }
> "b" : "log(popularity)"
> }
> }
> }
> {code}
> I call it Json Query Object (JQO) and It defined as :
> - It can be a valid query string for Lucene query parser, for example : 
> "title:solr"
> - It can be a valid local parameters string, for example : "{!dismax 
> qf=myfield}solr rocks"
> - It can be a json object with structure like this 
> {code}
> {
>   "query-parser-name" : {
>  "param1" : "value1",
>  "param2" : "value2",
>  "query" : ,
>  "another-param" : 
>   }
> }
> {code}
> Therefore the above dismax query can be rewritten as ( be noticed that the 
> query argument in local parameters, is put as value of {{query}} field )
> {
>   "dismax" : {
>  "qf" : "myfield"
>  "query" : "solr rocks"
>   }
> }
> I will attach an HTML, contain more examples of Query DSL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: 

[jira] [Comment Edited] (SOLR-11244) Query DSL for Solr

2017-08-16 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128632#comment-16128632
 ] 

Cao Manh Dat edited comment on SOLR-11244 at 8/16/17 11:03 AM:
---

Here is a patch for this feature; it contains:
- tests for the JSON query DSL
- a boolean query parser with syntax like this
{code}
{
  "bool" : {
"must" : [, , ...],
"must_not" : [, , ...],
"filter" : [, , ...],
"should" : [, , ...]
  }
}
{code}
- the implementation of the query DSL. Basically, the JSON query object is 
converted into a local-parameters string. For example, the JQO below
{code}
boost : {   
query : {
lucene : {   
df : cat_s,   
query : A 
}   
}, 
b : 1.5   
} 
{code}
is converted into
{code}
{!boost b=1.5 v='{!lucene df=cat_s v=A}' }
{code}
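That conversion can be sketched in a few lines of Python. This is my own illustration of the idea, not the patch's code; {{to_local_params}} is a made-up name, and the real implementation must also handle escaping and other edge cases:

```python
def to_local_params(jqo):
    """Render a JQO as a Solr local-parameters string (illustrative sketch)."""
    # A string JQO is already a query string / local-parameters string.
    if isinstance(jqo, str):
        return jqo
    # Otherwise: a single-key object {parser-name: {param: value, ...}}.
    (parser, params), = jqo.items()
    parts = ["!" + parser]
    for name, value in params.items():
        if name == "query":
            # The nested query becomes the local-params "v" argument.
            value = to_local_params(value)
            name = "v"
        value = str(value)
        if " " in value or value.startswith("{"):
            value = "'%s'" % value  # quote nested parsers / multi-word values
        parts.append("%s=%s" % (name, value))
    return "{%s}" % " ".join(parts)

jqo = {"boost": {"query": {"lucene": {"df": "cat_s", "query": "A"}}, "b": 1.5}}
print(to_local_params(jqo))  # {!boost v='{!lucene df=cat_s v=A}' b=1.5}
```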


was (Author: caomanhdat):
Here are a patch for this feature, it contains 
- test for json query dsls
- a boolean query parser with syntax like this
{code}
{
  "bool" : {
"must" : [, , ...],
"must_not" : [, , ...],
"filter" : [, , ...],
"should" : [, , ...]
  }
}
{code}
- the implementation of query dsl. Basically, the json query object is 
converted into a local parameter string. For example, the below JQO
{code}
boost : {   
query : {
lucene : {   
df : cat_s,   
query : A 
}   
}, 
b : 1.5   
} 
{code}
is converted into
{!boost b=1.5 v='{!lucene df=cat_s v=A}' }

> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11244.patch, Solr Query DSL - examples.html
>
>
> It will be great if Solr has a powerful query DSL. This ticket is an 
> extension of [http://yonik.com/solr-json-request-api/].
> Here are definition of Json Query Object (JQO) :
> - It can be a valid query string for Lucene query parser, for example : 
> "title:solr"
> - It can be a valid local parameters string, for example : "{!dismax 
> qf=myfield}solr rocks"
> - It can be a json object with structure like this 
> {code}
> {
>   "query-parser-name" : {
>  "param1" : "value1",
>  "param2" : "value2",
>  "query" : ,
>  "another-param" : 
>   }
> }
> {code}
> Therefore the above dismax query can be rewritten as ( be noticed that the 
> query argument in local parameters, is put as value of {{query}} field )
> {
>   "dismax" : {
>  "qf" : "myfield"
>  "query" : "solr rocks"
>   }
> }
> I will attach an HTML, contain more examples of Query DSL.






[jira] [Updated] (SOLR-11244) Query DSL for Solr

2017-08-16 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11244:

Attachment: SOLR-11244.patch

Here is a patch for this feature; it contains:
- tests for the JSON query DSL
- a boolean query parser with syntax like this
{code}
{
  "bool" : {
"must" : [, , ...],
"must_not" : [, , ...],
"filter" : [, , ...],
"should" : [, , ...]
  }
}
{code}
- the implementation of the query DSL. Basically, the JSON query object is 
converted into a local-parameters string. For example, the JQO below
{code}
boost : {   
query : {
lucene : {   
df : cat_s,   
query : A 
}   
}, 
b : 1.5   
} 
{code}
is converted into
{code}
{!boost b=1.5 v='{!lucene df=cat_s v=A}' }
{code}
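The bool syntax above maps naturally onto Lucene's occur flags. A rough sketch of rendering a bool JQO (with string sub-queries) into classic Lucene query syntax follows; this is my own illustration, not code from the patch, and {{filter}} is omitted since plain Lucene syntax has no cache-only clause prefix:

```python
def bool_to_lucene(jqo):
    """Render a bool JQO with string sub-queries as a Lucene query string."""
    clauses = []
    # must -> required (+), must_not -> prohibited (-), should -> optional.
    for occur, prefix in (("must", "+"), ("must_not", "-"), ("should", "")):
        for sub in jqo.get("bool", {}).get(occur, []):
            clauses.append("%s(%s)" % (prefix, sub))
    return " ".join(clauses)

print(bool_to_lucene({"bool": {"must": ["cat_s:A"], "must_not": ["id:1"]}}))
# +(cat_s:A) -(id:1)
```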

> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11244.patch, Solr Query DSL - examples.html
>
>
> It will be great if Solr has a powerful query DSL. This ticket is an 
> extension of [http://yonik.com/solr-json-request-api/].
> Here are definition of Json Query Object (JQO) :
> - It can be a valid query string for Lucene query parser, for example : 
> "title:solr"
> - It can be a valid local parameters string, for example : "{!dismax 
> qf=myfield}solr rocks"
> - It can be a json object with structure like this 
> {code}
> {
>   "query-parser-name" : {
>  "param1" : "value1",
>  "param2" : "value2",
>  "query" : ,
>  "another-param" : 
>   }
> }
> {code}
> Therefore the above dismax query can be rewritten as ( be noticed that the 
> query argument in local parameters, is put as value of {{query}} field )
> {
>   "dismax" : {
>  "qf" : "myfield"
>  "query" : "solr rocks"
>   }
> }
> I will attach an HTML, contain more examples of Query DSL.






[jira] [Commented] (SOLR-11164) OriginalScoreFeature causes NullPointerException during feature logging with SolrCloud mode.

2017-08-16 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128629#comment-16128629
 ] 

Christine Poerschke commented on SOLR-11164:


bq. Is it better to create the patch from the master branch?

Good question. Yes, I think so; typically patches are against the master branch 
(unless the issue is broken only in, say, branch_7x), and once committed to 
master they would be backported to the other branches via cherry-pick. 
https://wiki.apache.org/solr/HowToContribute has (much) more detailed info e.g. 
including _ant precommit_ etc.

> OriginalScoreFeature causes NullPointerException during feature logging with 
> SolrCloud mode.
> 
>
> Key: SOLR-11164
> URL: https://issues.apache.org/jira/browse/SOLR-11164
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Affects Versions: 6.6
>Reporter: Yuki Yano
> Attachments: SOLR-11164.patch, SOLR-11164.patch
>
>
> In FeatureTransfer, OriginalScoreFeature uses original Query instance 
> preserved in LTRScoringQuery for the evaluation.
> This query is set in RankQuery#wrap during QueryComponent#process.
> With SolrCloud mode, document searches take two steps: finding top-N document 
> ids, and filling documents of found ids.
> In this case, FeatureTransformer works in the second step and tries to 
> extract features with LTRScoringQuery built in QueryComponent#prepare.
> However, because the second step doesn't call QueryComponent#process, the 
> original query of LTRScoringQuery remains null and this causes 
> NullPointerException while evaluating OriginalScoreFeature.
> We can get the original query from ResultContext which is an argument of 
> DocTransformer#setContext, thus this problem can solve by using it if 
> LTRScoringQuery doesn't have correct original query.






[jira] [Updated] (SOLR-11244) Query DSL for Solr

2017-08-16 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11244:

Attachment: Solr Query DSL - examples.html

> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: Solr Query DSL - examples.html
>
>
> It will be great if Solr has a powerful query DSL. This ticket is an 
> extension of [http://yonik.com/solr-json-request-api/].
> Here are definition of Json Query Object (JQO) :
> - It can be a valid query string for Lucene query parser, for example : 
> "title:solr"
> - It can be a valid local parameters string, for example : "{!dismax 
> qf=myfield}solr rocks"
> - It can be a json object with structure like this 
> {code}
> {
>   "query-parser-name" : {
>  "param1" : "value1",
>  "param2" : "value2",
>  "query" : ,
>  "another-param" : 
>   }
> }
> {code}
> Therefore the above dismax query can be rewritten as ( be noticed that the 
> query argument in local parameters, is put as value of {{query}} field )
> {
>   "dismax" : {
>  "qf" : "myfield"
>  "query" : "solr rocks"
>   }
> }
> I will attach an HTML, contain more examples of Query DSL.






[jira] [Updated] (SOLR-11244) Query DSL for Solr

2017-08-16 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11244:

Description: 
It would be great if Solr had a powerful query DSL. This ticket is an extension 
of [http://yonik.com/solr-json-request-api/].

Here is the definition of a Json Query Object (JQO):
- It can be a valid query string for Lucene query parser, for example : 
"title:solr"
- It can be a valid local parameters string, for example : "{!dismax 
qf=myfield}solr rocks"
- It can be a json object with structure like this 
{code}
{
  "query-parser-name" : {
 "param1" : "value1",
 "param2" : "value2",
 "query" : ,
 "another-param" : 
  }
}
{code}
Therefore the above dismax query can be rewritten as follows (note that the query 
argument of the local-parameters form becomes the value of the {{query}} field):
{code}
{
  "dismax" : {
     "qf" : "myfield",
     "query" : "solr rocks"
  }
}
{code}

I will attach an HTML file containing more examples of the Query DSL.

  was:
It will be great if Solr has a powerful query DSL. This ticket is an extension 
of [http://yonik.com/solr-json-request-api/].

Here are definition of Json Query Object (JQO) :
- It can be a valid query string for Lucene query parser, for example : 
"title:solr"
- It can be a valid local parameters string, for example : "{!dismax 
qf=myfield}solr rocks"
- It can be a json object with structure like this 
{code}
{
  "query-parser-name" : {
 "param1" : "value1",
 "param2" : "value2",
 "query" : ,
 "another-param" : 
  }
}
{code}
Therefore the above dismax query can be rewritten as
{
  "dismax" : {
 "qf" : "myfield"
 "query" : "solr rocks"
  }
}
Be noticed that the query argument in local parameters, is put as value of 
{{query}} field. I will attach an HTML, contain more examples of Query DSL.


> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: Solr Query DSL - examples.html
>
>
> It will be great if Solr has a powerful query DSL. This ticket is an 
> extension of [http://yonik.com/solr-json-request-api/].
> Here are definition of Json Query Object (JQO) :
> - It can be a valid query string for Lucene query parser, for example : 
> "title:solr"
> - It can be a valid local parameters string, for example : "{!dismax 
> qf=myfield}solr rocks"
> - It can be a json object with structure like this 
> {code}
> {
>   "query-parser-name" : {
>  "param1" : "value1",
>  "param2" : "value2",
>  "query" : ,
>  "another-param" : 
>   }
> }
> {code}
> Therefore the above dismax query can be rewritten as ( be noticed that the 
> query argument in local parameters, is put as value of {{query}} field )
> {
>   "dismax" : {
>  "qf" : "myfield"
>  "query" : "solr rocks"
>   }
> }
> I will attach an HTML, contain more examples of Query DSL.






[jira] [Created] (SOLR-11244) Query DSL for Solr

2017-08-16 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-11244:
---

 Summary: Query DSL for Solr
 Key: SOLR-11244
 URL: https://issues.apache.org/jira/browse/SOLR-11244
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Cao Manh Dat
Assignee: Cao Manh Dat


It would be great if Solr had a powerful query DSL. This ticket is an extension 
of [http://yonik.com/solr-json-request-api/].

Here is the definition of a Json Query Object (JQO):
- It can be a valid query string for Lucene query parser, for example : 
"title:solr"
- It can be a valid local parameters string, for example : "{!dismax 
qf=myfield}solr rocks"
- It can be a json object with structure like this 
{code}
{
  "query-parser-name" : {
 "param1" : "value1",
 "param2" : "value2",
 "query" : ,
 "another-param" : 
  }
}
{code}
Therefore the above dismax query can be rewritten as
{code}
{
  "dismax" : {
     "qf" : "myfield",
     "query" : "solr rocks"
  }
}
{code}
Note that the query argument of the local-parameters form becomes the value of the 
{{query}} field. I will attach an HTML file containing more examples of the Query DSL.






Re: Baby steps as new committer

2017-08-16 Thread Andrzej Białecki

> On 16 Aug 2017, at 11:14, Toke Eskildsen  wrote:
> 
> On Tue, 2017-08-15 at 13:10 -0700, Erick Erickson wrote:
> 
> [Git pull vs. patch]
> 
> It seems like a patch is the simplest path for a simple fix, so I'll
> start there.
> 
>> It's confusing to fill in the "Fix version" until you "Resolve" the
>> issue, i.e. commit a fix. I leave it blank when raising a JIRA, only
>> filling it in when I commit the fixes.
> 
> That makes sense. I'll adjust when I upload a patch. I'll forget about
> version 5 & 6 and just go for master. When 7.0 has been released, I
> would like to port to 7.1 (we need the fix ourselves for 7.x, so I
> might as well port it for everyone). Or can I port to 7.1 already? I am
> not sure about the state of branches when a release is in progress. 

There’s a branch_7x for future 7.x releases, including 7.1 - you can go ahead 
and port & commit to that branch. Commits to branch_7_0, which is the release 
branch, should be approved by the release manager (for 7.0 this is Anshum).

There’s a complicated issue of where to put the CHANGES.txt entry … my take on 
this is that if you know in advance what is the oldest version where the fix 
will be applied, you should add the entry to that section, no matter if you 
first commit to master or some other branch - eg. add it to the section on 7.0 
if that’s where it first appeared.

> 
> I read the "Strange Solr JIRA versions"-thread  and what I got from it
> is that I should never type in the version field in JIRA (only use the
> drop-down) and that I to stay clear of any 'x'-versions, should they be
> created by others.
> 
> 
> 
> Thank you,
> Toke Eskildsen, Royal Danish Library
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



Re: Baby steps as new committer

2017-08-16 Thread Toke Eskildsen
On Tue, 2017-08-15 at 13:10 -0700, Erick Erickson wrote:

[Git pull vs. patch]

It seems like a patch is the simplest path for a simple fix, so I'll
start there.

> It's confusing to fill in the "Fix version" until you "Resolve" the
> issue, i.e. commit a fix. I leave it blank when raising a JIRA, only
> filling it in when I commit the fixes.

That makes sense. I'll adjust when I upload a patch. I'll forget about
version 5 & 6 and just go for master. When 7.0 has been released, I
would like to port to 7.1 (we need the fix ourselves for 7.x, so I
might as well port it for everyone). Or can I port to 7.1 already? I am
not sure about the state of branches when a release is in progress. 

I read the "Strange Solr JIRA versions"-thread  and what I got from it
is that I should never type in the version field in JIRA (only use the
drop-down) and that I should stay clear of any 'x'-versions, should they be
created by others.



Thank you,
Toke Eskildsen, Royal Danish Library





[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20326 - Still Failing!

2017-08-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20326/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:38751/collMinRf_1x3 due to: Path 
not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:38751/collMinRf_1x3 due to: Path not found: /id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([EF1DFB4F6CFCBE47:6749C495C200D3BF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
at 
org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-11239) Deprecate maxSHardsPerNode when autoscaling policies are used

2017-08-16 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128458#comment-16128458
 ] 

Noble Paul commented on SOLR-11239:
---

Thanks [~shalinmangar]. LGTM . +1

> Deprecate maxSHardsPerNode when autoscaling policies are used
> -
>
> Key: SOLR-11239
> URL: https://issues.apache.org/jira/browse/SOLR-11239
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Attachments: SOLR-11239.patch, SOLR-11239.patch
>
>
> We have found out that {{maxShardPerNode}} is not compatible with the new 
> auto scaling policies. So we need to deprecate that parameter when the 
> autoscaling policies are used.
> the {{bin/solr}} script passes that parameter all the time irrespective of 
> whether the user needs it or not. 
> We need to fix it for 7.0 itself






[jira] [Commented] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128453#comment-16128453
 ] 

ASF subversion and git services commented on SOLR-11243:


Commit 8d8c629425d2f0369c08d34baf48654dd5326b0a in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8d8c629 ]

SOLR-11243: Fix for the AutoScalingHandlerTest.testReadApi


> Replica Placement rules are ignored if a cluster policy exists
> --
>
> Key: SOLR-11243
> URL: https://issues.apache.org/jira/browse/SOLR-11243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11243.patch, SOLR-11243_test_fix.patch
>
>
> Due to a bug introduced with policy framework, if a cluster policy exists 
> then the policy framework is used regardless of whether the collection is 
> configured with replica placement rules.






[jira] [Updated] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11243:
-
Attachment: SOLR-11243_test_fix.patch

Patch that fixes AutoScalingHandlerTest.testReadApi

> Replica Placement rules are ignored if a cluster policy exists
> --
>
> Key: SOLR-11243
> URL: https://issues.apache.org/jira/browse/SOLR-11243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11243.patch, SOLR-11243_test_fix.patch
>
>
> Due to a bug introduced with policy framework, if a cluster policy exists 
> then the policy framework is used regardless of whether the collection is 
> configured with replica placement rules.






[jira] [Updated] (SOLR-11239) Deprecate maxSHardsPerNode when autoscaling policies are used

2017-08-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11239:
-
Attachment: SOLR-11239.patch

Updated patch with fixes to the new test. This is ready. I'll run precommit and 
tests and then commit.

> Deprecate maxSHardsPerNode when autoscaling policies are used
> -
>
> Key: SOLR-11239
> URL: https://issues.apache.org/jira/browse/SOLR-11239
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Attachments: SOLR-11239.patch, SOLR-11239.patch
>
>
> We have found that {{maxShardsPerNode}} is not compatible with the new 
> autoscaling policies, so we need to deprecate that parameter when 
> autoscaling policies are used.
> The {{bin/solr}} script passes that parameter unconditionally, whether or 
> not the user needs it.
> We need to fix this for 7.0 itself.
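The incompatibility check could be sketched roughly as follows (Python pseudocode with hypothetical names; the real validation lives in Solr's Java collection-creation path):

```python
def validate_create_params(params, autoscaling_policy_active):
    """Guard collection-creation parameters.

    If an autoscaling policy is in effect, maxShardsPerNode must be
    rejected (or at least stripped) rather than silently accepted,
    since the two placement mechanisms conflict.
    """
    if autoscaling_policy_active and "maxShardsPerNode" in params:
        raise ValueError(
            "maxShardsPerNode is not supported when an autoscaling "
            "policy is configured; remove the parameter")
    return params
```

This also suggests why the {{bin/solr}} script must stop passing the parameter unconditionally: with a policy active, every scripted CREATE would fail the guard.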






[jira] [Commented] (SOLR-5129) If zookeeper is down, SolrCloud nodes will not start correctly, even if zookeeper is started later

2017-08-16 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128429#comment-16128429
 ] 

Cao Manh Dat commented on SOLR-5129:


[~hgadre] Thanks for the hint about the code.

Exposing this parameter via solr.xml is not a good idea, because the ZK 
connection constructed here is used for loading solr.xml ({{solr.xml}} is 
read after this property is read).
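The ordering constraint described here can be made concrete with a small sketch (illustrative Python with hypothetical names such as {{waitForZk}}; not Solr's actual startup code):

```python
def bootstrap_steps(sys_props):
    """Illustrate the bootstrap ordering problem.

    The ZK connect timeout must come from a system property: solr.xml
    may itself be stored in ZooKeeper, so it can only be parsed after
    the ZK client is already connected. Any setting that configures
    that client therefore cannot live in solr.xml.
    """
    timeout_secs = int(sys_props.get("waitForZk", 30))  # read BEFORE solr.xml
    return [
        ("connect_zookeeper", timeout_secs),  # needs the timeout
        ("load_solr_xml", None),              # needs the ZK client
    ]
```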

> If zookeeper is down, SolrCloud nodes will not start correctly, even if 
> zookeeper is started later
> --
>
> Key: SOLR-5129
> URL: https://issues.apache.org/jira/browse/SOLR-5129
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5129.patch
>
>
> Summary of report from user on mailing list:
> If zookeeper is down or doesn't have quorum when you start Solr nodes, they 
> will not function correctly, even if you later start zookeeper.  While 
> zookeeper is down, the log shows connection failures as expected.  When 
> zookeeper comes back, the log shows:
> INFO  - 2013-08-09 15:48:41.528; 
> org.apache.solr.common.cloud.ConnectionManager; Client->ZooKeeper status 
> change trigger but we are already closed
> At that point, Solr (admin UI and all other functions) does not work, and 
> won't work until it is restarted.
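The desired behavior is retry-until-quorum rather than a one-shot attempt that leaves the node in a dead "already closed" state. A minimal sketch of that retry loop (illustrative only, with a generic exception standing in for ZooKeeper's):

```python
def connect_with_retry(try_connect, max_attempts=5):
    """Keep retrying the initial ZooKeeper connection at startup.

    If every attempt fails, re-raise the last error; if ZooKeeper comes
    back within the retry window, the node starts normally instead of
    requiring a restart.
    """
    last_err = None
    for _ in range(max_attempts):
        try:
            return try_connect()
        except ConnectionError as err:
            last_err = err
    raise last_err
```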






[jira] [Comment Edited] (SOLR-5129) If zookeeper is down, SolrCloud nodes will not start correctly, even if zookeeper is started later

2017-08-16 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128429#comment-16128429
 ] 

Cao Manh Dat edited comment on SOLR-5129 at 8/16/17 7:18 AM:
-

[~hgadre] Thanks for the hint about the code.

Exposing this parameter via solr.xml is not a good idea, because the ZK 
connection constructed here is used for loading solr.xml ({{solr.xml}} is 
read after this property is read).


was (Author: caomanhdat):
[~hgadre] Thanks for the hint about code. 

About exposing this parameter view solr.xml is not a good idea. Because the zk 
connection constructed here is used for loading solr.xml ( {{solr.xml}} is read 
after this property is read )

> If zookeeper is down, SolrCloud nodes will not start correctly, even if 
> zookeeper is started later
> --
>
> Key: SOLR-5129
> URL: https://issues.apache.org/jira/browse/SOLR-5129
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Shawn Heisey
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5129.patch
>
>
> Summary of report from user on mailing list:
> If zookeeper is down or doesn't have quorum when you start Solr nodes, they 
> will not function correctly, even if you later start zookeeper.  While 
> zookeeper is down, the log shows connection failures as expected.  When 
> zookeeper comes back, the log shows:
> INFO  - 2013-08-09 15:48:41.528; 
> org.apache.solr.common.cloud.ConnectionManager; Client->ZooKeeper status 
> change trigger but we are already closed
> At that point, Solr (admin UI and all other functions) does not work, and 
> won't work until it is restarted.






[jira] [Commented] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128420#comment-16128420
 ] 

ASF subversion and git services commented on SOLR-11243:


Commit ae43ffe354ff1d0389b8586d628f391ca4d85915 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae43ffe ]

SOLR-11243: Replica Placement rules are ignored if a cluster policy exists


> Replica Placement rules are ignored if a cluster policy exists
> --
>
> Key: SOLR-11243
> URL: https://issues.apache.org/jira/browse/SOLR-11243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11243.patch
>
>
> Due to a bug introduced with the policy framework, if a cluster policy exists 
> then the policy framework is used regardless of whether the collection is 
> configured with replica placement rules.






[jira] [Updated] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11243:
-
Attachment: SOLR-11243.patch

Patch with test and fix.

> Replica Placement rules are ignored if a cluster policy exists
> --
>
> Key: SOLR-11243
> URL: https://issues.apache.org/jira/browse/SOLR-11243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.0, master (8.0)
>
> Attachments: SOLR-11243.patch
>
>
> Due to a bug introduced with the policy framework, if a cluster policy exists 
> then the policy framework is used regardless of whether the collection is 
> configured with replica placement rules.






[jira] [Created] (SOLR-11243) Replica Placement rules are ignored if a cluster policy exists

2017-08-16 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-11243:


 Summary: Replica Placement rules are ignored if a cluster policy 
exists
 Key: SOLR-11243
 URL: https://issues.apache.org/jira/browse/SOLR-11243
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 7.0, master (8.0)


Due to a bug introduced with the policy framework, if a cluster policy exists then 
the policy framework is used regardless of whether the collection is configured 
with replica placement rules.


