[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 826 - Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/826/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/8)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10003_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/8)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10003_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
at __randomizedtesting.SeedInfo.seed([25666E2637231D33:A5460B082660F595]:0)
at org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
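The assertion above is thrown by a poll-until-true helper (CloudTestUtils.waitForState) that gives up after a timeout while the simulated collection never reaches the expected state. A minimal standalone sketch of that polling pattern is below; the names, timings, and error message are illustrative only, not Solr's actual API.

```java
import java.util.function.BooleanSupplier;

// Sketch of the poll-until-true pattern behind waitForState-style helpers:
// repeatedly evaluate a predicate until it holds or a timeout elapses.
public class WaitForStateSketch {
    static void waitFor(BooleanSupplier predicate, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeoutMs * 1_000_000L;
        while (!predicate.getAsBoolean()) {
            if (System.nanoTime() > deadline) {
                // Mirrors the AssertionError seen in the Jenkins report.
                throw new AssertionError("Timeout waiting for collection to become active");
            }
            Thread.sleep(pollMs); // back off between cluster-state checks
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a replica that becomes active on the third poll.
        final int[] polls = {0};
        waitFor(() -> ++polls[0] >= 3, 5_000, 10);
        System.out.println("became active after " + polls[0] + " polls");
    }
}
```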

[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606691#comment-16606691
 ] 

ASF subversion and git services commented on SOLR-12028:


Commit de38f8168e822811728884310df7bfb9604e2a6e in lucene-solr's branch 
refs/heads/branch_7x from [~cp.erick...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=de38f81 ]

SOLR-12028: BadApple and AwaitsFix annotations usage

(cherry picked from commit 0dc66c236d5f61caad96e36454b4b15fbde35720)


> BadApple and AwaitsFix annotations usage
> 
>
> Key: SOLR-12028
> URL: https://issues.apache.org/jira/browse/SOLR-12028
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, 
> SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch
>
>
> There's a long discussion of this topic at SOLR-12016. Here's a summary:
> - BadApple annotations are used for tests that intermittently fail, say < 30% 
> of the time. Tests that fail more often should be moved to AwaitsFix. This is, 
> of course, a judgment call.
> - AwaitsFix annotations are used for tests whose problem, for some reason, 
> can't be fixed immediately. Likely reasons are third-party dependencies, 
> extreme difficulty tracking the failure down, dependency on another JIRA, etc.
> Jenkins jobs will typically run with BadApple disabled to cut down on noise. 
> Periodically, Jenkins jobs will be run with BadApples enabled so BadApple 
> tests won't be lost and reports can be generated. Tests that fail with 
> BadApples disabled require _immediate_ attention.
> The default for developers is that BadApple is enabled.
> If you are working on one of these tests and cannot get the test to fail 
> locally, it is perfectly acceptable to comment the annotation out. You should 
> let the dev list know that this is deliberate.
> This JIRA is a placeholder for BadApple tests to point to between the time 
> they're identified as BadApple and the time they're fixed, changed to 
> AwaitsFix, or assigned their own JIRA.
> I've assigned this to myself so I don't lose track of it. No one 
> person will fix all of these issues; this will be an ongoing technical-debt 
> cleanup effort.
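The gating described above is annotation-driven: the real annotations are LuceneTestCase.BadApple and LuceneTestCase.AwaitsFix, honored by the randomized-testing runner via build properties. The standalone mock below only illustrates the mechanism (skipping annotated methods based on a system property); the annotation, property name, and test-discovery loop here are this sketch's own simplifications, not the Lucene test framework.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Mock of BadApple-style test gating: annotated tests are skipped
// unless the (assumed) "tests.badapples" property is enabled.
public class BadAppleSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface BadApple { String bugUrl(); }

    @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12028")
    static void testFlaky() { System.out.println("ran testFlaky"); }

    static void testStable() { System.out.println("ran testStable"); }

    public static void main(String[] args) throws Exception {
        // For developers the default is enabled, per the summary above.
        boolean runBadApples = Boolean.parseBoolean(
                System.getProperty("tests.badapples", "true"));
        for (String name : new String[] {"testFlaky", "testStable"}) {
            Method m = BadAppleSketch.class.getDeclaredMethod(name);
            if (m.isAnnotationPresent(BadApple.class) && !runBadApples) {
                System.out.println("skipped " + name);
                continue;
            }
            m.invoke(null);
        }
    }
}
```

Running with -Dtests.badapples=false would print "skipped testFlaky" instead of running it.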



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Odd attribution for pushes

2018-09-06 Thread Erick Erickson
Recent pushes from me are being linked to Chris Erickson's profile,
apparently a Groovy/Grails sort.

An example, from SOLR-12732:

Commit 9e04375dc193d3815e9d755514a960f902c60cd2 in lucene-solr's
branch refs/heads/master from Chris Erickson

I've pinged infra; this is just an FYI in case you wonder who to blame.

Erick




[jira] [Resolved] (SOLR-12357) TRA: Pre-emptively create next collection

2018-09-06 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-12357.
-
   Resolution: Fixed
 Assignee: David Smiley
Fix Version/s: 7.5

What I committed evolved a little; it just simplifies the 
TrackingUpdateProcessorFactory a bit further.  And I fixed a stupid temporary 
change I had made to the group name in the test, so that we name the group after the 
running test and not a constant.  Beasting yielded no problems, so I'm feeling 
pretty good about it.

> TRA: Pre-emptively create next collection 
> --
>
> Key: SOLR-12357
> URL: https://issues.apache.org/jira/browse/SOLR-12357
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12357.patch
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> When adding data to a Time Routed Alias (TRA), we sometimes need to create 
> new collections.  Today we only do this synchronously – on demand when a 
> document is coming in.  But this can add delays, as inbound documents are 
> held up waiting for a collection to be created.  And there may be a problem, like a 
> lack of resources (e.g. ample SolrCloud nodes with space), that the policy 
> framework defines.  Such problems could be rectified sooner rather than later, 
> assuming there is log alerting in place (definitely out of scope here).
> Pre-emptive TRA collection creation needs a time window configuration parameter, 
> perhaps named something like "preemptiveCreateWindowMs".  If a document's 
> timestamp is within this time window _from the end time of the head/lead 
> collection_, then the collection can be created pre-emptively.  If no data is 
> being sent to the TRA, no collections will be auto-created, nor will it 
> happen if older data is being added.  It may be convenient to effectively 
> limit this time setting to the _smaller_ of this value and the TRA interval 
> window, which I think is a fine limitation.
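The decision rule proposed above can be sketched as pure logic: a document triggers pre-emptive creation only when its timestamp falls inside the window just before the head collection's end time, with the window capped at the TRA interval. All names below are hypothetical illustrations of the proposal, not the committed Solr API (the final option was named preemptiveCreateMath).

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the proposed pre-emptive creation rule for a TRA.
public class PreemptiveCreateSketch {
    static boolean shouldPreemptivelyCreate(Instant docTs, Instant headEnd,
                                            Duration window, Duration interval) {
        // Cap the window at the TRA interval, as the issue suggests.
        Duration effective = window.compareTo(interval) > 0 ? interval : window;
        // Older data (before the window) never triggers creation; data at or
        // past headEnd needs a synchronous create anyway.
        return !docTs.isBefore(headEnd.minus(effective)) && docTs.isBefore(headEnd);
    }

    public static void main(String[] args) {
        Instant headEnd = Instant.parse("2018-09-07T00:00:00Z");
        Duration window = Duration.ofMinutes(30);
        Duration interval = Duration.ofHours(1);
        // Inside the 30-minute window before headEnd -> create pre-emptively.
        System.out.println(shouldPreemptivelyCreate(
                Instant.parse("2018-09-06T23:45:00Z"), headEnd, window, interval));
        // Much older data -> no collection is created.
        System.out.println(shouldPreemptivelyCreate(
                Instant.parse("2018-09-06T12:00:00Z"), headEnd, window, interval));
    }
}
```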






[jira] [Commented] (SOLR-12357) TRA: Pre-emptively create next collection

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606678#comment-16606678
 ] 

ASF subversion and git services commented on SOLR-12357:


Commit bfafeb7cd64e34940eca7a688596f63c14b192fc in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bfafeb7 ]

SOLR-12357: TRA preemptiveCreateMath option.
Simplified test utility TrackingUpdateProcessorFactory.
Reverted some attempts the TRA used to make in avoiding overseer communication 
(too complicated).
Closes #433

(cherry picked from commit 21d130c3edf8bfb21a3428fc95e5b67d6be757e7)


> TRA: Pre-emptively create next collection 
> --
>
> Key: SOLR-12357
> URL: https://issues.apache.org/jira/browse/SOLR-12357
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12357.patch
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> When adding data to a Time Routed Alias (TRA), we sometimes need to create 
> new collections.  Today we only do this synchronously – on demand when a 
> document is coming in.  But this can add delays, as inbound documents are 
> held up waiting for a collection to be created.  And there may be a problem, like a 
> lack of resources (e.g. ample SolrCloud nodes with space), that the policy 
> framework defines.  Such problems could be rectified sooner rather than later, 
> assuming there is log alerting in place (definitely out of scope here).
> Pre-emptive TRA collection creation needs a time window configuration parameter, 
> perhaps named something like "preemptiveCreateWindowMs".  If a document's 
> timestamp is within this time window _from the end time of the head/lead 
> collection_, then the collection can be created pre-emptively.  If no data is 
> being sent to the TRA, no collections will be auto-created, nor will it 
> happen if older data is being added.  It may be convenient to effectively 
> limit this time setting to the _smaller_ of this value and the TRA interval 
> window, which I think is a fine limitation.






[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606674#comment-16606674
 ] 

ASF subversion and git services commented on SOLR-12028:


Commit 0dc66c236d5f61caad96e36454b4b15fbde35720 in lucene-solr's branch 
refs/heads/master from [~cp.erick...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0dc66c2 ]

SOLR-12028: BadApple and AwaitsFix annotations usage


> BadApple and AwaitsFix annotations usage
> 
>
> Key: SOLR-12028
> URL: https://issues.apache.org/jira/browse/SOLR-12028
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, 
> SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch
>
>
> There's a long discussion of this topic at SOLR-12016. Here's a summary:
> - BadApple annotations are used for tests that intermittently fail, say < 30% 
> of the time. Tests that fail more often should be moved to AwaitsFix. This is, 
> of course, a judgment call.
> - AwaitsFix annotations are used for tests whose problem, for some reason, 
> can't be fixed immediately. Likely reasons are third-party dependencies, 
> extreme difficulty tracking the failure down, dependency on another JIRA, etc.
> Jenkins jobs will typically run with BadApple disabled to cut down on noise. 
> Periodically, Jenkins jobs will be run with BadApples enabled so BadApple 
> tests won't be lost and reports can be generated. Tests that fail with 
> BadApples disabled require _immediate_ attention.
> The default for developers is that BadApple is enabled.
> If you are working on one of these tests and cannot get the test to fail 
> locally, it is perfectly acceptable to comment the annotation out. You should 
> let the dev list know that this is deliberate.
> This JIRA is a placeholder for BadApple tests to point to between the time 
> they're identified as BadApple and the time they're fixed, changed to 
> AwaitsFix, or assigned their own JIRA.
> I've assigned this to myself so I don't lose track of it. No one 
> person will fix all of these issues; this will be an ongoing technical-debt 
> cleanup effort.






[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-09-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/433





[jira] [Commented] (SOLR-12357) TRA: Pre-emptively create next collection

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606671#comment-16606671
 ] 

ASF subversion and git services commented on SOLR-12357:


Commit 21d130c3edf8bfb21a3428fc95e5b67d6be757e7 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=21d130c ]

SOLR-12357: TRA preemptiveCreateMath option.
Simplified test utility TrackingUpdateProcessorFactory.
Reverted some attempts the TRA used to make in avoiding overseer communication 
(too complicated).
Closes #433


> TRA: Pre-emptively create next collection 
> --
>
> Key: SOLR-12357
> URL: https://issues.apache.org/jira/browse/SOLR-12357
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-12357.patch
>
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> When adding data to a Time Routed Alias (TRA), we sometimes need to create 
> new collections.  Today we only do this synchronously – on demand when a 
> document is coming in.  But this can add delays, as inbound documents are 
> held up waiting for a collection to be created.  And there may be a problem, like a 
> lack of resources (e.g. ample SolrCloud nodes with space), that the policy 
> framework defines.  Such problems could be rectified sooner rather than later, 
> assuming there is log alerting in place (definitely out of scope here).
> Pre-emptive TRA collection creation needs a time window configuration parameter, 
> perhaps named something like "preemptiveCreateWindowMs".  If a document's 
> timestamp is within this time window _from the end time of the head/lead 
> collection_, then the collection can be created pre-emptively.  If no data is 
> being sent to the TRA, no collections will be auto-created, nor will it 
> happen if older data is being added.  It may be convenient to effectively 
> limit this time setting to the _smaller_ of this value and the TRA interval 
> window, which I think is a fine limitation.






[jira] [Comment Edited] (SOLR-12732) TestLogWatcher failure on Jenkins

2018-09-06 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606639#comment-16606639
 ] 

Erick Erickson edited comment on SOLR-12732 at 9/7/18 2:56 AM:
---

2,000 iterations later and I still can't get this to fail (without this patch). 
So I'm checking this in since it's a test-only change, and I'll close this JIRA after 
the various Jenkins jobs have had a chance to chew on it for a while. Assuming 
it doesn't fail in the meantime, of course.


was (Author: erickerickson):
2,000 iterations later and I still can't get this to fail (without this patch). 
So I'm checking this in since it's a test-only change, and I'll close this JIRA after 
the various Jenkins jobs have had a chance to chew on it for a while.

> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact; I think I see the problem. It'll 
> take me a lot of beasting to nail it, though.
> The working hypothesis is that when we test whether the new searcher has 
> no messages, we can end up testing for no messages being logged against the watcher 
> before the new one _really_ gets active.






[jira] [Commented] (SOLR-12732) TestLogWatcher failure on Jenkins

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606655#comment-16606655
 ] 

ASF subversion and git services commented on SOLR-12732:


Commit dd2f85f011277a1d1170838120d9c7b7c8f34ebc in lucene-solr's branch 
refs/heads/branch_7x from [~cp.erick...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dd2f85f ]

SOLR-12732: TestLogWatcher failure on Jenkins

(cherry picked from commit 9e04375dc193d3815e9d755514a960f902c60cd2)


> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact; I think I see the problem. It'll 
> take me a lot of beasting to nail it, though.
> The working hypothesis is that when we test whether the new searcher has 
> no messages, we can end up testing for no messages being logged against the watcher 
> before the new one _really_ gets active.






[jira] [Comment Edited] (SOLR-12642) SolrCmdDistributor should send updates in batch when use Http2SolrClient?

2018-09-06 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606648#comment-16606648
 ] 

Cao Manh Dat edited comment on SOLR-12642 at 9/7/18 2:51 AM:
-

Hi guys, thanks to [~shalinmangar]'s work on 
[https://github.com/shalinmangar/solr-perf-tools], I was able to compare the 
performance of the jira/http2 branch against the master branch. The log results are 
attached, but I will summarize them here.

There are 4 tests, all of which measure indexing performance. Only the 
last test shows a difference between the branches, since it is the only test using 
a SolrCloud setup.

The 4th test uses CloudSolrClient to index 33M wiki documents into a collection 
with one shard: 1 leader and 1 NRT replica.

The tests were run on a single 
[https://www.packet.net/bare-metal/servers/c1-small/] machine.
|Documents indexed: 2620|
|Bytes indexed: 32244883917.0|
| |*jira/http2 branch*|*master branch*|
|Time taken (total) in sec|1,572.90|2415.1|
|Garbage generated by replica node (in MB)|266,847.40|1,131,187.50|
|Garbage generated by leader node (in MB)|1,006,244.00|1,351,830.70|
|Time in GC for replica (ms)|13.3|90.9|
|Time in GC for leader (ms)|88.2|99|
|Average System Load|10.157|13.525|
|Average CPU Time of replica node (800 total)|78.812|332.467|
|Average CPU Time of leader node (800 total)|513.968|369.281|
|Average CPU Load of replica node (%)|10.657|41.28|
|Average CPU Load of leader node (%)|64.048|46.359|

Note: a CPU time of 800 represents the total capacity of 8 threads (100 per thread) per second.

As we can see, there is a significant improvement on the jira/http2 branch. The only 
downside is that CPU time on the leader node seems to have increased by about 40%. I think 
that solving this issue would decrease the leader's CPU time, but I'm not 
sure by how much. The CPU usage may also have increased simply because the rate of 
indexing documents is much faster on the jira/http2 branch than on master. Furthermore, I 
tried to implement this issue, but it is quite complex and hidden errors can happen. 

*Therefore I think that this issue is not a must-have for merging jira/http2 
into the master branch.*


was (Author: caomanhdat):
Hi guys, thanks to [~shalinmangar]'s work on 
[https://github.com/shalinmangar/solr-perf-tools], I was able to compare the 
performance of the jira/http2 branch against the master branch. The log results are 
attached, but I will summarize them here.

There are 4 tests, all of which measure indexing performance. Only the 
last test shows a difference between the branches, since it is the only test using 
a SolrCloud setup.

The 4th test uses CloudSolrClient to index 33M wiki documents into a collection 
with one shard: 1 leader and 1 NRT replica.

The tests were run on 
[https://www.packet.net/bare-metal/servers/c1-small/].
|Documents indexed: 2620|
|Bytes indexed: 32244883917.0|
| |*jira/http2 branch*|*master branch*|
|Time taken (total) in sec|1,572.90|2415.1|
|Garbage generated by replica node (in MB)|266,847.40|1,131,187.50|
|Garbage generated by leader node (in MB)|1,006,244.00|1,351,830.70|
|Time in GC for replica (ms)|13.3|90.9|
|Time in GC for leader (ms)|88.2|99|
|Average System Load|10.157|13.525|
|Average CPU Time of replica node (800 total)|78.812|332.467|
|Average CPU Time of leader node (800 total)|513.968|369.281|
|Average CPU Load of replica node (%)|10.657|41.28|
|Average CPU Load of leader node (%)|64.048|46.359|

Note: a CPU time of 800 represents the total capacity of 8 threads (100 per thread) per second.

As we can see, there is a significant improvement on the jira/http2 branch. The only 
downside is that CPU time on the leader node seems to have increased by about 40%. I think 
that solving this issue would decrease the leader's CPU time, but I'm not 
sure by how much. The CPU usage may also have increased simply because the rate of 
indexing documents is much faster on the jira/http2 branch than on master. Furthermore, I 
tried to implement this issue, but it is quite complex and hidden errors can happen. 

*Therefore I think that this issue is not a must-have for merging jira/http2 
into the master branch.*

> SolrCmdDistributor should send updates in batch when use Http2SolrClient?
> -
>
> Key: SOLR-12642
> URL: https://issues.apache.org/jira/browse/SOLR-12642
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: http2-branch.log, master-branch.log
>
>
> In the past, batch updates were sent in a single stream from the leader, and the 
> replica would create a single thread to parse all the updates. For the 
> simplicity of {{SOLR-12605}}, the leader is now sending individual updates to 
> replicas; therefore they are now parsing updates in different threads, which 
> increases the usage of memory and CPU.
> In the past, this is an 

[jira] [Comment Edited] (SOLR-12642) SolrCmdDistributor should send updates in batch when use Http2SolrClient?

2018-09-06 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606648#comment-16606648
 ] 

Cao Manh Dat edited comment on SOLR-12642 at 9/7/18 2:51 AM:
-

Hi guys, thanks to [~shalinmangar]'s work on 
[https://github.com/shalinmangar/solr-perf-tools], I was able to compare the 
performance of the jira/http2 branch against the master branch. The log results are 
attached, but I will summarize them here.

There are 4 tests, all of which measure indexing performance. Only the 
last test shows a difference between the branches, since it is the only test using 
a SolrCloud setup.

The 4th test uses CloudSolrClient to index 33M wiki documents into a collection 
with one shard: 1 leader and 1 NRT replica.

The tests were run on 
[https://www.packet.net/bare-metal/servers/c1-small/].
|Documents indexed: 2620|
|Bytes indexed: 32244883917.0|
| |*jira/http2 branch*|*master branch*|
|Time taken (total) in sec|1,572.90|2415.1|
|Garbage generated by replica node (in MB)|266,847.40|1,131,187.50|
|Garbage generated by leader node (in MB)|1,006,244.00|1,351,830.70|
|Time in GC for replica (ms)|13.3|90.9|
|Time in GC for leader (ms)|88.2|99|
|Average System Load|10.157|13.525|
|Average CPU Time of replica node (800 total)|78.812|332.467|
|Average CPU Time of leader node (800 total)|513.968|369.281|
|Average CPU Load of replica node (%)|10.657|41.28|
|Average CPU Load of leader node (%)|64.048|46.359|

Note: a CPU time of 800 represents the total capacity of 8 threads (100 per thread) per second.

As we can see, there is a significant improvement on the jira/http2 branch. The only 
downside is that CPU time on the leader node seems to have increased by about 40%. I think 
that solving this issue would decrease the leader's CPU time, but I'm not 
sure by how much. The CPU usage may also have increased simply because the rate of 
indexing documents is much faster on the jira/http2 branch than on master. Furthermore, I 
tried to implement this issue, but it is quite complex and hidden errors can happen. 

*Therefore I think that this issue is not a must-have for merging jira/http2 
into the master branch.*


was (Author: caomanhdat):
Hi guys, thanks to [~shalinmangar]'s work on 
[https://github.com/shalinmangar/solr-perf-tools], I was able to compare the 
performance of the jira/http2 branch against the master branch. The log results are 
attached, but I will summarize them here.

There are 4 tests, all of which measure indexing performance. Only the 
last test shows a difference between the branches, since it is the only test using 
a SolrCloud setup.

The 4th test uses CloudSolrClient to index 33M wiki documents into a collection 
with one shard: 1 leader and 1 NRT replica.

 
|Documents indexed: 2620|
|Bytes indexed: 32244883917.0|
| |*jira/http2 branch*|*master branch*|
|Time taken (total) in sec|1,572.90|2415.1|
|Garbage generated by replica node (in MB)|266,847.40|1,131,187.50|
|Garbage generated by leader node (in MB)|1,006,244.00|1,351,830.70|
|Time in GC for replica (ms)|13.3|90.9|
|Time in GC for leader (ms)|88.2|99|
|Average System Load|10.157|13.525|
|Average CPU Time of replica node (800 total)|78.812|332.467|
|Average CPU Time of leader node (800 total)|513.968|369.281|
|Average CPU Load of replica node (%)|10.657|41.28|
|Average CPU Load of leader node (%)|64.048|46.359|

Note: a CPU time of 800 represents the total capacity of 8 threads (100 per thread) per second.

As we can see, there is a significant improvement on the jira/http2 branch. The only 
downside is that CPU time on the leader node seems to have increased by about 40%. I think 
that solving this issue would decrease the leader's CPU time, but I'm not 
sure by how much. The CPU usage may also have increased simply because the rate of 
indexing documents is much faster on the jira/http2 branch than on master. Furthermore, I 
tried to implement this issue, but it is quite complex and hidden errors can happen. 

*Therefore I think that this issue is not a must-have for merging jira/http2 
into the master branch.*

> SolrCmdDistributor should send updates in batch when use Http2SolrClient?
> -
>
> Key: SOLR-12642
> URL: https://issues.apache.org/jira/browse/SOLR-12642
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: http2-branch.log, master-branch.log
>
>
> In the past, batch updates were sent in a single stream from the leader, and the 
> replica would create a single thread to parse all the updates. For the 
> simplicity of {{SOLR-12605}}, the leader is now sending individual updates to 
> replicas; therefore they are now parsing updates in different threads, which 
> increases the usage of memory and CPU.
> In the past, this was an unacceptable approach because, for every update, we 
> must create different connections to 

[jira] [Updated] (SOLR-12642) SolrCmdDistributor should send updates in batch when use Http2SolrClient?

2018-09-06 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12642:

Attachment: master-branch.log
http2-branch.log

> SolrCmdDistributor should send updates in batch when use Http2SolrClient?
> -
>
> Key: SOLR-12642
> URL: https://issues.apache.org/jira/browse/SOLR-12642
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: http2-branch.log, master-branch.log
>
>
> In the past, batch updates were sent in a single stream from the leader, and the 
> replica would create a single thread to parse all the updates. For the 
> simplicity of {{SOLR-12605}}, the leader now sends individual updates to 
> replicas; they therefore parse updates on different threads, which 
> increases memory and CPU usage.
> Previously this was an unacceptable approach because, for every update, we 
> had to create different connections to the replicas. But with the support of 
> HTTP/2, all updates are sent over a single connection from the leader to a 
> replica, so the cost is not as high as it used to be.
> On the other hand, sending individual updates will improve indexing 
> performance and allow better error handling for the failure of a single update 
> in a batch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12642) SolrCmdDistributor should send updates in batch when use Http2SolrClient?

2018-09-06 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606648#comment-16606648
 ] 

Cao Manh Dat commented on SOLR-12642:
-

Hi guys, thanks to [~shalinmangar]'s work on 
[https://github.com/shalinmangar/solr-perf-tools], I was able to compare the 
performance of the jira/http2 branch and the master branch. The full logs are 
attached, but I will summarize the results here.

There are 4 tests, all of them measuring indexing performance. Only the last 
test shows a difference between the branches, since it is the only one using a 
SolrCloud setup.

The 4th test uses CloudSolrClient to index 33M wiki documents into a collection 
with one shard (1 leader and 1 NRT replica).

 
|Documents indexed: 2620|
|Bytes indexed: 32244883917.0|
| |*jira/http2 branch*|*master branch*|
|Time taken (total) in sec|1,572.90|2415.1|
|Garbage generated by replica node (in MB)|266,847.40|1,131,187.50|
|Garbage generated by leader node (in MB)|1,006,244.00|1,351,830.70|
|Time in GC for replica (ms)|13.3|90.9|
|Time in GC for leader (ms)|88.2|99|
|Average System Load|10.157|13.525|
|Average CPU Time of replica node (800 total)|78.812|332.467|
|Average CPU Time of leader node (800 total)|513.968|369.281|
|Average CPU Load of replica node (%)|10.657|41.28|
|Average CPU Load of leader node (%)|64.048|46.359|

Note: the 800 total for CPU time represents the combined capacity of 8 threads per second
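
For quick reference, the headline ratios can be derived from the table above (a 
small sketch; the figures come from the table, the variable names are mine):

```python
# Headline figures from the benchmark table above (4th test, SolrCloud setup).
http2_time_s, master_time_s = 1572.90, 2415.1  # total indexing time in seconds
http2_replica_gc_mb, master_replica_gc_mb = 266_847.40, 1_131_187.50  # garbage generated in MB

# How much faster the jira/http2 branch indexed, and how much less garbage
# its replica node generated, relative to master.
speedup = master_time_s / http2_time_s                      # ~1.54x faster
gc_reduction = master_replica_gc_mb / http2_replica_gc_mb   # ~4.2x less garbage

print(f"indexing speedup: {speedup:.2f}x, replica GC reduction: {gc_reduction:.1f}x")
```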

As we can see, there is a significant improvement on the jira/http2 branch. The 
only downside here is that CPU time seems to have increased by about 40% on the 
leader node. I think that by solving this issue the CPU time on the leader will 
decrease, but I'm not sure by how much. Maybe the CPU usage increased because 
documents are indexed at a much faster rate on the jira/http2 branch. 
Furthermore, I tried to implement this issue, but it is quite complex and hidden 
errors can creep in. 

*Therefore I don't think this issue is a must-have before merging jira/http2 
into the master branch.*

> SolrCmdDistributor should send updates in batch when use Http2SolrClient?
> -
>
> Key: SOLR-12642
> URL: https://issues.apache.org/jira/browse/SOLR-12642
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
>
> In the past, batch updates were sent in a single stream from the leader, and the 
> replica would create a single thread to parse all the updates. For the 
> simplicity of {{SOLR-12605}}, the leader now sends individual updates to 
> replicas; they therefore parse updates on different threads, which 
> increases memory and CPU usage.
> Previously this was an unacceptable approach because, for every update, we 
> had to create different connections to the replicas. But with the support of 
> HTTP/2, all updates are sent over a single connection from the leader to a 
> replica, so the cost is not as high as it used to be.
> On the other hand, sending individual updates will improve indexing 
> performance and allow better error handling for the failure of a single update 
> in a batch.






[jira] [Commented] (SOLR-12732) TestLogWatcher failure on Jenkins

2018-09-06 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606639#comment-16606639
 ] 

Erick Erickson commented on SOLR-12732:
---

2,000 iterations later and I still can't get this to fail (without this patch). 
So I'm checking this in, since it's a test-only change, and I'll close this JIRA 
after the various Jenkins jobs have had a chance to chew on it for a while.

> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact; I think I see the problem. It'll 
> take me a lot of beasting to nail it down, though.
> The working hypothesis is that when we test whether the new searcher has 
> no messages, we can end up checking that no messages were logged against the 
> watcher before the new one _really_ gets active.
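
A race like the one hypothesized above is typically avoided by polling until the 
new watcher is confirmed active before asserting. A minimal sketch of that 
wait-until pattern (illustrative Python, not the actual SOLR-12732 patch; the 
`watcher.is_active()` call is a hypothetical stand-in):

```python
import time

def wait_until(predicate, timeout_s=10.0, interval_s=0.05):
    """Poll until predicate() is true or the timeout elapses.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval_s)
    return False

# In a test, assert only after the new watcher is confirmed active, e.g.:
# assert wait_until(lambda: watcher.is_active()), "watcher never became active"
```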






[jira] [Commented] (SOLR-12732) TestLogWatcher failure on Jenkins

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606640#comment-16606640
 ] 

ASF subversion and git services commented on SOLR-12732:


Commit 9e04375dc193d3815e9d755514a960f902c60cd2 in lucene-solr's branch 
refs/heads/master from [~cp.erick...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9e04375 ]

SOLR-12732: TestLogWatcher failure on Jenkins


> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact; I think I see the problem. It'll 
> take me a lot of beasting to nail it down, though.
> The working hypothesis is that when we test whether the new searcher has 
> no messages, we can end up checking that no messages were logged against the 
> watcher before the new one _really_ gets active.






[jira] [Updated] (SOLR-12732) TestLogWatcher failure on Jenkins

2018-09-06 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12732:
--
Attachment: SOLR-12732.patch

> TestLogWatcher failure on Jenkins
> -
>
> Key: SOLR-12732
> URL: https://issues.apache.org/jira/browse/SOLR-12732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12732.patch
>
>
> I'm 99% certain this is a test artifact; I think I see the problem. It'll 
> take me a lot of beasting to nail it down, though.
> The working hypothesis is that when we test whether the new searcher has 
> no messages, we can end up checking that no messages were logged against the 
> watcher before the new one _really_ gets active.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4818 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4818/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.uninverting.TestNumericTerms32

Error Message:
The test or suite printed 56270527 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 56270527 bytes to stdout 
and stderr, even though the limit was set to 8192 bytes. Increase the limit 
with @Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([D097D6CFDEA12720]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:211)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13602 lines...]
   [junit4] Suite: org.apache.solr.uninverting.TestNumericTerms32
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=0=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=6=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:8=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=0=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=15=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=2=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=4=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=6=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=17=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=14=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=3=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=10=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=15=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=0=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=0=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:14=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:11=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=12=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:1=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 1807460 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=0=json} status=0 QTime=0
   [junit4]   2> 1807460 INFO  (READER3) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2700 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2700/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:35189

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:35189
at 
__randomizedtesting.SeedInfo.seed([70467D520B167411:F8124288A5EA19E9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.UnloadDistributedZkTest.testCoreUnloadAndLeaders(UnloadDistributedZkTest.java:307)
at 
org.apache.solr.cloud.UnloadDistributedZkTest.test(UnloadDistributedZkTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-12749) timeseries() expression missing sum() results for empty buckets

2018-09-06 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606576#comment-16606576
 ] 

Joel Bernstein commented on SOLR-12749:
---

Ok, just need to update the CHANGES.txt. I'll get to that shortly.

> timeseries() expression missing sum() results for empty buckets
> ---
>
> Key: SOLR-12749
> URL: https://issues.apache.org/jira/browse/SOLR-12749
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12749.patch
>
>
> See the solr-user post 
> [https://lists.apache.org/thread.html/aeacef8fd8cee980bb74f2f6b7e1c3fd0b7ead7d7a0e7b79dd48659f@%3Csolr-user.lucene.apache.org%3E]
>  
> We have a timeseries expression with gap="+1DAY" and a sum(imps_l) to 
> aggregate sums of an integer for each bucket. Some day buckets do not 
> contain any documents at all, and instead of returning a tuple with the value 0, 
> the expression returns a tuple with no entry at all for the sum; see the bucket 
> for date_dt 2018-06-22 below:
> {code:javascript}
> {
>  "result-set": {
>    "docs": [
>  {
>    "sum(imps_l)": 0,
>    "date_dt": "2018-06-21",
>    "count(*)": 5
>  },
>  {
>    "date_dt": "2018-06-22",
>    "count(*)": 0
>  },
>  {
>    "EOF": true,
>    "RESPONSE_TIME": 3
>  }
>    ]
>  }
> }{code}
> Now, when we want to convert this into a column using col(a,'sum(imps_l)'), 
> the resulting array contains mostly numbers but also some string entries 
> 'sum(imps_l)', which is the key name. I need purely integers in the column.
> Should timeseries() output values for all functions even if there 
> are no documents in the bucket? Or is there something similar to the select() 
> expression that can take a stream of tuples not originating directly from 
> search() and replace values? Or is there perhaps a function that can loop 
> through the column produced by col() and replace non-numeric values with 0?
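
The last workaround mentioned above (replacing non-numeric entries from col() 
with 0) is straightforward to express. An illustrative Python sketch using the 
sample response from the description (the helper name is mine, not a Solr API):

```python
# Sample docs from the timeseries() response in the issue description.
docs = [
    {"sum(imps_l)": 0, "date_dt": "2018-06-21", "count(*)": 5},
    {"date_dt": "2018-06-22", "count(*)": 0},  # empty bucket: the sum() key is missing
]

def numeric_column(docs, key, default=0):
    """Build the column the way col(a, 'sum(imps_l)') would, but substitute
    `default` for missing or non-numeric entries so the result is purely numeric."""
    out = []
    for doc in docs:
        value = doc.get(key, default)
        out.append(value if isinstance(value, (int, float)) else default)
    return out

print(numeric_column(docs, "sum(imps_l)"))  # [0, 0]
```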






[jira] [Commented] (SOLR-12749) timeseries() expression missing sum() results for empty buckets

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606575#comment-16606575
 ] 

ASF subversion and git services commented on SOLR-12749:


Commit 7715bd02e644be4277f1b0b8d387c838657b3096 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7715bd0 ]

SOLR-12749: timeseries() expression missing sum() results for empty buckets


> timeseries() expression missing sum() results for empty buckets
> ---
>
> Key: SOLR-12749
> URL: https://issues.apache.org/jira/browse/SOLR-12749
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12749.patch
>
>
> See the solr-user post 
> [https://lists.apache.org/thread.html/aeacef8fd8cee980bb74f2f6b7e1c3fd0b7ead7d7a0e7b79dd48659f@%3Csolr-user.lucene.apache.org%3E]
>  
> We have a timeseries expression with gap="+1DAY" and a sum(imps_l) to 
> aggregate sums of an integer for each bucket. Some day buckets do not 
> contain any documents at all, and instead of returning a tuple with the value 0, 
> the expression returns a tuple with no entry at all for the sum; see the bucket 
> for date_dt 2018-06-22 below:
> {code:javascript}
> {
>  "result-set": {
>    "docs": [
>  {
>    "sum(imps_l)": 0,
>    "date_dt": "2018-06-21",
>    "count(*)": 5
>  },
>  {
>    "date_dt": "2018-06-22",
>    "count(*)": 0
>  },
>  {
>    "EOF": true,
>    "RESPONSE_TIME": 3
>  }
>    ]
>  }
> }{code}
> Now, when we want to convert this into a column using col(a,'sum(imps_l)'), 
> the resulting array contains mostly numbers but also some string entries 
> 'sum(imps_l)', which is the key name. I need purely integers in the column.
> Should timeseries() output values for all functions even if there 
> are no documents in the bucket? Or is there something similar to the select() 
> expression that can take a stream of tuples not originating directly from 
> search() and replace values? Or is there perhaps a function that can loop 
> through the column produced by col() and replace non-numeric values with 0?






[jira] [Commented] (SOLR-12749) timeseries() expression missing sum() results for empty buckets

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606571#comment-16606571
 ] 

ASF subversion and git services commented on SOLR-12749:


Commit 98611d33a7f334ece5faba594120ac3398a0009d in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=98611d3 ]

SOLR-12749: timeseries() expression missing sum() results for empty buckets


> timeseries() expression missing sum() results for empty buckets
> ---
>
> Key: SOLR-12749
> URL: https://issues.apache.org/jira/browse/SOLR-12749
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12749.patch
>
>
> See the solr-user post 
> [https://lists.apache.org/thread.html/aeacef8fd8cee980bb74f2f6b7e1c3fd0b7ead7d7a0e7b79dd48659f@%3Csolr-user.lucene.apache.org%3E]
>  
> We have a timeseries expression with gap="+1DAY" and a sum(imps_l) to 
> aggregate sums of an integer for each bucket. Some day buckets do not 
> contain any documents at all, and instead of returning a tuple with the value 0, 
> the expression returns a tuple with no entry at all for the sum; see the bucket 
> for date_dt 2018-06-22 below:
> {code:javascript}
> {
>  "result-set": {
>    "docs": [
>  {
>    "sum(imps_l)": 0,
>    "date_dt": "2018-06-21",
>    "count(*)": 5
>  },
>  {
>    "date_dt": "2018-06-22",
>    "count(*)": 0
>  },
>  {
>    "EOF": true,
>    "RESPONSE_TIME": 3
>  }
>    ]
>  }
> }{code}
> Now, when we want to convert this into a column using col(a,'sum(imps_l)'), 
> the resulting array contains mostly numbers but also some string entries 
> 'sum(imps_l)', which is the key name. I need purely integers in the column.
> Should timeseries() output values for all functions even if there 
> are no documents in the bucket? Or is there something similar to the select() 
> expression that can take a stream of tuples not originating directly from 
> search() and replace values? Or is there perhaps a function that can loop 
> through the column produced by col() and replace non-numeric values with 0?






[jira] [Updated] (SOLR-12749) timeseries() expression missing sum() results for empty buckets

2018-09-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12749:
--
Attachment: SOLR-12749.patch

> timeseries() expression missing sum() results for empty buckets
> ---
>
> Key: SOLR-12749
> URL: https://issues.apache.org/jira/browse/SOLR-12749
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12749.patch
>
>
> See the solr-user post 
> [https://lists.apache.org/thread.html/aeacef8fd8cee980bb74f2f6b7e1c3fd0b7ead7d7a0e7b79dd48659f@%3Csolr-user.lucene.apache.org%3E]
>  
> We have a timeseries expression with gap="+1DAY" and a sum(imps_l) to 
> aggregate sums of an integer for each bucket. Some day buckets do not 
> contain any documents at all, and instead of returning a tuple with the value 0, 
> the expression returns a tuple with no entry at all for the sum; see the bucket 
> for date_dt 2018-06-22 below:
> {code:javascript}
> {
>  "result-set": {
>    "docs": [
>  {
>    "sum(imps_l)": 0,
>    "date_dt": "2018-06-21",
>    "count(*)": 5
>  },
>  {
>    "date_dt": "2018-06-22",
>    "count(*)": 0
>  },
>  {
>    "EOF": true,
>    "RESPONSE_TIME": 3
>  }
>    ]
>  }
> }{code}
> Now, when we want to convert this into a column using col(a,'sum(imps_l)'), 
> the resulting array contains mostly numbers but also some string entries 
> 'sum(imps_l)', which is the key name. I need purely integers in the column.
> Should timeseries() output values for all functions even if there 
> are no documents in the bucket? Or is there something similar to the select() 
> expression that can take a stream of tuples not originating directly from 
> search() and replace values? Or is there perhaps a function that can loop 
> through the column produced by col() and replace non-numeric values with 0?






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10.0.1) - Build # 777 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/777/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ActionThrottleTest.testAZeroNanoTimeReturnInWait

Error Message:
994ms

Stack Trace:
java.lang.AssertionError: 994ms
at 
__randomizedtesting.SeedInfo.seed([8A165D92C8D8D7FD:497DA6A07C992A1E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.ActionThrottleTest.testAZeroNanoTimeReturnInWait(ActionThrottleTest.java:113)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13954 lines...]
   [junit4] Suite: org.apache.solr.cloud.ActionThrottleTest
   [junit4]   2> 2894968 INFO  
(SUITE-ActionThrottleTest-seed#[8A165D92C8D8D7FD]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+28) - Build # 22813 - Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22813/
Java: 64bit/jdk-11-ea+28 -XX:+UseCompressedOops -XX:+UseSerialGC

33 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamDecoratorTest

Error Message:
6 threads leaked from SUITE scope at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest:
   1) Thread[id=281, name=zkConnectionManagerCallback-108-thread-1, state=WAITING, group=TGRP-StreamDecoratorTest]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@11/java.lang.Thread.run(Thread.java:834)
   2) Thread[id=280, name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[D42E47B65AEF40EE]-EventThread, state=WAITING, group=TGRP-StreamDecoratorTest]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
   3) Thread[id=284, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest]
at java.base@11/java.lang.Thread.sleep(Native Method)
at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11/java.lang.Thread.run(Thread.java:834)
   4) Thread[id=278, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest]
at java.base@11/java.lang.Thread.sleep(Native Method)
at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11/java.lang.Thread.run(Thread.java:834)
   5) Thread[id=285, name=Connection evictor, state=TIMED_WAITING, group=TGRP-StreamDecoratorTest]
at java.base@11/java.lang.Thread.sleep(Native Method)
at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11/java.lang.Thread.run(Thread.java:834)
   6) Thread[id=279, name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[D42E47B65AEF40EE]-SendThread(127.0.0.1:34445), state=TIMED_WAITING, group=TGRP-StreamDecoratorTest]
at java.base@11/java.lang.Thread.sleep(Native Method)
at app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
at app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 6 threads leaked from SUITE scope at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest:
   1) Thread[id=281, name=zkConnectionManagerCallback-108-thread-1, state=WAITING, group=TGRP-StreamDecoratorTest]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@11/java.lang.Thread.run(Thread.java:834)
   2) Thread[id=280, name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[D42E47B65AEF40EE]-EventThread, state=WAITING, group=TGRP-StreamDecoratorTest]
at java.base@11/jdk.internal.misc.Unsafe.park(Native Method)
at java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 

[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+28) - Build # 7509 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7509/
Java: 64bit/jdk-11-ea+28 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

9 tests failed.
FAILED:  org.apache.solr.cloud.TestSkipOverseerOperations.testSkipLeaderOperations

Error Message:
IOException occured when talking to server at: https://127.0.0.1:58522/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: https://127.0.0.1:58522/solr
at __randomizedtesting.SeedInfo.seed([3124C82A68D02CF2:C1CE1BD6B7F35C98]:0)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at org.apache.solr.cloud.TestSkipOverseerOperations.testSkipLeaderOperations(TestSkipOverseerOperations.java:71)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[jira] [Updated] (SOLR-11943) Add machine learning functions for location data

2018-09-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11943:
--
Fix Version/s: 7.5
   master (8.0)

> Add machine learning functions for location data
> 
>
> Key: SOLR-11943
> URL: https://issues.apache.org/jira/browse/SOLR-11943
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11943.patch
>
>
> This ticket will add the following functions / features:
> 1) *locationVectors* function: Reads a list of tuples that contain a 
> *location* field type and returns a *matrix* of lat/long vectors. 
> 2) Add support for *haversinMeters* distance measure.
> With the addition of these two functions we'll have the ability to do various 
> distance based machine learning algorithms (distance matrices, clustering, 
> knn regression etc...) with location data.
>  
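[Editor's note] For readers unfamiliar with the distance measure named above: haversine gives the great-circle distance between two lat/long points, and the "Meters" suffix indicates the result unit. Below is a minimal stand-alone sketch of the formula only, not Solr's actual implementation; the class name and the 6,371 km mean-Earth-radius constant are illustrative assumptions.

```java
public class Haversine {
    // Mean Earth radius in meters (an assumed constant for this sketch).
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Great-circle distance in meters between two lat/long points (degrees).
    static double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Two points one degree of latitude apart on the same meridian:
        // roughly 111 km with this radius.
        System.out.println(haversineMeters(40.0, -74.0, 41.0, -74.0));
    }
}
```

A distance matrix for clustering or kNN regression is then just this function applied pairwise over the rows of the lat/long matrix that locationVectors produces.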



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11943) Add machine learning functions for location data

2018-09-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11943.
---
Resolution: Fixed

> Add machine learning functions for location data
> 
>
> Key: SOLR-11943
> URL: https://issues.apache.org/jira/browse/SOLR-11943
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11943.patch
>
>
> This ticket will add the following functions / features:
> 1) *locationVectors* function: Reads a list of tuples that contain a 
> *location* field type and returns a *matrix* of lat/long vectors. 
> 2) Add support for *haversinMeters* distance measure.
> With the addition of these two functions we'll have the ability to do various 
> distance based machine learning algorithms (distance matrices, clustering, 
> knn regression etc...) with location data.
>  






[jira] [Resolved] (SOLR-12612) Accept any key in cluster properties

2018-09-06 Thread Tomás Fernández Löbbe (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-12612.
--
   Resolution: Fixed
Fix Version/s: 7.5

> Accept any key in cluster properties
> 
>
> Key: SOLR-12612
> URL: https://issues.apache.org/jira/browse/SOLR-12612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: jefferyyuan
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12612-docs.patch
>
>
> Cluster properties is a good place to store configuration data that's shared 
> in the whole cluster: solr and other (authorized) apps can easily read and 
> update them.
>  
> It would be very useful if we can store extra data in cluster properties 
> which would act as a centralized property management system between solr and 
> its related apps (like manager or monitor apps).
>  
> And the change would be also very simple.
> We can also require all extra property starts with prefix like: extra_
>  
> PR: https://github.com/apache/lucene-solr/pull/429
>  
>  






[GitHub] lucene-solr issue #429: Accept any key in cluster properties

2018-09-06 Thread tflobbe
Github user tflobbe commented on the issue:

https://github.com/apache/lucene-solr/pull/429
  
Sorry @jefferyyuan, I forgot to mention the PR in the commit, so you'll 
need to manually close it. Here is the commit: 
0af269fb4975e404e07d6e512cfbbac206920672


---




[jira] [Commented] (SOLR-12612) Accept any key in cluster properties

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606423#comment-16606423
 ] 

ASF subversion and git services commented on SOLR-12612:


Commit cdfc9986e83a906e2b990079f58e4fff48e02ead in lucene-solr's branch 
refs/heads/branch_7x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cdfc998 ]

SOLR-12612: Accept custom keys in cluster properties (doc changes)

Also added missing known cluster properties


> Accept any key in cluster properties
> 
>
> Key: SOLR-12612
> URL: https://issues.apache.org/jira/browse/SOLR-12612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: jefferyyuan
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12612-docs.patch
>
>
> Cluster properties is a good place to store configuration data that's shared 
> in the whole cluster: solr and other (authorized) apps can easily read and 
> update them.
>  
> It would be very useful if we can store extra data in cluster properties 
> which would act as a centralized property management system between solr and 
> its related apps (like manager or monitor apps).
>  
> And the change would be also very simple.
> We can also require all extra property starts with prefix like: extra_
>  
> PR: https://github.com/apache/lucene-solr/pull/429
>  
>  






[jira] [Commented] (SOLR-12612) Accept any key in cluster properties

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606422#comment-16606422
 ] 

ASF subversion and git services commented on SOLR-12612:


Commit ccd9f6fccb2fe7312150cb2844dbd4fbfaf1e7e6 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ccd9f6f ]

SOLR-12612: Accept custom keys in cluster properties (doc changes)

Also added missing known cluster properties


> Accept any key in cluster properties
> 
>
> Key: SOLR-12612
> URL: https://issues.apache.org/jira/browse/SOLR-12612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: jefferyyuan
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12612-docs.patch
>
>
> Cluster properties is a good place to store configuration data that's shared 
> in the whole cluster: solr and other (authorized) apps can easily read and 
> update them.
>  
> It would be very useful if we can store extra data in cluster properties 
> which would act as a centralized property management system between solr and 
> its related apps (like manager or monitor apps).
>  
> And the change would be also very simple.
> We can also require all extra property starts with prefix like: extra_
>  
> PR: https://github.com/apache/lucene-solr/pull/429
>  
>  






[jira] [Assigned] (SOLR-12612) Accept any key in cluster properties

2018-09-06 Thread Tomás Fernández Löbbe (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-12612:


Assignee: Tomás Fernández Löbbe

> Accept any key in cluster properties
> 
>
> Key: SOLR-12612
> URL: https://issues.apache.org/jira/browse/SOLR-12612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: jefferyyuan
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master (8.0)
>
>
> Cluster properties is a good place to store configuration data that's shared 
> in the whole cluster: solr and other (authorized) apps can easily read and 
> update them.
>  
> It would be very useful if we can store extra data in cluster properties 
> which would act as a centralized property management system between solr and 
> its related apps (like manager or monitor apps).
>  
> And the change would be also very simple.
> We can also require all extra property starts with prefix like: extra_
>  
> PR: https://github.com/apache/lucene-solr/pull/429
>  
>  






[jira] [Commented] (SOLR-12612) Accept any key in cluster properties

2018-09-06 Thread Tomás Fernández Löbbe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606395#comment-16606395
 ] 

Tomás Fernández Löbbe commented on SOLR-12612:
--

I'll update the docs shortly

> Accept any key in cluster properties
> 
>
> Key: SOLR-12612
> URL: https://issues.apache.org/jira/browse/SOLR-12612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: jefferyyuan
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: master (8.0)
>
>
> Cluster properties is a good place to store configuration data that's shared 
> in the whole cluster: solr and other (authorized) apps can easily read and 
> update them.
>  
> It would be very useful if we can store extra data in cluster properties 
> which would act as a centralized property management system between solr and 
> its related apps (like manager or monitor apps).
>  
> And the change would be also very simple.
> We can also require all extra property starts with prefix like: extra_
>  
> PR: https://github.com/apache/lucene-solr/pull/429
>  
>  






[jira] [Commented] (SOLR-12612) Accept any key in cluster properties

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606393#comment-16606393
 ] 

ASF subversion and git services commented on SOLR-12612:


Commit 50c92f1a0a1006af5b03ce276796b4378e0ecdc9 in lucene-solr's branch 
refs/heads/branch_7x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=50c92f1 ]

SOLR-12612: Accept custom keys in cluster properties

Cluster properties restriction of known keys only is relaxed, and now unknown 
properties starting with "ext."
will be allowed. This allows custom plugins to set their own cluster properties.
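[Editor's note] As a sketch of the relaxed validation this commit describes — known keys are accepted as before, and any other key is accepted only when it starts with "ext." — the check might look like the following. The class name and the sample known-key set are hypothetical illustrations, not Solr's actual ClusterProperties code.

```java
import java.util.Set;

public class ClusterPropKeyCheck {
    // A few of Solr's known cluster property keys (illustrative subset).
    static final Set<String> KNOWN_PROPERTIES =
        Set.of("urlScheme", "autoAddReplicas", "location", "maxCoresPerNode");

    // Accept known keys, plus any custom key under the "ext." namespace.
    static boolean isAllowed(String key) {
        return KNOWN_PROPERTIES.contains(key) || key.startsWith("ext.");
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("ext.myapp.flag")); // true
        System.out.println(isAllowed("myapp.flag"));     // false: not known, no "ext." prefix
    }
}
```

The prefix gives external apps a namespace of their own while still rejecting typos in Solr's reserved keys.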


> Accept any key in cluster properties
> 
>
> Key: SOLR-12612
> URL: https://issues.apache.org/jira/browse/SOLR-12612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: jefferyyuan
>Priority: Minor
> Fix For: master (8.0)
>
>
> Cluster properties is a good place to store configuration data that's shared 
> in the whole cluster: solr and other (authorized) apps can easily read and 
> update them.
>  
> It would be very useful if we can store extra data in cluster properties 
> which would act as a centralized property management system between solr and 
> its related apps (like manager or monitor apps).
>  
> And the change would be also very simple.
> We can also require all extra property starts with prefix like: extra_
>  
> PR: https://github.com/apache/lucene-solr/pull/429
>  
>  






[jira] [Commented] (SOLR-12612) Accept any key in cluster properties

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606390#comment-16606390
 ] 

ASF subversion and git services commented on SOLR-12612:


Commit 0af269fb4975e404e07d6e512cfbbac206920672 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0af269f ]

SOLR-12612: Accept custom keys in cluster properties

Cluster properties restriction of known keys only is relaxed, and now unknown 
properties starting with "ext."
will be allowed. This allows custom plugins to set their own cluster properties.


> Accept any key in cluster properties
> 
>
> Key: SOLR-12612
> URL: https://issues.apache.org/jira/browse/SOLR-12612
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4, master (8.0)
>Reporter: jefferyyuan
>Priority: Minor
> Fix For: master (8.0)
>
>
> Cluster properties is a good place to store configuration data that's shared 
> in the whole cluster: solr and other (authorized) apps can easily read and 
> update them.
>  
> It would be very useful if we can store extra data in cluster properties 
> which would act as a centralized property management system between solr and 
> its related apps (like manager or monitor apps).
>  
> And the change would be also very simple.
> We can also require all extra property starts with prefix like: extra_
>  
> PR: https://github.com/apache/lucene-solr/pull/429
>  
>  






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 2699 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2699/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLeaderInitiatedRecoveryThread.testPublishDownState

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at __randomizedtesting.SeedInfo.seed([77B87EB15AA2C9FB:29C5DC4FFEBF8006]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at org.apache.solr.cloud.TestLeaderInitiatedRecoveryThread.testPublishDownState(TestLeaderInitiatedRecoveryThread.java:191)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-12208) Don't use "INDEX.sizeInBytes" as a tag name in policy calculations

2018-09-06 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606227#comment-16606227
 ] 

Cassandra Targett edited comment on SOLR-12208 at 9/6/18 6:32 PM:
--

[~ab], [~noble.paul]: The Ref Guide still references {{INDEX.sizeInBytes}} in 3 
places in 2 files: 2x in {{metrics-history.adoc}} and 1x in 
{{solrcloud-autoscaling-triggers.adoc}}.

ALL of these references need to be changed to {{INDEX.sizeInGB}}, correct?

In {{solrcloud-autoscaling-triggers.adoc}}, the {{aboveBytes} parameter 
additionally says the value should be entered in bytes, but it will be compared 
to a value in GB. Is that accurate after this metric change?


was (Author: ctargett):
[~ab], [~noble.paul]: The Ref Guide still references {{INDEX.sizeInBytes}} in 3 
places in 2 files: 2x in {{metrics-history.adoc}} and 1x in 
{{solrcloud-autoscaling-triggers.adoc}}. 

ALL of these references need to be changed to {{INDEX.sizeInGB}}, correct?

> Don't use "INDEX.sizeInBytes" as a tag name in policy calculations
> --
>
> Key: SOLR-12208
> URL: https://issues.apache.org/jira/browse/SOLR-12208
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12208.patch, SOLR-12208.patch, SOLR-12208.patch
>
>
> CORE_IDX and FREEDISK ConditionType reuse this metric name, but they assume 
> the values are expressed in gigabytes. This alone is confusing considering 
> the name of the metric.
> Additionally, it causes conflicts in the simulation framework that would 
> require substantial changes to resolve (ReplicaInfo-s in 
> SimClusterStateProvider keep metric values in their variables, expressed in 
> original units - but then the Policy assumes it can put the values expressed 
> in GB under the same key... hilarity ensues).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12208) Don't use "INDEX.sizeInBytes" as a tag name in policy calculations

2018-09-06 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606227#comment-16606227
 ] 

Cassandra Targett commented on SOLR-12208:
--

[~ab], [~noble.paul]: The Ref Guide still references {{INDEX.sizeInBytes}} in 3 
places in 2 files: 2x in {{metrics-history.adoc}} and 1x in 
{{solrcloud-autoscaling-triggers.adoc}}. 

ALL of these references need to be changed to {{INDEX.sizeInGB}}, correct?

> Don't use "INDEX.sizeInBytes" as a tag name in policy calculations
> --
>
> Key: SOLR-12208
> URL: https://issues.apache.org/jira/browse/SOLR-12208
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12208.patch, SOLR-12208.patch, SOLR-12208.patch
>
>
> CORE_IDX and FREEDISK ConditionType reuse this metric name, but they assume 
> the values are expressed in gigabytes. This alone is confusing considering 
> the name of the metric.
> Additionally, it causes conflicts in the simulation framework that would 
> require substantial changes to resolve (ReplicaInfo-s in 
> SimClusterStateProvider keep metric values in their variables, expressed in 
> original units - but then the Policy assumes it can put the values expressed 
> in GB under the same key... hilarity ensues).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10697) Improve defaults for maxConnectionsPerHost

2018-09-06 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606225#comment-16606225
 ] 

Varun Thacker commented on SOLR-10697:
--

Thanks Cassandra! It didn't occur to me that this was documented.

> Improve defaults for maxConnectionsPerHost
> --
>
> Key: SOLR-10697
> URL: https://issues.apache.org/jira/browse/SOLR-10697
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-10697.patch, SOLR-10697.patch, SOLR-10697.patch
>
>
> Twice recently I've increased 
> {{HttpShardHandlerFactory#maxConnectionsPerHost}} at a client and it helped 
> improve query latencies a lot.
> Should we increase the default to say 100 ?
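
For reference, this setting lives on the shard handler factory in {{solr.xml}}; a minimal sketch (the value shown is illustrative, not the proposed default):

{code:xml}
<solr>
  <shardHandlerFactory name="shardHandlerFactory"
                       class="HttpShardHandlerFactory">
    <!-- max concurrent connections per downstream host -->
    <int name="maxConnectionsPerHost">100</int>
  </shardHandlerFactory>
</solr>
{code}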



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10697) Improve defaults for maxConnectionsPerHost

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606219#comment-16606219
 ] 

ASF subversion and git services commented on SOLR-10697:


Commit 42f1fe1d4b02e8d9b1b79debcd0a98ae3ab87f0f in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=42f1fe1 ]

SOLR-10697: update Ref Guide for default value changes


> Improve defaults for maxConnectionsPerHost
> --
>
> Key: SOLR-10697
> URL: https://issues.apache.org/jira/browse/SOLR-10697
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-10697.patch, SOLR-10697.patch, SOLR-10697.patch
>
>
> Twice recently I've increased 
> {{HttpShardHandlerFactory#maxConnectionsPerHost}} at a client and it helped 
> improve query latencies a lot.
> Should we increase the default to say 100 ?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10697) Improve defaults for maxConnectionsPerHost

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606218#comment-16606218
 ] 

ASF subversion and git services commented on SOLR-10697:


Commit 8caa34c4cfe1c23beddc6861646558138adb87ad in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8caa34c ]

SOLR-10697: update Ref Guide for default value changes


> Improve defaults for maxConnectionsPerHost
> --
>
> Key: SOLR-10697
> URL: https://issues.apache.org/jira/browse/SOLR-10697
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-10697.patch, SOLR-10697.patch, SOLR-10697.patch
>
>
> Twice recently I've increased 
> {{HttpShardHandlerFactory#maxConnectionsPerHost}} at a client and it helped 
> improve query latencies a lot.
> Should we increase the default to say 100 ?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11943) Add machine learning functions for location data

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606217#comment-16606217
 ] 

ASF subversion and git services commented on SOLR-11943:


Commit d1b97a66b5d0f7d9e69e7bca74c4de6eaba6f4ac in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d1b97a6 ]

SOLR-11943: Update CHANGES.txt


> Add machine learning functions for location data
> 
>
> Key: SOLR-11943
> URL: https://issues.apache.org/jira/browse/SOLR-11943
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11943.patch
>
>
> This ticket will add the following functions / features:
> 1) *locationVectors* function: Reads a list of tuples that contain a 
> *location* field type and returns a *matrix* of lat/long vectors. 
> 2) Add support for *haversinMeters* distance measure.
> With the addition of these two functions we'll have the ability to do various 
> distance based machine learning algorithms (distance matrices, clustering, 
> knn regression etc...) with location data.
>  
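
As background for readers, the haversine great-circle distance in meters can be sketched as follows. This is a standalone illustration of the formula, not Solr's implementation; the mean Earth radius used is an assumption:

```python
import math

def haversine_meters(lat1, lon1, lat2, lon2, radius=6371008.8):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # haversine formula: a is the squared half-chord length between the points
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))
```

One degree of longitude at the equator comes out to roughly 111 km, which is a quick sanity check for any implementation of this measure.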



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11943) Add machine learning functions for location data

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606211#comment-16606211
 ] 

ASF subversion and git services commented on SOLR-11943:


Commit c684773e8df0c12eb490b53e41eedb5de0686b1e in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c684773 ]

SOLR-11943: Update CHANGES.txt


> Add machine learning functions for location data
> 
>
> Key: SOLR-11943
> URL: https://issues.apache.org/jira/browse/SOLR-11943
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11943.patch
>
>
> This ticket will add the following functions / features:
> 1) *locationVectors* function: Reads a list of tuples that contain a 
> *location* field type and returns a *matrix* of lat/long vectors. 
> 2) Add support for *haversinMeters* distance measure.
> With the addition of these two functions we'll have the ability to do various 
> distance based machine learning algorithms (distance matrices, clustering, 
> knn regression etc...) with location data.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12742) Improve documentation for auto add replicas and trigger documentation

2018-09-06 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12742:
-
Fix Version/s: 7.5

> Improve documentation for auto add replicas and trigger documentation
> -
>
> Key: SOLR-12742
> URL: https://issues.apache.org/jira/browse/SOLR-12742
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, documentation, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> After SOLR-12715, the documentation (and user story) around the auto add 
> replicas feature is inconsistent. This is because even though we can now add 
> new replicas automatically on new nodes, the {{autoAddReplicas=true}} 
> parameter on the create collection API only moves replicas around to replace 
> the lost replicas. We should try to fix our documentation first to describe 
> these two features in a coherent way and open another issue, if needed, to 
> change the behavior of autoAddReplicas parameter in Solr 8.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12742) Improve documentation for auto add replicas and trigger documentation

2018-09-06 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-12742:


Assignee: Cassandra Targett

> Improve documentation for auto add replicas and trigger documentation
> -
>
> Key: SOLR-12742
> URL: https://issues.apache.org/jira/browse/SOLR-12742
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, documentation, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> After SOLR-12715, the documentation (and user story) around the auto add 
> replicas feature is inconsistent. This is because even though we can now add 
> new replicas automatically on new nodes, the {{autoAddReplicas=true}} 
> parameter on the create collection API only moves replicas around to replace 
> the lost replicas. We should try to fix our documentation first to describe 
> these two features in a coherent way and open another issue, if needed, to 
> change the behavior of autoAddReplicas parameter in Solr 8.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12742) Improve documentation for auto add replicas and trigger documentation

2018-09-06 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606205#comment-16606205
 ] 

Cassandra Targett commented on SOLR-12742:
--

I made a couple of commits already for the overall Trigger documentation, but 
forgot about this issue and attached them to Shalin's SOLR-12716. The SHAs for 
those are:

master: cac589b803c518a388366a506a0067254e5b6c22

branch_7x: c85904288dd370f13c0a1287b2fcc38ff8a73159

> Improve documentation for auto add replicas and trigger documentation
> -
>
> Key: SOLR-12742
> URL: https://issues.apache.org/jira/browse/SOLR-12742
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, documentation, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0)
>
>
> After SOLR-12715, the documentation (and user story) around the auto add 
> replicas feature is inconsistent. This is because even though we can now add 
> new replicas automatically on new nodes, the {{autoAddReplicas=true}} 
> parameter on the create collection API only moves replicas around to replace 
> the lost replicas. We should try to fix our documentation first to describe 
> these two features in a coherent way and open another issue, if needed, to 
> change the behavior of autoAddReplicas parameter in Solr 8.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12716) NodeLostTrigger should support deleting replicas from lost nodes

2018-09-06 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606203#comment-16606203
 ] 

Cassandra Targett commented on SOLR-12716:
--

Oops, forgot Shalin had made a new issue for some of the docs changes - my 
commits ^^ should have gone there. Will copy the SHAs into a comment on that 
issue also.

> NodeLostTrigger should support deleting replicas from lost nodes
> 
>
> Key: SOLR-12716
> URL: https://issues.apache.org/jira/browse/SOLR-12716
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12716.patch
>
>
> NodeLostTrigger only moves replicas from the lost node to other nodes in the 
> cluster. We should add a way to delete replicas of the lost node from the 
> cluster state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11943) Add machine learning functions for location data

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606196#comment-16606196
 ] 

ASF subversion and git services commented on SOLR-11943:


Commit fbf2885ae96fcaa499a3018158b5c94a25c048be in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fbf2885 ]

SOLR-11943: Add machine learning functions for location data


> Add machine learning functions for location data
> 
>
> Key: SOLR-11943
> URL: https://issues.apache.org/jira/browse/SOLR-11943
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11943.patch
>
>
> This ticket will add the following functions / features:
> 1) *locationVectors* function: Reads a list of tuples that contain a 
> *location* field type and returns a *matrix* of lat/long vectors. 
> 2) Add support for *haversinMeters* distance measure.
> With the addition of these two functions we'll have the ability to do various 
> distance based machine learning algorithms (distance matrices, clustering, 
> knn regression etc...) with location data.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9418) Statistical Phrase Identifier

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606190#comment-16606190
 ] 

ASF subversion and git services commented on SOLR-9418:
---

Commit 3f5c8d7e83055ab4d2e313ab7909505a75a30ae6 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3f5c8d7 ]

SOLR-9418: Added a new (experimental) PhrasesIdentificationComponent for 
identifying potential phrases in query input based on overlapping shingles in 
the index

(cherry picked from commit 597bd5db77465e1282ebf722264423d631861596)


> Statistical Phrase Identifier
> -
>
> Key: SOLR-9418
> URL: https://issues.apache.org/jira/browse/SOLR-9418
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Akash Mehta
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9418.patch, SOLR-9418.patch, SOLR-9418.patch, 
> SOLR-9418.zip
>
>
> h2. *Summary:*
> The Statistical Phrase Identifier is a Solr contribution that takes in a 
> string of text and then leverages a language model (an Apache Lucene/Solr 
> inverted index) to predict how the inputted text should be divided into 
> phrases. The intended purpose of this tool is to parse short-text queries 
> into phrases prior to executing a keyword search (as opposed to parsing out 
> each keyword as a single term).
> It is being generously donated to the Solr project by CareerBuilder, with the 
> original source code and a quickly demo-able version located here: 
> https://github.com/careerbuilder/statistical-phrase-identifier
> h2. *Purpose:*
> Assume you're building a job search engine, and one of your users searches 
> for the following:
>  _machine learning research and development Portland, OR software engineer 
> AND hadoop, java_
> Most search engines will natively parse this query into the following boolean 
> representation:
>  _(machine AND learning AND research AND development AND Portland) OR 
> (software AND engineer AND hadoop AND java)_
> While this query may still yield relevant results, it is clear that the 
> intent of the user wasn't understood very well at all. By leveraging the 
> Statistical Phrase Identifier on this string prior to query parsing, you can 
> instead expect the following parsing:
> _{machine learning} \{and} \{research and development} \{Portland, OR} 
> \{software engineer} \{AND} \{hadoop,} \{java}_
> It is then possible to modify all the multi-word phrases prior to executing 
> the search:
>  _"machine learning" and "research and development" "Portland, OR" "software 
> engineer" AND hadoop, java_
> Of course, you could do your own query parsing to specifically handle the 
> boolean syntax, but the following would eventually be interpreted correctly 
> by Apache Solr and most other search engines:
>  _"machine learning" AND "research and development" AND "Portland, OR" AND 
> "software engineer" AND hadoop AND java_ 
> h2. *History:*
> This project was originally implemented by the search team at CareerBuilder 
> in the summer of 2015 for use as part of their semantic search system. In the 
> summer of 2016, Akash Mehta, implemented a much simpler version as a proof of 
> concept based upon publicly available information about the CareerBuilder 
> implementation (the first attached patch).  In July of 2018, CareerBuilder 
> open sourced their original version 
> (https://github.com/careerbuilder/statistical-phrase-identifier),
>  and agreed to also donate the code to the Apache Software foundation as a 
> Solr contribution. A Solr patch with the CareerBuilder version was added to 
> this issue on September 5th, 2018, and community feedback and contributions 
> are encouraged.
> This issue was originally titled the "Probabilistic Query Parser", but the 
> name has now been updated to "Statistical Phrase Identifier" to avoid 
> ambiguity with Solr's query parsers (per some of the feedback on this issue), 
> as the implementation is actually just a mechanism for identifying phrases 
> statistically from a string and is NOT a Solr query parser. 
> h2. *Example usage:*
> h3. (See contrib readme or configuration files in the patch for full 
> configuration details)
> h3. *{{Request:}}*
> {code:java}
> http://localhost:8983/solr/spi/parse?q=darth vader obi wan kenobi anakin 
> skywalker toad x men magneto professor xavier{code}
> h3. *{{Response:}}* 
> {code:java}
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":25},
>     "top_parsed_query":"{darth vader} {obi wan kenobi} {anakin skywalker} 
> {toad} {x men} {magneto} {professor xavier}",
>     

[jira] [Commented] (SOLR-11943) Add machine learning functions for location data

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606186#comment-16606186
 ] 

ASF subversion and git services commented on SOLR-11943:


Commit b8e87a101017711d634733242d5563eef836365e in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b8e87a1 ]

SOLR-11943: Add machine learning functions for location data


> Add machine learning functions for location data
> 
>
> Key: SOLR-11943
> URL: https://issues.apache.org/jira/browse/SOLR-11943
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11943.patch
>
>
> This ticket will add the following functions / features:
> 1) *locationVectors* function: Reads a list of tuples that contain a 
> *location* field type and returns a *matrix* of lat/long vectors. 
> 2) Add support for *haversinMeters* distance measure.
> With the addition of these two functions we'll have the ability to do various 
> distance based machine learning algorithms (distance matrices, clustering, 
> knn regression etc...) with location data.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-09-06 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606183#comment-16606183
 ] 

Cassandra Targett commented on SOLR-12441:
--

I don't see this new NestedUpdateProcessorFactory added to the URP page 
({{update-request-processors.adoc}}) in the Ref Guide - it's not supposed to be 
hidden from users, is it?

For URPs so far, we've generally just added a link to the javadocs and a short 
description - I would be willing to make sure it's added if someone could 
give me a sentence or two about what it does/when to use it so I don't have to 
study this issue to try to figure it out myself.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}
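
To make the proposal concrete, a hypothetical child document might be stamped like this - field names are taken from the proposal quoted above, and the exact names, underscore styling, and path syntax are still open questions on this issue:

{code:json}
{
  "id": "1",
  "type_s": "book",
  "chapters": [
    {
      "id": "1.1",
      "type_s": "chapter",
      "_nestParent_": "1",
      "_nestLevel_": 1,
      "_nestPath_": "chapters"
    }
  ]
}
{code}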



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8468) A ByteBuffer based Directory implementation (and associated classes)

2018-09-06 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606179#comment-16606179
 ] 

Dawid Weiss commented on LUCENE-8468:
-

Thanks Adrien.

> A ByteBuffer based Directory implementation (and associated classes)
> 
>
> Key: LUCENE-8468
> URL: https://issues.apache.org/jira/browse/LUCENE-8468
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8468.patch
>
>
> A factored-out sub-patch with ByteBufferDirectory and associated index 
> inputs, outputs, etc. and tests. No refactorings or cleanups to any other 
> classes (these will go in to master after 8.0 branch is cut).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9418) Statistical Phrase Identifier

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606169#comment-16606169
 ] 

ASF subversion and git services commented on SOLR-9418:
---

Commit 597bd5db77465e1282ebf722264423d631861596 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=597bd5d ]

SOLR-9418: Added a new (experimental) PhrasesIdentificationComponent for 
identifying potential phrases in query input based on overlapping shingles in 
the index


> Statistical Phrase Identifier
> -
>
> Key: SOLR-9418
> URL: https://issues.apache.org/jira/browse/SOLR-9418
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Akash Mehta
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9418.patch, SOLR-9418.patch, SOLR-9418.patch, 
> SOLR-9418.zip
>
>
> h2. *Summary:*
> The Statistical Phrase Identifier is a Solr contribution that takes in a 
> string of text and then leverages a language model (an Apache Lucene/Solr 
> inverted index) to predict how the inputted text should be divided into 
> phrases. The intended purpose of this tool is to parse short-text queries 
> into phrases prior to executing a keyword search (as opposed to parsing out 
> each keyword as a single term).
> It is being generously donated to the Solr project by CareerBuilder, with the 
> original source code and a quickly demo-able version located here: 
> https://github.com/careerbuilder/statistical-phrase-identifier
> h2. *Purpose:*
> Assume you're building a job search engine, and one of your users searches 
> for the following:
>  _machine learning research and development Portland, OR software engineer 
> AND hadoop, java_
> Most search engines will natively parse this query into the following boolean 
> representation:
>  _(machine AND learning AND research AND development AND Portland) OR 
> (software AND engineer AND hadoop AND java)_
> While this query may still yield relevant results, it is clear that the 
> intent of the user wasn't understood very well at all. By leveraging the 
> Statistical Phrase Identifier on this string prior to query parsing, you can 
> instead expect the following parsing:
> _{machine learning} \{and} \{research and development} \{Portland, OR} 
> \{software engineer} \{AND} \{hadoop,} \{java}_
> It is then possible to modify all the multi-word phrases prior to executing 
> the search:
>  _"machine learning" and "research and development" "Portland, OR" "software 
> engineer" AND hadoop, java_
> Of course, you could do your own query parsing to specifically handle the 
> boolean syntax, but the following would eventually be interpreted correctly 
> by Apache Solr and most other search engines:
>  _"machine learning" AND "research and development" AND "Portland, OR" AND 
> "software engineer" AND hadoop AND java_ 
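The rewrite step above, quoting each multi-word phrase before query parsing, takes only a few lines of client-side code. This is a hypothetical helper (not part of the patch) that reproduces the example:

```java
import java.util.*;
import java.util.stream.*;

// Quote every multi-word phrase so a keyword query parser treats it as a unit.
public class PhraseQuoter {
    static String quotePhrases(List<String> phrases) {
        return phrases.stream()
            .map(String::trim)
            .map(p -> p.contains(" ") ? "\"" + p + "\"" : p)  // only multi-word phrases get quotes
            .collect(Collectors.joining(" "));
    }

    public static void main(String[] args) {
        System.out.println(quotePhrases(List.of(
            "machine learning", "and", "research and development",
            "Portland, OR", "software engineer", "AND", "hadoop,", "java")));
        // → "machine learning" and "research and development" "Portland, OR" "software engineer" AND hadoop, java
    }
}
```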
> h2. *History:*
> This project was originally implemented by the search team at CareerBuilder 
> in the summer of 2015 for use as part of their semantic search system. In the 
> summer of 2016, Akash Mehta implemented a much simpler version as a proof of 
> concept based upon publicly available information about the CareerBuilder 
> implementation (the first attached patch).  In July of 2018, CareerBuilder 
> open sourced their original version 
> (https://github.com/careerbuilder/statistical-phrase-identifier) 
> and agreed to also donate the code to the Apache Software Foundation as a 
> Solr contribution. A Solr patch with the CareerBuilder version was added to 
> this issue on September 5th, 2018, and community feedback and contributions 
> are encouraged.
> This issue was originally titled the "Probabilistic Query Parser", but the 
> name has now been updated to "Statistical Phrase Identifier" to avoid 
> ambiguity with Solr's query parsers (per some of the feedback on this issue), 
> as the implementation is actually just a mechanism for identifying phrases 
> statistically from a string and is NOT a Solr query parser. 
> h2. *Example usage:*
> h3. (See contrib readme or configuration files in the patch for full 
> configuration details)
> h3. *{{Request:}}*
> {code:java}
> http://localhost:8983/solr/spi/parse?q=darth vader obi wan kenobi anakin 
> skywalker toad x men magneto professor xavier{code}
> h3. *{{Response:}}* 
> {code:java}
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":25},
>     "top_parsed_query":"{darth vader} {obi wan kenobi} {anakin skywalker} 
> {toad} {x men} {magneto} {professor xavier}",
>     "top_parsed_phrases":[
>       "darth vader",
>       "obi wan kenobi",
>     

[jira] [Updated] (SOLR-11943) Add machine learning functions for location data

2018-09-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11943:
--
Description: 
This ticket will add the following functions / features:

1) *locationVectors* function: Reads a list of tuples that contain a *location* 
field type and returns a *matrix* of lat/long vectors. 

2) Add support for *haversinMeters* distance measure.

With the addition of these two functions we'll have the ability to run various 
distance-based machine learning algorithms (distance matrices, clustering, knn 
regression, etc.) with location data.
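For reference, the great-circle distance that *haversinMeters* refers to is the standard haversine formula. The sketch below is standalone (not Solr's code); the mean earth radius constant is an assumption of this example.

```java
// Standalone haversine great-circle distance in meters.
public class Haversine {
    static final double EARTH_RADIUS_M = 6_371_008.8;  // mean earth radius (assumption)

    static double haversinMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // New York (40.7128, -74.0060) to London (51.5074, -0.1278): roughly 5.57e6 meters
        System.out.printf("%.0f m%n", haversinMeters(40.7128, -74.0060, 51.5074, -0.1278));
    }
}
```

A matrix of lat/long vectors from *locationVectors* paired with this measure is what enables the distance matrices and clustering mentioned above.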

 

  was:
This ticket will add the following functions / features:

1) *locationVectors* function: Reads a list of tuples that contain a *location* 
field type and returns a *matrix* of lat/long vectors. 

2) Add support for *haversinMeters* distance measure.

 


> Add machine learning functions for location data
> 
>
> Key: SOLR-11943
> URL: https://issues.apache.org/jira/browse/SOLR-11943
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11943.patch
>
>
> This ticket will add the following functions / features:
> 1) *locationVectors* function: Reads a list of tuples that contain a 
> *location* field type and returns a *matrix* of lat/long vectors. 
> 2) Add support for *haversinMeters* distance measure.
> With the addition of these two functions we'll have the ability to run various 
> distance-based machine learning algorithms (distance matrices, clustering, 
> knn regression, etc.) with location data.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2698 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2698/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.DirectoryFactoryTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.DirectoryFactoryTest:  
   1) Thread[id=16, name=Log4j2-TF-1-AsyncLoggerConfig--1, state=TIMED_WAITING, 
group=TGRP-DirectoryFactoryTest] at 
java.base@10.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2117)
 at 
app//com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
 at 
app//com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
 at 
app//com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)   
  at java.base@10.0.1/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.DirectoryFactoryTest: 
   1) Thread[id=16, name=Log4j2-TF-1-AsyncLoggerConfig--1, state=TIMED_WAITING, 
group=TGRP-DirectoryFactoryTest]
at java.base@10.0.1/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@10.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2117)
at 
app//com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
at 
app//com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at 
app//com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.base@10.0.1/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([F10518C97ADC6E79]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.DirectoryFactoryTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=16, 
name=Log4j2-TF-1-AsyncLoggerConfig--1, state=TIMED_WAITING, 
group=TGRP-DirectoryFactoryTest] at 
java.base@10.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2117)
 at 
app//com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
 at 
app//com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
 at 
app//com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)   
  at java.base@10.0.1/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=16, name=Log4j2-TF-1-AsyncLoggerConfig--1, state=TIMED_WAITING, 
group=TGRP-DirectoryFactoryTest]
at java.base@10.0.1/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@10.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2117)
at 
app//com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
at 
app//com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at 
app//com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.base@10.0.1/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([F10518C97ADC6E79]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.DirectoryFactoryTest

Error Message:
The test or suite printed 5951360 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 5951360 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([F10518C97ADC6E79]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:211)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 

[jira] [Updated] (LUCENE-8489) Provide List type constructors for BaseCompositeReader based Readers

2018-09-06 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim updated LUCENE-8489:
---
Description: 
Currently, Readers based on BaseCompositeReader (MultiReader, 
ParallelCompositeReader, DirectoryReader) do not support List type 
constructors.

In fact, this does not make a big difference in performance, but users will 
appreciate it if the API supports more variants.

I will add the following to support this.

1) MultiReader
{code:java}
public MultiReader(List<IndexReader> subReaders) throws IOException {
  this(subReaders, true);
}

public MultiReader(List<IndexReader> subReaders, boolean closeSubReaders) 
throws IOException {
  this(subReaders.toArray(new IndexReader[0]), closeSubReaders);
}
{code}
2) ParallelCompositeReader
{code:java}
public ParallelCompositeReader(List<CompositeReader> readers) throws 
IOException {
  this(true, readers);
}

public ParallelCompositeReader(boolean closeSubReaders, 
List<CompositeReader> readers) throws IOException {
  this(closeSubReaders, readers, readers);
}

public ParallelCompositeReader(boolean closeSubReaders, 
List<CompositeReader> readers, List<CompositeReader> storedFieldReaders) 
throws IOException {
  this(closeSubReaders, readers.toArray(new CompositeReader[0]), 
storedFieldReaders.toArray(new CompositeReader[0]));
}
{code}
3) DirectoryReader
{code:java}
protected DirectoryReader(Directory directory, List<LeafReader> segmentReaders) 
throws IOException {
  super(segmentReaders);
  this.directory = directory;
}
{code}
4) BaseCompositeReader
{code:java}
@SuppressWarnings("unchecked")
protected BaseCompositeReader(List<R> subReaders) throws IOException {
  this(subReaders.toArray((R[]) new IndexReader[0]));
}
{code}
5) Test
 I plan to write a test case in "TestParallelCompositeReader".
 After writing the test case, I will upload the patch.

 

If you have any questions or requests, please leave any comments :D
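One detail worth reviewing in these constructors: the no-argument `List.toArray()` returns `Object[]`, so a direct downcast such as `(IndexReader[]) subReaders.toArray()` throws `ClassCastException` at runtime; the typed overload `toArray(new IndexReader[0])` is the safe form. A self-contained demonstration with `String` (no Lucene dependency needed to see the pitfall):

```java
import java.util.*;

// Demonstrates why (T[]) list.toArray() fails while list.toArray(new T[0]) works.
public class ToArrayPitfall {
    public static void main(String[] args) {
        List<String> items = List.of("a", "b");
        try {
            // toArray() returns Object[], which cannot be cast to String[]
            String[] bad = (String[]) items.toArray();
            System.out.println("unexpected: " + Arrays.toString(bad));
        } catch (ClassCastException expected) {
            System.out.println("cast of toArray() failed as expected");
        }
        // The typed overload allocates an array of the right component type
        String[] good = items.toArray(new String[0]);
        System.out.println(Arrays.toString(good));  // [a, b]
    }
}
```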

  was:
Currently, Reader based on BaseCompositeReader(MultiReader, 
ParallelCompositeReader, DirectoryReader) does not support List type 
constructor.

In fact, this does not make a big difference in performance, but users will 
think positively if the API supports more variants.

I will modify the following to support this.

1) MultiReader
{code:java}
public MultiReader(List<IndexReader> subReaders) throws IOException {
  this(subReaders, true);
}

public MultiReader(List<IndexReader> subReaders, boolean closeSubReaders) 
throws IOException {
  this(subReaders.toArray(new IndexReader[0]), closeSubReaders);
}
{code}
2) ParallelCompositeReader
{code:java}
public ParallelCompositeReader(List<CompositeReader> readers) throws 
IOException {
  this(true, readers);
}

public ParallelCompositeReader(boolean closeSubReaders, 
List<CompositeReader> readers) throws IOException {
  this(closeSubReaders, readers, readers);
}

public ParallelCompositeReader(boolean closeSubReaders, 
List<CompositeReader> readers, List<CompositeReader> storedFieldReaders) 
throws IOException {
  this(closeSubReaders, readers.toArray(new CompositeReader[0]), 
storedFieldReaders.toArray(new CompositeReader[0]));
}
{code}
3) DirectoryReader
{code:java}
protected DirectoryReader(Directory directory, List<LeafReader> segmentReaders) 
throws IOException {
  super(segmentReaders);
  this.directory = directory;
}
{code}
4) BaseCompositeReader
{code:java}
@SuppressWarnings("unchecked")
protected BaseCompositeReader(List<R> subReaders) throws IOException {
  this(subReaders.toArray((R[]) new IndexReader[0]));
}
{code}
5) Test
 I plan to write a test case in "TestParallelCompositeReader".
 After writing the test case, I will upload the patch.

 

If you have any questions or requests, please leave any comments :D


> Provide List type constructors for BaseCompositeReader based Readers
> 
>
> Key: LUCENE-8489
> URL: https://issues.apache.org/jira/browse/LUCENE-8489
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Namgyu Kim
>Priority: Major
>  Labels: usability
>
> Currently, Reader based on BaseCompositeReader(MultiReader, 
> ParallelCompositeReader, DirectoryReader) does not support List type 
> constructor.
> In fact, this does not make a big difference in performance, but users will 
> think positively if the API supports more variants.
> I will add the following to support this.
> 1) MultiReader
> {code:java}
> public MultiReader(List<IndexReader> subReaders) throws IOException {
>   this(subReaders, true);
> }
> public MultiReader(List<IndexReader> subReaders, boolean closeSubReaders) 
> throws IOException {
>   this(subReaders.toArray(new IndexReader[0]), closeSubReaders);
> }
> {code}
> 2) ParallelCompositeReader
> {code:java}
> public ParallelCompositeReader(List<CompositeReader> readers) throws 
> IOException {
>   this(true, readers);
> }
> public ParallelCompositeReader(boolean closeSubReaders, List<CompositeReader> 
> readers) throws IOException {
>   this(closeSubReaders, readers, readers);
> }
> public ParallelCompositeReader(boolean closeSubReaders, List<CompositeReader> 
> readers, List<CompositeReader> storedFieldReaders) 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2049 - Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2049/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.logging.TestLogWatcher.testLog4jWatcher

Error Message:
expected:<-1> but was:<1536248914186>

Stack Trace:
java.lang.AssertionError: expected:<-1> but was:<1536248914186>
at 
__randomizedtesting.SeedInfo.seed([47FCD21E1B9AA7D4:BA67825EE29218DA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.logging.TestLogWatcher.testLog4jWatcher(TestLogWatcher.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12756 lines...]
   [junit4] Suite: org.apache.solr.logging.TestLogWatcher
   [junit4]   

[jira] [Created] (LUCENE-8489) Provide List type constructors for BaseCompositeReader based Readers

2018-09-06 Thread Namgyu Kim (JIRA)
Namgyu Kim created LUCENE-8489:
--

 Summary: Provide List type constructors for BaseCompositeReader 
based Readers
 Key: LUCENE-8489
 URL: https://issues.apache.org/jira/browse/LUCENE-8489
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Namgyu Kim


Currently, Readers based on BaseCompositeReader (MultiReader, 
ParallelCompositeReader, DirectoryReader) do not support List type 
constructors.

In fact, this does not make a big difference in performance, but users will 
appreciate it if the API supports more variants.

I will modify the following to support this.

1) MultiReader
{code:java}
public MultiReader(List<IndexReader> subReaders) throws IOException {
  this(subReaders, true);
}

public MultiReader(List<IndexReader> subReaders, boolean closeSubReaders) 
throws IOException {
  this(subReaders.toArray(new IndexReader[0]), closeSubReaders);
}
{code}
2) ParallelCompositeReader
{code:java}
public ParallelCompositeReader(List<CompositeReader> readers) throws 
IOException {
  this(true, readers);
}

public ParallelCompositeReader(boolean closeSubReaders, 
List<CompositeReader> readers) throws IOException {
  this(closeSubReaders, readers, readers);
}

public ParallelCompositeReader(boolean closeSubReaders, 
List<CompositeReader> readers, List<CompositeReader> storedFieldReaders) 
throws IOException {
  this(closeSubReaders, readers.toArray(new CompositeReader[0]), 
storedFieldReaders.toArray(new CompositeReader[0]));
}
{code}
3) DirectoryReader
{code:java}
protected DirectoryReader(Directory directory, List<LeafReader> segmentReaders) 
throws IOException {
  super(segmentReaders);
  this.directory = directory;
}
{code}
4) BaseCompositeReader
{code:java}
@SuppressWarnings("unchecked")
protected BaseCompositeReader(List<R> subReaders) throws IOException {
  this(subReaders.toArray((R[]) new IndexReader[0]));
}
{code}
5) Test
 I plan to write a test case in "TestParallelCompositeReader".
 After writing the test case, I will upload the patch.

 

If you have any questions or requests, please leave any comments :D



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11943) Add machine learning functions for location data

2018-09-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11943:
--
Attachment: SOLR-11943.patch

> Add machine learning functions for location data
> 
>
> Key: SOLR-11943
> URL: https://issues.apache.org/jira/browse/SOLR-11943
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11943.patch
>
>
> This ticket will add the following functions / features:
> 1) *locationVectors* function: Reads a list of tuples that contain a 
> *location* field type and returns a *matrix* of lat/long vectors. 
> 2) Add support for *haversinMeters* distance measure.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12716) NodeLostTrigger should support deleting replicas from lost nodes

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606111#comment-16606111
 ] 

ASF subversion and git services commented on SOLR-12716:


Commit c85904288dd370f13c0a1287b2fcc38ff8a73159 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c859042 ]

SOLR-12716: Move common params to top of page; insert links to common param 
section for each trigger; improve consistency


> NodeLostTrigger should support deleting replicas from lost nodes
> 
>
> Key: SOLR-12716
> URL: https://issues.apache.org/jira/browse/SOLR-12716
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12716.patch
>
>
> NodeLostTrigger only moves replicas from the lost node to other nodes in the 
> cluster. We should add a way to delete replicas of the lost node from the 
> cluster state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12722) ChildDocTransformer should have fl param

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606110#comment-16606110
 ] 

ASF subversion and git services commented on SOLR-12722:


Commit a84f84c2f65f714cb003a6c2af730d32fa75f2e7 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a84f84c ]

SOLR-12722: expand "params" -> "parameters" (plus a bunch of other things I 
found in unrelated transformer examples)


> ChildDocTransformer should have fl param
> 
>
> Key: SOLR-12722
> URL: https://issues.apache.org/jira/browse/SOLR-12722
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12722.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> There is a long-overdue TODO in ChildDocTransformer, to be able to pass an fl 
> param to specify which fields should be fetched by ChildDocTransformer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12684) Document speed gotchas and partitionKeys usage for ParallelStream

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606109#comment-16606109
 ] 

ASF subversion and git services commented on SOLR-12684:


Commit d6978717c4ab161f1f6d597a4468302b2a38b24a in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d697871 ]

SOLR-12684: put expression names and params in monospace


> Document speed gotchas and partitionKeys usage for ParallelStream
> -
>
> Key: SOLR-12684
> URL: https://issues.apache.org/jira/browse/SOLR-12684
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12684.patch, SOLR-12684.patch, SOLR-12684.patch, 
> SOLR-12684.patch
>
>
> The aim of this Jira is to beef up the ref guide around parallel stream
> There are two things I want to address:
>  
> Firstly, usage of partitionKeys:
> This line in the ref guide indicates that parallel stream keys should always 
> be the same as the underlying sort criteria 
> {code:java}
> The parallel function maintains the sort order of the tuples returned by the 
> worker nodes, so the sort criteria of the parallel function must match up 
> with the sort order of the tuples returned by the workers.
> {code}
> But as discussed on SOLR-12635 , Joel provided an example
> {code:java}
> The hash partitioner just needs to send documents to the same worker node. 
> You could do that with just one partitioning key
> For example if you sort on year, month and day. You could partition on year 
> only and still be fine as long as there was enough different years to spread 
> the records around the worker nodes.{code}
> So we should make this more clear in the ref guide.
> Let's also document that specifying more than 4 partitionKeys will throw an 
> error after SOLR-12683
>  
> At this point the user will understand how to use partitionKeys. It's related 
> to the sort criteria but should not include all of the sort fields. 
>  
> We should now mention a trick where the user could warm up the hash queries, 
> as they are always run on the whole document set (irrespective of the filter 
> criteria).
> Also, users should only use parallel mode when the number of docs matching the 
> post-filter criteria is very large.  
> {code:java}
> <arr name="queries">
>   <lst><str name="fq">{!hash workers=6 worker=0}</str><str name="partitionKeys">myPartitionKey</str></lst>
>   <lst><str name="fq">{!hash workers=6 worker=1}</str><str name="partitionKeys">myPartitionKey</str></lst>
>   <lst><str name="fq">{!hash workers=6 worker=2}</str><str name="partitionKeys">myPartitionKey</str></lst>
>   <lst><str name="fq">{!hash workers=6 worker=3}</str><str name="partitionKeys">myPartitionKey</str></lst>
>   <lst><str name="fq">{!hash workers=6 worker=4}</str><str name="partitionKeys">myPartitionKey</str></lst>
>   <lst><str name="fq">{!hash workers=6 worker=5}</str><str name="partitionKeys">myPartitionKey</str></lst>
> </arr>
> {code}
>   
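Conceptually, the `{!hash workers=N worker=i}` filter keeps only the documents whose partition-key hash maps to worker i, so every document is processed by exactly one worker. A minimal sketch of that idea, illustrative only (Solr's HashQParserPlugin uses its own hash function, not `String.hashCode()`):

```java
import java.util.*;

// Sketch of hash partitioning: route each partition-key value to one worker.
public class HashPartitionDemo {
    static int workerFor(String partitionKey, int workers) {
        // floorMod keeps the result in [0, workers) even for negative hash codes
        return Math.floorMod(partitionKey.hashCode(), workers);
    }

    public static void main(String[] args) {
        int workers = 6;
        Map<Integer, List<String>> byWorker = new TreeMap<>();
        for (String year : List.of("2015", "2016", "2017", "2018", "2019", "2020")) {
            byWorker.computeIfAbsent(workerFor(year, workers), w -> new ArrayList<>()).add(year);
        }
        // Each key lands on exactly one worker; the spread depends on the hash.
        System.out.println(byWorker);
    }
}
```

This also illustrates Joel's point above: partitioning on year alone is fine as long as there are enough distinct years to spread records across the workers.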



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11943) Add machine learning functions for location data

2018-09-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11943:
--
Description: 
This ticket will add the following functions / features:

1) *locationVectors* function: Reads a list of tuples that contain a *location* 
field type and returns a *matrix* of lat/long vectors. 

2) Add support for *haversinMeters* distance measure.

 

  was:
This ticket will add the following functions / features:

1) *locationVectors* function: Reads a list of tuples that contain a *location* 
field type and returns a *matrix* of lat/long vectors. 

2) *dbscan* function: dbscan clustering which can be used to cluster the 
lat/long matrix. 

3) Add support for *haversinMeters* and *haversinKilometers* distance to three 
functions: *knn*, *distance* and *dbscan*. This will support nearest neighbor 
searching, distance matrices and clustering of location data.

 


> Add machine learning functions for location data
> 
>
> Key: SOLR-11943
> URL: https://issues.apache.org/jira/browse/SOLR-11943
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the following functions / features:
> 1) *locationVectors* function: Reads a list of tuples that contain a 
> *location* field type and returns a *matrix* of lat/long vectors. 
> 2) Add support for *haversinMeters* distance measure.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12684) Document speed gotchas and partitionKeys usage for ParallelStream

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606094#comment-16606094
 ] 

ASF subversion and git services commented on SOLR-12684:


Commit 9c364b2d8640e84a2fe3b7a8d8adfc20d3d53e38 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9c364b2 ]

SOLR-12684: put expression names and params in monospace


> Document speed gotchas and partitionKeys usage for ParallelStream
> -
>
> Key: SOLR-12684
> URL: https://issues.apache.org/jira/browse/SOLR-12684
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12684.patch, SOLR-12684.patch, SOLR-12684.patch, 
> SOLR-12684.patch
>
>
> The aim of this Jira is to beef up the ref guide around parallel stream
> There are two things I want to address:
>  
> Firstly, usage of partitionKeys:
> This line in the ref guide indicates that parallel stream keys should always 
> be the same as the underlying sort criteria 
> {code:java}
> The parallel function maintains the sort order of the tuples returned by the 
> worker nodes, so the sort criteria of the parallel function must match up 
> with the sort order of the tuples returned by the workers.
> {code}
> But as discussed on SOLR-12635 , Joel provided an example
> {code:java}
> The hash partitioner just needs to send documents to the same worker node. 
> You could do that with just one partitioning key
> For example if you sort on year, month and day. You could partition on year 
> only and still be fine as long as there was enough different years to spread 
> the records around the worker nodes.{code}
> So we should make this clearer in the ref guide.
> Let's also document that specifying more than 4 partitionKeys will throw an 
> error after SOLR-12683.
>  
> At this point the user will understand how to use partitionKeys: it's related 
> to the sort criteria, but need not include all the sort fields.
>  
> We should also mention a trick where the user can warm up the hash queries, 
> as they are always run on the whole document set (irrespective of the filter 
> criteria).
> Also, users should only use parallel when the number of documents matching the 
> post-filter criteria is very large.
> {code:xml}
> <!-- markup reconstructed; the original tags were stripped by the mail archive -->
> <str name="q">*:*</str><str name="fq">{!hash workers=6 worker=0}</str><str name="partitionKeys">myPartitionKey</str>
> <str name="q">*:*</str><str name="fq">{!hash workers=6 worker=1}</str><str name="partitionKeys">myPartitionKey</str>
> <str name="q">*:*</str><str name="fq">{!hash workers=6 worker=2}</str><str name="partitionKeys">myPartitionKey</str>
> <str name="q">*:*</str><str name="fq">{!hash workers=6 worker=3}</str><str name="partitionKeys">myPartitionKey</str>
> <str name="q">*:*</str><str name="fq">{!hash workers=6 worker=4}</str><str name="partitionKeys">myPartitionKey</str>
> <str name="q">*:*</str><str name="fq">{!hash workers=6 worker=5}</str><str name="partitionKeys">myPartitionKey</str>
> {code}
>   
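The partitioning point above can be illustrated with a streaming expression. This is a hedged sketch, not taken from the issue: the collection name, the fields (`year_i`, `month_i`, `day_i`), and the zkHost are hypothetical. It sorts on all three fields but partitions on `year_i` alone, which is sufficient as long as the hash partitioner sends rows for the same year to the same worker.

```
parallel(collection1,
         search(collection1,
                q="*:*",
                fl="id,year_i,month_i,day_i",
                sort="year_i asc,month_i asc,day_i asc",
                qt="/export",
                partitionKeys="year_i"),
         workers="6",
         zkHost="localhost:9983",
         sort="year_i asc,month_i asc,day_i asc")
```

The outer sort must still match the workers' sort order; only the partitioning key list is allowed to be a prefix/subset of it.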



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12722) ChildDocTransformer should have fl param

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606095#comment-16606095
 ] 

ASF subversion and git services commented on SOLR-12722:


Commit 00ce9e067b8797b7dd0f1014c938354a59e15024 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=00ce9e0 ]

SOLR-12722: expand "params" -> "parameters" (plus a bunch of other things I 
found in unrelated transformer examples)


> ChildDocTransformer should have fl param
> 
>
> Key: SOLR-12722
> URL: https://issues.apache.org/jira/browse/SOLR-12722
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12722.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> There is a long-overdue TODO in ChildDocTransformer, to be able to pass an fl 
> param to specify which fields should be fetched by ChildDocTransformer






[jira] [Created] (SOLR-12752) Autoscaling triggers don't bring core properties along

2018-09-06 Thread James Strassburg (JIRA)
James Strassburg created SOLR-12752:
---

 Summary: Autoscaling triggers don't bring core properties along
 Key: SOLR-12752
 URL: https://issues.apache.org/jira/browse/SOLR-12752
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Affects Versions: 7.3.1
Reporter: James Strassburg


During a nodeLost or nodeAdded event, when replicas get moved to new nodes, any 
core properties that were defined during collection creation are lost. When the 
cores are created, the result is errors like the following:

products_20180904200015_shard1_replica_n39: 
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
Could not load conf for core products_20180904200015_shard1_replica_n39: Can't 
load schema schema.xml: No system property or default value specified for 
synonyms_datasource value:jdbc/${synonyms_datasource}

While configoverlay.json and the Config API are probably a better fit for what 
we're doing, SOLR-11529 is keeping us from moving in that direction.
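For context, the failure mode can be sketched as follows. This is a hedged illustration (the collection name is taken from the error above; the config name and JNDI-style value are hypothetical): `property.<name>=<value>` on the Collections API CREATE call is the standard way such a core property gets written into core.properties, and it is this value that a trigger-driven replica move does not carry along.

```
# Hypothetical CREATE call defining the core property (names illustrative):
http://localhost:8983/solr/admin/collections?action=CREATE
    &name=products_20180904200015
    &numShards=1
    &property.synonyms_datasource=jdbc/synonymsDS

# The config then references it as ${synonyms_datasource}; after a
# trigger-driven replica move, the newly created core lacks the property
# and variable substitution fails with the error shown above.
```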






[jira] [Commented] (SOLR-12716) NodeLostTrigger should support deleting replicas from lost nodes

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606096#comment-16606096
 ] 

ASF subversion and git services commented on SOLR-12716:


Commit cac589b803c518a388366a506a0067254e5b6c22 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cac589b ]

SOLR-12716: Move common params to top of page; insert links to common param 
section for each trigger; improve consistency


> NodeLostTrigger should support deleting replicas from lost nodes
> 
>
> Key: SOLR-12716
> URL: https://issues.apache.org/jira/browse/SOLR-12716
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12716.patch
>
>
> NodeLostTrigger only moves replicas from the lost node to other nodes in the 
> cluster. We should add a way to delete replicas of the lost node from the 
> cluster state.






[jira] [Commented] (SOLR-12128) Setting the same field to null multiple times in a row throws NullPointerException

2018-09-06 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606044#comment-16606044
 ] 

Munendra S N commented on SOLR-12128:
-

Faced a similar issue while reproducing SOLR-12127.

This line is the problem here: *getFieldValues(fname)* could return null, and 
iterating over null produces an NPE.
Ideally, the tlog would not contain null values for a field, hence there is no null 
check. Fixing SOLR-12127 will also fix this issue, as it arises only when 
setting a field value to null where the field is non-stored, non-indexed and 
single-valued with docValues enabled.
{code:java}
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java#L693
{code}
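The NPE pattern described above can be demonstrated in isolation. This is a hedged sketch, not Solr's actual RealTimeGetComponent code: the map stands in for a tlog document, and the hypothetical `copyFields` helper shows the null guard that the real code is missing.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: getFieldValues(fname) may return null for a field that
// was "set" to null, so the copy loop must guard before iterating.
public class NullSafeFieldCopy {

  // Stand-in for SolrInputDocument#getFieldValues: null for absent/nulled fields.
  static Collection<Object> getFieldValues(Map<String, Collection<Object>> doc, String fname) {
    return doc.get(fname);
  }

  // Copy field values, skipping fields whose value collection is null.
  static Map<String, Collection<Object>> copyFields(Map<String, Collection<Object>> doc) {
    Map<String, Collection<Object>> out = new LinkedHashMap<>();
    for (String fname : doc.keySet()) {
      Collection<Object> vals = getFieldValues(doc, fname);
      if (vals == null) {
        continue; // without this guard, the for-each below throws NullPointerException
      }
      out.put(fname, new ArrayList<>(vals));
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, Collection<Object>> doc = new LinkedHashMap<>();
    doc.put("id", new ArrayList<>(Collections.singletonList("372335")));
    doc.put("nulled_field", null); // simulates a field set to null in the tlog
    System.out.println(copyFields(doc).keySet()); // only the non-null field survives
  }
}
```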


> Setting the same field to null multiple times in a row throws 
> NullPointerException
> --
>
> Key: SOLR-12128
> URL: https://issues.apache.org/jira/browse/SOLR-12128
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6.2
>Reporter: Oliver Kuldmäe
>Priority: Critical
>
> I have a query that tries to set the same field to null multiple times in a 
> row. This results in a NullPointerException. Tested on 6.6.2.
> Stack trace:
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.RealTimeGetComponent.toSolrInputDocument(RealTimeGetComponent.java:667)
> at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocumentFromTlog(RealTimeGetComponent.java:539)
> at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocument(RealTimeGetComponent.java:593)
> at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:253)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1352)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1078)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:748)
> at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
> at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:261)
> at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188)
> at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
> at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at 

[jira] [Commented] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2018-09-06 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605932#comment-16605932
 ] 

Erick Erickson commented on SOLR-12727:
---

Here's what I have so far. I'll be out today so putting up this preliminary 
patch in case someone who, you know, like, understands ZK wants to take a 
glance.

There are two always-failing tests:

SaslZkACLProviderTest
TestConfigSetsAPIZkFailure

I "fixed" TestConfigSetsAPIZkFailure by the change to ZkTestServer, 
-  zooKeeperServer.shutdown();
+  zooKeeperServer.shutdown(true);

Why did I do that? Well, I looked at the ZooKeeper code for shutdown and saw 
that the entire code path that was erroring out with an NPE could be avoided by 
passing "true". IOW I made a random change that stopped the error without 
having a clue what the consequences are. I don't consider it a fix until I 
understand why the old code did not error out here, but at least it points to 
where to start looking.

If you think the above is a hint that I'd be grateful if someone who 
understands more about ZooKeeper and the ZkTestServer would chime in, you're 
right ;).

[~markrmil...@gmail.com][~gchanan] [~tomasflobbe][~anshum] you've touched this 
file in the past, any wisdom?

I'll look more tonight when I get back home.

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.






[jira] [Updated] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2018-09-06 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12727:
--
Attachment: SOLR-12727.patch

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.






[jira] [Commented] (SOLR-12127) Using atomic updates to remove docValues type dynamic field does not work

2018-09-06 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605908#comment-16605908
 ] 

Munendra S N commented on SOLR-12127:
-

[~oliverkuldmae]
I did a little debugging to check how Solr decides whether an update is atomic or 
in-place. In in-place updates, only set and inc operations are allowed. A check 
can be added here: if the operation is set and the value is null or an empty list, 
treat the update as an atomic update, which would solve the issue. 
[^SOLR-12127.patch] 
I have attached a first draft (it doesn't include tests).
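The proposed check can be sketched as follows. This is a hedged illustration with hypothetical names, not Solr's actual AtomicUpdateDocumentMerger API: it only captures the decision rule described above ("set" to null or an empty list must fall back to an atomic update).

```java
import java.util.Collection;

// Hypothetical sketch of the in-place eligibility check described in the
// comment above: only "set" and "inc" are in-place candidates, and a "set"
// whose value is null or an empty list should be treated as atomic instead.
public class InPlaceUpdateCheck {

  static boolean isNullOrEmptyList(Object value) {
    if (value == null) return true;
    return (value instanceof Collection) && ((Collection<?>) value).isEmpty();
  }

  static boolean isInPlaceCandidate(String op, Object value) {
    if ("inc".equals(op)) return true;
    if ("set".equals(op)) return !isNullOrEmptyList(value);
    return false; // add, remove, removeregex, etc. are always atomic
  }

  public static void main(String[] args) {
    System.out.println(isInPlaceCandidate("set", null));  // false: route to atomic path
    System.out.println(isInPlaceCandidate("set", 42));    // true: in-place candidate
  }
}
```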

> Using atomic updates to remove docValues type dynamic field does not work
> -
>
> Key: SOLR-12127
> URL: https://issues.apache.org/jira/browse/SOLR-12127
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6.2, 7.2
>Reporter: Oliver Kuldmäe
>Priority: Critical
> Attachments: SOLR-12127.patch
>
>
> I have defined a dynamic field which is stored=false, indexed=false and 
> docValues=true. Attempting to set this field's value to null via atomic 
> update does not remove the field from the document. However, the document's 
> version is updated. Using atomic updates to set a value for the field does 
> work. Tested on 6.6.2 and 7.2.1. 
> An example of a non-working update query:
> {code:xml}
> <!-- markup reconstructed; the tags were stripped by the mail archive and
>      the dynamic field name is a placeholder -->
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <field name="my_dynamic_field_dt" update="set" null="true" />
>   </doc>
> </add>
> {code}
>  
> An example of a working update query:
> {code:xml}
> <!-- markup reconstructed; the field name is a placeholder -->
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <field name="my_dynamic_field_dt" update="set">1521472499</field>
>   </doc>
> </add>
> {code}






[jira] [Updated] (SOLR-12127) Using atomic updates to remove docValues type dynamic field does not work

2018-09-06 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12127:

Attachment: SOLR-12127.patch

> Using atomic updates to remove docValues type dynamic field does not work
> -
>
> Key: SOLR-12127
> URL: https://issues.apache.org/jira/browse/SOLR-12127
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6.2, 7.2
>Reporter: Oliver Kuldmäe
>Priority: Critical
> Attachments: SOLR-12127.patch
>
>
> I have defined a dynamic field which is stored=false, indexed=false and 
> docValues=true. Attempting to set this field's value to null via atomic 
> update does not remove the field from the document. However, the document's 
> version is updated. Using atomic updates to set a value for the field does 
> work. Tested on 6.6.2 and 7.2.1. 
> An example of a non-working update query:
> {code:xml}
> <!-- markup reconstructed; the tags were stripped by the mail archive and
>      the dynamic field name is a placeholder -->
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <field name="my_dynamic_field_dt" update="set" null="true" />
>   </doc>
> </add>
> {code}
>  
> An example of a working update query:
> {code:xml}
> <!-- markup reconstructed; the field name is a placeholder -->
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <field name="my_dynamic_field_dt" update="set">1521472499</field>
>   </doc>
> </add>
> {code}






[jira] [Resolved] (SOLR-12751) Multi word searching is not working getting random search results

2018-09-06 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-12751.
---
Resolution: Not A Bug

This issue tracker is not a support portal. Please raise this question on the 
user's list at solr-u...@lucene.apache.org (see 
http://lucene.apache.org/solr/community.html#mailing-lists-irc); there are a 
_lot_ more people watching that list who may be able to help, and you'll 
probably get responses much more quickly.

 

If it's determined that this really is a code issue or enhancement to Solr and 
not a configuration/usage problem, we can raise a new JIRA or reopen this one.

 

Your problem is likely that the default operator is OR and Solr is doing 
exactly what you're telling it to. &debug=query is your friend.
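A hedged example of both suggestions (host, collection, and parameter values are placeholders, not from the report): force AND as the default operator and inspect how the query was parsed with the debug output.

```
http://localhost:8983/solr/collection1/select
    ?q=Intermodal+schedule
    &q.op=AND
    &debug=query
```

With `q.op=AND`, only documents matching both terms are returned; the `debug=query` section of the response shows the parsed query so you can verify which operator was applied.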

> Multi word searching is not working getting random search results
> -
>
> Key: SOLR-12751
> URL: https://issues.apache.org/jira/browse/SOLR-12751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: I am currently running solr on Linux platform.
> NAME="Red Hat Enterprise Linux Server"
> VERSION="7.5"
> openjdk version "1.8.0_181"
>  
> AEM version: 6.2
>  
>  
>  
>  
>Reporter: Jagadish Muddapati
>Priority: Blocker
>  Labels: newbie
>
> I recently integrate solr to AEM and when i do search for multiple words the 
> search results are getting randomly.
>  
> search words: {color:#ff}*Intermodal schedule*{color}
> Results: First solr displaying the search results related to 
> {color:#ff}Intermodal{color} and after few pages I am seeing the serch 
> term {color:#ff}schedule {color}related pages randomly. I am not getting 
> the results related to multi words on the page.
> For example: I am not seeing the results like [Terminals & *Schedules* | 
> *Intermodal* | Shipping Options ... page on starting and getting random 
> results and the  [Terminals & *Schedules* | *Intermodal* | Shipping Options 
> *...* page displaying after the 40 results.
>  
> Here is the query on browser URL:
> [http://test-servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule|http://servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule]
>  
> I am using solr version 7.4
>  






[jira] [Commented] (LUCENE-7862) Should BKD cells store their min/max packed values?

2018-09-06 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605853#comment-16605853
 ] 

Adrien Grand commented on LUCENE-7862:
--

The improvement in QPS looks indeed very significant in some cases! For very 
little overhead. The patch looks good to me; maybe the 
{{System.arraycopy(minPackedValue, 0, maxPackedValue, 0, packedBytesLength)}} 
call would benefit from a comment explaining that we are copying common 
prefixes.

bq. Maybe we should only add this extra information to the index when number of 
dimensions > 1

+1
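The prefix-copying trick mentioned above can be sketched in isolation. This is a hedged illustration, not Lucene's actual BKDReader/BKDWriter code: the method name and arguments are hypothetical, and it only shows why copying min into max first lets the format store just the differing suffix bytes.

```java
// Sketch: a cell's min/max packed values share a common prefix, so max can be
// rebuilt from (a) the prefix copied out of min via System.arraycopy and
// (b) only the suffix bytes that actually differ, which are all that needs
// to be serialized. Names and sizes here are illustrative.
public class PackedValueBounds {

  static byte[] reconstructMax(byte[] minPackedValue, int commonPrefixLen, byte[] maxSuffix) {
    byte[] maxPackedValue = new byte[minPackedValue.length];
    // Copy the shared prefix from min -- this is what the arraycopy call does.
    System.arraycopy(minPackedValue, 0, maxPackedValue, 0, commonPrefixLen);
    // Fill in the bytes after the prefix, which differ between min and max.
    System.arraycopy(maxSuffix, 0, maxPackedValue, commonPrefixLen, maxSuffix.length);
    return maxPackedValue;
  }

  public static void main(String[] args) {
    byte[] min = {1, 2, 3, 4};
    // Prefix {1, 2} is shared; only {9, 9} had to be stored for max.
    System.out.println(java.util.Arrays.toString(reconstructMax(min, 2, new byte[] {9, 9})));
  }
}
```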

> Should BKD cells store their min/max packed values?
> ---
>
> Key: LUCENE-7862
> URL: https://issues.apache.org/jira/browse/LUCENE-7862
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7862.patch, LUCENE-7862.patch
>
>
> The index of the BKD tree already allows to know lower and upper bounds of 
> values in a given dimension. However the actual range of values might be more 
> narrow than what the index tells us, especially if splitting on one dimension 
> reduces the range of values in at least one other dimension. For instance 
> this tends to be the case with range fields: since we enforce that lower 
> bounds are less than upper bounds, splitting on one dimension will also 
> affect the range of values in the other dimension.
> So I'm wondering whether we should store the actual range of values for each 
> dimension in leaf blocks, this will hopefully allow to figure out that either 
> none or all values match in a block without having to check them all.






[jira] [Updated] (SOLR-12751) Multi word searching is not working getting random search results

2018-09-06 Thread Jagadish Muddapati (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadish Muddapati updated SOLR-12751:
--
Description: 
I recently integrate solr to AEM and when i do search for multiple words the 
search results are getting randomly.

 

search words: {color:#ff}*Intermodal schedule*{color}

Results: First solr displaying the search results related to 
{color:#ff}Intermodal{color} and after few pages I am seeing the serch term 
{color:#ff}schedule {color}related pages randomly. I am not getting the 
results related to multi words on the page.

For example: I am not seeing the results like [Terminals & *Schedules* | 
*Intermodal* | Shipping Options ... page on starting and getting random results 
and the  [Terminals & *Schedules* | *Intermodal* | Shipping Options *...* page 
displaying after the 40 results.

 

Here is the query on browser URL:

[http://test-servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule|http://servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule]

 

I am using solr version 7.4

 

  was:
I recently integrate solr to AEM and when i do search for multiple words the 
search results are getting randomly.

 

search words: {color:#ff}*Intermodal schedule*{color}

Results: First solr displaying the search results related to 
{color:#ff}Intermodal{color} and after few pages I am seeing the serch term 
{color:#ff}schedule {color}related pages randomly. I am not getting the 
results related to multi words on the page.

For example: I am not seeing the results like [Terminals & *Schedules* | 
*Intermodal* | Shipping Options 
*...*|http://www.nscorp.com/content/nscorp/en/shipping-options/intermodal/terminals-and-schedules.html]
 page on starting and getting random results and the  [Terminals & *Schedules* 
| *Intermodal* | Shipping Options *...* page displaying after the 40 results.

 

Here is the query on browser URL:

[http://test-servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule|http://servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule]

 

I am using solr version 7.4

 


> Multi word searching is not working getting random search results
> -
>
> Key: SOLR-12751
> URL: https://issues.apache.org/jira/browse/SOLR-12751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: I am currently running solr on Linux platform.
> NAME="Red Hat Enterprise Linux Server"
> VERSION="7.5"
> openjdk version "1.8.0_181"
>  
> AEM version: 6.2
>  
>  
>  
>  
>Reporter: Jagadish Muddapati
>Priority: Blocker
>  Labels: newbie
>
> I recently integrate solr to AEM and when i do search for multiple words the 
> search results are getting randomly.
>  
> search words: {color:#ff}*Intermodal schedule*{color}
> Results: First solr displaying the search results related to 
> {color:#ff}Intermodal{color} and after few pages I am seeing the serch 
> term {color:#ff}schedule {color}related pages randomly. I am not getting 
> the results related to multi words on the page.
> For example: I am not seeing the results like [Terminals & *Schedules* | 
> *Intermodal* | Shipping Options ... page on starting and getting random 
> results and the  [Terminals & *Schedules* | *Intermodal* | Shipping Options 
> *...* page displaying after the 40 results.
>  
> Here is the query on browser URL:
> [http://test-servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule|http://servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule]
>  
> I am using solr version 7.4
>  






[jira] [Updated] (SOLR-12751) Multi word searching is not working getting random search results

2018-09-06 Thread Jagadish Muddapati (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadish Muddapati updated SOLR-12751:
--
Description: 
I recently integrate solr to AEM and when i do search for multiple words the 
search results are getting randomly.

 

search words: {color:#ff}*Intermodal schedule*{color}

Results: First solr displaying the search results related to 
{color:#ff}Intermodal{color} and after few pages I am seeing the serch term 
{color:#ff}schedule {color}related pages randomly. I am not getting the 
results related to multi words on the page.

For example: I am not seeing the results like [Terminals & *Schedules* | 
*Intermodal* | Shipping Options 
*...*|http://www.nscorp.com/content/nscorp/en/shipping-options/intermodal/terminals-and-schedules.html]
 page on starting and getting random results and the  [Terminals & *Schedules* 
| *Intermodal* | Shipping Options *...* page displaying after the 40 results.

 

Here is the query on browser URL:

[http://test-servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule|http://servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule]

 

I am using solr version 7.4

 

  was:
I recently integrate solr to AEM and when i do search for multiple words the 
search results are getting randomly.

 

search words: {color:#FF}*Intermodal schedule*{color}

Results: First solr displaying the search results related to 
{color:#FF}Intermodal{color} and after few pages I am seeing the serch term 
{color:#FF}schedule {color}related pages randomly. I am not getting the 
results related to multi words on the page.

For example: I am not seeing the results like [Terminals & *Schedules* | 
*Intermodal* | Shipping Options 
*...*|http://www.nscorp.com/content/nscorp/en/shipping-options/intermodal/terminals-and-schedules.html]
 page on starting and getting random results and the  [Terminals & *Schedules* 
| *Intermodal* | Shipping Options 
*...*|http://www.nscorp.com/content/nscorp/en/shipping-options/intermodal/terminals-and-schedules.html]
 page displaying after the 40 results.

 

Here is the query on browser URL:

[http://test-servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule|http://servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule]

 

I am using solr version 7.4

 


> Multi word searching is not working getting random search results
> -
>
> Key: SOLR-12751
> URL: https://issues.apache.org/jira/browse/SOLR-12751
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: I am currently running solr on Linux platform.
> NAME="Red Hat Enterprise Linux Server"
> VERSION="7.5"
> openjdk version "1.8.0_181"
>  
> AEM version: 6.2
>  
>  
>  
>  
>Reporter: Jagadish Muddapati
>Priority: Blocker
>  Labels: newbie
>
> I recently integrate solr to AEM and when i do search for multiple words the 
> search results are getting randomly.
>  
> search words: {color:#ff}*Intermodal schedule*{color}
> Results: First solr displaying the search results related to 
> {color:#ff}Intermodal{color} and after few pages I am seeing the serch 
> term {color:#ff}schedule {color}related pages randomly. I am not getting 
> the results related to multi words on the page.
> For example: I am not seeing the results like [Terminals & *Schedules* | 
> *Intermodal* | Shipping Options 
> *...*|http://www.nscorp.com/content/nscorp/en/shipping-options/intermodal/terminals-and-schedules.html]
>  page on starting and getting random results and the  [Terminals & 
> *Schedules* | *Intermodal* | Shipping Options *...* page displaying after the 
> 40 results.
>  
> Here is the query on browser URL:
> [http://test-servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule|http://servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule]
>  
> I am using solr version 7.4
>  






[jira] [Created] (SOLR-12751) Multi word searching is not working getting random search results

2018-09-06 Thread Jagadish Muddapati (JIRA)
Jagadish Muddapati created SOLR-12751:
-

 Summary: Multi word searching is not working getting random search 
results
 Key: SOLR-12751
 URL: https://issues.apache.org/jira/browse/SOLR-12751
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: I am currently running solr on Linux platform.

NAME="Red Hat Enterprise Linux Server"
VERSION="7.5"

openjdk version "1.8.0_181"

 

AEM version: 6.2

 

 

 

 
Reporter: Jagadish Muddapati


I recently integrate solr to AEM and when i do search for multiple words the 
search results are getting randomly.

 

search words: {color:#FF}*Intermodal schedule*{color}

Results: First solr displaying the search results related to 
{color:#FF}Intermodal{color} and after few pages I am seeing the serch term 
{color:#FF}schedule {color}related pages randomly. I am not getting the 
results related to multi words on the page.

For example: I am not seeing the results like [Terminals & *Schedules* | 
*Intermodal* | Shipping Options 
*...*|http://www.nscorp.com/content/nscorp/en/shipping-options/intermodal/terminals-and-schedules.html]
 page on starting and getting random results and the  [Terminals & *Schedules* 
| *Intermodal* | Shipping Options 
*...*|http://www.nscorp.com/content/nscorp/en/shipping-options/intermodal/terminals-and-schedules.html]
 page displaying after the 40 results.

 

Here is the query on browser URL:

[http://test-servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule|http://servername/content/nscorp/en/search-results.html?start=0=Intermodal+Schedule]

 

I am using solr version 7.4
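For what it's worth, behavior like this often comes from Solr's default OR operator: each term matches independently, so pages containing only one of the words are interleaved throughout the result list. A minimal sketch of forcing all terms to match by setting q.op, built with Python's standard library (the hostname, core name, and parameters are hypothetical, not taken from this report):

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint; the real front-end in this report differs.
base = "http://servername/solr/mycollection/select"

# Require every query term to match by overriding the default operator.
params = {
    "q": "Intermodal Schedule",
    "q.op": "AND",   # the default OR lets single-term pages match too
    "start": 0,
    "rows": 10,
}
url = base + "?" + urlencode(params)
print(url)
```

With the edismax query parser, the mm (minimum-should-match) parameter gives finer control than a blanket AND, e.g. requiring all terms only for short queries.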

 






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+28) - Build # 2697 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2697/
Java: 64bit/jdk-11-ea+28 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.ShardSplitTest.test

Error Message:
Wrong doc count on shard1_0. See SOLR-5309 expected:<352> but was:<353>

Stack Trace:
java.lang.AssertionError: Wrong doc count on shard1_0. See SOLR-5309 
expected:<352> but was:<353>
at 
__randomizedtesting.SeedInfo.seed([FAFA4B0118621B81:72AE74DBB69E7679]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:910)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:702)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.test(ShardSplitTest.java:107)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4817 - Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4817/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.legacy.TestNumericTokenStream

Error Message:
The test or suite printed 369275 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 369275 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([51FB2A94A31BF2C9]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:211)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13735 lines...]
   [junit4] Suite: org.apache.solr.legacy.TestNumericTokenStream
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:0=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=34=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=41=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:37=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=92=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:78=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=33=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=70=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=92=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:18=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:20=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:29=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=40=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=40=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:60=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=37=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=33=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={q=id:0=true=json} hits=0 status=0 
QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=68=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER1) [] o.a.s.c.S.Request [collection1] 
 webapp=null path=null params={qt=/get=64=json} status=0 QTime=0
   [junit4]   2> 2043104 INFO  (READER2) [] o.a.s.c.S.Request 

[jira] [Commented] (SOLR-12749) timeseries() expression missing sum() results for empty buckets

2018-09-06 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605748#comment-16605748
 ] 

Joel Bernstein commented on SOLR-12749:
---

We had this discussion at Alfresco and decided that a null value in a 
timeseries is not helpful. I can commit a fix that adds a zero for the 
buckets where the JSON facet API returns null values.

> timeseries() expression missing sum() results for empty buckets
> ---
>
> Key: SOLR-12749
> URL: https://issues.apache.org/jira/browse/SOLR-12749
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> See solr-user post 
> [https://lists.apache.org/thread.html/aeacef8fd8cee980bb74f2f6b7e1c3fd0b7ead7d7a0e7b79dd48659f@%3Csolr-user.lucene.apache.org%3E]
>  
> We have a timeseries expression with gap="+1DAY" and a sum(imps_l) to 
> aggregate sums of an integer for each bucket. Now, some day buckets do not 
> contain any documents at all, and instead of returning a tuple with value 0, 
> it returns a tuple with no entry at all for the sum, see the bucket for 
> date_dt 2018-06-22 below:
> {code:javascript}
> {
>  "result-set": {
>    "docs": [
>  {
>    "sum(imps_l)": 0,
>    "date_dt": "2018-06-21",
>    "count(*)": 5
>  },
>  {
>    "date_dt": "2018-06-22",
>    "count(*)": 0
>  },
>  {
>    "EOF": true,
>    "RESPONSE_TIME": 3
>  }
>    ]
>  }
> }{code}
> Now when we want to convert this into a column using col(a,'sum(imps_l)'), 
> that array will contain mostly numbers but also some string entries 
> 'sum(imps_l)', which is the key name. I need purely integers in the column.
> Should the timeseries() have output values for all functions even if there 
> are no documents in the bucket? Or is there something similar to the select() 
> expression that can take a stream of tuples not originating directly from 
> search() and replace values? Or is there perhaps a function that can loop 
> through the column produced by col() and replace non-numeric values with 0?
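Pending a server-side fix, the gap described above can be papered over client-side: fill a default of 0 for any metric key missing from a bucket before building the column. A small sketch (the field names are taken from the example tuples above; this is not part of Solr's API):

```python
def fill_missing(docs, metric_keys, default=0):
    """Return docs with each metric key present, defaulting empty buckets to 0.

    Control tuples such as the trailing EOF marker are passed through untouched.
    """
    filled = []
    for doc in docs:
        if doc.get("EOF"):
            filled.append(doc)
            continue
        # Defaults first, then the real doc so existing values win.
        filled.append({**{k: default for k in metric_keys}, **doc})
    return filled

docs = [
    {"sum(imps_l)": 0, "date_dt": "2018-06-21", "count(*)": 5},
    {"date_dt": "2018-06-22", "count(*)": 0},   # empty bucket: sum is absent
    {"EOF": True, "RESPONSE_TIME": 3},
]
column = [d["sum(imps_l)"]
          for d in fill_missing(docs, ["sum(imps_l)"])
          if not d.get("EOF")]
```

This yields a purely numeric column even when some day buckets held no documents.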






[jira] [Commented] (LUCENE-8478) combine TermScorer constructors' implementation

2018-09-06 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605745#comment-16605745
 ] 

Adrien Grand commented on LUCENE-8478:
--

bq. so not sure if this is worth pursuing further or not.

Another problem is that one reason why things are the way they are today is to 
avoid wrapping the postings iterator whenever possible for performance reasons. 
The latest patch always wraps it with a SlowImpactsEnum + ImpactsDISI. I'm 
leaning towards keeping things the way they are today.

> combine TermScorer constructors' implementation
> ---
>
> Key: LUCENE-8478
> URL: https://issues.apache.org/jira/browse/LUCENE-8478
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0)
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8478.patch, LUCENE-8478.patch
>
>
> We currently have two {{TermScorer}} constructor variants and it's not 
> immediately obvious how and why their implementations are the way they are as 
> far as initialisations and initialisation order is concerned. Combination of 
> the logic could make the commonalities and differences clearer.






[jira] [Assigned] (SOLR-12749) timeseries() expression missing sum() results for empty buckets

2018-09-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-12749:
-

Assignee: Joel Bernstein

> timeseries() expression missing sum() results for empty buckets
> ---
>
> Key: SOLR-12749
> URL: https://issues.apache.org/jira/browse/SOLR-12749
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> See solr-user post 
> [https://lists.apache.org/thread.html/aeacef8fd8cee980bb74f2f6b7e1c3fd0b7ead7d7a0e7b79dd48659f@%3Csolr-user.lucene.apache.org%3E]
>  
> We have a timeseries expression with gap="+1DAY" and a sum(imps_l) to 
> aggregate sums of an integer for each bucket. Now, some day buckets do not 
> contain any documents at all, and instead of returning a tuple with value 0, 
> it returns a tuple with no entry at all for the sum, see the bucket for 
> date_dt 2018-06-22 below:
> {code:javascript}
> {
>  "result-set": {
>    "docs": [
>  {
>    "sum(imps_l)": 0,
>    "date_dt": "2018-06-21",
>    "count(*)": 5
>  },
>  {
>    "date_dt": "2018-06-22",
>    "count(*)": 0
>  },
>  {
>    "EOF": true,
>    "RESPONSE_TIME": 3
>  }
>    ]
>  }
> }{code}
> Now when we want to convert this into a column using col(a,'sum(imps_l)'), 
> that array will contain mostly numbers but also some string entries 
> 'sum(imps_l)', which is the key name. I need purely integers in the column.
> Should the timeseries() have output values for all functions even if there 
> are no documents in the bucket? Or is there something similar to the select() 
> expression that can take a stream of tuples not originating directly from 
> search() and replace values? Or is there perhaps a function that can loop 
> through the column produced by col() and replace non-numeric values with 0?






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 22810 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22810/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.update.TransactionLogTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.update.TransactionLogTest:  
   1) Thread[id=15, name=Log4j2-TF-1-AsyncLoggerConfig--1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest] at sun.misc.Unsafe.park(Native Method)   
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
 at 
com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
 at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
 at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.update.TransactionLogTest: 
   1) Thread[id=15, name=Log4j2-TF-1-AsyncLoggerConfig--1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
at 
com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([7657F97F2A7AB2E2]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.update.TransactionLogTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=15, 
name=Log4j2-TF-1-AsyncLoggerConfig--1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest] at sun.misc.Unsafe.park(Native Method)   
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
 at 
com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
 at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
 at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=15, name=Log4j2-TF-1-AsyncLoggerConfig--1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
at 
com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([7657F97F2A7AB2E2]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.update.TransactionLogTest

Error Message:
The test or suite printed 133952 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 133952 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([7657F97F2A7AB2E2]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:211)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (LUCENE-8468) A ByteBuffer based Directory implementation (and associated classes)

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605694#comment-16605694
 ] 

ASF subversion and git services commented on LUCENE-8468:
-

Commit a889dbd54f7498015d882d5e23e1271db8ce8004 in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a889dbd ]

LUCENE-8468: Add sliceDescription to the toString() of ByteBuffersIndexInput.

This fixes test failures in 
TestLucene50CompoundFormat#testResourceNameInsideCompoundFile.


> A ByteBuffer based Directory implementation (and associated classes)
> 
>
> Key: LUCENE-8468
> URL: https://issues.apache.org/jira/browse/LUCENE-8468
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8468.patch
>
>
> A factored-out sub-patch with ByteBufferDirectory and associated index 
> inputs, outputs, etc. and tests. No refactorings or cleanups to any other 
> classes (these will go in to master after 8.0 branch is cut).






[jira] [Commented] (LUCENE-8468) A ByteBuffer based Directory implementation (and associated classes)

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605691#comment-16605691
 ] 

ASF subversion and git services commented on LUCENE-8468:
-

Commit 1a006556e5999eb17d34bef1db08af0773d4e9b6 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1a00655 ]

LUCENE-8468: Add sliceDescription to the toString() of ByteBuffersIndexInput.

This fixes test failures in 
TestLucene50CompoundFormat#testResourceNameInsideCompoundFile.


> A ByteBuffer based Directory implementation (and associated classes)
> 
>
> Key: LUCENE-8468
> URL: https://issues.apache.org/jira/browse/LUCENE-8468
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8468.patch
>
>
> A factored-out sub-patch with ByteBufferDirectory and associated index 
> inputs, outputs, etc. and tests. No refactorings or cleanups to any other 
> classes (these will go in to master after 8.0 branch is cut).






[jira] [Created] (SOLR-12750) Migrate API should lock the collection instead of shard

2018-09-06 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12750:


 Summary: Migrate API should lock the collection instead of shard
 Key: SOLR-12750
 URL: https://issues.apache.org/jira/browse/SOLR-12750
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: master (8.0), 7.5


The Migrate API acquires its lock at the shard level, and not even all of the 
relevant shards are locked, because the API can affect many shards and that 
information is not available at the time the locking decisions are made. It 
would be better for the Migrate API to lock the entire collection instead of 
a single shard.
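Conceptually, the change amounts to keying the lock on the collection alone rather than on a (collection, shard) pair, so one coarse lock covers every shard a migration might touch. A toy sketch of the two granularities (this is illustrative only, not Solr's actual Overseer lock implementation):

```python
import threading
from collections import defaultdict

class LockRegistry:
    """Toy per-key lock table; hands out one Lock per distinct key."""
    def __init__(self):
        self._locks = defaultdict(threading.Lock)
        self._guard = threading.Lock()  # protects the table itself

    def lock_for(self, key):
        with self._guard:
            return self._locks[key]

registry = LockRegistry()

# Shard-level key: operations on other shards of the same collection proceed
# concurrently, which is the hole the Migrate API currently falls into.
shard_lock = registry.lock_for(("collection1", "shard1"))

# Collection-level key: a single lock serializes all operations on the
# collection, regardless of which shards the migration ends up touching.
collection_lock = registry.lock_for(("collection1",))
```

The trade-off is reduced concurrency for unrelated operations on the same collection, in exchange for correctness when the affected shard set is unknown up front.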






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10.0.1) - Build # 776 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/776/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([61821015936DB5A0:24926970AA2C68D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:113)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:66)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+28) - Build # 2696 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2696/
Java: 64bit/jdk-11-ea+28 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.rule.RulesTest.testPortRuleInPresenceOfClusterPolicy

Error Message:
Could not find collection : portRuleColl2

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : portRuleColl2
	at __randomizedtesting.SeedInfo.seed([7F5C8BB89524CFC8:C674073822FD8580]:0)
	at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
	at org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
	at org.apache.solr.cloud.rule.RulesTest.testPortRuleInPresenceOfClusterPolicy(RulesTest.java:119)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_172) - Build # 7508 - Still Unstable!

2018-09-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7508/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseSerialGC

6 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitMixedReplicaTypes

Error Message:
unexpected shard state expected: but was:

Stack Trace:
java.lang.AssertionError: unexpected shard state expected: but was:
	at __randomizedtesting.SeedInfo.seed([35B5E033281E28DD:8D76B493D4C5FDA8]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.apache.solr.cloud.api.collections.ShardSplitTest.verifyShard(ShardSplitTest.java:372)
	at org.apache.solr.cloud.api.collections.ShardSplitTest.doSplitMixedReplicaTypes(ShardSplitTest.java:364)
	at org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitMixedReplicaTypes(ShardSplitTest.java:331)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at 

[jira] [Created] (SOLR-12749) timeseries() expression missing sum() results for empty buckets

2018-09-06 Thread JIRA
Jan Høydahl created SOLR-12749:
--

 Summary: timeseries() expression missing sum() results for empty 
buckets
 Key: SOLR-12749
 URL: https://issues.apache.org/jira/browse/SOLR-12749
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: streaming expressions
Affects Versions: 7.4
Reporter: Jan Høydahl
 Fix For: master (8.0), 7.5


See solr-user post 
[https://lists.apache.org/thread.html/aeacef8fd8cee980bb74f2f6b7e1c3fd0b7ead7d7a0e7b79dd48659f@%3Csolr-user.lucene.apache.org%3E]

 

We have a timeseries expression with gap="+1DAY" and a sum(imps_l) to aggregate 
sums of an integer for each bucket. Now, some day buckets do not contain any 
documents at all, and instead of returning a tuple with value 0, it returns a 
tuple with no entry at all for the sum, see the bucket for date_dt 2018-06-22 
below:
{code:javascript}
{
 "result-set": {
   "docs": [
 {
   "sum(imps_l)": 0,
   "date_dt": "2018-06-21",
   "count(*)": 5
 },
 {
   "date_dt": "2018-06-22",
   "count(*)": 0
 },
 {
   "EOF": true,
   "RESPONSE_TIME": 3
 }
   ]
 }
}{code}
Now when we convert this into a column using col(a,'sum(imps_l)'), the resulting 
array contains mostly numbers but also some string entries 'sum(imps_l)', i.e. the 
key name itself, for the tuples that lack the field. I need purely numeric values 
in the column.

Should the timeseries() have output values for all functions even if there are 
no documents in the bucket? Or is there something similar to the select() 
expression that can take a stream of tuples not originating directly from 
search() and replace values? Or is there perhaps a function that can loop 
through the column produced by col() and replace non-numeric values with 0?
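Until that is settled on the Solr side, the missing sum() entries can be filled in on the client before building the column. A minimal sketch in plain Python (not a streaming-expression fix; it simply post-processes the JSON response shown above, assuming the same field names):

```python
import json

# The timeseries() response from above: the 2018-06-22 bucket is
# missing its "sum(imps_l)" entry, and the last tuple is the EOF marker.
response = json.loads("""
{"result-set": {"docs": [
  {"sum(imps_l)": 0, "date_dt": "2018-06-21", "count(*)": 5},
  {"date_dt": "2018-06-22", "count(*)": 0},
  {"EOF": true, "RESPONSE_TIME": 3}
]}}
""")

docs = response["result-set"]["docs"]
# Drop the EOF marker tuple, then default every missing sum() to 0
# so the resulting column is purely numeric.
column = [doc.get("sum(imps_l)", 0) for doc in docs if "EOF" not in doc]
print(column)  # [0, 0]
```

The same defaulting would apply to any other aggregate (avg(), min(), ...) that the empty buckets omit.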



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12745) Wrong header levels on BasicAuth refGuide page

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605465#comment-16605465
 ] 

ASF subversion and git services commented on SOLR-12745:


Commit 9f6ff20d1ec70ce011a9f8329437aa8740afadd7 in lucene-solr's branch 
refs/heads/branch_7x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9f6ff20 ]

SOLR-12745: Wrong header levels on BasicAuth refGuide page

(cherry picked from commit 285b743)


> Wrong header levels on BasicAuth refGuide page
> --
>
> Key: SOLR-12745
> URL: https://issues.apache.org/jira/browse/SOLR-12745
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Priority: Minor
> Fix For: master (8.0), 7.5
>
>
> See 
> [https://lucene.apache.org/solr/guide/7_4/basic-authentication-plugin.html#using-basic-auth-with-solrj]
>  and 
> [https://lucene.apache.org/solr/guide/7_4/basic-authentication-plugin.html#using-the-solr-control-script-with-basic-auth]
> Both of these should be 2nd level paragraphs, not third below "Editing 
> Authentication Plugin Configuration" as today






[jira] [Assigned] (SOLR-12745) Wrong header levels on BasicAuth refGuide page

2018-09-06 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-12745:
--

Assignee: Jan Høydahl

> Wrong header levels on BasicAuth refGuide page
> --
>
> Key: SOLR-12745
> URL: https://issues.apache.org/jira/browse/SOLR-12745
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: master (8.0), 7.5
>
>
> See 
> [https://lucene.apache.org/solr/guide/7_4/basic-authentication-plugin.html#using-basic-auth-with-solrj]
>  and 
> [https://lucene.apache.org/solr/guide/7_4/basic-authentication-plugin.html#using-the-solr-control-script-with-basic-auth]
> Both of these should be 2nd level paragraphs, not third below "Editing 
> Authentication Plugin Configuration" as today






[jira] [Resolved] (SOLR-12745) Wrong header levels on BasicAuth refGuide page

2018-09-06 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12745.

   Resolution: Fixed
Fix Version/s: master (8.0)

> Wrong header levels on BasicAuth refGuide page
> --
>
> Key: SOLR-12745
> URL: https://issues.apache.org/jira/browse/SOLR-12745
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: master (8.0), 7.5
>
>
> See 
> [https://lucene.apache.org/solr/guide/7_4/basic-authentication-plugin.html#using-basic-auth-with-solrj]
>  and 
> [https://lucene.apache.org/solr/guide/7_4/basic-authentication-plugin.html#using-the-solr-control-script-with-basic-auth]
> Both of these should be 2nd level paragraphs, not third below "Editing 
> Authentication Plugin Configuration" as today






[jira] [Commented] (SOLR-12745) Wrong header levels on BasicAuth refGuide page

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605463#comment-16605463
 ] 

ASF subversion and git services commented on SOLR-12745:


Commit 285b743a8bff96e3f436f40bcc86f3529a0d8951 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=285b743 ]

SOLR-12745: Wrong header levels on BasicAuth refGuide page


> Wrong header levels on BasicAuth refGuide page
> --
>
> Key: SOLR-12745
> URL: https://issues.apache.org/jira/browse/SOLR-12745
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.4
>Reporter: Jan Høydahl
>Priority: Minor
> Fix For: 7.5
>
>
> See 
> [https://lucene.apache.org/solr/guide/7_4/basic-authentication-plugin.html#using-basic-auth-with-solrj]
>  and 
> [https://lucene.apache.org/solr/guide/7_4/basic-authentication-plugin.html#using-the-solr-control-script-with-basic-auth]
> Both of these should be 2nd level paragraphs, not third below "Editing 
> Authentication Plugin Configuration" as today






[jira] [Commented] (SOLR-12743) Memory leak introduced in Solr 7.3.0

2018-09-06 Thread Markus Jelsma (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605460#comment-16605460
 ] 

Markus Jelsma commented on SOLR-12743:
--

That is a most regrettable typo, I meant I can't/cannot reproduce it locally, 
even when I introduce continuous indexing and querying. That is the whole 
problem I have; perhaps Björn can. I'll ask him on the list!

> Memory leak introduced in Solr 7.3.0
> 
>
> Key: SOLR-12743
> URL: https://issues.apache.org/jira/browse/SOLR-12743
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3, 7.3.1, 7.4
>Reporter: Tomás Fernández Löbbe
>Priority: Critical
>
> Reported initially by [~markus17]([1], [2]), but other users have had the 
> same issue [3]. Some of the key parts:
> {noformat}
> Some facts:
> * problem started after upgrading from 7.2.1 to 7.3.0;
> * it occurs only in our main text search collection, all other collections 
> are unaffected;
> * despite what i said earlier, it is so far unreproducible outside 
> production, even when mimicking production as good as we can;
> * SortedIntDocSet instances and ConcurrentLRUCache$CacheEntry instances are 
> both leaked on commit;
> * filterCache is enabled using FastLRUCache;
> * filter queries are simple field:value using strings, and three filter query 
> for time range using [NOW/DAY TO NOW+1DAY/DAY] syntax for 'today', 'last 
> week' and 'last month', but rarely used;
> * reloading the core manually frees OldGen;
> * custom URP's don't cause the problem, disabling them doesn't solve it;
> * the collection uses custom extensions for QueryComponent and 
> QueryElevationComponent, ExtendedDismaxQParser and MoreLikeThisQParser, a 
> whole bunch of TokenFilters, and several DocTransformers and due it being 
> only reproducible on production, i really cannot switch these back to 
> Solr/Lucene versions;
> * useFilterForSortedQuery is/was not defined in schema so it was default 
> (true?), SOLR-11769 could be the culprit, i disabled it just now only for the 
> node running 7.4.0, rest of collection runs 7.2.1;
> {noformat}
> {noformat}
> You were right, it was leaking exactly one SolrIndexSearcher instance on each 
> commit. 
> {noformat}
> And from Björn Häuser ([3]):
> {noformat}
> Problem Suspect 1
> 91 instances of "org.apache.solr.search.SolrIndexSearcher", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.981.148.336 (38,26%) bytes. 
> Biggest instances:
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6ffd47ea8 - 70.087.272 
> (1,35%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x79ea9c040 - 65.678.264 
> (1,27%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6855ad680 - 63.050.600 
> (1,22%) bytes. 
> Problem Suspect 2
> 223 instances of "org.apache.solr.util.ConcurrentLRUCache", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.373.110.208 (26,52%) bytes. 
> {noformat}
> More details in the email threads.
> [1] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201804.mbox/%3Czarafa.5ae201c6.2f85.218a781d795b07b1%40mail1.ams.nl.openindex.io%3E]
>  [2] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201806.mbox/%3Czarafa.5b351537.7b8c.647ddc93059f68eb%40mail1.ams.nl.openindex.io%3E]
>  [3] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201809.mbox/%3c7b5e78c6-8cf6-42ee-8d28-872230ded...@gmail.com%3E]






[jira] [Commented] (LUCENE-8481) Javadocs should no longer reference RAMDirectory

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605454#comment-16605454
 ] 

ASF subversion and git services commented on LUCENE-8481:
-

Commit fc8d9eba1ebf779e3cbda487bae854f7b17549b0 in lucene-solr's branch 
refs/heads/branch_7x from [~dawid.weiss]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fc8d9eb ]

Revert "LUCENE-8481: Javadocs should no longer reference RAMDirectory."

This reverts commit 3cd58d130e403f11cbbd0cd2673a6a58da361854.


> Javadocs should no longer reference RAMDirectory
> 
>
> Key: LUCENE-8481
> URL: https://issues.apache.org/jira/browse/LUCENE-8481
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8481.patch
>
>
> Since RAMDirectory is deprecated, we shouldn't show examples using it 
> anymore. See eg. 
> https://github.com/apache/lucene-solr/blob/a1ec716e107807f1dc24923cc7a91d0c5e64a7e1/lucene/core/src/java/overview.html#L36.
>  cc [~dweiss]






[jira] [Resolved] (LUCENE-8481) Javadocs should no longer reference RAMDirectory

2018-09-06 Thread Dawid Weiss (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-8481.
-
   Resolution: Fixed
 Assignee: Dawid Weiss
Fix Version/s: 7.5

> Javadocs should no longer reference RAMDirectory
> 
>
> Key: LUCENE-8481
> URL: https://issues.apache.org/jira/browse/LUCENE-8481
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8481.patch
>
>
> Since RAMDirectory is deprecated, we shouldn't show examples using it 
> anymore. See eg. 
> https://github.com/apache/lucene-solr/blob/a1ec716e107807f1dc24923cc7a91d0c5e64a7e1/lucene/core/src/java/overview.html#L36.
>  cc [~dweiss]






[jira] [Commented] (LUCENE-8481) Javadocs should no longer reference RAMDirectory

2018-09-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605449#comment-16605449
 ] 

ASF subversion and git services commented on LUCENE-8481:
-

Commit 3cd58d130e403f11cbbd0cd2673a6a58da361854 in lucene-solr's branch 
refs/heads/branch_7x from [~dawid.weiss]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3cd58d1 ]

LUCENE-8481: Javadocs should no longer reference RAMDirectory.


> Javadocs should no longer reference RAMDirectory
> 
>
> Key: LUCENE-8481
> URL: https://issues.apache.org/jira/browse/LUCENE-8481
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8481.patch
>
>
> Since RAMDirectory is deprecated, we shouldn't show examples using it 
> anymore. See eg. 
> https://github.com/apache/lucene-solr/blob/a1ec716e107807f1dc24923cc7a91d0c5e64a7e1/lucene/core/src/java/overview.html#L36.
>  cc [~dweiss]





