[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_131) - Build # 4 - Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/4/
Java: 64bit/jdk1.8.0_131 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas
null
Live Nodes: [127.0.0.1:60344_solr, 127.0.0.1:60349_solr]
Last available state:
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/9)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
          "base_url":"http://127.0.0.1:60349/solr",
          "node_name":"127.0.0.1:60349_solr",
          "state":"down",
          "type":"NRT"},
        "core_node2":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
          "base_url":"http://127.0.0.1:60344/solr",
          "node_name":"127.0.0.1:60344_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Live Nodes: [127.0.0.1:60344_solr, 127.0.0.1:60349_solr]
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/9)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
  "base_url":"http://127.0.0.1:60349/solr";,
  "node_name":"127.0.0.1:60349_solr",
  "state":"down",
  "type":"NRT"},
"core_node2":{
  "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
  "base_url":"http://127.0.0.1:60344/solr";,
  "node_name":"127.0.0.1:60344_solr",
  "state":"active",
  "type":"NRT",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([32FB351318002F20:62AEAD104121993D]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluat

[jira] [Commented] (LUCENE-7899) Add "exists" query for doc values

2017-07-05 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074334#comment-16074334
 ] 

Adrien Grand commented on LUCENE-7899:
--

Isn't it what {{FieldValueQuery}} is about?

> Add "exists" query for doc values
> -
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 7.1
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index.
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but for doc values fields I think we can get this "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.
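
For context on why the iterator change makes this cheap: since Lucene 7 the 
per-segment doc-values instances are themselves doc-ID iterators. A minimal 
sketch, not the patch itself (the field name, reader, and class name are 
placeholders; numeric doc values are used for illustration):

{code:java}
import java.io.IOException;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.search.DocIdSetIterator;

class DocValuesExistsSketch {
  // Walks exactly the documents that have a value for "field": the doc-values
  // iterator is the DISI an "exists" query could expose directly.
  static int countDocsWithValue(LeafReader reader, String field) throws IOException {
    NumericDocValues values = reader.getNumericDocValues(field);
    if (values == null) {
      return 0; // no doc values for this field in this segment
    }
    int count = 0;
    for (int doc = values.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = values.nextDoc()) {
      count++;
    }
    return count;
  }
}
{code}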



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10879) DELETEREPLICA and DELETENODE commands should prevent data loss when replicationFactor==1

2017-07-05 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-10879.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.0

> DELETEREPLICA and DELETENODE commands should prevent data loss when 
> replicationFactor==1
> 
>
> Key: SOLR-10879
> URL: https://issues.apache.org/jira/browse/SOLR-10879
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.0, master (8.0)
>
>
> There should be some level of protection against inadvertent data loss when 
> issuing these commands when replicationFactor is 1 - deleting a node or a 
> replica in this case is equivalent to completely deleting some shards.
> This is further complicated by the replica types - there could still be 
> remaining replicas after the operation, but if they are all of PULL type then 
> none of them will ever become a shard leader.
> We could require that the command fail in such a case unless a boolean 
> option "force==true" is specified.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_131) - Build # 8 - Still unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/8/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.test

Error Message:
Could not find collection : movereplicatest_coll

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
movereplicatest_coll
at 
__randomizedtesting.SeedInfo.seed([50EE59DAB5542202:D8BA66001BA84FFA]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.MoveReplicaTest.getRandomReplica(MoveReplicaTest.java:185)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 11047 lines...]
   [junit4] Suite: org.apache.solr.cloud.MoveReplicaHDFSTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/tem

[jira] [Commented] (LUCENE-7897) RangeQuery optimization in IndexOrDocValuesQuery

2017-07-05 Thread Murali Krishna P (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074350#comment-16074350
 ] 

Murali Krishna P commented on LUCENE-7897:
--

Adrien, the range query matches only 13% of the docs here, so the negative 
range query most likely won't kick in.

I agree it is going to be hard to figure out the threshold. I am trying to make 
sense of the cost calculation in IndexOrDocValuesQuery and 
Boolean2ScorerSupplier. To select points or docvalues, there are 3 costs being 
considered:
1. TermQuery cost -> docfreq (from other scorers)
2. PointsQuery cost -> estimatePointCount
3. DocValuesQuery cost -> maxDoc

2 & 3 are part of IndexOrDocValuesQuery, which returns the minimum of those two 
as its cost. But the choice of points or docvalues is not made based on this 
cost alone: it considers the minimum cost across all scorers to decide. If the 
cost of IndexOrDocValuesQuery > minCost, it chooses docvalues. This is a bit 
counter-intuitive to me; I was expecting IndexOrDocValuesQuery to take a hint 
of the matches from other scorers and calculate its cost accordingly. It seems 
that instead this happens in the scorer supplier, by comparing against minCost.

Here is a proposal based on my understanding. Consider a situation of fetching 
N docs via IndexOrDocValuesQuery:
1. Points: would cost estimatePointCount/1024. This accounts for the cost of 
reads (1024 is the number of docids in a point block); we probably need to 
factor in the cost of sorting the docids across multiple point splits as well.
2. Docvalues: N (assuming 1 read for each doc from the columnar store). Given 
the various encodings and sequential reads, N may not be the right measure 
(thoughts?). Currently this cost from the doc-values producer seems to be 
maxDoc (or the number of values if the field is sparse) for the whole entry, 
irrespective of how many docs we are actually fetching. But this cost is 
probably not being considered, since the condition currently translates in most 
cases to "docfreq < pointEstimate ? docvalues : points".

Let me know whether this approach of reducing the cost of points in 
IndexOrDocValuesQuery makes sense. I know we might now end up with the wrong 
decision on the other side. We could probably benchmark by changing the queries 
so that docfreq matches different percentages of the points.
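
For reference, a minimal sketch of how such a two-sided range is usually built; 
the field name mirrors the example query in the issue description below, the 
class and method names are made up, and this is only the construction whose 
cost trade-off is being discussed, not a proposed change:

{code:java}
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;

class TimestampRangeExample {
  // Both sides of the range are supplied; Lucene's scorer supplier later picks
  // points or doc values depending on the lead cost of the other clauses.
  static Query newTimestampRange(long lower, long upper) {
    Query pointsQuery = LongPoint.newRangeQuery("@timestamp", lower, upper);
    Query dvQuery = SortedNumericDocValuesField.newSlowRangeQuery("@timestamp", lower, upper);
    return new IndexOrDocValuesQuery(pointsQuery, dvQuery);
  }
}
{code}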

> RangeQuery optimization in IndexOrDocValuesQuery 
> -
>
> Key: LUCENE-7897
> URL: https://issues.apache.org/jira/browse/LUCENE-7897
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: trunk, 7.0
>Reporter: Murali Krishna P
>
> For range queries, Lucene uses either Points or Docvalues based on cost 
> estimation 
> (https://lucene.apache.org/core/6_5_0/core/org/apache/lucene/search/IndexOrDocValuesQuery.html).
>  Scorer is chosen based on the minCost here: 
> https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Boolean2ScorerSupplier.java#L16
> However, the cost calculations for TermQuery and IndexOrDocValuesQuery seem to 
> carry the same weight. Essentially, cost depends on the docfreq in the term 
> dictionary, the number of points visited, and the number of docvalues. In a 
> situation where the docfreq is not very restrictive, this means a lot of 
> docvalue lookups, and using points would have been better.
> The following query with 1M matches takes 60ms with docvalues, but only 27ms 
> with points. If I change the query to "message:*", which matches all docs, it 
> chooses points (since the cost is the same), but with message:xyz it chooses 
> docvalues even though the doc frequency is 1 million, which results in many 
> docvalue fetches. Would it make sense to make the cost of the docvalues query 
> higher, or to use points if the docfreq is too high for the term query (find 
> an optimum threshold where points cost < docvalue cost)?
> {noformat}
> {
>   "query": {
> "bool": {
>   "must": [
> {
>   "query_string": {
> "query": "message:xyz"
>   }
> },
> {
>   "range": {
> "@timestamp": {
>   "gte": 149865240,
>   "lte": 149890500,
>   "format": "epoch_millis"
> }
>   }
> }
>   ]
> }
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7898) Remove hasSegID from SegmentInfos

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074364#comment-16074364
 ] 

ASF subversion and git services commented on LUCENE-7898:
-

Commit 8aace330405842c92b1df1616d32fb7362300172 in lucene-solr's branch 
refs/heads/branch_7_0 from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8aace33 ]

LUCENE-7898: Remove hasSegID from SegmentInfos.


> Remove hasSegID from SegmentInfos
> -
>
> Key: LUCENE-7898
> URL: https://issues.apache.org/jira/browse/LUCENE-7898
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7898.patch
>
>
> This is only necessary for backward compatibility with pre-5.3 indices, which 
> 7.0 does not need to support.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7898) Remove hasSegID from SegmentInfos

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074365#comment-16074365
 ] 

ASF subversion and git services commented on LUCENE-7898:
-

Commit 758cbd98a7aa020ad67aea775028badf0be6418c in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=758cbd9 ]

LUCENE-7898: Remove hasSegID from SegmentInfos.


> Remove hasSegID from SegmentInfos
> -
>
> Key: LUCENE-7898
> URL: https://issues.apache.org/jira/browse/LUCENE-7898
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7898.patch
>
>
> This is only necessary for backward compatibility with pre-5.3 indices, which 
> 7.0 does not need to support.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7898) Remove hasSegID from SegmentInfos

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074367#comment-16074367
 ] 

ASF subversion and git services commented on LUCENE-7898:
-

Commit 708462eded31917be9805431ba169c8c81e89d67 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=708462e ]

LUCENE-7898: Remove hasSegID from SegmentInfos.


> Remove hasSegID from SegmentInfos
> -
>
> Key: LUCENE-7898
> URL: https://issues.apache.org/jira/browse/LUCENE-7898
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7898.patch
>
>
> This is only necessary for backward compatibility with pre-5.3 indices, which 
> 7.0 does not need to support.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11007) index new fields

2017-07-05 Thread Thaer Samar (JIRA)
Thaer Samar created SOLR-11007:
--

 Summary: index new fields
 Key: SOLR-11007
 URL: https://issues.apache.org/jira/browse/SOLR-11007
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Thaer Samar


Hi,

We are trying to index documents of different types. Documents have different 
fields, and the fields are only known at indexing time. We run a query on a 
database and index whatever comes back, using the query variables as field 
names in Solr. Our current solution uses dynamic fields with a type prefix, for 
example feature_i_*. The issues with that:
1) we need to define the type of the dynamic field, and to cover the types of 
discovered fields we define feature_i_* for integers, feature_t_* for strings, 
and feature_d_* for doubles;
1.a) this means we need to check the type of each discovered field and then put 
it into the corresponding dynamic field (as sketched below);
2) at search time, we need to know the right prefix.

We are looking for help to find a way to avoid the prefix and the type check.
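
A rough sketch of the per-type routing described in 1.a above, done on the 
client side with SolrJ. The class and method names are made up; the prefixes 
are the ones from this issue:

{code:java}
import org.apache.solr.common.SolrInputDocument;

class DynamicFieldExample {
  // The discovered value's runtime type decides which dynamic-field prefix is used.
  static SolrInputDocument toDoc(String id, String name, Object value) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", id);
    if (value instanceof Integer || value instanceof Long) {
      doc.addField("feature_i_" + name, value);          // integer dynamic field
    } else if (value instanceof Double || value instanceof Float) {
      doc.addField("feature_d_" + name, value);          // double dynamic field
    } else {
      doc.addField("feature_t_" + name, String.valueOf(value)); // string dynamic field
    }
    return doc;
  }
}
{code}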




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11007) index new discovered fields of different types

2017-07-05 Thread Thaer Samar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thaer Samar updated SOLR-11007:
---
Summary: index new discovered fields of different types  (was: index new 
fields)

> index new discovered fields of different types
> --
>
> Key: SOLR-11007
> URL: https://issues.apache.org/jira/browse/SOLR-11007
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Thaer Samar
>
> Hi,
> We are trying to index documents of different types. Documents have different 
> fields, and the fields are only known at indexing time. We run a query on a 
> database and index whatever comes back, using the query variables as field 
> names in Solr. Our current solution uses dynamic fields with a type prefix, 
> for example feature_i_*. The issues with that:
> 1) we need to define the type of the dynamic field, and to cover the types of 
> discovered fields we define feature_i_* for integers, feature_t_* for strings, 
> and feature_d_* for doubles;
> 
> 1.a) this means we need to check the type of each discovered field and then 
> put it into the corresponding dynamic field;
> 2) at search time, we need to know the right prefix.
> We are looking for help to find a way to avoid the prefix and the type check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7898) Remove hasSegID from SegmentInfos

2017-07-05 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7898.
--
Resolution: Fixed

Thanks for having a look Mike.

> Remove hasSegID from SegmentInfos
> -
>
> Key: LUCENE-7898
> URL: https://issues.apache.org/jira/browse/LUCENE-7898
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7898.patch
>
>
> This is only necessary for backward compatibility with pre-5.3 indices, which 
> 7.0 does not need to support.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7882) Maybe expression compiler should cache recently compiled expressions?

2017-07-05 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074396#comment-16074396
 ] 

Dawid Weiss commented on LUCENE-7882:
-

The code cache is an area I don't have much experience with. It's interesting 
that the cache grows without being pruned, though -- zombie methods are 
effectively dead code and should be freed over time. Can you repeat the run 
with the {{-XX:+UseCodeCacheFlushing}} switch added and see if that helps?

Separately from that, the hang you experienced is another thing to worry about. 
It could be an interaction with the code cache (a resource deadlock somewhere). 
Sigh.

> Maybe expression compiler should cache recently compiled expressions?
> -
>
> Key: LUCENE-7882
> URL: https://issues.apache.org/jira/browse/LUCENE-7882
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Michael McCandless
>
> I've been running search performance tests using a simple expression 
> ({{_score + ln(1000+unit_sales)}}) for sorting and hit this odd bottleneck:
> {noformat}
> "pool-1-thread-30" #70 prio=5 os_prio=0 tid=0x7eea7000a000 nid=0x1ea8a 
> waiting for monitor entry [0x7eea867dd000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression.evaluate(_score
>  + ln(1000+unit_sales))
>   at 
> org.apache.lucene.expressions.ExpressionFunctionValues.doubleValue(ExpressionFunctionValues.java:49)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collectInternal(OrderedVELeafCollector.java:123)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collect(OrderedVELeafCollector.java:108)
>   at 
> org.apache.lucene.search.MultiCollectorManager$Collectors$LeafCollectors.collect(MultiCollectorManager.java:102)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:241)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:184)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:658)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:600)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:597)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I couldn't see any {{synchronized}} in the sources here, so I'm not sure 
> which object monitor it's blocked on.
> I was accidentally compiling a new expression for every query, and that 
> bottleneck would cause overall QPS to slow down drastically (~4X slower after 
> ~1 hour of redline tests), as if the JVM is getting slower and slower to 
> evaluate each expression the more expressions I had compiled.
> I tested JDK 9-ea and it also kept slowing down over time as the performance 
> test ran.
> Maybe we should put a small cache in front of the expressions compiler to 
> make it less trappy?  Or maybe we can get to the root cause of why the JVM 
> slows down more and more, the more expressions you compile?
> I won't have time to work on this in the near future so if anyone else feels 
> the itch, please scratch it!
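
A small cache in front of the compiler, as floated above, could look roughly 
like this. This is a sketch under the assumption that the expression source 
text is the cache key; it is not the project's actual change, the class name 
is made up, and a real cache would bound its size:

{code:java}
import java.text.ParseException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.apache.lucene.expressions.Expression;
import org.apache.lucene.expressions.js.JavascriptCompiler;

class ExpressionCache {
  // Unbounded for brevity; a production cache would evict (e.g. LRU).
  private final ConcurrentMap<String, Expression> cache = new ConcurrentHashMap<>();

  Expression get(String source) throws ParseException {
    Expression expr = cache.get(source);
    if (expr == null) {
      expr = JavascriptCompiler.compile(source);
      Expression prev = cache.putIfAbsent(source, expr);
      if (prev != null) {
        expr = prev; // another thread compiled it first; reuse that one
      }
    }
    return expr;
  }
}
{code}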



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7882) Maybe expression compiler should cache recently compiled expressions?

2017-07-05 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074399#comment-16074399
 ] 

Dawid Weiss commented on LUCENE-7882:
-

Here is a bit more in-depth info.
https://docs.oracle.com/javase/8/embedded/develop-apps-platforms/codecache.htm

> Maybe expression compiler should cache recently compiled expressions?
> -
>
> Key: LUCENE-7882
> URL: https://issues.apache.org/jira/browse/LUCENE-7882
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Michael McCandless
>
> I've been running search performance tests using a simple expression 
> ({{_score + ln(1000+unit_sales)}}) for sorting and hit this odd bottleneck:
> {noformat}
> "pool-1-thread-30" #70 prio=5 os_prio=0 tid=0x7eea7000a000 nid=0x1ea8a 
> waiting for monitor entry [0x7eea867dd000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression.evaluate(_score
>  + ln(1000+unit_sales))
>   at 
> org.apache.lucene.expressions.ExpressionFunctionValues.doubleValue(ExpressionFunctionValues.java:49)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collectInternal(OrderedVELeafCollector.java:123)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collect(OrderedVELeafCollector.java:108)
>   at 
> org.apache.lucene.search.MultiCollectorManager$Collectors$LeafCollectors.collect(MultiCollectorManager.java:102)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:241)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:184)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:658)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:600)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:597)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I couldn't see any {{synchronized}} in the sources here, so I'm not sure 
> which object monitor it's blocked on.
> I was accidentally compiling a new expression for every query, and that 
> bottleneck would cause overall QPS to slow down drastically (~4X slower after 
> ~1 hour of redline tests), as if the JVM is getting slower and slower to 
> evaluate each expression the more expressions I had compiled.
> I tested JDK 9-ea and it also kept slowing down over time as the performance 
> test ran.
> Maybe we should put a small cache in front of the expressions compiler to 
> make it less trappy?  Or maybe we can get to the root cause of why the JVM 
> slows down more and more, the more expressions you compile?
> I won't have time to work on this in the near future so if anyone else feels 
> the itch, please scratch it!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+175) - Build # 20063 - Still Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20063/
Java: 32bit/jdk-9-ea+175 -client -XX:+UseG1GC --illegal-access=deny

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest: 1) 
Thread[id=6641, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@9/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=6643, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[B7FBADB89613311A]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
 at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)3) 
Thread[id=6642, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[B7FBADB89613311A]-SendThread(127.0.0.1:42593),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]  
   at java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest: 
   1) Thread[id=6641, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
at java.base@9/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@9/java.lang.Thread.run(Thread.java:844)
   2) Thread[id=6643, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[B7FBADB89613311A]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
   3) Thread[id=6642, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[B7FBADB89613311A]-SendThread(127.0.0.1:42593),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
at java.base@9/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
at __randomizedtesting.SeedInfo.seed([B7FBADB89613311A]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=6642, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[B7FBADB89613311A]-SendThread(127.0.0.1:42593),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]  
   at java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1051)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=6642, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[B7FBADB89613311A]-SendThread(127.0.0.1:42593),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
at java.base@9/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1051)
at __randomizedtesting.SeedInfo.seed([B7FBADB89613311A]:0)


FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test

Error Message:
Tim

[jira] [Assigned] (SOLR-10986) TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' @ response/numFound

2017-07-05 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-10986:
---

Assignee: Mikhail Khludnev

> TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' 
> @ response/numFound
> -
>
> Key: SOLR-10986
> URL: https://issues.apache.org/jira/browse/SOLR-10986
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, master (8.0), 7.1
>Reporter: Steve Rowe
>Assignee: Mikhail Khludnev
>
> Reproduces for me on branch_6x but not on master, from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3861/] - {{git 
> bisect}} blames commit {{c215c78}} on SOLR-9217:
> {noformat}
> Checking out Revision 9947a811e83cc0f848f9ddaa37a4137f19efff1a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestScoreJoinQPScore -Dtests.method=testDeleteByScoreJoinQuery 
> -Dtests.seed=6DE98178CA5DE220 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=el-GR -Dtests.timezone=Asia/Vientiane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.02s J1 | 
> TestScoreJoinQPScore.testDeleteByScoreJoinQuery <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '0'!='1' 
> @ response/numFound
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([6DE98178CA5DE220:7A8B1D8F401EA807]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:989)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:936)
>[junit4]>  at 
> org.apache.solr.search.join.TestScoreJoinQPScore.testDeleteByScoreJoinQuery(TestScoreJoinQPScore.java:125)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {t_description=BlockTreeOrds(blocksize=128), 
> title_stemmed=PostingsFormat(name=Memory doPackFST= false), 
> price_s=BlockTreeOrds(blocksize=128), name=BlockTreeOrds(blocksize=128), 
> id=BlockTreeOrds(blocksize=128), 
> text=PostingsFormat(name=LuceneVarGapFixedInterval), 
> movieId_s=BlockTreeOrds(blocksize=128), title=PostingsFormat(name=Memory 
> doPackFST= false), title_lettertok=BlockTreeOrds(blocksize=128), 
> productId_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))},
>  docValues:{}, maxPointsInLeafNode=166, maxMBSortInHeap=7.4808509338680995, 
> sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=el-GR, 
> timezone=Asia/Vientiane
>[junit4]   2> NOTE: Linux 4.10.0-21-generic i386/Oracle Corporation 
> 1.8.0_131 (32-bit)/cpus=8,threads=1,free=159538432,total=510918656
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7895) Add hooks to QueryBuilder to allow for the construction of MultiTermQueries in phrases

2017-07-05 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074433#comment-16074433
 ] 

Alan Woodward commented on LUCENE-7895:
---

I'd contest that it's giving first-class support.  None of the standard query 
parsers will actually use this hook, but it will be there for users who need 
it.  It's all very well saying that these queries are slow, but it still should 
be possible to run them if there's no alternative.

> Add hooks to QueryBuilder to allow for the construction of MultiTermQueries 
> in phrases
> --
>
> Key: LUCENE-7895
> URL: https://issues.apache.org/jira/browse/LUCENE-7895
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7895.patch
>
>
> QueryBuilder currently allows subclasses to override simple term query 
> construction, which lets you support wildcard querying.  However, there is 
> currently no easy way to override phrase query construction to support 
> wildcards.
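
For reference, a sketch of the existing single-term hook the description 
refers to; the issue adds an analogous hook for terms inside phrases, which is 
not shown here. The subclass name is made up:

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.util.QueryBuilder;

class WildcardAwareQueryBuilder extends QueryBuilder {
  WildcardAwareQueryBuilder(Analyzer analyzer) {
    super(analyzer);
  }

  // Today's extension point: single-term construction. Terms containing a
  // wildcard become a WildcardQuery instead of a plain TermQuery.
  @Override
  protected Query newTermQuery(Term term) {
    if (term.text().indexOf('*') >= 0 || term.text().indexOf('?') >= 0) {
      return new WildcardQuery(term);
    }
    return super.newTermQuery(term);
  }
}
{code}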



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 4 - Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/4/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([5E6FE03A9FDA6514:3C021E7B5054052A]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.junit.Assert.assertNotNull(Assert.java:537)
at 
org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter(MetricsHandlerTest.java:201)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12690 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.MetricsHandlerTest
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J1/temp/s

[jira] [Created] (SOLR-11008) TestMetricsHandler.testPropertyFilter failure

2017-07-05 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-11008:


 Summary: TestMetricsHandler.testPropertyFilter failure
 Key: SOLR-11008
 URL: https://issues.apache.org/jira/browse/SOLR-11008
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Alan Woodward


This happens pretty frequently - see  
https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19455/ for latest fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11008) TestMetricsHandler.testPropertyFilter failure

2017-07-05 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074445#comment-16074445
 ] 

Alan Woodward commented on SOLR-11008:
--

From the log, it looks as though what's happening is that the metric tests are 
being run before the core has properly finished initialising (it's still 
loading various spellchecker indices), so the core metrics haven't been 
registered yet. The fix, I think, is to add a call to 
h.getCoreContainer().waitForLoadingCoresToFinish(timeout) in the beforeClass() 
method. We could also use a lighter solrconfig here, given that we're not 
actually using the spell checker in the tests anywhere.
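
A rough sketch of the suggested beforeClass() change. The class name, config 
file names, and timeout are placeholders, not the actual test's values:

{code:java}
import org.apache.solr.SolrTestCaseJ4;
import org.junit.BeforeClass;

public class MetricsHandlerTestSketch extends SolrTestCaseJ4 {
  @BeforeClass
  public static void beforeClass() throws Exception {
    initCore("solrconfig.xml", "schema.xml");
    // Block until lazily-loading cores (and their metrics) are registered,
    // so the test methods don't race the core's initialization.
    h.getCoreContainer().waitForLoadingCoresToFinish(30000);
  }
}
{code}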

> TestMetricsHandler.testPropertyFilter failure
> -
>
> Key: SOLR-11008
> URL: https://issues.apache.org/jira/browse/SOLR-11008
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>
> This happens pretty frequently - see  
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19455/ for latest 
> fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11009) FacetModule throws NullPointerException when all shard requests fail with shards.tolerant=true

2017-07-05 Thread Yuki Yano (JIRA)
Yuki Yano created SOLR-11009:


 Summary: FacetModule throws NullPointerException when all shard 
requests fail with shards.tolerant=true
 Key: SOLR-11009
 URL: https://issues.apache.org/jira/browse/SOLR-11009
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Affects Versions: 6.6
Reporter: Yuki Yano


FacetModule uses FacetMerger.Context to preserve per-shard information during a 
distributed search. This context starts out null and is initialized when the 
first response comes back from one of the shards.
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L280

If shards.tolerant=true is set on the request, this initializing code may never 
be called when a shard returns an error. Therefore, if all shards fail to 
return results, the context remains null.
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L275

After that, in the STAGE_GET_FIELDS phase, FacetModule uses the context to 
check whether any refinements are possible. Unfortunately, because the context 
can be null as noted above, this check may end with a NullPointerException.
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L183

You can reproduce this error with the following steps:
1. set the socketTimeout of shardHandlerFactory to something very short (for 
example, 10ms);
2. run a facet search with shards.tolerant=true.

The solution is very simple: just add a null check before touching the context 
(see the sketch below).
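
The proposed guard, reduced to its essence. This is illustrative only; the 
real fix lives inside FacetModule, whose actual field names are not reproduced 
here:

{code:java}
class FacetMergeContextGuard {
  // The merge context stays null when every shard request failed under
  // shards.tolerant=true, so it must be checked before any refinement logic.
  static boolean canRefine(Object mergeContext) {
    return mergeContext != null;
  }
}
{code}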



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11009) FacetModule throws NullPointerException when all shard requests fail with shards.tolerant=true

2017-07-05 Thread Yuki Yano (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Yano updated SOLR-11009:
-
Attachment: SOLR-11009.patch

> FacetModule throws NullPointerException when all shard requests fail with 
> shards.tolerant=true
> --
>
> Key: SOLR-11009
> URL: https://issues.apache.org/jira/browse/SOLR-11009
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 6.6
>Reporter: Yuki Yano
> Attachments: SOLR-11009.patch
>
>
> FacetModule uses FacetMerger.Context to preserve per-shard information during 
> a distributed search. This context starts out null and is initialized when 
> the first response comes back from one of the shards.
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L280
> If shards.tolerant=true is set on the request, this initializing code may 
> never be called when a shard returns an error. Therefore, if all shards fail 
> to return results, the context remains null.
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L275
> After that, in the STAGE_GET_FIELDS phase, FacetModule uses the context to 
> check whether any refinements are possible. Unfortunately, because the 
> context can be null as noted above, this check may end with a 
> NullPointerException.
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L183
> You can reproduce this error with the following steps:
> 1. set the socketTimeout of shardHandlerFactory to something very short (for 
> example, 10ms);
> 2. run a facet search with shards.tolerant=true.
> The solution is very simple: just add a null check before touching the context.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11009) FacetModule throws NullPointerException when all shard requests fail with shards.tolerant=true

2017-07-05 Thread Yuki Yano (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Yano updated SOLR-11009:
-
Description: 
FacetModule uses FacetMerger.Context for preserving the information of shards 
during the distributed search. This context is created as null first, and will 
be initialized when the first response is returned from one of shards.
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L280

If shards.tolerant=true is set as the request, this initializing code may not 
be called if shard returns some errors. Therefore, if all shards fail to get 
results, the context will remain null.
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L275

After that, in the STAGE_GET_FIELDS phase, FacetModule checks if there are any 
refinements possible by using the context. Unfortunately, because the context 
can be null as noted above, this check may end with NullPointerException.
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L183

You can reproduce this error by following steps.
1. set socketTimeout of shardHandlerFactory to very short (for example, 10ms).
2. do facet search with shards.tolerant=true

The solution is very simple, just add null check before touching the context.

  was:
FacetModule uses FacetMerger.Context for preserving the information of shards 
during the distributed search. This context is created as null first, and will 
be initialized when the first response is returned from one of shards.
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L280

If shards.tolerant=true is set as the request, this initializing code may not 
be called if shard returns some errors. Therefore, if all shards fail to get 
results, the context will remain null.
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L275

After that, in the STAGE_GET_FIELDS phase, FacetModule checks if there are any 
refinements possible by using the context. Unfortunately, because the context 
can be null as noted above, this check may end with NullPointerException.
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L183

You can reproduced this error by following steps.
1. set socketTimeout of shardHandlerFactory to very short (for example, 10ms).
2. do facet search with shards.tolerant=true

The solution is very simple, just add null check before touching the context.


> FacetModule throws NullPointerException when all shard requests fail with 
> shards.tolerant=true
> --
>
> Key: SOLR-11009
> URL: https://issues.apache.org/jira/browse/SOLR-11009
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 6.6
>Reporter: Yuki Yano
> Attachments: SOLR-11009.patch
>
>
> FacetModule uses FacetMerger.Context to preserve per-shard information during 
> a distributed search. This context starts out null and is initialized when 
> the first response comes back from one of the shards.
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L280
> If shards.tolerant=true is set on the request, this initializing code may 
> never be called when a shard returns an error. Therefore, if all shards fail 
> to return results, the context remains null.
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L275
> After that, in the STAGE_GET_FIELDS phase, FacetModule uses the context to 
> check whether any refinements are possible. Unfortunately, because the 
> context can be null as noted above, this check may end with a 
> NullPointerException.
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/search/facet/FacetModule.java#L183
> You can reproduce this error with the following steps:
> 1. set the socketTimeout of shardHandlerFactory to something very short (for 
> example, 10ms);
> 2. run a facet search with shards.tolerant=true.
> The solution is very simple: just add a null check before touching the context.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lu

[jira] [Created] (SOLR-11010) OutOfMemoryError in tests when using HDFS BlockCache

2017-07-05 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-11010:


 Summary: OutOfMemoryError in tests when using HDFS BlockCache
 Key: SOLR-11010
 URL: https://issues.apache.org/jira/browse/SOLR-11010
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: hdfs
Affects Versions: 7.0, master (8.0)
Reporter: Andrzej Bialecki 


Spin-off from SOLR-10878: the newly added {{MoveReplicaHDFSTest}} fails on 
jenkins (but rarely locally) with the following stacktrace:
{code}
   [junit4]   2> 13619 ERROR (qtp1885193567-48) [n:127.0.0.1:50324_solr 
c:movereplicatest_coll s:shard2 r:core_node4 
x:movereplicatest_coll_shard2_replica_n2] o.a.s.h.RequestHandlerBase 
org.apache.solr.common.SolrException: Error CREATEing SolrCore 
'movereplicatest_coll_shard2_replica_n2': Unable to create core 
[movereplicatest_coll_shard2_replica_n2] Caused by: Direct buffer memory
   [junit4]   2>at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:938)
   [junit4]   2>at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$164(CoreAdminOperation.java:91)
   [junit4]   2>at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
   [junit4]   2>at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
   [junit4]   2>at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
   [junit4]   2>at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
   [junit4]   2>at 
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)
   [junit4]   2>at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)
   [junit4]   2>at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)
   [junit4]   2>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
   [junit4]   2>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
   [junit4]   2>at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
   [junit4]   2>at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
   [junit4]   2>at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
   [junit4]   2>at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
   [junit4]   2>at 
org.eclipse.jetty.server.Server.handle(Server.java:534)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
   [junit4]   2>at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
   [junit4]   2>at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
   [junit4]   2>at 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
   [junit4]   2>at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
   [junit4]   2>at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
   [junit4]   2>at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
   [junit4]   2>at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
   [junit4]   2>at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
   [junit4]   2>at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
   [junit4]   2>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> Caused by: org.apache.solr.common.SolrException: Unable to 
create core [movereplicatest_coll_shard2_replica_n2]
   [junit4]   2>at

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+175) - Build # 9 - Still Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/9/
Java: 64bit/jdk-9-ea+175 -XX:+UseCompressedOops -XX:+UseParallelGC 
--illegal-access=deny

1 tests failed.
FAILED:  
org.apache.lucene.index.TestMixedDocValuesUpdates.testManyReopensAndFields

Error Message:
invalid binary value for doc=0, field=f2, 
reader=_y(7.1.0):C435:fieldInfosGen=2:dvGen=2 expected:<7> but was:<6>

Stack Trace:
java.lang.AssertionError: invalid binary value for doc=0, field=f2, 
reader=_y(7.1.0):C435:fieldInfosGen=2:dvGen=2 expected:<7> but was:<6>
at 
__randomizedtesting.SeedInfo.seed([D57106AE532F4164:E38D6481D2DA2278]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.index.TestMixedDocValuesUpdates.testManyReopensAndFields(TestMixedDocValuesUpdates.java:141)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 1479 lines...]
   [junit4] Suite: org.apache.lucene.index.TestMixedDocValuesUpdates
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestMixedDocValuesUpdates -Dtests.method=testManyReopensAndFields 
-Dtests.seed=D57106AE532F4164 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=qu-PE -Dtests.ti

[jira] [Commented] (LUCENE-7899) Add "exists" query for doc values

2017-07-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074479#comment-16074479
 ] 

Michael McCandless commented on LUCENE-7899:


Aha, it is!  Thanks [~jpountz]!  Maybe we should rename it to something that 
has "doc values" and maybe "exists" in the name?

> Add "exists" query for doc values
> -
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 7.1
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.
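
As a tiny sketch of the idea in that description (assuming a numeric doc-values 
field; in Lucene 7.x the per-field doc-values accessors are themselves 
DocIdSetIterators, so the set of documents that have a value is exactly the 
iterator):

{code}
// Sketch only: the per-leaf iterator over documents that have a numeric doc
// value for a field. Because NumericDocValues extends DocIdSetIterator in 7.x,
// this iterator can directly back a constant-score scorer.
import java.io.IOException;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.search.DocIdSetIterator;

final class DocValuesExistsSketch {
  static DocIdSetIterator docsWithValue(LeafReader reader, String field) throws IOException {
    return DocValues.getNumeric(reader, field); // visits only docs that have a value
  }
}
{code}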



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7897) RangeQuery optimization in IndexOrDocValuesQuery

2017-07-05 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074487#comment-16074487
 ] 

Adrien Grand commented on LUCENE-7897:
--

Thanks for checking how many documents match the range query. Your 
understanding of the way things are working today is correct.

bq. But choice of points or docvalues is not done based on this cost. It is 
considering the minCost across all scorers to decide that. If the cost of 
IndexOrDocValuesQuery > minCost, it chooses docvalues. This is bit 
counter-intuitive for me, I was thinking IndexOrDocValuesQuery would take a 
hint of the matches from other scorers and calculate the cost accordingly. It 
seems like that happens in score supplier by comparing with minScore.

Would it be more intuitive if {{IndexOrDocValuesQuery}} returned 
{{indexScorerSupplier.cost()}} directly? This is what should happen in practice 
anyway. Taking the min only helps when the approximation of 
{{estimatePointCount}} returns a number that is greater than the number of docs 
that have a value in the index, but we could easily remove it and it should not 
hurt.

bq. Points: Would cost estimatePointcount/1024

Right now the cost we are using is an estimation of the number of matches. You 
are right that a more interesting metric would be the cost of building the 
scorer, but as you wrote, this becomes more complicated, as we would need to 
fold in the cost of sorting the documents, etc. I am a bit afraid of opening a 
can of worms if we start doing something like this. However, you have a point 
that for a similar value of the {{cost}}, the index query can be expected to be 
more efficient than the doc-values-based query, because it can more easily 
amortize the cost of matching documents. As a first step, maybe it would make 
sense to apply an arbitrary penalty to doc-values queries and only use them if 
we need to check something like 1/8th of the matching documents? As you said, 
this kind of thing might end up making the wrong decision in some cases, but 
maybe that is acceptable, since queries that provide good iterators are a 
safer bet when in doubt?
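
For reference, a minimal sketch of how such a query is put together (the field 
name and the doc-values range factory below are assumptions for the example; 
the point is only that the index-backed and doc-values-backed clauses are 
supplied separately, and the cost estimate decides which one actually runs):

{code}
// Sketch: pairing a points range query with a doc-values range query.
// "timestamp" and the exact factory methods are illustrative assumptions.
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;

final class RangeQuerySketch {
  static Query timestampRange(long from, long to) {
    Query onPoints = LongPoint.newRangeQuery("timestamp", from, to);
    Query onDocValues = SortedNumericDocValuesField.newSlowRangeQuery("timestamp", from, to);
    // The index query runs when this clause leads iteration; the doc-values
    // query is used to verify matches when another clause drives iteration.
    return new IndexOrDocValuesQuery(onPoints, onDocValues);
  }
}
{code}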

> RangeQuery optimization in IndexOrDocValuesQuery 
> -
>
> Key: LUCENE-7897
> URL: https://issues.apache.org/jira/browse/LUCENE-7897
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: trunk, 7.0
>Reporter: Murali Krishna P
>
> For range queries, Lucene uses either Points or Docvalues based on cost 
> estimation 
> (https://lucene.apache.org/core/6_5_0/core/org/apache/lucene/search/IndexOrDocValuesQuery.html).
>  Scorer is chosen based on the minCost here: 
> https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Boolean2ScorerSupplier.java#L16
> However, the cost calculation for TermQuery and IndexOrDocvalueQuery seems to 
> have same weightage. Essentially, cost depends upon the docfreq in TermDict, 
> number of points visited and number of docvalues. In a situation where 
> docfreq is not too restrictive, this is lot of lookups for docvalues and 
> using points would have been better.
> Following query with 1M matches, takes 60ms with docvalues, but only 27ms 
> with points. If I change the query to "message:*", which matches all docs, it 
> choses the points(since cost is same), but with message:xyz it choses 
> docvalues eventhough doc frequency is 1million which results in many docvalue 
> fetches. Would it make sense to change the cost of docvalues query to be 
> higher or use points if the docfreq is too high for the term query(find an 
> optimum threshold where points cost < docvalue cost)?
> {noformat}
> {
>   "query": {
> "bool": {
>   "must": [
> {
>   "query_string": {
> "query": "message:xyz"
>   }
> },
> {
>   "range": {
> "@timestamp": {
>   "gte": 149865240,
>   "lte": 149890500,
>   "format": "epoch_millis"
> }
>   }
> }
>   ]
> }
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11010) OutOfMemoryError in tests when using HDFS BlockCache

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074488#comment-16074488
 ] 

ASF subversion and git services commented on SOLR-11010:


Commit 48b4960e0c093b480b8328f324992a7006054f17 in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=48b4960 ]

SOLR-11010 Tentative fix for jenkins test failures.
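
The commit message above does not say what was changed. As general background 
only (an assumption about the failure mode, not a description of this commit): 
"Direct buffer memory" errors come from off-heap ByteBuffer allocations, which 
are capped by -XX:MaxDirectMemorySize rather than by the heap, so an over-sized 
off-heap block cache can hit that cap even with plenty of free heap. A 
self-contained illustration of the error:

{code}
// Standalone demo (unrelated to the actual SOLR-11010 change): direct
// ByteBuffers live off-heap and are limited by -XX:MaxDirectMemorySize, so
// allocating enough of them throws the same "Direct buffer memory"
// OutOfMemoryError seen in the stack trace above.
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferOomDemo {
  public static void main(String[] args) {
    List<ByteBuffer> hold = new ArrayList<>();
    while (true) {
      hold.add(ByteBuffer.allocateDirect(16 * 1024 * 1024)); // 16 MB off-heap slabs
    }
  }
}
{code}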


> OutOfMemoryError in tests when using HDFS BlockCache
> 
>
> Key: SOLR-11010
> URL: https://issues.apache.org/jira/browse/SOLR-11010
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 7.0, master (8.0)
>Reporter: Andrzej Bialecki 
>
> Spin-off from SOLR-10878: the newly added {{MoveReplicaHDFSTest}} fails on 
> jenkins (but rarely locally) with the following stacktrace:
> {code}
>[junit4]   2> 13619 ERROR (qtp1885193567-48) [n:127.0.0.1:50324_solr 
> c:movereplicatest_coll s:shard2 r:core_node4 
> x:movereplicatest_coll_shard2_replica_n2] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'movereplicatest_coll_shard2_replica_n2': Unable to create core 
> [movereplicatest_coll_shard2_replica_n2] Caused by: Direct buffer memory
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:938)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$164(CoreAdminOperation.java:91)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:534)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>[junit4]   2>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)

[jira] [Commented] (LUCENE-7899) Add "exists" query for doc values

2017-07-05 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074491#comment-16074491
 ] 

Adrien Grand commented on LUCENE-7899:
--

I think we should; I have already failed multiple times to remember the name of 
this query even though I knew we had one. Something like 
{{DocValuesFieldExistsQuery}}? I'm tempted to go with a shorter name, but at 
the same time I'd like to reserve the name space so that we could also have a 
norms-based implementation, which would let us find documents that have a 
value for text fields.

> Add "exists" query for doc values
> -
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 7.1
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-6357) Using query time Join in deleteByQuery throws ClassCastException

2017-07-05 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reopened SOLR-6357:

  Assignee: Mikhail Khludnev  (was: Timothy Potter)

broken by SOLR-10986, SOLR-9217 

> Using query time Join in deleteByQuery throws ClassCastException
> 
>
> Key: SOLR-6357
> URL: https://issues.apache.org/jira/browse/SOLR-6357
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.9
>Reporter: Arcadius Ahouansou
>Assignee: Mikhail Khludnev
> Fix For: 5.3, 6.0
>
> Attachments: SOLR-6357.patch
>
>
> Consider the following input document where we have:
> - 1 Samsung mobile phone and
> - 2 manufactures: Apple and Samsung.
> {code}
> [
>{
>   "id":"galaxy note ii",
>   "cat":"product",
>   "manu_s":"samsung"
>},
>{
>   "id":"samsung",
>   "cat":"manufacturer",
>   "name":"Samsung Electronics"
>},
>{
>   "id":"apple",
>   "cat":"manufacturer",
>   "name":"Apple Inc"
>}
> ]
> {code}
> My objective is to delete from the default index all manufacturers not having 
> any product in the index.
> After indexing (  curl 'http://localhost:8983/solr/update?commit=true' -H 
> "Content-Type: text/json" --data-binary @delete-by-join-query.json )
> I went to
> {code}http://localhost:8983/solr/select?q=cat:manufacturer -{!join 
> from=manu_s to=id}cat:product
> {code}
> and I could see only Apple, the only manufacturer not having any product in 
> the index.
> However, when I use that same query for deletion: 
> {code}
> http://localhost:8983/solr/update?commit=true&stream.body=cat:manufacturer
>  -{!join from=manu_s to=id}cat:product
> {code}
> I get
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.IndexSearcher cannot 
> be cast to org.apache.solr.search.SolrIndexSearcher
>   at 
> org.apache.solr.search.JoinQuery.createWeight(JoinQParserPlugin.java:143)
>   at 
> org.apache.lucene.search.BooleanQuery$BooleanWeight.(BooleanQuery.java:185)
>   at 
> org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:526)
>   at 
> org.apache.lucene.search.BooleanQuery$BooleanWeight.(BooleanQuery.java:185)
>   at 
> org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:526)
>   at 
> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:684)
>   at 
> org.apache.lucene.search.QueryWrapperFilter.getDocIdSet(QueryWrapperFilter.java:55)
>   at 
> org.apache.lucene.index.BufferedUpdatesStream.applyQueryDeletes(BufferedUpdatesStream.java:552)
>   at 
> org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:287)
>   at 
> {code}
> This seems to be a bug.
> Looking at the source code, the exception is happening in {code}
>  @Override
>   public Weight createWeight(IndexSearcher searcher) throws IOException {
> return new JoinQueryWeight((SolrIndexSearcher)searcher);
>   }
> {code}
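
For illustration only (this is not the committed fix for this issue): one 
defensive variant of that cast is to detect the plain IndexSearcher that 
Lucene's IndexWriter passes in while applying a deleteByQuery, and fail with a 
clear message instead of a ClassCastException:

{code}
// Hypothetical guard, mirroring the quoted method; not the actual fix.
@Override
public Weight createWeight(IndexSearcher searcher) throws IOException {
  if (!(searcher instanceof SolrIndexSearcher)) {
    throw new IllegalArgumentException(
        "Join query requires a SolrIndexSearcher, got " + searcher.getClass().getName());
  }
  return new JoinQueryWeight((SolrIndexSearcher) searcher);
}
{code}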



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10942) qt param is not working partially in solr5.5

2017-07-05 Thread Pavithra Dhakshinamurthy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074498#comment-16074498
 ] 

Pavithra Dhakshinamurthy commented on SOLR-10942:
-

[~dsmiley] or others, why was it changed? It was working fine in the 4.x 
versions with the qt param and the default request handler. 
Now we need to change our REST requests to adapt to this change, and we have 
many such scenarios.

> qt param is not working partially in solr5.5
> 
>
> Key: SOLR-10942
> URL: https://issues.apache.org/jira/browse/SOLR-10942
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Pavithra Dhakshinamurthy
>
> The qt param works fine if a field name is provided with the request, but it 
> does not work if just the search term is provided. 
> For example: 
> http://localhost:8983/solr/core2/select?q=states&wt=xml&indent=true&qt=/country
>  does not work, whereas 
> http://localhost:8983/solr/core2/select?q=countryName:states&wt=xml&indent=true&qt=/country
>  does work. 
> Has anybody faced this issue?
> This is how we have defined the request handler
>  
>   
>  explicit
>  10
>  edismax
>  countryName^100 countryCode^60 addrcountry^20 
> mailaddresscountry^20
>   
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7899) Add "exists" query for doc values

2017-07-05 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7899:
---
Fix Version/s: (was: 7.1)
   7.0

> Add "exists" query for doc values
> -
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 7.0
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Add "exists" query for doc values

2017-07-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074505#comment-16074505
 ] 

Michael McCandless commented on LUCENE-7899:


bq. Something like DocValuesFieldExistsQuery? 

+1, I'll work up a patch; I think we should do this for 7.0?

> Add "exists" query for doc values
> -
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 7.0
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7899:
---
Summary: Rename FieldValueQuery to DocValuesFieldExistsQuery  (was: Add 
"exists" query for doc values)

> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 7.0
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Feature freeze @ 7.0 branch

2017-07-05 Thread Michael McCandless
Hi Anshum, I'd like to do https://issues.apache.org/jira/browse/LUCENE-7899
for 7.0; it's a simple rename, which I think we should do on major
release.  I'll get a patch up shortly.

Thanks,

Mike McCandless

http://blog.mikemccandless.com

On Tue, Jul 4, 2017 at 12:40 PM, Anshum Gupta 
wrote:

> Sure Ab, this is an important bug fix.
>
> -Anshum
>
> On Tue, Jul 4, 2017 at 9:35 AM Andrzej Białecki <
> andrzej.biale...@lucidworks.com> wrote:
>
>> SOLR-10878 and SOLR-10879 didn’t make it before the branches were cut,
>> but I think they should be included in 7x and 7_0 - I’m going to
>> cherry-pick the commits from master.
>>
>> On 3 Jul 2017, at 22:29, Anshum Gupta  wrote:
>>
>> Hi,
>>
>> I just wanted to call it out and remove any confusion around the fact
>> that we shouldn’t be committing ‘new features’ to branch_7_0. As for
>> whatever was already agreed upon in previous communications, let’s get that
>> stuff in if it’s ready or almost there. For everything else, kindly check
>> before you commit to the release branch.
>>
>> Let us make sure that the bugs and edge cases are all taken care of, the
>> deprecations, and cleanups too.
>>
>> P.S: Feel free to commit bug fixes without checking, but make sure that
>> we aren’t hiding features in those commits.
>>
>>
>> -Anshum
>>
>>
>>
>>
>>


[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 4 - Still Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/4/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:48893

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:48893
at 
__randomizedtesting.SeedInfo.seed([8E387CB6F67BDCDF:66C436C5887B127]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:637)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:252)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1667)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1694)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test(ChaosMonkeyNothingIsSafeWithPullReplicasTest.java:297)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOv

[jira] [Updated] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7899:
---
Attachment: LUCENE-7899.patch

Simple patch.

> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7899:
---
Priority: Blocker  (was: Major)

> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074528#comment-16074528
 ] 

Adrien Grand commented on LUCENE-7899:
--

+1

> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074529#comment-16074529
 ] 

Adrien Grand commented on LUCENE-7899:
--

Maybe add a note to {{lucene/MIGRATE.txt}} before pushing to branch_7_0 and 
branch_7x?

> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7895) Add hooks to QueryBuilder to allow for the construction of MultiTermQueries in phrases

2017-07-05 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074530#comment-16074530
 ] 

Adrien Grand commented on LUCENE-7895:
--

bq. It's all very well saying that these queries are slow, but it still should 
be possible to run them if there's no alternative.

Actually, I wish we had never added {{SpanMultiTermQueryWrapper}}. It creates a 
{{SpanTermQuery}} for every matching term without enforcing any limit, which 
makes it very trappy: you can easily end up with very slow queries or even 
out-of-memory errors at rewrite time. Can this prefix-inside-phrase problem be 
solved at index time instead, with something like edge-ngrams? I'd be OK with 
adding hooks to {{QueryBuilder}} that make it easier to handle an index-time 
solution to this problem, but if the main use case is to make it easier to 
create a {{SpanMultiTermQueryWrapper}}, then to me it feels like adding an API 
only to enable the use of a trappy query.
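
For context, a minimal sketch of the trappy pattern being discussed, a prefix 
inside a phrase via {{SpanMultiTermQueryWrapper}} (the field and terms are made 
up for the example):

{code}
// "application serv*" as a span query; "body" and the terms are illustrative.
// At rewrite time the wrapper expands to one SpanTermQuery per term matching
// the prefix, with no limit, which is where the blow-up comes from.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

final class PrefixInPhraseSketch {
  static SpanQuery applicationServPhrase() {
    SpanQuery exact = new SpanTermQuery(new Term("body", "application"));
    SpanQuery prefix = new SpanMultiTermQueryWrapper<>(new PrefixQuery(new Term("body", "serv")));
    return new SpanNearQuery(new SpanQuery[] {exact, prefix}, 0, true); // slop 0, in order
  }
}
{code}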

> Add hooks to QueryBuilder to allow for the construction of MultiTermQueries 
> in phrases
> --
>
> Key: LUCENE-7895
> URL: https://issues.apache.org/jira/browse/LUCENE-7895
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7895.patch
>
>
> QueryBuilder currently allows subclasses to override simple term query 
> construction, which lets you support wildcard querying.  However, there is 
> currently no easy way to override phrase query construction to support 
> wildcards.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7882) Maybe expression compiler should cache recently compiled expressions?

2017-07-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074536#comment-16074536
 ] 

Michael McCandless commented on LUCENE-7882:


Sorry, I did not experience a full hang/deadlock: I only experienced that the 
QPS capacity of the searcher went down drastically for periods of time.  The 
pattern is odd ... at first the QPS is high, then it gradually slows down, then 
it enters periods where it's extremely slow, like 10X slower than "normal" for 
maybe ~20-30 seconds, then it gets somewhat faster again, then another super 
slow period.  But it never outright hangs.

It is odd to me that the code cache was allowed to grow to nearly its maximum 
size; it seems like these methods should very quickly become dead, since after 
the one query that uses a compiled expression executes, it should no longer be 
referenced.

bq. Can you repeat and add -XX:+UseCodeCacheFlushing switch, see if this helps?

OK, I'll try that; it may be a while before I can, though. From its 
description it seems like it shouldn't be necessary here, i.e. the JVM should 
be able to tell, quickly, that these methods are not referenced anymore. But I 
know very little about this part of the JVM!
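
On the "small cache in front of the expressions compiler" idea from the issue 
description, here is a minimal sketch of what that could look like, keyed by 
the expression source text (the cache size and the use of 
JavascriptCompiler.compile are assumptions for the sketch, not a committed 
design):

{code}
// Minimal sketch of caching compiled expressions by source text. A bounded,
// access-ordered LRU keeps the number of generated classes (and therefore
// code-cache usage) flat when the same expressions are requested repeatedly.
import java.text.ParseException;
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.lucene.expressions.Expression;
import org.apache.lucene.expressions.js.JavascriptCompiler;

final class ExpressionCacheSketch {
  private static final int MAX_ENTRIES = 256; // arbitrary bound for the sketch

  private static final Map<String, Expression> CACHE =
      new LinkedHashMap<String, Expression>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, Expression> eldest) {
          return size() > MAX_ENTRIES;
        }
      };

  static synchronized Expression compile(String source) throws ParseException {
    Expression e = CACHE.get(source);
    if (e == null) {
      e = JavascriptCompiler.compile(source);
      CACHE.put(source, e);
    }
    return e;
  }
}
{code}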

> Maybe expression compiler should cache recently compiled expressions?
> -
>
> Key: LUCENE-7882
> URL: https://issues.apache.org/jira/browse/LUCENE-7882
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Michael McCandless
>
> I've been running search performance tests using a simple expression 
> ({{_score + ln(1000+unit_sales)}}) for sorting and hit this odd bottleneck:
> {noformat}
> "pool-1-thread-30" #70 prio=5 os_prio=0 tid=0x7eea7000a000 nid=0x1ea8a 
> waiting for monitor entry [0x7eea867dd000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression.evaluate(_score
>  + ln(1000+unit_sales))
>   at 
> org.apache.lucene.expressions.ExpressionFunctionValues.doubleValue(ExpressionFunctionValues.java:49)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collectInternal(OrderedVELeafCollector.java:123)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collect(OrderedVELeafCollector.java:108)
>   at 
> org.apache.lucene.search.MultiCollectorManager$Collectors$LeafCollectors.collect(MultiCollectorManager.java:102)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:241)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:184)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:658)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:600)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:597)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I couldn't see any {{synchronized}} in the sources here, so I'm not sure 
> which object monitor it's blocked on.
> I was accidentally compiling a new expression for every query, and that 
> bottleneck would cause overall QPS to slow down drastically (~4X slower after 
> ~1 hour of redline tests), as if the JVM is getting slower and slower to 
> evaluate each expression the more expressions I had compiled.
> I tested JDK 9-ea and it also kept slowing down over time as the performance 
> test ran.
> Maybe we should put a small cache in front of the expressions compiler to 
> make it less trappy?  Or maybe we can get to the root cause of why the JVM 
> slows down more and more, the more expressions you compile?
> I won't have time to work on this in the near future so if anyone else feels 
> the itch, please scratch it!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 3 - Still Unstable!

2017-07-05 Thread Michael McCandless
Grrr I'll dig.

Mike McCandless

http://blog.mikemccandless.com

On Tue, Jul 4, 2017 at 11:49 PM, Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/3/
> Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC
>
> 1 tests failed.
> FAILED:  org.apache.lucene.index.TestMixedDocValuesUpdates.
> testManyReopensAndFields
>
> Error Message:
> invalid binary value for doc=0, field=f2, reader=_f(7.0.0):c68
> expected:<5> but was:<4>
>
> Stack Trace:
> java.lang.AssertionError: invalid binary value for doc=0, field=f2,
> reader=_f(7.0.0):c68 expected:<5> but was:<4>
> at __randomizedtesting.SeedInfo.seed([CFF4D78D701ACBC9:
> F908B5A2F1EFA8D5]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.failNotEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:128)
> at org.junit.Assert.assertEquals(Assert.java:472)
> at org.apache.lucene.index.TestMixedDocValuesUpdates.
> testManyReopensAndFields(TestMixedDocValuesUpdates.java:141)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(
> RandomizedRunner.java:1713)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(
> RandomizedRunner.java:907)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(
> RandomizedRunner.java:943)
> at com.carrotsearch.randomizedtesting.
> RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(
> TestRuleSetupTeardownChained.java:49)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(
> TestRuleThreadAndTestName.java:48)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures
> $1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleMarkFailure$1.
> evaluate(TestRuleMarkFailure.java:47)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$
> StatementRunner.run(ThreadLeakControl.java:368)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.
> forkTimeoutingTask(ThreadLeakControl.java:817)
> at com.carrotsearch.randomizedtesting.
> ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.
> runSingleTest(RandomizedRunner.java:916)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(
> RandomizedRunner.java:802)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(
> RandomizedRunner.java:852)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(
> RandomizedRunner.java:863)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(
> TestRuleStoreClassName.java:41)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(
> TestRuleAssertionsRequired.java:53)
> at org.apache.lucene.util.TestRuleMarkFailure$1.
> evaluate(TestRuleMarkFailure.java:47)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures
> $1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(
> TestRuleIgnoreTestSuites.java:54)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$
> StatementRunner.run(ThreadLeakControl.java:368)
> at java.lang.Thread.run(Thread.java:748)
>
>
>
>
> Build Log:
> [...truncated 1498 lines...]
>

[jira] [Commented] (LUCENE-7882) Maybe expression compiler should cache recently compiled expressions?

2017-07-05 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074552#comment-16074552
 ] 

Dawid Weiss commented on LUCENE-7882:
-

As far as I understand, it should sweep the code cache from time to time... 

http://markmail.org/thread/acpxk7ogdunvfhry

I don't know whether there is a compaction strategy (whether already-compiled 
code can be relocated); if not, then the free size tells you nothing about 
fragmentation, and the lookup for a free block may be slowing things down. 

An interesting problem.

> Maybe expression compiler should cache recently compiled expressions?
> -
>
> Key: LUCENE-7882
> URL: https://issues.apache.org/jira/browse/LUCENE-7882
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Michael McCandless
>
> I've been running search performance tests using a simple expression 
> ({{_score + ln(1000+unit_sales)}}) for sorting and hit this odd bottleneck:
> {noformat}
> "pool-1-thread-30" #70 prio=5 os_prio=0 tid=0x7eea7000a000 nid=0x1ea8a 
> waiting for monitor entry [0x7eea867dd000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression.evaluate(_score
>  + ln(1000+unit_sales))
>   at 
> org.apache.lucene.expressions.ExpressionFunctionValues.doubleValue(ExpressionFunctionValues.java:49)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collectInternal(OrderedVELeafCollector.java:123)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collect(OrderedVELeafCollector.java:108)
>   at 
> org.apache.lucene.search.MultiCollectorManager$Collectors$LeafCollectors.collect(MultiCollectorManager.java:102)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:241)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:184)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:658)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:600)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:597)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I couldn't see any {{synchronized}} in the sources here, so I'm not sure 
> which object monitor it's blocked on.
> I was accidentally compiling a new expression for every query, and that 
> bottleneck would cause overall QPS to slow down drastically (~4X slower after 
> ~1 hour of redline tests), as if the JVM is getting slower and slower to 
> evaluate each expression the more expressions I had compiled.
> I tested JDK 9-ea and it also kept slowing down over time as the performance 
> test ran.
> Maybe we should put a small cache in front of the expressions compiler to 
> make it less trappy?  Or maybe we can get to the root cause of why the JVM 
> slows down more and more, the more expressions you compile?
> I won't have time to work on this in the near future so if anyone else feels 
> the itch, please scratch it!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7895) Add hooks to QueryBuilder to allow for the construction of MultiTermQueries in phrases

2017-07-05 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074583#comment-16074583
 ] 

Alan Woodward commented on LUCENE-7895:
---

I opened LUCENE-6513 a while back to try to stop the memory blowup caused by 
SpanMTQWrapper; maybe that's worth revisiting?

Prefix ngrams will work for prefix queries, but don't generalize, 
unfortunately.
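
For readers less familiar with QueryBuilder, the term-level hook the issue 
description refers to can already be overridden, roughly as below. This is a 
hedged sketch (the protected newTermQuery signature is as in current 7.x 
sources; WildcardAwareQueryBuilder and the trailing-'*' convention are 
invented); phrase construction has no equivalent hook today, which is the gap 
this issue wants to close.

{code}
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.QueryBuilder;

// Sketch: intercept single-term construction to turn a trailing '*' into a
// PrefixQuery. There is no analogous hook for phrase query construction.
public class WildcardAwareQueryBuilder extends QueryBuilder {
  public WildcardAwareQueryBuilder() {
    // WhitespaceAnalyzer keeps the '*' characters intact during analysis
    super(new WhitespaceAnalyzer());
  }

  @Override
  protected Query newTermQuery(Term term) {
    String text = term.text();
    if (text.length() > 1 && text.endsWith("*")) {
      return new PrefixQuery(new Term(term.field(), text.substring(0, text.length() - 1)));
    }
    return super.newTermQuery(term);
  }
}
{code}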

> Add hooks to QueryBuilder to allow for the construction of MultiTermQueries 
> in phrases
> --
>
> Key: LUCENE-7895
> URL: https://issues.apache.org/jira/browse/LUCENE-7895
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7895.patch
>
>
> QueryBuilder currently allows subclasses to override simple term query 
> construction, which lets you support wildcard querying.  However, there is 
> currently no easy way to override phrase query construction to support 
> wildcards.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10986) TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' @ response/numFound

2017-07-05 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-10986:

Attachment: SOLR-10986.patch

Although the problem per se has no solution, here is a band-aid: 
[^SOLR-10986.patch]. 
It looks like even deleteByQuery can work with a join query. However, it 
doesn't see uncommitted docs, which deleteByQuery is supposed to see. 
I'm kindly asking for a brief review; we have an opportunity to bring this fix 
into 7.0.
Package tests pass. 
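
For anyone reproducing this, the failing pattern is a delete-by-query whose 
query string goes through the score join parser. A minimal SolrJ illustration 
(the collection name, field names, and the query itself are made up for the 
example):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

// Illustration only: delete documents selected through a score-join query.
public class DeleteByJoinQueryExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/parents").build()) {
      // Documents added but not yet committed are not visible to the join here,
      // which is the behavior discussed above.
      client.deleteByQuery("{!join from=parent_id to=id score=none}state:obsolete");
      client.commit();
    }
  }
}
{code}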

> TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' 
> @ response/numFound
> -
>
> Key: SOLR-10986
> URL: https://issues.apache.org/jira/browse/SOLR-10986
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, master (8.0), 7.1
>Reporter: Steve Rowe
>Assignee: Mikhail Khludnev
> Attachments: SOLR-10986.patch
>
>
> Reproduces for me on branch_6x but not on master, from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3861/] - {{git 
> bisect}} blames commit {{c215c78}} on SOLR-9217:
> {noformat}
> Checking out Revision 9947a811e83cc0f848f9ddaa37a4137f19efff1a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestScoreJoinQPScore -Dtests.method=testDeleteByScoreJoinQuery 
> -Dtests.seed=6DE98178CA5DE220 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=el-GR -Dtests.timezone=Asia/Vientiane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.02s J1 | 
> TestScoreJoinQPScore.testDeleteByScoreJoinQuery <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '0'!='1' 
> @ response/numFound
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([6DE98178CA5DE220:7A8B1D8F401EA807]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:989)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:936)
>[junit4]>  at 
> org.apache.solr.search.join.TestScoreJoinQPScore.testDeleteByScoreJoinQuery(TestScoreJoinQPScore.java:125)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {t_description=BlockTreeOrds(blocksize=128), 
> title_stemmed=PostingsFormat(name=Memory doPackFST= false), 
> price_s=BlockTreeOrds(blocksize=128), name=BlockTreeOrds(blocksize=128), 
> id=BlockTreeOrds(blocksize=128), 
> text=PostingsFormat(name=LuceneVarGapFixedInterval), 
> movieId_s=BlockTreeOrds(blocksize=128), title=PostingsFormat(name=Memory 
> doPackFST= false), title_lettertok=BlockTreeOrds(blocksize=128), 
> productId_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))},
>  docValues:{}, maxPointsInLeafNode=166, maxMBSortInHeap=7.4808509338680995, 
> sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=el-GR, 
> timezone=Asia/Vientiane
>[junit4]   2> NOTE: Linux 4.10.0-21-generic i386/Oracle Corporation 
> 1.8.0_131 (32-bit)/cpus=8,threads=1,free=159538432,total=510918656
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10827) factor out abstract FilteringSolrMetricReporter

2017-07-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-10827.

   Resolution: Fixed
Fix Version/s: 7.1
   master (8.0)

Thanks Andrzej for the review!

> factor out abstract FilteringSolrMetricReporter
> ---
>
> Key: SOLR-10827
> URL: https://issues.apache.org/jira/browse/SOLR-10827
> Project: Solr
>  Issue Type: Task
>  Components: metrics
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (8.0), 7.1
>
> Attachments: SOLR-10827.patch
>
>
> Currently multiple SolrMetricReporter classes have their own local filter 
> settings; a common setting somewhere will reduce code duplication for 
> existing, future, and custom reporters.
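
To make the refactoring concrete, the shared base class described above might 
look roughly like the following. This is a hypothetical sketch only; class and 
method names are invented and do not reflect the committed 
FilteringSolrMetricReporter API.

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Hypothetical sketch of a shared base class that owns the "filter" settings,
// so individual reporters stop duplicating the same plugin-parameter handling.
public abstract class FilteringReporterSketch {
  protected final List<String> filters = new ArrayList<>();

  // Invoked once per configured "filter" value (a string or a list of strings).
  public void setFilter(Object filter) {
    if (filter instanceof Collection) {
      for (Object f : (Collection<?>) filter) {
        filters.add(String.valueOf(f));
      }
    } else if (filter != null) {
      filters.add(String.valueOf(filter));
    }
  }

  // Concrete reporters turn the collected prefixes into their metric filter.
  protected abstract void applyFilters();
}
{code}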



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10957) fix potential NPE in SolrCoreParser.init

2017-07-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-10957.

   Resolution: Fixed
Fix Version/s: 7.1
   master (8.0)

> fix potential NPE in SolrCoreParser.init
> 
>
> Key: SOLR-10957
> URL: https://issues.apache.org/jira/browse/SOLR-10957
> Project: Solr
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (8.0), 7.1
>
> Attachments: SOLR-10957.patch, SOLR-10957.patch
>
>
> [SolrQueryRequestBase|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/request/SolrQueryRequestBase.java]
>  accommodates requests with a null SolrCore and this small change is for 
> SolrCoreParser.init to do likewise.
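
The shape of the change is essentially a null guard. A minimal hypothetical 
sketch (not the actual SolrCoreParser code; only SolrQueryRequest.getCore() is 
existing API):

{code}
import org.apache.solr.core.SolrCore;
import org.apache.solr.request.SolrQueryRequest;

// Hypothetical sketch: tolerate a request that carries no SolrCore, the way
// SolrQueryRequestBase already does.
class NullSafeInitSketch {
  void init(SolrQueryRequest req) {
    SolrCore core = (req == null) ? null : req.getCore();
    if (core != null) {
      // core-dependent setup, e.g. resolving plugins registered with the core
    }
    // core-independent setup continues regardless
  }
}
{code}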



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7838) Add a knn classifier based on fuzzified term queries

2017-07-05 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-7838:

Summary: Add a knn classifier based on fuzzified term queries  (was: Add a 
knn classifier based on fuzzy like this)

> Add a knn classifier based on fuzzified term queries
> 
>
> Key: LUCENE-7838
> URL: https://issues.apache.org/jira/browse/LUCENE-7838
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 7.0
>
>
> FLT mixes fuzzy and MLT; in the context of Lucene-based classification it 
> might be useful to add such fuzziness to a dedicated KNN classifier (based 
> on FLT queries).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7838) Add a knn classifier based on fuzzified term queries

2017-07-05 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved LUCENE-7838.
-
Resolution: Fixed

I'm marking this as resolved; improvements will come in subsequent issues.

> Add a knn classifier based on fuzzified term queries
> 
>
> Key: LUCENE-7838
> URL: https://issues.apache.org/jira/browse/LUCENE-7838
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 7.0
>
>
> FLT mixes fuzzy and MLT; in the context of Lucene-based classification it 
> might be useful to add such fuzziness to a dedicated KNN classifier (based 
> on FLT queries).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10046) Create UninvertDocValuesMergePolicy

2017-07-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-10046.

Resolution: Fixed

Thanks Keith!

> Create UninvertDocValuesMergePolicy
> ---
>
> Key: SOLR-10046
> URL: https://issues.apache.org/jira/browse/SOLR-10046
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>Assignee: Christine Poerschke
> Fix For: 7.0
>
>
> Create a merge policy that can detect schema changes and use 
> UninvertingReader to uninvert fields and write docvalues into merged segments 
> when a field has docvalues enabled.
> The current behavior is to write null values in the merged segment, which can 
> lead to data integrity problems when sorting or faceting, pending a full 
> reindex. 
> With this patch it would still be recommended to reindex when adding 
> docvalues for performance reasons, as it is not guaranteed that all segments 
> will be merged with docvalues turned on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-9-ea+173) - Build # 6709 - Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6709/
Java: 64bit/jdk-9-ea+173 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:55491/y_/h

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:55491/y_/h
at 
__randomizedtesting.SeedInfo.seed([C23AC172A73FEA93:4A6EFEA809C3876B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:637)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:252)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1667)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1694)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test(ChaosMonkeyNothingIsSafeWithPullReplicasTest.java:297)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.ran

[jira] [Commented] (LUCENE-7838) Add a knn classifier based on fuzzified term queries

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074624#comment-16074624
 ] 

ASF subversion and git services commented on LUCENE-7838:
-

Commit 8ccb61c0af3c38dab6f1a62eafb836fb6415e55c in lucene-solr's branch 
refs/heads/master from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8ccb61c ]

LUCENE-7823, LUCENE-7838 - added missing entires in changes.txt


> Add a knn classifier based on fuzzified term queries
> 
>
> Key: LUCENE-7838
> URL: https://issues.apache.org/jira/browse/LUCENE-7838
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 7.0
>
>
> FLT mixes fuzzy and MLT; in the context of Lucene-based classification it 
> might be useful to add such fuzziness to a dedicated KNN classifier (based 
> on FLT queries).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7823) Have a naive bayes classifier which uses plain BM25 scores instead of plain frequencies

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074623#comment-16074623
 ] 

ASF subversion and git services commented on LUCENE-7823:
-

Commit 8ccb61c0af3c38dab6f1a62eafb836fb6415e55c in lucene-solr's branch 
refs/heads/master from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8ccb61c ]

LUCENE-7823, LUCENE-7838 - added missing entires in changes.txt


> Have a naive bayes classifier which uses plain BM25 scores instead of plain 
> frequencies
> ---
>
> Key: LUCENE-7823
> URL: https://issues.apache.org/jira/browse/LUCENE-7823
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 7.0
>
>
> {{SimpleNaiveBayesClassifier}} uses term frequencies with add-one smoothing 
> to calculate the likelihood and just tf for the prior. Given that Lucene has 
> switched to BM25, it would be better to have a different impl which uses BM25 
> scoring as a probability measure for both prior and likelihood.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7823) Have a naive bayes classifier which uses plain BM25 scores instead of plain frequencies

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074642#comment-16074642
 ] 

ASF subversion and git services commented on LUCENE-7823:
-

Commit 056501be8b1aed17ef2244c06c4a2c1367eba166 in lucene-solr's branch 
refs/heads/branch_7x from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=056501b ]

LUCENE-7823, LUCENE-7838 - added missing entires in changes.txt

(cherry picked from commit 8ccb61c)


> Have a naive bayes classifier which uses plain BM25 scores instead of plain 
> frequencies
> ---
>
> Key: LUCENE-7823
> URL: https://issues.apache.org/jira/browse/LUCENE-7823
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 7.0
>
>
> {{SimpleNaiveBayesClassifier}} uses term frequencies with add-one smoothing 
> to calculate the likelihood and just tf for the prior. Given that Lucene has 
> switched to BM25, it would be better to have a different impl which uses BM25 
> scoring as a probability measure for both prior and likelihood.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7838) Add a knn classifier based on fuzzified term queries

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074643#comment-16074643
 ] 

ASF subversion and git services commented on LUCENE-7838:
-

Commit 056501be8b1aed17ef2244c06c4a2c1367eba166 in lucene-solr's branch 
refs/heads/branch_7x from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=056501b ]

LUCENE-7823, LUCENE-7838 - added missing entires in changes.txt

(cherry picked from commit 8ccb61c)


> Add a knn classifier based on fuzzified term queries
> 
>
> Key: LUCENE-7838
> URL: https://issues.apache.org/jira/browse/LUCENE-7838
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 7.0
>
>
> FLT mixes fuzzy and MLT; in the context of Lucene-based classification it 
> might be useful to add such fuzziness to a dedicated KNN classifier (based 
> on FLT queries).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-11007) index newly discovered fields of different types

2017-07-05 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-11007.

Resolution: Information Provided

This question should be asked on the Solr Users mailing list. It will be seen 
by many more people there and will allow for better discussion. JIRA is for 
bugs/features in the Solr product itself.

> index newly discovered fields of different types
> --
>
> Key: SOLR-11007
> URL: https://issues.apache.org/jira/browse/SOLR-11007
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Thaer Samar
>
> Hi,
> We are trying to index documents of different types. Documents have different 
> fields, and the fields are only known at indexing time. We run a query on a 
> database and index whatever comes back, using the query variables as field 
> names in Solr. Our current solution: we use dynamic fields with a prefix, for 
> example feature_i_*. The issues with that:
> 1) we need to define the type of the dynamic field, and to cover the types of 
> discovered fields we define the following:
>  feature_i_* for integers, feature_t_* for strings, feature_d_* for doubles, 
> 
> 1.a) this means we need to check the type of the discovered field and then 
> put it in the corresponding dynamic field
> 2) at search time, we need to know the right prefix
> We are looking for help to find a way to avoid the prefix and the type check.
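
As a concrete illustration of the prefix-per-type workaround described above 
(the class, field names, and values are invented for the example):

{code}
import org.apache.solr.common.SolrInputDocument;

// Illustration of the workaround: encode the discovered type in the dynamic
// field prefix, e.g. feature_i_* for integers, feature_d_* for doubles,
// feature_t_* for strings.
public class DynamicFieldPrefixExample {
  public static SolrInputDocument toSolrDoc(String id, String name, Object value) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", id);
    if (value instanceof Integer || value instanceof Long) {
      doc.addField("feature_i_" + name, value);
    } else if (value instanceof Double || value instanceof Float) {
      doc.addField("feature_d_" + name, value);
    } else {
      doc.addField("feature_t_" + name, String.valueOf(value));
    }
    return doc;
  }
}
{code}

At query time the caller still has to know (or try) the right prefix, which is 
exactly the pain point raised in the question.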



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (32bit/jdk-9-ea+175) - Build # 10 - Still Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/10/
Java: 32bit/jdk-9-ea+175 -client -XX:+UseSerialGC --illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.test

Error Message:
Could not find collection : movereplicatest_coll

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
movereplicatest_coll
at 
__randomizedtesting.SeedInfo.seed([B20348801F008A26:3A57775AB1FCE7DE]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.MoveReplicaTest.getRandomReplica(MoveReplicaTest.java:185)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 11921 lines...]
   [junit4] Suite: org.apache.solr.cloud.MoveReplicaHDFSTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr

Re: Feature freeze @ 7.0 branch

2017-07-05 Thread Mikhail Khludnev
Is it worth pushing https://issues.apache.org/jira/browse/SOLR-10986, which fixes
a regression in https://issues.apache.org/jira/browse/SOLR-6357, into 7.0?

On Wed, Jul 5, 2017 at 12:48 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Hi Anshum, I'd like to do https://issues.apache.org/
> jira/browse/LUCENE-7899 for 7.0; it's a simple rename, which I think we
> should do on major release.  I'll get a patch up shortly.
>
> Thanks,
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Tue, Jul 4, 2017 at 12:40 PM, Anshum Gupta 
> wrote:
>
>> Sure Ab, this is an important bug fix.
>>
>> -Anshum
>>
>> On Tue, Jul 4, 2017 at 9:35 AM Andrzej Białecki <
>> andrzej.biale...@lucidworks.com> wrote:
>>
>>> SOLR-10878 and SOLR-10879 didn’t make it before the branches were cut,
>>> but I think they should be included in 7x and 7_0 - I’m going to
>>> cherry-pick the commits from master.
>>>
>>> On 3 Jul 2017, at 22:29, Anshum Gupta  wrote:
>>>
>>> Hi,
>>>
>>> I just wanted to call it out and remove any confusion around the fact
>>> that we shouldn’t be committing ‘new features’ to branch_7_0. As far as
>>> whatever was already agreed upon in previous communications, let’s get that
>>> stuff in if it’s ready or almost there. For everything else, kindly check
>>> before you commit to the release branch.
>>>
>>> Let us make sure that the bugs and edge cases are all taken care of, the
>>> deprecations, and cleanups too.
>>>
>>> P.S: Feel free to commit bug fixes without checking, but make sure that
>>> we aren’t hiding features in those commits.
>>>
>>>
>>> -Anshum
>>>
>>>
>>>
>>>
>>>
>


-- 
Sincerely yours
Mikhail Khludnev


[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1346 - Still Unstable

2017-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1346/

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
The Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 45 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([816B775345BBAEC2:93F4889EB47C33A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.

[jira] [Commented] (SOLR-11010) OutOfMemoryError in tests when using HDFS BlockCache

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074684#comment-16074684
 ] 

ASF subversion and git services commented on SOLR-11010:


Commit a915e9b74fa170844994f06c91dc3bc46359e9be in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a915e9b ]

SOLR-11010 Tentative fix for jenkins test failures.
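
For background (not part of the fix itself): the "Direct buffer memory" 
OutOfMemoryError in this issue comes from off-heap allocations via 
ByteBuffer.allocateDirect, whose total is capped by -XX:MaxDirectMemorySize 
(by default roughly the maximum heap size). A tiny, self-contained illustration 
of that limit:

{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustration only: direct buffers are limited by -XX:MaxDirectMemorySize,
// independently of how much Java heap is still free.
public class DirectMemoryDemo {
  public static void main(String[] args) {
    List<ByteBuffer> buffers = new ArrayList<>();
    try {
      while (true) {
        buffers.add(ByteBuffer.allocateDirect(16 * 1024 * 1024)); // 16 MB each
      }
    } catch (OutOfMemoryError e) {
      // Prints "Direct buffer memory", the same error the HDFS BlockCache hits.
      System.out.println(e.getMessage() + " after " + buffers.size() + " buffers");
    }
  }
}
{code}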


> OutOfMemoryError in tests when using HDFS BlockCache
> 
>
> Key: SOLR-11010
> URL: https://issues.apache.org/jira/browse/SOLR-11010
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 7.0, master (8.0)
>Reporter: Andrzej Bialecki 
>
> Spin-off from SOLR-10878: the newly added {{MoveReplicaHDFSTest}} fails on 
> jenkins (but rarely locally) with the following stacktrace:
> {code}
>[junit4]   2> 13619 ERROR (qtp1885193567-48) [n:127.0.0.1:50324_solr 
> c:movereplicatest_coll s:shard2 r:core_node4 
> x:movereplicatest_coll_shard2_replica_n2] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'movereplicatest_coll_shard2_replica_n2': Unable to create core 
> [movereplicatest_coll_shard2_replica_n2] Caused by: Direct buffer memory
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:938)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$164(CoreAdminOperation.java:91)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:534)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>[junit4]   2>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:1

[jira] [Commented] (SOLR-11010) OutOfMemoryError in tests when using HDFS BlockCache

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074686#comment-16074686
 ] 

ASF subversion and git services commented on SOLR-11010:


Commit 819670b4f1abb246569390ae224ea751fed19f9a in lucene-solr's branch 
refs/heads/branch_7_0 from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=819670b ]

SOLR-11010 Tentative fix for jenkins test failures.


> OutOfMemoryError in tests when using HDFS BlockCache
> 
>
> Key: SOLR-11010
> URL: https://issues.apache.org/jira/browse/SOLR-11010
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 7.0, master (8.0)
>Reporter: Andrzej Bialecki 
>
> Spin-off from SOLR-10878: the newly added {{MoveReplicaHDFSTest}} fails on 
> jenkins (but rarely locally) with the following stacktrace:
> {code}
>[junit4]   2> 13619 ERROR (qtp1885193567-48) [n:127.0.0.1:50324_solr 
> c:movereplicatest_coll s:shard2 r:core_node4 
> x:movereplicatest_coll_shard2_replica_n2] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'movereplicatest_coll_shard2_replica_n2': Unable to create core 
> [movereplicatest_coll_shard2_replica_n2] Caused by: Direct buffer memory
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:938)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$164(CoreAdminOperation.java:91)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:534)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>[junit4]   2>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:

[JENKINS] Lucene-Tests-MMAP-master - Build # 379 - Failure

2017-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Tests-MMAP-master/379/

1 tests failed.
FAILED:  
org.apache.lucene.index.TestNumericDocValuesUpdates.testBiasedMixOfRandomUpdates

Error Message:
doc-184 value expected:<9059077129260286029> but was:<-2998781697446825852>

Stack Trace:
java.lang.AssertionError: doc-184 value expected:<9059077129260286029> but 
was:<-2998781697446825852>
at 
__randomizedtesting.SeedInfo.seed([A86AC67E2AD00762:AC4D7EA12735688]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.lucene.index.TestNumericDocValuesUpdates.testBiasedMixOfRandomUpdates(TestNumericDocValuesUpdates.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 290 lines...]
   [junit4] Suite: org.apache.lucene.index.TestNumericDocValuesUpdates
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestNumericDocValuesUpdates 
-Dtests.method=testBiasedMixOfRandomUpdates -Dtests.seed=A86AC67E2AD00762 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.directory=MMapDirectory 
-Dtests.locale=en-IE -Dtests.timezone=Australia/Queensland -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 6.86s J1 | 
TestNumericDocValuesUpdat

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_131) - Build # 5 - Still Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/5/
Java: 64bit/jdk1.8.0_131 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.MBeansHandlerTest.testDiff

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([9EA07FCF933C5DC2:5BB6BB54838A65A2]:0)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.diffObject(SolrInfoMBeanHandler.java:240)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.diffNamedList(SolrInfoMBeanHandler.java:219)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.getDiff(SolrInfoMBeanHandler.java:187)
at 
org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:87)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2473)
at org.apache.solr.util.TestHarness.query(TestHarness.java:337)
at org.apache.solr.util.TestHarness.query(TestHarness.java:319)
at 
org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java

[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074709#comment-16074709
 ] 

Michael McCandless commented on LUCENE-7899:


bq. Maybe add a note to lucene/MIGRATE.txt before pushing to branch_7_0 and 
branch_7x?

+1, will do

> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074744#comment-16074744
 ] 

Uwe Schindler commented on LUCENE-7899:
---

bq. ElasticSearch indexes its own field to record which field names occur in a 
document, so it's able to do "exists" for any field (not just doc values 
fields), but I think doc values fields we can just get "for free".

IMHO, this is still the preferable way of doing this, as you only need one 
field and you can quickly look up all documents with a simple inverted index 
query. I generally recommend the same strategy to Solr users (they just have 
to do it manually). Index size is in most cases not a problem, as the term 
index is small and the posting list is highly compressed!
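
A minimal sketch of the strategy Uwe describes (the field_names field and the 
price field are invented for the example): index one extra string field per 
document listing which fields it has, and "exists" becomes an ordinary term 
query.

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Sketch of the "field names" approach: record populated field names in a
// catch-all indexed field so that "exists(price)" is a cheap term lookup.
public class FieldExistsSketch {
  static Document docWithPrice(long price) {
    Document doc = new Document();
    doc.add(new NumericDocValuesField("price", price));
    doc.add(new StringField("field_names", "price", Field.Store.NO));
    return doc;
  }

  static Query priceExists() {
    return new TermQuery(new Term("field_names", "price"));
  }
}
{code}

The renamed DocValuesFieldExistsQuery covers the doc-values case without the 
extra field; the indexed field_names approach additionally covers fields that 
carry no doc values.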

> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10942) qt param is not working partially in solr5.5

2017-07-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074747#comment-16074747
 ] 

Jan Høydahl commented on SOLR-10942:


You can still have qt work, but you will then need to rename your handler from 
{{/country}} to {{country}} and use {{qt=country}}, since slash-prefixed 
handlers are reserved for explicit URLs only. See SOLR-3161 for background on 
the restriction. Also, with the 7.0 release coming soon, {{handleSelect}} will 
default to false, is gone from the examples, and will probably vanish entirely 
in 8.0; see SOLR-6807...
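For illustration, a small SolrJ sketch of the two request styles, assuming the handler has been renamed to {{country}} as described above (the core name and URL are taken from the report below):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QtSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/core2").build()) {

      // Dispatch through /select, with qt naming the (renamed, slash-less) handler.
      SolrQuery viaQt = new SolrQuery("states");
      viaQt.set("qt", "country");
      QueryResponse byQt = client.query(viaQt);

      // Or keep the handler as /country and address it by its explicit path.
      SolrQuery viaPath = new SolrQuery("states");
      viaPath.setRequestHandler("/country");
      QueryResponse byPath = client.query(viaPath);

      System.out.println(byQt.getResults().getNumFound() + " / "
          + byPath.getResults().getNumFound());
    }
  }
}
{code}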

> qt param is not working partially in solr5.5
> 
>
> Key: SOLR-10942
> URL: https://issues.apache.org/jira/browse/SOLR-10942
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Pavithra Dhakshinamurthy
>
> qt param is working fine if a field name is provided with the request, but it 
> is not working if just the search term is provided. 
> For example: 
> http://localhost:8983/solr/core2/select?q=states&wt=xml&indent=true&qt=/country
> is not working, whereas 
> http://localhost:8983/solr/core2/select?q=countryName:states&wt=xml&indent=true&qt=/country
> is working. 
> Has anybody else faced this issue?
> This is how we have defined the request handler:
>  
>   
>  explicit
>  10
>  edismax
>  countryName^100 countryCode^60 addrcountry^20 
> mailaddresscountry^20
>   
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-9215) QT parameter doesn't appear to function anymore

2017-07-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-9215.
-
Resolution: Not A Problem

Closing as "not a problem". Please open another JIRA for the SolrParams 
simplification idea.

> QT parameter doesn't appear to function anymore
> ---
>
> Key: SOLR-9215
> URL: https://issues.apache.org/jira/browse/SOLR-9215
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0, 7.0
>Reporter: Markus Jelsma
> Fix For: 7.0, 6.2
>
>
> The qt parameter doesn't seem to work anymore. A call directly to the /terms 
> handler returns actual terms, as expected. Using the select handler but with 
> qt=terms returns noting.
> http://localhost:8983/solr/logs/select?qt=terms&terms=true&terms.fl=compound_digest&terms.limit=100&terms.sort=index
> {code}
> 
> 
> 
>   0
>   0
>   
> terms
> true
> compound_digest
> 100
> index
>   
> 
> 
> 
> 
> {code}
> A peculiar detail, my unit tests that rely on the qt parameter are not 
> affected.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10986) TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' @ response/numFound

2017-07-05 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074773#comment-16074773
 ] 

Andrey Kudryavtsev commented on SOLR-10986:
---

Same trick for OtherCoreJoinQuery?

> TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' 
> @ response/numFound
> -
>
> Key: SOLR-10986
> URL: https://issues.apache.org/jira/browse/SOLR-10986
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, master (8.0), 7.1
>Reporter: Steve Rowe
>Assignee: Mikhail Khludnev
> Attachments: SOLR-10986.patch
>
>
> Reproduces for me on branch_6x but not on master, from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3861/] - {{git 
> bisect}} blames commit {{c215c78}} on SOLR-9217:
> {noformat}
> Checking out Revision 9947a811e83cc0f848f9ddaa37a4137f19efff1a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestScoreJoinQPScore -Dtests.method=testDeleteByScoreJoinQuery 
> -Dtests.seed=6DE98178CA5DE220 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=el-GR -Dtests.timezone=Asia/Vientiane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.02s J1 | 
> TestScoreJoinQPScore.testDeleteByScoreJoinQuery <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '0'!='1' 
> @ response/numFound
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([6DE98178CA5DE220:7A8B1D8F401EA807]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:989)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:936)
>[junit4]>  at 
> org.apache.solr.search.join.TestScoreJoinQPScore.testDeleteByScoreJoinQuery(TestScoreJoinQPScore.java:125)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {t_description=BlockTreeOrds(blocksize=128), 
> title_stemmed=PostingsFormat(name=Memory doPackFST= false), 
> price_s=BlockTreeOrds(blocksize=128), name=BlockTreeOrds(blocksize=128), 
> id=BlockTreeOrds(blocksize=128), 
> text=PostingsFormat(name=LuceneVarGapFixedInterval), 
> movieId_s=BlockTreeOrds(blocksize=128), title=PostingsFormat(name=Memory 
> doPackFST= false), title_lettertok=BlockTreeOrds(blocksize=128), 
> productId_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))},
>  docValues:{}, maxPointsInLeafNode=166, maxMBSortInHeap=7.4808509338680995, 
> sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=el-GR, 
> timezone=Asia/Vientiane
>[junit4]   2> NOTE: Linux 4.10.0-21-generic i386/Oracle Corporation 
> 1.8.0_131 (32-bit)/cpus=8,threads=1,free=159538432,total=510918656
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_131) - Build # 20065 - Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20065/
Java: 64bit/jdk1.8.0_131 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestStressInPlaceUpdates.stressTest

Error Message:
RTG: 14=[version=1572089370964393984, intValue=2,longValue=22] <==VS==> 
SolrDocument{id=14, title_s=title14, val1_i_dvo=2, val2_l_dvo=20, 
_version_=1572089370957053952, inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0} expected:<22> but 
was:<20>

Stack Trace:
java.lang.AssertionError: RTG: 14=[version=1572089370964393984, 
intValue=2,longValue=22] <==VS==> SolrDocument{id=14, title_s=title14, 
val1_i_dvo=2, val2_l_dvo=20, _version_=1572089370957053952, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0} expected:<22> but 
was:<20>
at 
__randomizedtesting.SeedInfo.seed([2943FA0203A63F2:69F2E00D1EEFB708]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.TestStressInPlaceUpdates.stressTest(TestStressInPlaceUpdates.java:453)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1

[jira] [Commented] (SOLR-10986) TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' @ response/numFound

2017-07-05 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074795#comment-16074795
 ] 

Mikhail Khludnev commented on SOLR-10986:
-

It's either already there or it's not vulnerable to it. 

> TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' 
> @ response/numFound
> -
>
> Key: SOLR-10986
> URL: https://issues.apache.org/jira/browse/SOLR-10986
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, master (8.0), 7.1
>Reporter: Steve Rowe
>Assignee: Mikhail Khludnev
> Attachments: SOLR-10986.patch
>
>
> Reproduces for me on branch_6x but not on master, from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3861/] - {{git 
> bisect}} blames commit {{c215c78}} on SOLR-9217:
> {noformat}
> Checking out Revision 9947a811e83cc0f848f9ddaa37a4137f19efff1a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestScoreJoinQPScore -Dtests.method=testDeleteByScoreJoinQuery 
> -Dtests.seed=6DE98178CA5DE220 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=el-GR -Dtests.timezone=Asia/Vientiane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.02s J1 | 
> TestScoreJoinQPScore.testDeleteByScoreJoinQuery <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '0'!='1' 
> @ response/numFound
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([6DE98178CA5DE220:7A8B1D8F401EA807]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:989)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:936)
>[junit4]>  at 
> org.apache.solr.search.join.TestScoreJoinQPScore.testDeleteByScoreJoinQuery(TestScoreJoinQPScore.java:125)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {t_description=BlockTreeOrds(blocksize=128), 
> title_stemmed=PostingsFormat(name=Memory doPackFST= false), 
> price_s=BlockTreeOrds(blocksize=128), name=BlockTreeOrds(blocksize=128), 
> id=BlockTreeOrds(blocksize=128), 
> text=PostingsFormat(name=LuceneVarGapFixedInterval), 
> movieId_s=BlockTreeOrds(blocksize=128), title=PostingsFormat(name=Memory 
> doPackFST= false), title_lettertok=BlockTreeOrds(blocksize=128), 
> productId_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))},
>  docValues:{}, maxPointsInLeafNode=166, maxMBSortInHeap=7.4808509338680995, 
> sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=el-GR, 
> timezone=Asia/Vientiane
>[junit4]   2> NOTE: Linux 4.10.0-21-generic i386/Oracle Corporation 
> 1.8.0_131 (32-bit)/cpus=8,threads=1,free=159538432,total=510918656
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #215: SOLR-10123: Fix to better support numeric PointField...

2017-07-05 Thread HoustonPutman
Github user HoustonPutman commented on the issue:

https://github.com/apache/lucene-solr/pull/215
  
If you are asking where in the component they are handled: it's in the 
ExpressionFactory.createField() method, where Trie and Point fields are 
handled separately.

If you are asking about the tests, they work with both Trie and Point 
fields.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7897) RangeQuery optimization in IndexOrDocValuesQuery

2017-07-05 Thread Murali Krishna P (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074818#comment-16074818
 ] 

Murali Krishna P commented on LUCENE-7897:
--

bq. Would it be more intuitive if IndexOrDocValuesQuery returned 
indexScorerSupplier.cost() directly?
Definitely, that is more sensible. It is very unlikely that the doc-values cost 
will be lower than the points estimate. If the doc-values query returned a 
smaller cost than the docFreq (from the term scorer), points would have been 
used anyway, as the code currently stands.

bq. arbitrary penalty for doc-values queries
Theoretically, yes. The problem is that we currently ignore the doc-values cost 
and compare the cost of the original scorer with that of the points query. So 
if the original term has 1M matches and the points estimate is even 1M+1, we 
end up with doc values. That is why I was suggesting reducing the cost of 
points. Maybe we could refactor this if we can pass the "#matchingdocs or 
minScore" down to the place where we decide on the scorer.
{noformat}
  public Scorer get(boolean randomAccess) throws IOException {
    return (randomAccess ? dvScorerSupplier : indexScorerSupplier).get(randomAccess);
  }
{noformat}
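For reference, a minimal sketch of how such a query is typically constructed, assuming the field is indexed both as a point and as sorted numeric doc values (field name is illustrative; {{SortedNumericDocValuesField.newRangeQuery}} supplies the doc-values side):

{code:java}
import java.io.IOException;

import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public class TimestampRangeSketch {

  static TopDocs inRange(IndexSearcher searcher, long from, long to) throws IOException {
    // The points query leads; the doc-values query is used when the range
    // only needs to verify candidates supplied by a cheaper clause.
    Query points = LongPoint.newRangeQuery("timestamp", from, to);
    Query docValues = SortedNumericDocValuesField.newRangeQuery("timestamp", from, to);
    return searcher.search(new IndexOrDocValuesQuery(points, docValues), 10);
  }
}
{code}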


> RangeQuery optimization in IndexOrDocValuesQuery 
> -
>
> Key: LUCENE-7897
> URL: https://issues.apache.org/jira/browse/LUCENE-7897
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: trunk, 7.0
>Reporter: Murali Krishna P
>
> For range queries, Lucene uses either Points or Docvalues based on cost 
> estimation 
> (https://lucene.apache.org/core/6_5_0/core/org/apache/lucene/search/IndexOrDocValuesQuery.html).
>  Scorer is chosen based on the minCost here: 
> https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Boolean2ScorerSupplier.java#L16
> However, the cost calculation for TermQuery and IndexOrDocValuesQuery seems to 
> carry the same weight. Essentially, the cost depends upon the docFreq in the 
> term dictionary, the number of points visited, and the number of doc values. 
> In a situation where the docFreq is not very restrictive, this means a lot of 
> doc-values lookups, and using points would have been better.
> The following query, with 1M matches, takes 60ms with doc values but only 27ms 
> with points. If I change the query to "message:*", which matches all docs, it 
> chooses points (since the cost is the same), but with message:xyz it chooses 
> doc values even though the doc frequency is 1 million, which results in many 
> doc-values fetches. Would it make sense to make the doc-values query cost 
> higher, or to use points if the docFreq is too high for the term query (i.e. 
> find an optimal threshold where the points cost < doc-values cost)?
> {noformat}
> {
>   "query": {
> "bool": {
>   "must": [
> {
>   "query_string": {
> "query": "message:xyz"
>   }
> },
> {
>   "range": {
> "@timestamp": {
>   "gte": 149865240,
>   "lte": 149890500,
>   "format": "epoch_millis"
> }
>   }
> }
>   ]
> }
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10123) Analytics Component 2.0

2017-07-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074819#comment-16074819
 ] 

ASF GitHub Bot commented on SOLR-10123:
---

Github user HoustonPutman commented on the issue:

https://github.com/apache/lucene-solr/pull/215
  
If you are asking where in the component they are handled: it's in the 
ExpressionFactory.createField() method, where Trie and Point fields are 
handled separately.

If you are asking about the tests, they work with both Trie and Point 
fields.


> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component, with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7896) Upgrade to RandomizedRunner 2.5.2

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074827#comment-16074827
 ] 

ASF subversion and git services commented on LUCENE-7896:
-

Commit ff7ccdeebb9071b23bb43189d92e492985077b63 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ff7ccde ]

LUCENE-7896: Upgrade to randomizedRunner-2.5.2


> Upgrade to RandomizedRunner 2.5.2
> -
>
> Key: LUCENE-7896
> URL: https://issues.apache.org/jira/browse/LUCENE-7896
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Minor
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7896.patch
>
>
> RR 2.5.2 fixed a nasty error message that gets printed while running tests, 
> which is pretty annoying if your environment hits it. Let's 
> upgrade to 2.5.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 6 - Still Unstable

2017-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/6/

3 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:37670/gop

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:37670/gop
at 
__randomizedtesting.SeedInfo.seed([E1B383717E4D3D1A:69E7BCABD0B150E2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:637)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:252)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1667)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1694)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test(ChaosMonkeyNothingIsSafeWithPullReplicasTest.java:297)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.ran

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 5 - Still Unstable!

2017-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/5/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([5A71203D2B7FACDF:D2251FE78583C127]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:907)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAda

[jira] [Commented] (SOLR-10494) Switch Solr's Default Response Type from XML to JSON

2017-07-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074859#comment-16074859
 ] 

Jan Høydahl commented on SOLR-10494:


I tried to debug the TestHierarchicalDocBuilder.testThreeLevelHierarchy test 
failure, and I see that the test data is different, but that may be 
intentional, to randomize the hierarchy. The patch is pretty large, so I guess 
several things could affect this. At least the failures are 100% reproducible, 
so it should be possible to get to the bottom of it.

> Switch Solr's Default Response Type from XML to JSON
> 
>
> Key: SOLR-10494
> URL: https://issues.apache.org/jira/browse/SOLR-10494
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Trey Grainger
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: SOLR-10494, SOLR-10494, SOLR-10494.patch, 
> SOLR-10494-withdocs.patch, SOLR-10494-withdocs.patch
>
>
> Solr's default response format is still XML, despite the fact that Solr has 
> supported the JSON response format for over a decade, developer mindshare has 
> clearly shifted toward JSON over the years, and most modern/competing systems 
> also use JSON format now by default.
> In fact, Solr's admin UI even explicitly adds wt=json to the request (by 
> default in the UI) to override the default of wt=xml, so Solr's Admin UI 
> effectively has a different default than the API.
> We have now introduced things like the JSON faceting API, and the new more 
> modern /V2 apis assume JSON for the areas of Solr they cover, so clearly 
> we're moving in the direction of JSON anyway.
> I'd like propose that we switch the default response writer to JSON (wt=json) 
> instead of XML for Solr 7.0, as this seems to me like the right direction and 
> a good time to make this change with the next major version.
> Based upon feedback from the Lucene Dev's mailing list, we want to:
> 1) Change the default response writer type to "wt=json" and also change to 
> "indent=on" by default
> 2) Make no changes on the update handler side; it already works as desired 
> (it returns the response in the same content-type as the request unless the 
> "wt" is passed in explicitly).
> 3) Keep the /query request handler around since people have already used it 
> for years to do JSON queries
> 4) Add a commented-out "wt=xml" to the solrconfig.xml as a reminder for folks 
> on how to change (back) the response format.
> The default format change, plus the addition of "indent=on" are back compat 
> changes, so we need to make sure we doc those clearly in the CHANGES.txt. 
> There will also need to be significant adjustments to the Solr Ref Guide, 
> Tutorial, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7871) Platform independent config file instead of solr.in.sh and solr.in.cmd

2017-07-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-7871:
--
Fix Version/s: (was: 7.0)

> Platform independent config file instead of solr.in.sh and solr.in.cmd
> --
>
> Key: SOLR-7871
> URL: https://issues.apache.org/jira/browse/SOLR-7871
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: bin/solr
> Attachments: SOLR-7871.patch, SOLR-7871.patch
>
>
> Spinoff from SOLR-7043
> The config files {{solr.in.sh}} and {{solr.in.cmd}} are currently executable 
> scripts, but all they do is set environment variables for the start 
> scripts in the format {{key=value}}.
> Suggest instead having one central, platform-independent config file, e.g. 
> {{bin/solr.yml}} or {{bin/solrstart.properties}}, which is parsed by 
> {{SolrCLI.java}}.
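A minimal sketch of the parsing side, assuming a plain {{key=value}} properties file; the file name is just one of the candidates mentioned above, and this is not what {{SolrCLI.java}} does today:

{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

public class SolrStartConfigSketch {

  /** Load key=value start settings from a platform-independent config file. */
  static Properties loadStartSettings(Path configFile) throws Exception {
    Properties settings = new Properties();
    if (Files.exists(configFile)) {
      try (InputStream in = Files.newInputStream(configFile)) {
        settings.load(in);
      }
    }
    return settings;
  }

  public static void main(String[] args) throws Exception {
    // e.g. SOLR_PORT=8983, SOLR_HEAP=1g -- the same keys the shell scripts export today
    Properties p = loadStartSettings(Paths.get("bin/solrstart.properties"));
    System.out.println(p.getProperty("SOLR_PORT", "8983"));
  }
}
{code}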



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10665) POC for a PF4J based plugin system

2017-07-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-10665:
---
Fix Version/s: (was: 7.0)
   master (8.0)

> POC for a PF4J based plugin system
> --
>
> Key: SOLR-10665
> URL: https://issues.apache.org/jira/browse/SOLR-10665
> Project: Solr
>  Issue Type: Sub-task
>  Components: Plugin system
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: pf4j, plugins
> Fix For: master (8.0)
>
>
> In SOLR-5103 we have been discussing improvements to Solr's plugin system, 
> with the ability to bundle a plugin as a zip and easily install it from the 
> shell or the Admin UI.
> This first sub-task aims to create a working POC to demonstrate how PF4J can 
> be used to bring a very simple plugin packaging and installation system to 
> Solr with a minimum of effort. Code speaks louder than words :)
> The POC effort will be quite a large patch and will cut some corners to get 
> the feature into the hands of people who can test and evaluate it. If there 
> is consensus to add this to Solr, there will be other sub-tasks to split the 
> elephant into committable chunks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9640) Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-07-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9640:
--
Fix Version/s: (was: 7.0)
   7.1

> Support PKI authentication and SSL in standalone-mode master/slave auth with 
> local security.json
> 
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.7, 7.1
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch, SOLR-9640.patch
>
>
> While working on SOLR-9481 I managed to secure standalone Solr on a 
> single-node server. However, when adding 
> {{&shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get a 401 error. This issue will make PKI auth work for standalone mode, which 
> should automatically make both sharding and master/slave index replication 
> work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10999) Support "Accept-Encoding" header to enable response gzip compression

2017-07-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-10999:
---
Fix Version/s: (was: 7.0)
   7.1

> Support "Accept-Encoding" header to enable response gzip compression
> 
>
> Key: SOLR-10999
> URL: https://issues.apache.org/jira/browse/SOLR-10999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Reporter: Jan Høydahl
>  Labels: compression, http-headers, standards
> Fix For: 7.1
>
> Attachments: SOLR-10999.patch
>
>
> Spinoff from 
> [email|https://lists.apache.org/thread.html/b4ec90b01bc075a98947e77b0a683308f760221dccb11be5819d1601@%3Cdev.lucene.apache.org%3E]
> Accept-Encoding:
> Advertises which content encoding, usually a compression algorithm, the 
> client is able to understand
> https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding
> Could enable compression of large search results. SOLR-856 suggests that this 
> is implemented,
> but it does not work. Seems it is only implemented for replication. I’d 
> expect this to be useful for
> large /export or /stream requests. Example:
> Accept-Encoding: gzip
> Could be configured with the [Jetty Gzip 
> Handler|http://www.eclipse.org/jetty/documentation/9.4.x/gzip-filter.html]
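The client side of this is plain HTTP; a minimal sketch, assuming the server actually honours the header once something like the Jetty GzipHandler is wired in (URL and params are illustrative):

{code:java}
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class GzipExportSketch {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8983/solr/collection1/export?q=*:*&sort=id+asc&fl=id");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // Advertise that a gzip-compressed response body is acceptable.
    conn.setRequestProperty("Accept-Encoding", "gzip");

    InputStream body = conn.getInputStream();
    if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
      body = new GZIPInputStream(body); // transparently decompress
    }
    try (BufferedReader reader =
        new BufferedReader(new InputStreamReader(body, StandardCharsets.UTF_8))) {
      System.out.println(reader.readLine());
    }
  }
}
{code}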



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9526) data_driven configs defaults to "strings" for unmapped fields, makes most fields containing "textual content" unsearchable, breaks tutorial examples

2017-07-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074862#comment-16074862
 ] 

Jan Høydahl commented on SOLR-9526:
---

Any luck [~steve_rowe]? I'd like this to be in 7.0 from the get-go, to have 
a better OOTB experience with field guessing now that the _default schema will 
be used even more.

> data_driven configs defaults to "strings" for unmapped fields, makes most 
> fields containing "textual content" unsearchable, breaks tutorial examples
> 
>
> Key: SOLR-9526
> URL: https://issues.apache.org/jira/browse/SOLR-9526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>  Labels: dynamic-schema
> Fix For: 7.0
>
> Attachments: SOLR-9526.patch, SOLR-9526.patch, SOLR-9526.patch, 
> SOLR-9526.patch
>
>
> James Pritchett pointed out on the solr-user list that this sample query from 
> the quick start tutorial matched no docs (even though the tutorial text says 
> "The above request returns only one document")...
> http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=name:foundation
> The root problem seems to be that the add-unknown-fields-to-the-schema chain 
> in data_driven_schema_configs is configured with...
> {code}
> strings
> {code}
> ...and the "strings" type uses StrField and is not tokenized.
> 
> Original thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201609.mbox/%3ccac-n2zrpsspfnk43agecspchc5b-0ff25xlfnzogyuvyg2d...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Feature freeze @ 7.0 branch

2017-07-05 Thread Simon Willnauer
I'd like to push https://issues.apache.org/jira/browse/LUCENE-7896 to
branch_7_0 (upgrade to RandomizedRunner) unless anybody objects?!

simon

On Wed, Jul 5, 2017 at 2:09 PM, Mikhail Khludnev  wrote:
> Is it worth pushing https://issues.apache.org/jira/browse/SOLR-10986, which fixes
> a regression in https://issues.apache.org/jira/browse/SOLR-6357, into 7.0?
>
> On Wed, Jul 5, 2017 at 12:48 PM, Michael McCandless
>  wrote:
>>
>> Hi Anshum, I'd like to do
>> https://issues.apache.org/jira/browse/LUCENE-7899 for 7.0; it's a simple
>> rename, which I think we should do on major release.  I'll get a patch up
>> shortly.
>>
>> Thanks,
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Tue, Jul 4, 2017 at 12:40 PM, Anshum Gupta 
>> wrote:
>>>
>>> Sure Ab, this is an important bug fix.
>>>
>>> -Anshum
>>>
>>> On Tue, Jul 4, 2017 at 9:35 AM Andrzej Białecki
>>>  wrote:

 SOLR-10878 and SOLR-10879 didn’t make it before the branches were cut,
 but I think they should be included in 7x and 7_0 - I’m going to 
 cherry-pick
 the commits from master.

 On 3 Jul 2017, at 22:29, Anshum Gupta  wrote:

 Hi,

 I just wanted to call it out and remove any confusion around the fact
 that we shouldn’t be committing ‘new features’ to branch_7_0. As for
 whatever was already agreed upon in previous communications, let’s get that
 stuff in if it’s ready or almost there. For everything else, kindly check
 before you commit to the release branch.

 Let us make sure that the bugs and edge cases are all taken care of, the
 deprecations, and cleanups too.

 P.S: Feel free to commit bug fixes without checking, but make sure that
 we aren’t hiding features in those commits.


 -Anshum




>>
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7896) Upgrade to RandomizedRunner 2.5.2

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074868#comment-16074868
 ] 

ASF subversion and git services commented on LUCENE-7896:
-

Commit 5981b393e34ca6075019bbb0c738dcbc8ffc4272 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5981b39 ]

LUCENE-7896: Upgrade to randomizedRunner-2.5.2


> Upgrade to RandomizedRunner 2.5.2
> -
>
> Key: LUCENE-7896
> URL: https://issues.apache.org/jira/browse/LUCENE-7896
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Minor
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7896.patch
>
>
> RR 2.5.2 fixed a nasty error message that gets printed while running tests, 
> which is pretty annoying if your environment hits it. Let's 
> upgrade to 2.5.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1690) JSONKeyValueTokenizerFactory -- JSON Tokenizer

2017-07-05 Thread Mohamed (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074869#comment-16074869
 ] 

Mohamed  commented on SOLR-1690:


+1 for this analyzer

> JSONKeyValueTokenizerFactory -- JSON Tokenizer
> --
>
> Key: SOLR-1690
> URL: https://issues.apache.org/jira/browse/SOLR-1690
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Reporter: Ryan McKinley
>Priority: Minor
> Attachments: noggit-1.0-A1.jar, 
> SOLR-1690-JSONKeyValueTokenizerFactory.patch
>
>
> Sometimes it is nice to group structured data into a single field.
> This (rough) patch, takes JSON input and indexes tokens based on the key 
> values pairs in the json.
> {code:xml|title=schema.xml}
> 
>  omitNorms="true">
>   
>  hierarchicalKey="false"/>
> 
> 
>   
>   
> 
> 
> 
>   
> 
> {code}
> Given text:
> {code}
>  { "hello": "world", "rank":5 }
> {code}
> indexed as two tokens:
> || term position |1 | 2 |
> || term text |hello:world | rank:5 |
> || term type |word |  word |
> || source start,end | 12,17   | 27,28 |
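Not the patch's implementation (which builds on noggit), but a minimal Jackson-based sketch of the key/value flattening idea, ignoring token positions and offsets:

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonKeyValueSketch {

  /** Flatten a flat JSON object into "key:value" terms. */
  static List<String> flatten(String json) throws Exception {
    JsonNode root = new ObjectMapper().readTree(json);
    List<String> tokens = new ArrayList<>();
    Iterator<Map.Entry<String, JsonNode>> fields = root.fields();
    while (fields.hasNext()) {
      Map.Entry<String, JsonNode> field = fields.next();
      tokens.add(field.getKey() + ":" + field.getValue().asText());
    }
    return tokens;
  }

  public static void main(String[] args) throws Exception {
    // prints [hello:world, rank:5]
    System.out.println(flatten("{ \"hello\": \"world\", \"rank\":5 }"));
  }
}
{code}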



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Feature freeze @ 7.0 branch

2017-07-05 Thread Jan Høydahl
I was hoping to get https://issues.apache.org/jira/browse/SOLR-9526 in 7.0 to 
go along with the improved usability of the data-driven schema. Still one 
NOCOMMIT to solve. WDYT?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 5 Jul 2017, at 14:09, Mikhail Khludnev wrote:
> 
> Is it worth pushing https://issues.apache.org/jira/browse/SOLR-10986, 
> which fixes a regression in 
> https://issues.apache.org/jira/browse/SOLR-6357, 
> into 7.0?  
> 
> On Wed, Jul 5, 2017 at 12:48 PM, Michael McCandless 
> mailto:luc...@mikemccandless.com>> wrote:
> Hi Anshum, I'd like to do https://issues.apache.org/jira/browse/LUCENE-7899 
>  for 7.0; it's a simple 
> rename, which I think we should do on major release.  I'll get a patch up 
> shortly.
> 
> Thanks,
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com 
> 
> On Tue, Jul 4, 2017 at 12:40 PM, Anshum Gupta  > wrote:
> Sure Ab, this is an important bug fix.
> 
> -Anshum
> 
> On Tue, Jul 4, 2017 at 9:35 AM Andrzej Białecki 
> mailto:andrzej.biale...@lucidworks.com>> 
> wrote:
> SOLR-10878 and SOLR-10879 didn’t make it before the branches were cut, but I 
> think they should be included in 7x and 7_0 - I’m going to cherry-pick the 
> commits from master.
> 
>> On 3 Jul 2017, at 22:29, Anshum Gupta > > wrote:
>> 
>> Hi,
>> 
>> I just wanted to call it out and remove any confusion around the fact that 
>> we shouldn’t be committing ‘new features’ to branch_7_0. As for whatever 
>> was already agreed upon in previous communications, let’s get that stuff in 
>> if it’s ready or almost there. For everything else, kindly check before you 
>> commit to the release branch.
>> 
>> Let us make sure that the bugs and edge cases are all taken care of, the 
>> deprecations, and cleanups too.
>> 
>> P.S: Feel free to commit bug fixes without checking, but make sure that we 
>> aren’t hiding features in those commits.
>> 
>> 
>> -Anshum
>> 
>> 
>> 
> 
> 
> 
> 
> 
> -- 
> Sincerely yours
> Mikhail Khludnev



[jira] [Comment Edited] (SOLR-1690) JSONKeyValueTokenizerFactory -- JSON Tokenizer

2017-07-05 Thread Mohamed (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074869#comment-16074869
 ] 

Mohamed  edited comment on SOLR-1690 at 7/5/17 2:45 PM:


+1 for this tokenizer index


was (Author: med):
+1 for this analyzer

> JSONKeyValueTokenizerFactory -- JSON Tokenizer
> --
>
> Key: SOLR-1690
> URL: https://issues.apache.org/jira/browse/SOLR-1690
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Reporter: Ryan McKinley
>Priority: Minor
> Attachments: noggit-1.0-A1.jar, 
> SOLR-1690-JSONKeyValueTokenizerFactory.patch
>
>
> Sometimes it is nice to group structured data into a single field.
> This (rough) patch, takes JSON input and indexes tokens based on the key 
> values pairs in the json.
> {code:xml|title=schema.xml}
> 
>  omitNorms="true">
>   
>  hierarchicalKey="false"/>
> 
> 
>   
>   
> 
> 
> 
>   
> 
> {code}
> Given text:
> {code}
>  { "hello": "world", "rank":5 }
> {code}
> indexed as two tokens:
> || term position |1 | 2 |
> || term text |hello:world | rank:5 |
> || term type |word |  word |
> || source start,end | 12,17   | 27,28 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Feature freeze @ 7.0 branch

2017-07-05 Thread Anshum Gupta
+1 Simon!

On Wed, Jul 5, 2017 at 7:45 AM Jan Høydahl  wrote:

> I was hoping to get https://issues.apache.org/jira/browse/SOLR-9526 in
> 7.0 to go along with the improved usability of data driven schema. Still
> one NOCOMMIT to solve. WDYT?
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 5 Jul 2017, at 14:09, Mikhail Khludnev wrote:
>
> Is it worth pushing https://issues.apache.org/jira/browse/SOLR-10986,
> which fixes a regression in https://issues.apache.org/jira/browse/SOLR-6357,
> into 7.0?
>
> On Wed, Jul 5, 2017 at 12:48 PM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> Hi Anshum, I'd like to do
>> https://issues.apache.org/jira/browse/LUCENE-7899 for 7.0; it's a simple
>> rename, which I think we should do on major release.  I'll get a patch up
>> shortly.
>>
>> Thanks,
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Tue, Jul 4, 2017 at 12:40 PM, Anshum Gupta 
>> wrote:
>>
>>> Sure Ab, this is an important bug fix.
>>>
>>> -Anshum
>>>
>>> On Tue, Jul 4, 2017 at 9:35 AM Andrzej Białecki <
>>> andrzej.biale...@lucidworks.com> wrote:
>>>
 SOLR-10878 and SOLR-10879 didn’t make it before the branches were cut,
 but I think they should be included in 7x and 7_0 - I’m going to
 cherry-pick the commits from master.

 On 3 Jul 2017, at 22:29, Anshum Gupta  wrote:

 Hi,

 I just wanted to call it out and remove any confusion around the fact
 that we shouldn’t be committing ‘new features’ to branch_7_0. As for
 whatever was already agreed upon in previous communications, let’s get that
 stuff in if it’s ready or almost there. For everything else, kindly check
 before you commit to the release branch.

 Let us make sure that the bugs and edge cases are all taken care of,
 the deprecations, and cleanups too.

 P.S: Feel free to commit bug fixes without checking, but make sure that
 we aren’t hiding features in those commits.


 -Anshum





>>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
>
>
>


[jira] [Commented] (SOLR-9526) data_driven configs defaults to "strings" for unmapped fields, makes most fields containing "textual content" unsearchable, breaks tutorial examples

2017-07-05 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074877#comment-16074877
 ] 

Anshum Gupta commented on SOLR-9526:


I think 

> data_driven configs defaults to "strings" for unmapped fields, makes most 
> fields containing "textual content" unsearchable, breaks tutorial examples
> 
>
> Key: SOLR-9526
> URL: https://issues.apache.org/jira/browse/SOLR-9526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>  Labels: dynamic-schema
> Fix For: 7.0
>
> Attachments: SOLR-9526.patch, SOLR-9526.patch, SOLR-9526.patch, 
> SOLR-9526.patch
>
>
> James Pritchett pointed out on the solr-user list that this sample query from 
> the quick start tutorial matched no docs (even though the tutorial text says 
> "The above request returns only one document")...
> http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=name:foundation
> The root problem seems to be that the add-unknown-fields-to-the-schema chain 
> in data_driven_schema_configs is configured with...
> {code}
> strings
> {code}
> ...and the "strings" type uses StrField and is not tokenized.
> 
> Original thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201609.mbox/%3ccac-n2zrpsspfnk43agecspchc5b-0ff25xlfnzogyuvyg2d...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074878#comment-16074878
 ] 

ASF subversion and git services commented on LUCENE-7899:
-

Commit 6abff51edeab08f272774683b9cbec3c517587a7 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6abff51 ]

LUCENE-7899: rename FieldValueQuery to DocValuesFieldExistsQuery


> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today to efficiently test whether a doc values 
> field exists (has any value) for each document in the index?
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but I think doc values fields we can just get "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9526) data_driven configs defaults to "strings" for unmapped fields, makes most fields containing "textual content" unsearchable, breaks tutorial examples

2017-07-05 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074877#comment-16074877
 ] 

Anshum Gupta edited comment on SOLR-9526 at 7/5/17 2:51 PM:


I think we can get this into 7.0. I'm fine with this as it's an improvement 
that fixes things.


was (Author: anshumg):
I think 

> data_driven configs defaults to "strings" for unmapped fields, makes most 
> fields containing "textual content" unsearchable, breaks tutorial examples
> 
>
> Key: SOLR-9526
> URL: https://issues.apache.org/jira/browse/SOLR-9526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>  Labels: dynamic-schema
> Fix For: 7.0
>
> Attachments: SOLR-9526.patch, SOLR-9526.patch, SOLR-9526.patch, 
> SOLR-9526.patch
>
>
> James Pritchett pointed out on the solr-user list that this sample query from 
> the quick start tutorial matched no docs (even though the tutorial text says 
> "The above request returns only one document")...
> http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=name:foundation
> The root problem seems to be that the add-unknown-fields-to-the-schema chain 
> in data_driven_schema_configs is configured with...
> {code}
> strings
> {code}
> ...and the "strings" type uses StrField and is not tokenized.
> 
> Original thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201609.mbox/%3ccac-n2zrpsspfnk43agecspchc5b-0ff25xlfnzogyuvyg2d...@mail.gmail.com%3E
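
To make the failure mode concrete, the following is a small illustrative 
sketch at the Lucene level, not Solr's actual code; the field names and sample 
value are made up. A StringField behaves like Solr's StrField in that the 
whole value is indexed as a single untokenized term, so a single-word term 
query cannot match a multi-word value, whereas a tokenized TextField can.

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;

public class StringsVsTextDemo {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Document doc = new Document();
      String value = "The Apache Software Foundation";
      // StringField ~ StrField: the whole value becomes a single, untokenized term.
      doc.add(new StringField("name_str", value, Field.Store.NO));
      // TextField ~ a tokenized text type: the value is split into lowercased terms.
      doc.add(new TextField("name_txt", value, Field.Store.NO));
      writer.addDocument(doc);
    }
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      int strHits = searcher.count(new TermQuery(new Term("name_str", "foundation")));
      int txtHits = searcher.count(new TermQuery(new Term("name_txt", "foundation")));
      System.out.println("untokenized string field hits: " + strHits); // 0
      System.out.println("tokenized text field hits:     " + txtHits); // 1
    }
  }
}
{code}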



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074882#comment-16074882
 ] 

ASF subversion and git services commented on LUCENE-7899:
-

Commit 6e36ad7c5d654030385156cd7c7e4845aeabb174 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6e36ad7 ]

LUCENE-7899: add missing file


> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today that can efficiently test whether a doc 
> values field exists (has any value) for each document in the index.
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but for doc values fields I think we can get this "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074886#comment-16074886
 ] 

ASF subversion and git services commented on LUCENE-7899:
-

Commit 52f11df3ab64b5334492fd37612d64eb23d530bd in lucene-solr's branch 
refs/heads/branch_7x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=52f11df ]

LUCENE-7899: rename FieldValueQuery to DocValuesFieldExistsQuery


> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today that can efficiently test whether a doc 
> values field exists (has any value) for each document in the index.
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but for doc values fields I think we can get this "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074887#comment-16074887
 ] 

ASF subversion and git services commented on LUCENE-7899:
-

Commit f7ab772066faa0018814d7761b44b76a5114e796 in lucene-solr's branch 
refs/heads/branch_7x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f7ab772 ]

LUCENE-7899: add missing file


> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today that can efficiently test whether a doc 
> values field exists (has any value) for each document in the index.
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but for doc values fields I think we can get this "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Feature freeze @ 7.0 branch

2017-07-05 Thread Anshum Gupta
Sure Mikhail, and Mike.

-Anshum



> On Jul 5, 2017, at 5:09 AM, Mikhail Khludnev wrote:
> 
> Is it worth pushing https://issues.apache.org/jira/browse/SOLR-10986, which 
> fixes a regression in https://issues.apache.org/jira/browse/SOLR-6357, 
> into 7.0?
> 
> On Wed, Jul 5, 2017 at 12:48 PM, Michael McCandless 
> <luc...@mikemccandless.com> wrote:
> Hi Anshum, I'd like to do https://issues.apache.org/jira/browse/LUCENE-7899 
> for 7.0; it's a simple rename, which I think we should do in a major 
> release.  I'll get a patch up shortly.
> 
> Thanks,
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com 
> 
> On Tue, Jul 4, 2017 at 12:40 PM, Anshum Gupta wrote:
> Sure Ab, this is an important bug fix.
> 
> -Anshum
> 
> On Tue, Jul 4, 2017 at 9:35 AM Andrzej Białecki 
> <andrzej.biale...@lucidworks.com> wrote:
> SOLR-10878 and SOLR-10879 didn’t make it before the branches were cut, but I 
> think they should be included in 7x and 7_0 - I’m going to cherry-pick the 
> commits from master.
> 
>> On 3 Jul 2017, at 22:29, Anshum Gupta wrote:
>> 
>> Hi,
>> 
>> I just wanted to call it out and remove any confusion around the fact that 
>> we shouldn’t be committing ‘new features’ to branch_7_0. For whatever was 
>> already agreed upon in previous communications, let’s get that stuff in 
>> if it’s ready or almost there. For everything else, kindly check before you 
>> commit to the release branch.
>> 
>> Let us make sure that the bugs and edge cases are all taken care of, along 
>> with the deprecations and cleanups.
>> 
>> P.S.: Feel free to commit bug fixes without checking, but make sure that we 
>> aren’t hiding features in those commits.
>> 
>> 
>> -Anshum
>> 
>> 
>> 
> 
> 
> 
> 
> 
> -- 
> Sincerely yours
> Mikhail Khludnev



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074895#comment-16074895
 ] 

ASF subversion and git services commented on LUCENE-7899:
-

Commit 6837e3d93f8eadb325229f2d9aa72a9e597d1993 in lucene-solr's branch 
refs/heads/branch_7_0 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6837e3d ]

LUCENE-7899: add entry in MIGRATE.txt


> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today that can efficiently test whether a doc 
> values field exists (has any value) for each document in the index.
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but for doc values fields I think we can get this "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074893#comment-16074893
 ] 

ASF subversion and git services commented on LUCENE-7899:
-

Commit 71feef105667793b58dc7449bb5646caf9c23275 in lucene-solr's branch 
refs/heads/branch_7_0 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=71feef1 ]

LUCENE-7899: rename FieldValueQuery to DocValuesFieldExistsQuery


> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today that can efficiently test whether a doc 
> values field exists (has any value) for each document in the index.
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but for doc values fields I think we can get this "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074894#comment-16074894
 ] 

ASF subversion and git services commented on LUCENE-7899:
-

Commit a0b5bce31acdbed5bfc74df6232733e39b6d67de in lucene-solr's branch 
refs/heads/branch_7_0 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a0b5bce ]

LUCENE-7899: add missing file


> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today that can efficiently test whether a doc 
> values field exists (has any value) for each document in the index.
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but for doc values fields I think we can get this "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074897#comment-16074897
 ] 

ASF subversion and git services commented on LUCENE-7899:
-

Commit 084e5290e1eb232ceba62f6361a0e2c6625dceaa in lucene-solr's branch 
refs/heads/branch_7x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=084e529 ]

LUCENE-7899: add entry in MIGRATE.txt


> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today that can efficiently test whether a doc 
> values field exists (has any value) for each document in the index.
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but for doc values fields I think we can get this "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7899) Rename FieldValueQuery to DocValuesFieldExistsQuery

2017-07-05 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7899.

   Resolution: Fixed
Fix Version/s: master (8.0)

> Rename FieldValueQuery to DocValuesFieldExistsQuery
> ---
>
> Key: LUCENE-7899
> URL: https://issues.apache.org/jira/browse/LUCENE-7899
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7899.patch
>
>
> I don't think we have a query today that can efficiently test whether a doc 
> values field exists (has any value) for each document in the index.
> Now that we use iterators to access doc values, this should be an efficient 
> query: we can return the DISI we get for the doc values.
> ElasticSearch indexes its own field to record which field names occur in a 
> document, so it's able to do "exists" for any field (not just doc values 
> fields), but for doc values fields I think we can get this "for free".
> I haven't started on this ... just wanted to open the issue first for 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


