[jira] [Commented] (SOLR-6530) Commits under network partition can put any node in down state

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178012#comment-14178012
 ] 

ASF subversion and git services commented on SOLR-6530:
---

Commit 1633276 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1633276 ]

SOLR-6530: Protect against NPE when there are no live replicas

 Commits under network partition can put any node in down state
 --

 Key: SOLR-6530
 URL: https://issues.apache.org/jira/browse/SOLR-6530
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Critical
 Fix For: 4.10.2, 5.0, Trunk

 Attachments: SOLR-6530.patch, SOLR-6530.patch, SOLR-6530.patch, 
 SOLR-6530.patch, SOLR-6530.patch, SOLR-6530.patch, SOLR-6530.patch, 
 SOLR-6530.patch


 Commits are executed by any node in SolrCloud i.e. they're not routed via the 
 leader like other updates. 
 # Suppose there's 1 collection, 1 shard, 2 replicas (A and B) and A is the 
 leader
 # Suppose a commit request is made to node B during a time where B cannot 
 talk to A due to a partition for any reason (failing switch, heavy GC, 
 whatever)
 # B fails to distribute the commit to A (times out) and asks A to recover
 # This was okay earlier because a leader just ignores recovery requests, but 
 with the leader-initiated recovery code, B puts A in the "down" state and A 
 can never get out of that state.
 tl;dr: During network partitions, if enough commit/optimize requests are sent 
 to the cluster, all the nodes in the cluster will eventually be marked as 
 down.
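
The fix is described only by its one-line commit message above. As a rough
illustration, a guard of the kind it names might look like the following
(a hedged sketch, not the actual patch; getLiveReplicas and forwardCommit
are hypothetical helpers):

{noformat}
// Hedged sketch, not the actual SOLR-6530 patch: before distributing a
// commit, bail out cleanly when cluster state reports no live replicas
// for the slice, instead of dereferencing a null/empty list.
List<Replica> replicas = getLiveReplicas(slice);   // hypothetical helper
if (replicas == null || replicas.isEmpty()) {
  log.warn("No live replicas for slice {}; skipping distributed commit",
      slice.getName());
  return;
}
for (Replica replica : replicas) {
  forwardCommit(replica);                          // hypothetical helper
}
{noformat}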






[jira] [Commented] (SOLR-6530) Commits under network partition can put any node in down state

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178013#comment-14178013
 ] 

ASF subversion and git services commented on SOLR-6530:
---

Commit 1633277 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1633277 ]

SOLR-6530: Protect against NPE when there are no live replicas




[jira] [Commented] (SOLR-6530) Commits under network partition can put any node in down state

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178016#comment-14178016
 ] 

ASF subversion and git services commented on SOLR-6530:
---

Commit 1633278 from sha...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1633278 ]

SOLR-6530: Protect against NPE when there are no live replicas




[jira] [Commented] (LUCENE-6013) Remove IndexableFieldType.indexed()

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178088#comment-14178088
 ] 

ASF subversion and git services commented on LUCENE-6013:
-

Commit 1633296 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1633296 ]

LUCENE-6013: remove IndexableFieldType.indexed and FieldInfo.indexed (it's 
redundant with IndexOptions != null)

 Remove IndexableFieldType.indexed()
 ---

 Key: LUCENE-6013
 URL: https://issues.apache.org/jira/browse/LUCENE-6013
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6013.patch, LUCENE-6013.patch, LUCENE-6013.patch, 
 LUCENE-6013.patch


 Like LUCENE-6006, here's another precursor for LUCENE-6005
 ... because I think it's important to nail down Lucene's "low schema"
 (FieldType/FieldInfos) semantics before adding a "high schema".
 IndexableFieldType.indexed() is redundant with
 IndexableFieldType.indexOptions() != null, so we should remove it,
 codecs shouldn't have to write/read it, the high schema should not 
 configure it, etc.
 Similarly, the FieldInfo.indexed bit is redundant, so I removed it, but I
 left the sugar API (FieldInfo.isIndexed) and implemented it as just
 checking IndexOptions != null.
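
The description pins the change down exactly; the retained sugar method
reduces to a one-liner (a sketch of the shape of the change, not the
verbatim diff):

{noformat}
// FieldInfo sugar retained by the change described above: "indexed" is
// now derived from IndexOptions instead of being stored as its own bit.
public boolean isIndexed() {
  return getIndexOptions() != null;
}
{noformat}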






[jira] [Created] (LUCENE-6016) Stempel converts trailing 1 (and prior character) to ć

2014-10-21 Thread Jeff Stein (JIRA)
Jeff Stein created LUCENE-6016:
--

 Summary: Stempel converts trailing 1 (and prior character) to ć
 Key: LUCENE-6016
 URL: https://issues.apache.org/jira/browse/LUCENE-6016
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 4.8.1
Reporter: Jeff Stein


In the stempel analysis module, the StempelFilter TokenFilter converts a 
trailing numeric one into a ć character, while also consuming the prior 
character. This was also filed against the downstream 
[elasticsearch-analysis-stempel 
project|https://github.com/elasticsearch/elasticsearch-analysis-stempel/issues/31].

I did not find any errors with other numbers in the trailing position.

Example:
|| input || output ||
| foo1 | foć |
| foo11 | fooć |
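
For reference, a minimal reproduction sketch (assuming PolishAnalyzer,
which wires StempelFilter into its chain, is on the classpath; the Version
constant matches the 4.8 line):

{noformat}
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.pl.PolishAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

// Minimal sketch to reproduce the reported behavior: PolishAnalyzer
// applies StempelFilter, so analyzing "foo1" should print the bad stem.
public class StempelRepro {
  public static void main(String[] args) throws Exception {
    PolishAnalyzer analyzer = new PolishAnalyzer(Version.LUCENE_48);
    TokenStream ts = analyzer.tokenStream("f", "foo1");
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      System.out.println(term.toString());   // reported output: foć
    }
    ts.end();
    ts.close();
    analyzer.close();
  }
}
{noformat}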






[jira] [Commented] (LUCENE-6015) Revisit DocIdSetBuilder's heuristic to switch to FixedBitSet

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178114#comment-14178114
 ] 

ASF subversion and git services commented on LUCENE-6015:
-

Commit 1633305 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1633305 ]

LUCENE-6015: DocIdSetBuilder: Use the sparse set until more than 1/16 of the 
longs have one bit set.

 Revisit DocIdSetBuilder's heuristic to switch to FixedBitSet
 

 Key: LUCENE-6015
 URL: https://issues.apache.org/jira/browse/LUCENE-6015
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6015.patch


 DocIdSetBuilder starts with a SparseFixedBitSet and then upgrades to a 
 FixedBitSet when the cardinality grows larger than maxDoc >> 14. However 
 Robert improved SparseFixedBitSet performance quite significantly in 
 LUCENE-6003 so we should see if it makes sense to update this heuristic.
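
Reading the commit message as arithmetic: a dense FixedBitSet over maxDoc
documents needs about maxDoc/64 longs, and the new rule stays sparse until
more than 1/16 of those longs hold at least one set bit. A hedged sketch of
that threshold (an interpretation of the commit message, not the literal
patch; nonEmptyWords is a hypothetical counter):

{noformat}
// Longs a dense FixedBitSet over maxDoc docs would need:
long numWords = (maxDoc + 63L) >>> 6;
// Stay sparse until more than 1/16 of those longs have a bit set:
boolean upgradeToDense = nonEmptyWords > (numWords >>> 4);
{noformat}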






[jira] [Resolved] (LUCENE-5889) AnalyzingInfixSuggester should expose commit()

2014-10-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-5889.

Resolution: Fixed

 AnalyzingInfixSuggester should expose commit()
 --

 Key: LUCENE-5889
 URL: https://issues.apache.org/jira/browse/LUCENE-5889
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spellchecker
Reporter: Mike Sokolov
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5889.patch, LUCENE-5889.patch


 There is no way short of close() for a user of AnalyzingInfixSuggester to 
 cause it to commit() its underlying index: only refresh() is provided.  But 
 callers might want to ensure the index is flushed to disk without closing.
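
With commit() exposed, callers can make that flush explicit; a hedged usage
sketch (suggester construction elided):

{noformat}
// Hedged usage sketch: refresh() makes recent additions visible to
// lookup(), while the newly exposed commit() durably flushes the
// suggester's underlying index without closing it.
suggester.add(new BytesRef("nebraska"), null, 1, null);
suggester.refresh();   // now visible to lookup()
suggester.commit();    // flushed to disk; suggester remains usable
{noformat}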






[jira] [Commented] (LUCENE-6015) Revisit DocIdSetBuilder's heuristic to switch to FixedBitSet

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178115#comment-14178115
 ] 

ASF subversion and git services commented on LUCENE-6015:
-

Commit 1633306 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1633306 ]

LUCENE-6015: DocIdSetBuilder: Use the sparse set until more than 1/16 of the 
longs have one bit set.




[jira] [Resolved] (LUCENE-6015) Revisit DocIdSetBuilder's heuristic to switch to FixedBitSet

2014-10-21 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6015.
--
   Resolution: Fixed
Fix Version/s: 5.0




[jira] [Closed] (LUCENE-5293) Also use EliasFanoDocIdSet in CachingWrapperFilter

2014-10-21 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand closed LUCENE-5293.

Resolution: Won't Fix

 Also use EliasFanoDocIdSet in CachingWrapperFilter
 --

 Key: LUCENE-5293
 URL: https://issues.apache.org/jira/browse/LUCENE-5293
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-5293.patch, LUCENE-5293.patch, LUCENE-5293.patch, 
 LUCENE-5293.patch, LUCENE-5293.patch









[jira] [Resolved] (LUCENE-5288) Add ProxBooleanTermQuery, like BooleanQuery but boosting when terms occur close together (in proximity) in each document

2014-10-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-5288.

Resolution: Won't Fix

 Add ProxBooleanTermQuery, like BooleanQuery but boosting when terms occur 
 close together (in proximity) in each document
 -

 Key: LUCENE-5288
 URL: https://issues.apache.org/jira/browse/LUCENE-5288
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 4.9

 Attachments: LUCENE-5288.patch, LUCENE-5288.patch, LUCENE-5288.patch, 
 LUCENE-5288.patch


 This is very much a work in progress, tons of nocommits...  It adds two 
 classes:
   * ProxBooleanTermQuery: like BooleanQuery (currently, all clauses
 must be TermQuery, and only Occur.SHOULD is supported), which is
 essentially a BooleanQuery (same matching/scoring) except that for each
 matching doc the positions are merge-sorted and scored to boost
 the document's score
   * QueryRescorer: simple API to re-score top hits using a different
 query.  Because ProxBooleanTermQuery is so costly, apps would
 normally run an ordinary BooleanQuery across the full index, to
 get the top few hundred hits, and then rescore using the more
 costly ProxBooleanTermQuery (or other costly queries).
 I'm not sure how to actually compute the appropriate prox boost (this
 is the hard part!!) and I've completely punted on that in the current
 patch (it's just a hack now), but the patch does all the mechanics
 to merge/visit all the positions in order per hit.
 Maybe we could do scoring similar to what SpanNearQuery or sloppy
 PhraseQuery would do, or maybe this paper:
   http://plg.uwaterloo.ca/~claclark/sigir2006_term_proximity.pdf
 which Rob also used in LUCENE-4909 to add proximity scoring to
 PostingsHighlighter.  Maybe we need to make it (how the prox boost is
 computed/folded in) somehow pluggable ...
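
The rescoring half of the flow maps onto the QueryRescorer class this patch
adds; a hedged sketch of the two-pass usage the description outlines
(proxQuery stands in for the experimental ProxBooleanTermQuery; the weight
and sizes are arbitrary):

{noformat}
// Two-pass flow described above: a cheap BooleanQuery over the full
// index, then rescore only the top hits with the costlier prox query.
TopDocs firstPass = searcher.search(booleanQuery, 300);
TopDocs rescored = QueryRescorer.rescore(
    searcher, firstPass, proxQuery, 2.0, 100);   // weight 2.0 is arbitrary
{noformat}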






[jira] [Commented] (LUCENE-6015) Revisit DocIdSetBuilder's heuristic to switch to FixedBitSet

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178120#comment-14178120
 ] 

ASF subversion and git services commented on LUCENE-6015:
-

Commit 1633309 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1633309 ]

LUCENE-6015: Fix TestDocIdSetBuilder.testDense expectations.




Re: [CI] Lucene Linux Java 7 64 Test Only - Build # 8157 - Failure!

2014-10-21 Thread Adrien Grand
I committed a fix. This failure was due to my previous commit on
LUCENE-6015.

On Tue, Oct 21, 2014 at 10:12 AM, bu...@elasticsearch.com wrote:

   *BUILD FAILURE*
 Build URL:
 https://build-eu-1.elasticsearch.org/job/lucene_trunk_linux_java7_64_test_only/8157/
 Project: lucene_trunk_linux_java7_64_test_only
 Date of build: Tue, 21 Oct 2014 10:09:54 +0200
 Build duration: 2 min 41 sec

  *CHANGES*
 Revision by jpountz: (LUCENE-6015: DocIdSetBuilder: Use the sparse set
 until more than 1/16 of the longs have one bit set.)
   edit checkout/lucene/core/src/java/org/apache/lucene/util/DocIdSetBuilder.java
 Revision by mikemccand: (correct exotic whitespace)
   edit checkout/lucene/core/src/test/org/apache/lucene/document/TestDocument.java

  *BUILD ARTIFACTS*
 junit4-J0 through junit4-J7 .events files under
 checkout/lucene/build/core/test/temp/ (links on the build page)

  *FAILED JUNIT TESTS*
 Name: org.apache.lucene.util - Failed: 1 test(s), Passed: 255 test(s),
 Skipped: 3 test(s), Total: 259 test(s)
 Failed: org.apache.lucene.util.TestDocIdSetBuilder.testDense

  *CONSOLE OUTPUT* [...truncated 1502 lines...]
 [junit4] Tests with failures:
 [junit4]   - org.apache.lucene.util.TestDocIdSetBuilder.testDense
 [junit4] JVM J0: 0.99 .. 146.21 = 145.22s
 [junit4] JVM J1: 0.71 .. 143.70 = 142.99s
 [junit4] JVM J2: 0.79 .. 142.65 = 141.87s
 [junit4] JVM J3: 0.84 .. 147.16 = 146.32s
 [junit4] JVM J4: 0.73 .. 144.38 = 143.65s
 [junit4] JVM J5: 0.95 .. 149.31 = 148.36s
 [junit4] JVM J6: 0.90 .. 144.66 = 143.76s
 [junit4] JVM J7: 0.71 .. 143.60 = 142.89s
 [junit4] Execution time total: 2 minutes 29 seconds
 [junit4] Tests summary: 402 suites, 3275 tests, 1 failure, 61 ignored (51 assumptions)
 BUILD FAILED
 /home/jenkins/workspace/lucene_trunk_linux_java7_64_test_only/checkout/lucene/build.xml:49:
 The following error occurred while executing this line:
 /home/jenkins/workspace/lucene_trunk_linux_java7_64_test_only/checkout/lucene/common-build.xml:1359:
 The following error occurred while executing this line:
 /home/jenkins/workspace/lucene_trunk_linux_java7_64_test_only/checkout/lucene/common-build.xml:961:
 There were test failures: 402 suites, 3275 tests, 1 failure, 61 ignored (51 assumptions)
 Total time: 2 minutes 36 seconds
 Build step 'Invoke Ant' marked build as failure
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure - 1st
 Trigger Failure - Any was overridden by another trigger and will not send an email.
 Trigger Failure - Still was overridden by another trigger and will not send an email.
 Sending email for trigger: Failure - 1st




-- 
Adrien


[jira] [Commented] (LUCENE-6015) Revisit DocIdSetBuilder's heuristic to switch to FixedBitSet

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178123#comment-14178123
 ] 

ASF subversion and git services commented on LUCENE-6015:
-

Commit 1633310 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1633310 ]

LUCENE-6015: Fix TestDocIdSetBuilder.testDense expectations.




[jira] [Commented] (LUCENE-1252) Avoid using positions when not all required terms are present

2014-10-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178136#comment-14178136
 ] 

Michael McCandless commented on LUCENE-1252:


bq. I was fairly specific about how to tackle the problem... I don't mean to be 
rude, but did you actually read my proposal?

Yes, you were specific, and yes, I did in fact read the proposal,
several times, and as I said, I like it, but it's a complex problem
that I've also spent time thinking about and so I gave some simple
feedback about an alternative to one part of the proposal:

It just seems strange to me, when Lucene already has this notion of
boiling down complex queries into their low-level concrete equivalent
queries (i.e. rewrite) to then add a separate API (two-phase
execution) for doing what looks to me to be essentially the same
thing.  Two-phase execution is just rewriting to a two-clause
conjunction where the costs of the clauses are wildly different,
opening up the freedom for BQ to execute more efficiently.  And,
yes, like rewrite, I think it should recurse.

I also think BQ should handle the costly nextDoc filters (i.e. do
them last) as first-class citizens, and it seemed like the proposal
suggested otherwise (maybe I misunderstood this part of the proposal).


 Avoid using positions when not all required terms are present
 -

 Key: LUCENE-1252
 URL: https://issues.apache.org/jira/browse/LUCENE-1252
 Project: Lucene - Core
  Issue Type: Wish
  Components: core/search
Reporter: Paul Elschot
Priority: Minor
  Labels: gsoc2014

 In the Scorers of queries with (lots of) Phrases and/or (nested) Spans, 
 currently next() and skipTo() will use position information even when other 
 parts of the query cannot match because some required terms are not present.
 This could be avoided by adding some methods to Scorer that relax the 
 postcondition of next() and skipTo() to something like "all required terms 
 are present, but no position info was checked yet", and implementing these 
 methods for Scorers that do conjunctions: BooleanScorer, PhraseScorer, and 
 SpanScorer/NearSpans.
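
A hedged sketch of the split that the relaxed postcondition enables
(hypothetical method names, not a Lucene API of the time):

{noformat}
// Advance on the cheap "all required terms present" condition first,
// and run the costly position check only on surviving candidates.
// nextDocCandidate()/confirmPositions() are hypothetical names.
int doc = scorer.nextDocCandidate();
while (doc != DocIdSetIterator.NO_MORE_DOCS) {
  if (scorer.confirmPositions()) {
    collector.collect(doc);
  }
  doc = scorer.nextDocCandidate();
}
{noformat}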






[jira] [Commented] (LUCENE-6005) Explore alternative to Document/Field/FieldType API

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178142#comment-14178142
 ] 

ASF subversion and git services commented on LUCENE-6005:
-

Commit 1633312 from [~mikemccand] in branch 'dev/branches/lucene6005'
[ https://svn.apache.org/r1633312 ]

LUCENE-6005: make branch

 Explore alternative to Document/Field/FieldType API
 ---

 Key: LUCENE-6005
 URL: https://issues.apache.org/jira/browse/LUCENE-6005
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk


 Auto-prefix terms (LUCENE-5879) is blocked because it's impossible in
 Lucene today to add a simple API to use it, and I don't think we
 should commit features that only super-experts can figure out how to
 use: that's evil.
 The only realistic workaround for such new features is to instead
 add them directly to the various servers on top of Lucene, since they
 all already have nice schema APIs.
 I opened LUCENE-5989 to try to do at least a baby step towards making it
 easier to use auto-prefix terms, so you can easily add singleton
 binary tokens, but even that has proven controversial.
 Net/net I think we have to solve the root cause of this by fixing the
 Document/Field/FieldType API so that new index-level features can have
 a usable API, properly defaulted for the right types of fields.
 Towards that, I'm exploring a replacement for
 Document/Field/FieldType.  The idea is to expose simple methods on the
 document class (no more separate Field and FieldType classes):
 {noformat}
 doc.addLargeText("body", "some text");
 doc.addShortText("title", "a title");
 doc.addAtom("id", "29jafnn");
 doc.addBinary("bytes", new byte[7]);
 doc.addNumber("number", 17);
 {noformat}
 And then expose a separate FieldTypes class, that you pass to ctor of
 the new document class, which lets you set all the various per-field
 settings (stored, doc values, etc.).  E.g.:
 {noformat}
 types.enableStored("id");
 {noformat}
 FieldTypes is a write-once schema, and it throws exceptions if you try
 to make invalid changes once a given setting is already written
 (e.g. enabling norms after having disabled them).  It will (I haven't
 implemented this yet) save its state into IndexWriter's commitData, so
 it's available when you open a new IndexWriter for append and when you
 open a reader.
 It has methods to set all the per-field settings (analyzer, stored,
 term vectors, norms, index options, doc values type), and chooses
 reasonable defaults based on the value's type when it suddenly sees
 a new field.  For example, when you add a number, it's indexed for
 range querying and sorting (numeric doc values) by default.
 FieldTypes provides the analyzer and codec (a little messy) that you
 pass to IndexWriterConfig.  Since it's effectively a persistent
 schema, it knows all about the available fields at search time, so we
 could use it to create queries (checking if they are valid given that
 field's type).  Query parsers and highlighters could consult it.
 Default UIs (above Lucene) could use it, etc.  This is all future ... I
 think for this issue the goal should be to just provide a better
 index-time API but not yet make use of it at search time.
 So with this change, for auto-prefix terms, we could add an "enable
 range queries/filters" option, but then validate that the selected
 postings format supports such an option.
 I know this exploration will be horribly controversial, but
 realistically I don't think Lucene can move on much further if we
 can't finally address this schema problem head on.
 This is long overdue.
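
A hedged illustration of the write-once behavior described above:
enableStored comes from the example, while disableNorms/enableNorms are
hypothetical names extrapolated from it (this is proposal-stage API,
nothing released):

{noformat}
// Write-once schema semantics: once a per-field setting is written,
// a conflicting change throws instead of silently rewriting it.
types.enableStored("id");
types.disableNorms("body");   // hypothetical setter
types.enableNorms("body");    // expected to throw: setting already written
{noformat}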






[jira] [Commented] (LUCENE-6005) Explore alternative to Document/Field/FieldType API

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178145#comment-14178145
 ] 

ASF subversion and git services commented on LUCENE-6005:
-

Commit 1633314 from [~mikemccand] in branch 'dev/branches/lucene6005'
[ https://svn.apache.org/r1633314 ]

LUCENE-6005: work in progress




[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178167#comment-14178167
 ] 

ASF subversion and git services commented on LUCENE-5911:
-

Commit 1633321 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1633321 ]

LUCENE-5911: Remove caching of norms, calculate up front in freeze()

 Make MemoryIndex thread-safe for queries
 

 Key: LUCENE-5911
 URL: https://issues.apache.org/jira/browse/LUCENE-5911
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch


 We want to be able to run multiple queries at once over a MemoryIndex in 
 luwak (see 
 https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
  but this isn't possible with the current implementation.  However, looking 
 at the code, it seems that it would be relatively simple to make MemoryIndex 
 thread-safe for reads/queries.
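
The commit above moves norms computation out of the now-concurrent query
path; a hedged sketch of the shape of that change (field and helper names
are illustrative, not the literal patch):

{noformat}
// Compute norms eagerly while still single-threaded in freeze(), so
// queries never race on a lazily populated cache afterwards.
public void freeze() {
  this.frozen = true;
  for (Info info : fields.values()) {
    info.norm = computeNorm(info);   // was computed lazily on first use
  }
}
{noformat}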






[jira] [Commented] (SOLR-6630) Deprecate the implicit router and rename to manual

2014-10-21 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178169#comment-14178169
 ] 

Upayavira commented on SOLR-6630:
-

And rename compositeId to just 'id' or 'hashed-id' or something. CompositeId 
routing also handles IDs that *aren't* composite, which is a major brain bender.

 Deprecate the implicit router and rename to manual
 --

 Key: SOLR-6630
 URL: https://issues.apache.org/jira/browse/SOLR-6630
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Affects Versions: 4.10
Reporter: Shawn Heisey
 Fix For: 5.0, Trunk


 I had this exchange with an IRC user named kindkid this morning:
 {noformat}
 08:30  kindkid I'm using sharding with the implicit router, but I'm seeing
  all my documents end up on just one of my 24 shards. What
  might be causing this? (4.10.0)
 08:35 @elyograg kindkid: you used the implicit router.  that means that
   documents will be indexed on the shard you sent them
 to, not
   routed elsewhere.
 08:37  kindkid oh. wow. not sure where I got the idea, but I was under the
  impression that implicit router would use a hash of the
  uniqueKey modulo number of shards to pick a shard.
 08:38 @elyograg I think you probably wanted the compositeId router.
 08:39 @elyograg implicit is not a very good name.  It's technically
 correct,
   but the meaning of the word is not well known.
 08:39 @elyograg manual would be a better name.
 {noformat}
 The word "implicit" has a very specific meaning, and I think it's
 absolutely correct terminology for what it does, but I don't think that
 it's very clear to a typical person.  This is not the first time I've
 encountered the confusion.
 Could we deprecate the "implicit" name and use something much more
 descriptive and easily understood, like "manual" instead?  Let's go
 ahead and accept "implicit" in 5.x releases, but issue a warning in the
 log.  Maybe we can have a startup system property or a config option
 that will force the name to be updated in zookeeper and get rid of the
 warning.  If we do this, my bias is to have an upgrade to 6.x force the
 name change in zookeeper.
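
For contrast with the hashing the user expected, a hedged SolrJ sketch of
what the implicit router actually requires (4.10-era API; collection and
shard names are illustrative):

{noformat}
// With the "implicit" router there is no hashing; the _route_ request
// parameter names the target shard explicitly.
CloudSolrServer server = new CloudSolrServer("zkhost:2181");
server.setDefaultCollection("mycollection");
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "doc-1");
UpdateRequest req = new UpdateRequest();
req.setParam("_route_", "shard3");   // explicit target shard
req.add(doc);
req.process(server);
server.commit();
{noformat}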






[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178170#comment-14178170
 ] 

ASF subversion and git services commented on LUCENE-5911:
-

Commit 1633322 from [~romseygeek] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1633322 ]

LUCENE-5911: Remove caching of norms, calculate up front in freeze()




[jira] [Commented] (LUCENE-6005) Explore alternative to Document/Field/FieldType API

2014-10-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178179#comment-14178179
 ] 

Michael McCandless commented on LUCENE-6005:


I committed the current work-in-progress to a new branch
(https://svn.apache.org/repos/asf/lucene/dev/branches/lucene6005).

I added a new FieldTypes class (holds the optional write-once schema)
and Document2 (to replace Document eventually).

Net/net I think the approach can work well: it's a minimally intrusive
API to optionally build up the write-once schema.  You can skip the
API entirely and it will learn your schema by seeing which Java
types you are adding to your documents and setting sensible defaults
accordingly.  It's quite a bit simpler than the current oal.document
API: no more separate XXXField nor FieldType classes.

Indexed binary tokens work, via Document2.addAtom(...) (LUCENE-5989).

You can turn on/off sorting for a field, and this translates to the
appropriate DV type; I want to improve this by letting you specify the
default sort order, and also [eventually] specify a collator.  I plan to
similarly enable highlighting.

I also added search-time APIs, e.g. newSort, newTermQuery,
newRangeQuery.  These methods throw clear exceptions if the field name
is unknown, or it wasn't indexed with a type that matches that
method.

There are still many issues and nocommits:

  * Analyzer is passed to FieldTypes now; I would like to remove it
from IndexWriterConfig.  To do this, I think I need to push
multi-valued field handling out of IndexWriter up into user
space... I already removed IndexableFieldType.tokenized as a
first step.

  * Analyzers can't be serialized, so the app will have to
re-initialize them on startup (like they must do anyway today with
PFAW).  Same for Similarity.

  * You can only set per-field DVF and PF.

  * I only cutover a couple tests, but they lose randomness since
FieldTypes provides the default IWC, vs LTC.newIWC().

  * I had to suck in a fork of KeywordTokenizer.
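
A hedged sketch of the search-time helpers mentioned above, using the
names from this comment (branch-only API, subject to change):

{noformat}
// FieldTypes validates the field name and type before building queries
// and sorts; both calls throw clear exceptions for unknown or mistyped
// fields.
Query idQuery = fieldTypes.newTermQuery("id", "29jafnn");
Sort titleSort = fieldTypes.newSort("title");
{noformat}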



[jira] [Resolved] (LUCENE-6013) Remove IndexableFieldType.indexed()

2014-10-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6013.

Resolution: Fixed




[jira] [Commented] (LUCENE-6013) Remove IndexableFieldType.indexed()

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178188#comment-14178188
 ] 

ASF subversion and git services commented on LUCENE-6013:
-

Commit 1633328 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1633328 ]

LUCENE-6013: remove IndexableFieldType.indexed and FieldInfo.indexed (it's 
redundant with IndexOptions != null)




Early Access builds for JDK 9 b35 and JDK 8u40 b10 are available on java.net

2014-10-21 Thread Rory O'Donnell

Hi Uwe & Dawid,

Early Access build for JDK 9 b35 (https://jdk9.java.net/download/) is 
available on java.net; a summary of changes is listed here: 
http://www.java.net/download/jdk9/changes/jdk9-b35.html

Early Access build for JDK 8u40 b10 (http://jdk8.java.net/download.html) 
is available on java.net; a summary of changes is listed here: 
http://www.java.net/download/jdk8u40/changes/jdk8u40-b10.html


Rgds, Rory

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178253#comment-14178253
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1633340 from [~gsingers] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1633340 ]

SOLR-6058: first draft of new Solr site

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
 Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
 Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
 Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 11495 - Failure!

2014-10-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11495/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
REGRESSION:  org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch

Error Message:
commitWithin did not work on node: http://127.0.0.1:32923/collection1 
expected:<68> but was:<67>

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:32923/collection1 expected:<68> but was:<67>
at 
__randomizedtesting.SeedInfo.seed([A0482B7057D83484:21AEA568208754B8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.doTest(BasicDistributedZkTest.java:345)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178261#comment-14178261
 ] 

Grant Ingersoll commented on SOLR-6058:
---

OK, I've committed Fran's original patch on the branch.  [~FranLukesh] any 
chance you can script your original server to automatically apply updates 
from that branch so others can see the changes we make?  Otherwise, I can do 
the same.




Re: [JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_20) - Build # 11333 - Failure!

2014-10-21 Thread Robert Muir
And the CorruptIndexException happens because Solr corrupted the index.

It mismatched the index files (e.g. via replication or something) and
5.0 detects this.

On Tue, Oct 21, 2014 at 2:16 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
 This happened because of a CorruptIndexException:

 Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch,
 expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7
 (resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
[junit4]   2  at
 org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
[junit4]   2  at
 org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
[junit4]   2  at
 org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
[junit4]   2  at
 org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:102)


 On Tue, Oct 21, 2014 at 3:07 AM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
 Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

 1 tests failed.
 REGRESSION:
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

 Error Message:
 shard2 is not consistent.  Got 62 from
 http://127.0.0.1:57436/collection1lastClient and got 24 from
 http://127.0.0.1:53065/collection1

 Stack Trace:
 java.lang.AssertionError: shard2 is not consistent.  Got 62 from
 http://127.0.0.1:57436/collection1lastClient and got 24 from
 http://127.0.0.1:53065/collection1
 at
 __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
 at
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
 at
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
 at
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
 at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_20) - Build # 4383 - Failure!

2014-10-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4383/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:50670, 
http://127.0.0.1:50661, http://127.0.0.1:50628, http://127.0.0.1:50652, 
http://127.0.0.1:50643]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:50670, http://127.0.0.1:50661, 
http://127.0.0.1:50628, http://127.0.0.1:50652, http://127.0.0.1:50643]
at 
__randomizedtesting.SeedInfo.seed([F126BF332BEAF1AE:70C0312B5CB59192]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:333)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:1015)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:736)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Resolved] (LUCENE-6016) Stempel converts trailing 1 (and prior character) to ć

2014-10-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6016.
-
Resolution: Won't Fix

I already explained to you on the Elasticsearch issue why this isn't a bug. 

 Stempel converts trailing 1 (and prior character) to ć
 --

 Key: LUCENE-6016
 URL: https://issues.apache.org/jira/browse/LUCENE-6016
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 4.8.1
Reporter: Jeff Stein

 In the stempel analysis module, the StempelFilter TokenFilter converts a 
 trailing numeric one into a ć character, while also consuming the prior 
 character. This was also filed against the downstream 
 [elasticsearch-analysis-stempel 
 project|https://github.com/elasticsearch/elasticsearch-analysis-stempel/issues/31].
 I did not find any errors with other numbers in the trailing position.
 Example:
 || input || output ||
 | foo1 | foć |
 | foo11 | fooć |



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6633) let /update/json/docs store the source json as well

2014-10-21 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6633:
-
Description: 
It is a common requirement to store the entire JSON as a field in Solr. 

We can have an extra param srcField=field_name to specify the field name.

The /update/json/docs handler is only useful when all the JSON fields are 
predefined or in schemaless mode.

The better option would be to store the content in a store-only field and index 
the data in another field in other modes.

The relevant section in solrconfig.xml:
{code:xml}
<initParams path="/update/json/docs">
  <lst name="defaults">
    <!-- this ensures that the entire JSON doc will be stored verbatim into
         one field -->
    <str name="srcField">_src</str>
    <!-- This means the uniqueKeyField will be extracted from the fields and
         all fields go into the 'df' field. In this config df is already
         configured to be 'text' -->
    <str name="mapUniqueKeyOnly">true</str>
  </lst>
</initParams>
{code}

  was:
It is a common requirement to store the entire JSON as a field in Solr. 

We can have an extra param srcField=field_name to specify the field name.

The /update/json/docs handler is only useful when all the JSON fields are 
predefined or in schemaless mode.

The better option would be to store the content in a store-only field and index 
the data in another field in other modes.
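
A minimal SolrJ sketch of exercising this config (assumptions: a server at the 
URL shown, the _src field defined in the schema, and the defaults above in 
place; the class name is made up for illustration):

{code:java}
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.ContentStreamBase;

public class StoreSrcJsonExample {
  public static void main(String[] args) throws Exception {
    SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

    ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/json/docs");
    // Raw JSON body; with the defaults above it would be stored verbatim
    // in _src while its fields are indexed into the 'df' field.
    up.addContentStream(new ContentStreamBase.StringStream(
        "{\"id\":\"1\",\"title\":\"hello world\"}"));
    up.setParam("srcField", "_src"); // redundant with the configured default
    up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
    server.request(up);
    server.shutdown();
  }
}
{code}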


 let /update/json/docs store the source json as well
 ---

 Key: SOLR-6633
 URL: https://issues.apache.org/jira/browse/SOLR-6633
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Attachments: SOLR-6633.patch, SOLR-6633.patch


 It is a common requirement to store the entire JSON as a field in Solr. 
 We can have an extra param srcField=field_name to specify the field name.
 The /update/json/docs handler is only useful when all the JSON fields are 
 predefined or in schemaless mode.
 The better option would be to store the content in a store-only field and 
 index the data in another field in other modes.
 The relevant section in solrconfig.xml:
 {code:xml}
 <initParams path="/update/json/docs">
   <lst name="defaults">
     <!-- this ensures that the entire JSON doc will be stored verbatim into
          one field -->
     <str name="srcField">_src</str>
     <!-- This means the uniqueKeyField will be extracted from the fields and
          all fields go into the 'df' field. In this config df is already
          configured to be 'text' -->
     <str name="mapUniqueKeyOnly">true</str>
   </lst>
 </initParams>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178344#comment-14178344
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1633360 from [~gsingers] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1633360 ]

SOLR-6058: add some logos, update the news

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
 Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
 Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
 Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178346#comment-14178346
 ] 

Grant Ingersoll commented on SOLR-6058:
---

[~FranLukesh] One of the things I'd like to figure out is how to properly link 
back to the Lucene sites, as the nav is a bit weird.  If a user is on 
lucene.a.o and navigates to Solr, all of the tabs disappear, which is probably 
somewhat counter-intuitive to the user.  I don't think the tabs fit with the 
new design, but at the same time, it would be good to somehow make it clear how 
to get back.

Also, I uploaded some logos from the Solr Public Servers page.  I could use 
some help making them look right.

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
 Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
 Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
 Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178348#comment-14178348
 ] 

Grant Ingersoll commented on SOLR-6058:
---

[~elyograg] Good catch, I will update.

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
 Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
 Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
 Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178354#comment-14178354
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1633363 from [~gsingers] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1633363 ]

SOLR-6058: update news to latest

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
 Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
 Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
 Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-1252) Avoid using positions when not all required terms are present

2014-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178374#comment-14178374
 ] 

Robert Muir commented on LUCENE-1252:
-

Both of those are covered. Yes, it's proposed as a first-class citizen (on DISI), 
more first-class than your second-class rewrite() solution.

It might be good to already look at what BQ and co. do with query execution 
today. The fact is, rewrite() is inappropriate.

The main reason rewrite() is second-class here is that scoring is 
*per-segment* in Lucene. In trunk today this means:
* filters might get executed with a completely different plan for different 
segments (conjunction versus Bits-acceptDocs).
* scorers might get returned as null for some segments (e.g. a term doesn't 
exist). BQ doesn't have special null handling in every subscorer that it uses; 
that would be horrible. Instead it eliminates such scorers immediately in 
BooleanWeight's constructor. 
* This, like the above, can impact the structure of what BQ does (e.g. A OR B: 
if B is null, it just returns A instead of a DisjunctionScorer).
* scorers have a cost() API, which tells us additional critical information for 
execution. I've proposed expanding this to much more (additional flags/methods) 
so that BQ can be a quarterback and not the entire football team.

Another reason rewrite() is wrong here is that such two-phase execution is 
only useful for conjunctions (or conjunction-like queries such as 
MinShouldMatch). Otherwise, it's useless. Rewriting to two queries when the 
query is not a conjunction will not help performance.

At the low level, I can't see a rewrite() solution being efficient. Today 
ExactPhraseScorer is already coded as:

{code}
while (iterate approximation...) {
  confirm(); // phraseFreq > 0
}
{code}

so in that case, it's already ready to be exposed for two-phase execution.

On the other hand, with rewrite(), it would be either 2 separate DPEnums, or a 
mess. 

Finally, by having this generic on DISI, this approximation becomes much more 
flexible. For example, a postings format could implement it for its docs enums 
if it is able to implement a cheap approximation... perhaps via skiplist-like 
data, or perhaps with an explicit approximate-cache mechanism specifically 
for this purpose. 

Another example: a filter like a geographic distance filter could return a 
bounding box here automatically, rather than forcing the user to deal with this 
themselves with complex query/filter hacks. At the same time, such a filter 
could signal that it has a linear-time nextDoc, and BooleanQuery can really do 
the best execution. A rewrite() solution really makes this hard, because the 
two things would then be completely separate queries. But if you look at all 
the cases of executing this logic, it's very useful for BQ to know that A is a 
superset of B.
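
A rough sketch of the two-phase shape argued for above, with made-up names 
(this is not a committed Lucene API; only DocIdSetIterator is real):

{code:java}
// Sketch only: approximation() is a cheap superset iterator (e.g. the
// conjunction of the phrase's terms); matches() is the expensive
// confirmation (e.g. phraseFreq > 0). A conjunction like BQ drives the
// approximation and confirms only the candidates.
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;

abstract class TwoPhaseSketch {
  abstract DocIdSetIterator approximation();

  abstract boolean matches() throws IOException;

  int nextConfirmed() throws IOException {
    DocIdSetIterator approx = approximation();
    int doc;
    while ((doc = approx.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
      if (matches()) {
        return doc; // confirmed hit
      }
    }
    return DocIdSetIterator.NO_MORE_DOCS;
  }
}
{code}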


 Avoid using positions when not all required terms are present
 -

 Key: LUCENE-1252
 URL: https://issues.apache.org/jira/browse/LUCENE-1252
 Project: Lucene - Core
  Issue Type: Wish
  Components: core/search
Reporter: Paul Elschot
Priority: Minor
  Labels: gsoc2014

 In the Scorers of queries with (lots of) Phrases and/or (nested) Spans, 
 currently next() and skipTo() will use position information even when other 
 parts of the query cannot match because some required terms are not present.
 This could be avoided by adding some methods to Scorer that relax the 
 postcondition of next() and skipTo() to something like all required terms 
 are present, but no position info was checked yet, and implementing these 
 methods for Scorers that do conjunctions: BooleanScorer, PhraseScorer, and 
 SpanScorer/NearSpans.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178379#comment-14178379
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1633373 from [~gsingers] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1633373 ]

SOLR-6058: make it easier to change version on headers, start updating the 
features section

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
 Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
 Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
 Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6638) Create an off-heap implementation of Solr caches

2014-10-21 Thread Noble Paul (JIRA)
Noble Paul created SOLR-6638:


 Summary: Create an off-heap implementation of Solr caches
 Key: SOLR-6638
 URL: https://issues.apache.org/jira/browse/SOLR-6638
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul


There are various off-heap cache implementations; one such uses 
sun.misc.Unsafe. The cache implementations are pluggable in Solr anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5052) bitset codec for off heap filters

2014-10-21 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated LUCENE-5052:
---
Labels: features perfomance  (was: features)

 bitset codec for off heap filters
 -

 Key: LUCENE-5052
 URL: https://issues.apache.org/jira/browse/LUCENE-5052
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Mikhail Khludnev
  Labels: features, perfomance
 Fix For: Trunk

 Attachments: LUCENE-5052-1.patch, LUCENE-5052.patch, bitsetcodec.zip, 
 bitsetcodec.zip


 Colleagues,
 When we filter we don't care about any of the scoring factors, i.e. norms, 
 positions, tf, but it should be fast. The obvious way to handle this is to 
 decode the postings list and cache it in heap (CachingWrapperFilter, Solr's 
 DocSet). Both consuming heap and decoding are expensive. 
 Let's write a posting list as a bitset if df is greater than the segment's 
 maxdocs/8 (what about skiplists? and overall performance?). 
 Besides the codec implementation, the trickiest part to me is to design an API 
 for this. How can we let the app know that a term query doesn't need to be 
 cached in heap, but can be held as an mmapped bitset?
 WDYT?  
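
A rough check of the maxdocs/8 threshold, assuming compressed postings cost 
about one byte per doc (numbers illustrative, not from the issue):

{code}
bitset size    = maxDoc / 8 bytes     (fixed, whatever df is)
postings size ~= df * 1 byte          (compressed doc ids)
break-even     : df ~= maxDoc / 8

e.g. maxDoc = 80,000,000:
  bitset              : 10 MB for any df
  postings, df = 40M  : ~40 MB  -> bitset is smaller
  postings, df =  2M  :  ~2 MB  -> postings are smaller
{code}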



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6630) Deprecate the implicit router and rename to manual

2014-10-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178389#comment-14178389
 ] 

Shawn Heisey commented on SOLR-6630:


[~upayavira], I would guess that the majority of SolrCloud users are either 
using compositeId or have chosen to build a single-shard index -- defaulting to 
implicit, but where the router choice doesn't really matter because there's one 
shard.  Renaming implicit will (hopefully) not directly impact a big chunk of 
the installed userbase.  A change to compositeId would have much greater user 
impact.

The rename of implicit will already be a far-reaching change that might come 
back to bite us in ways we never expected.  A rename of compositeId, if we do 
it at all, needs to be done later, after we figure out the pitfalls from THIS 
rename.
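
For reference, a minimal SolrJ sketch of the behavior under discussion, 
assuming a collection created with router.name=implicit and shards 
shard1,shard2 (the collection name, ZooKeeper address and document are made 
up): with the implicit router the document lands on the shard you address, 
not on a hash-chosen one.

{code:java}
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class ImplicitRouteExample {
  public static void main(String[] args) throws Exception {
    CloudSolrServer server = new CloudSolrServer("localhost:2181");
    server.setDefaultCollection("mycoll");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc1");

    UpdateRequest req = new UpdateRequest();
    req.add(doc);
    // Explicitly target shard2; omit this and the doc simply stays on
    // whichever shard receives the request.
    req.setParam("_route_", "shard2");
    req.process(server);
    server.shutdown();
  }
}
{code}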


 Deprecate the implicit router and rename to manual
 --

 Key: SOLR-6630
 URL: https://issues.apache.org/jira/browse/SOLR-6630
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Affects Versions: 4.10
Reporter: Shawn Heisey
 Fix For: 5.0, Trunk


 I had this exchange with an IRC user named kindkid this morning:
 {noformat}
 08:30  kindkid I'm using sharding with the implicit router, but I'm seeing
  all my documents end up on just one of my 24 shards. What
  might be causing this? (4.10.0)
 08:35 @elyograg kindkid: you used the implicit router.  that means that
   documents will be indexed on the shard you sent them
 to, not
   routed elsewhere.
 08:37  kindkid oh. wow. not sure where I got the idea, but I was under the
  impression that implicit router would use a hash of the
  uniqueKey modulo number of shards to pick a shard.
 08:38 @elyograg I think you probably wanted the compositeId router.
 08:39 @elyograg "implicit" is not a very good name.  It's technically correct,
   but the meaning of the word is not well known.
 08:39 @elyograg "manual" would be a better name.
 {noformat}
 The word "implicit" has a very specific meaning, and I think it's
 absolutely correct terminology for what it does, but I don't think that
 it's very clear to a typical person.  This is not the first time I've
 encountered the confusion.
 Could we deprecate the "implicit" name and use something much more
 descriptive and easily understood, like "manual" instead?  Let's go
 ahead and accept "implicit" in 5.x releases, but issue a warning in the
 log.  Maybe we can have a startup system property or a config option
 that will force the name to be updated in ZooKeeper and get rid of the
 warning.  If we do this, my bias is to have an upgrade to 6.x force the
 name change in ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6638) Create an off-heap implementation of Solr caches

2014-10-21 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178391#comment-14178391
 ] 

Tommaso Teofili commented on SOLR-6638:
---

for the record there's a quite old one (based on Solr 3.6.2) in Apache 
DirectMemory (see 
http://svn.apache.org/viewvc/directmemory/trunk/integrations/solr/src/main/java/org/apache/directmemory/solr/SolrOffHeapCache.java?view=markup)

 Create an off-heap implementation of Solr caches
 

 Key: SOLR-6638
 URL: https://issues.apache.org/jira/browse/SOLR-6638
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 There are various off-heap cache implementations; one such uses 
 sun.misc.Unsafe. The cache implementations are pluggable in Solr anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178392#comment-14178392
 ] 

Grant Ingersoll commented on SOLR-6058:
---

My progress for today:
# Added some powered by icons on the front page (still need to be styled)
# Templatized the version annotation at the top of the landing so we can change 
the latest version number in just one place
# Worked a bunch on the features page (features.mdtext).  Left off at the 
Detailed Features section.  [~FranLukesh], note, I put in some TODO 
placeholders on some of the icons in the second section as they are repetitive 
with the first section.

All in all, I'm really liking the look and feel.  I'd hope we can get this live 
in the next two weeks or so.

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
 Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
 Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
 Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6638) Create an off-heap implementation of Solr caches

2014-10-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178403#comment-14178403
 ] 

Uwe Schindler commented on SOLR-6638:
-

Please don't use Unsafe! Use ByteBuffer.allocateDirect(). This is enough to 
serve a cache. In Java 7 and Java 8 the ByteBuffer overhead is minimal (look at 
the assembler code generated by HotSpot).
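
A minimal sketch of the allocateDirect() approach, assuming values are 
serialized to byte[] first (the class is made up for illustration, not Solr 
code):

{code:java}
import java.nio.ByteBuffer;

public class DirectBufferStore {
  private final ByteBuffer buffer;

  public DirectBufferStore(int capacityBytes) {
    // Allocated outside the Java heap; the payload adds no GC pressure.
    this.buffer = ByteBuffer.allocateDirect(capacityBytes);
  }

  /** Appends a serialized value; returns its offset for later lookup. */
  public synchronized int put(byte[] value) {
    int offset = buffer.position();
    buffer.put(value);
    return offset;
  }

  /** Copies a value back onto the heap from the given offset. */
  public synchronized byte[] get(int offset, int length) {
    byte[] out = new byte[length];
    ByteBuffer view = buffer.duplicate(); // independent position/limit
    view.position(offset);
    view.get(out);
    return out;
  }
}
{code}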

 Create an off-heap implementation of Solr caches
 

 Key: SOLR-6638
 URL: https://issues.apache.org/jira/browse/SOLR-6638
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 There are various off-heap cache implementations; one such uses 
 sun.misc.Unsafe. The cache implementations are pluggable in Solr anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6638) Create an off-heap implementation of Solr caches

2014-10-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178409#comment-14178409
 ] 

Yonik Seeley commented on SOLR-6638:


For those who will be at Lucene Revolution in a couple of weeks, I have a talk 
about this in the internals track:
{quote}
Native Code and Off-Heap Data Structures for Solr
Presented by Yonik Seeley, Heliosearch

Off-heap data structures and native code performance improvements for Apache 
Solr are being developed as part of the Heliosearch project. This presentation 
will cover the reasons behind these features, implementation details, and 
performance impacts. Recently developed features will also be covered.
{quote}

 Create an off-heap implementation of Solr caches
 

 Key: SOLR-6638
 URL: https://issues.apache.org/jira/browse/SOLR-6638
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 There are various off-heap cache implementations; one such uses 
 sun.misc.Unsafe. The cache implementations are pluggable in Solr anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Unsubscribe

2014-10-21 Thread Xavier Morera
On Tue, Oct 21, 2014 at 7:26 AM, gsing...@apache.org wrote:

 Author: gsingers
 Date: Tue Oct 21 13:26:57 2014
 New Revision: 1633373

 URL: http://svn.apache.org/r1633373
 Log:
 SOLR-6058: make it easier to change version on headers, start updating the
 features section

 Added:
 lucene/cms/branches/solr_6058/templates/_solr-version.html
 Modified:
 lucene/cms/branches/solr_6058/content/solr/features.mdtext
 lucene/cms/branches/solr_6058/templates/solr-index.html

 Modified: lucene/cms/branches/solr_6058/content/solr/features.mdtext
 URL:
 http://svn.apache.org/viewvc/lucene/cms/branches/solr_6058/content/solr/features.mdtext?rev=1633373&r1=1633372&r2=1633373&view=diff
 
 ==============================================================================
 --- lucene/cms/branches/solr_6058/content/solr/features.mdtext (original)
 +++ lucene/cms/branches/solr_6058/content/solr/features.mdtext Tue Oct 21 13:26:57 2014
 @@ -3,15 +3,12 @@ Title: Features
  <section class="hero alternate">
    <div class="row">
      <div class="large-12 columns">
 -      <div class="annotation">
 -        Solr 4.9
 -      </div>
        <h1>
          Solr Features
        </h1>
        <p>
          Solr is a standalone enterprise search server with a REST-like API. You put
 -        documents in it (called indexing) via XML, JSON, CSV or binary over HTTP. You query it via HTTP GET and receive XML, JSON, CSV or binary results.
 +        documents in it (called indexing) via JSON, XML, CSV or binary over HTTP. You query it via HTTP GET and receive JSON, XML, CSV or binary results.
        </p>
        <div class="down-arrow"><a data-scroll href="#anchor-1"><i class="fa fa-angle-down fa-2x red"></i></a></div>
      </div>
 @@ -26,7 +23,7 @@ Title: Features
        <div class="img"><img src="/solr/assets/images/Solr_Icons_advanced_full-text_search.svg"/></div>
        <div class="content">
          <h3>Advanced Full-Text Search Capabilities</h3>
 -        <p>BRIEF DESCRIPTION</p>
 +        <p>Powered by Lucene, Solr enables powerful matching capabilities including phrases, wildcards, joins, grouping and much more across any data type</p>
        </div>
      </div>
    </li>
 @@ -35,7 +32,7 @@ Title: Features
        <div class="img"><img src="/solr/assets/images/Solr_Icons_optimized_for_high_volume.svg"/></div>
        <div class="content">
          <h3>Optimized for High Volume Traffic</h3>
 -        <p>BRIEF DESCRIPTION</p>
 +        <p>Solr is proven at extremely large scales the world over</p>
        </div>
      </div>
    </li>
 @@ -44,7 +41,7 @@ Title: Features
        <div class="img"><img src="/solr/assets/images/Solr_Icons_standards_based_open_interfaces.svg"/></div>
        <div class="content">
          <h3>Standards Based Open Interfaces - XML, JSON and HTTP</h3>
 -        <p>BRIEF DESCRIPTION</p>
 +        <p>Solr uses the tools you use to make application building a snap</p>
        </div>
      </div>
    </li>
 @@ -53,7 +50,7 @@ Title: Features
        <div class="img"><img src="/solr/assets/images/Solr_Icons_comprehensive_html_admin.svg"/></div>
        <div class="content">
          <h3>Comprehensive HTML Administration Interfaces</h3>
 -        <p>BRIEF DESCRIPTION</p>
 +        <p>Solr ships with a built-in, responsive Admin interface to make it easy to control your Solr instances</p>
        </div>
      </div>
    </li>
 @@ -62,16 +59,16 @@ Title: Features
        <div class="img"><img src="/solr/assets/images/Solr_Icons_server_statistics_exposed.svg"/></div>
        <div class="content">
          <h3>Server statistics exposed over JMX for monitoring</h3>
 -        <p>BRIEF DESCRIPTION</p>
 +        <p>Need more insight into your instances? Solr publishes loads of metric data via JMX</p>
        </div>
      </div>
    </li>
    <li>
      <div class="box">
 -      <div class="img"><img src="/solr/assets/images/Solr_Icons_linearly_scalable,_auto_index.svg"/></div>
 +      <div class="img"><img src="/solr/assets/images/Solr_Icons_highy_scalable.svg"/></div>
        <div class="content">
 -        <h3>Linearly scalable, auto index replication, auto failover and recovery</h3>
 -        <p>BRIEF DESCRIPTION</p>
 +        <h3>Highly Scalable and Fault Tolerant</h3>
 +        <p>Built on the battle-tested Apache Zookeeper, Solr makes it easy to scale up and down.  Solr bakes in replication, distribution, rebalancing and fault tolerance out of the box.</p>
        </div>
      </div>
    </li>
 @@ -79,8 +76,8 @@ Title: Features
      <div class="box">
        <div class="img"><img src="/solr/assets/images/Solr_Icons_flexible_and_adaptable.svg"/></div>
        <div class="content">
 -        <h3>Flexible and Adaptable with XML configuration</h3>
 -        <p>BRIEF DESCRIPTION</p>
 +        <h3>Flexible and Adaptable with easy configuration</h3>
 +        <p>Solr is designed to adapt to your needs all while simplifying configuration</p>
        </div>
      </div>
    </li>
 @@ -89,7 +86,7 @@ Title: Features
      <div 

[jira] [Commented] (SOLR-6587) Misleading exception when creating collections in SolrCloud with bad configuration

2014-10-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178434#comment-14178434
 ] 

Jan Høydahl commented on SOLR-6587:
---

Candidate for 4.10.2?

 Misleading exception when creating collections in SolrCloud with bad 
 configuration
 --

 Key: SOLR-6587
 URL: https://issues.apache.org/jira/browse/SOLR-6587
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.1, 5.0, Trunk
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6587.patch


 I uploaded a configuration in bad shape to Zookeeper, then tried to create a 
 collection and I was getting: 
 {noformat}
 ERROR - 2014-10-03 16:48:25.712; org.apache.solr.core.CoreContainer; Error 
 creating core [tflobbe_collection1_shard2_replica2]: Could not load conf for 
 core tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not 
 support getConfigDir() - likely, what you are trying to do is not supported 
 in ZooKeeper mode
 org.apache.solr.common.SolrException: Could not load conf for core 
 tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not support 
 getConfigDir() - likely, what you are trying to do is not supported in 
 ZooKeeper mode
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:66)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
 at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
 ZkSolrResourceLoader does not support getConfigDir() - likely, what you are 
 trying to do is not 

[jira] [Commented] (LUCENE-6016) Stempel converts trailing 1 (and prior character) to ć

2014-10-21 Thread Jeff Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178441#comment-14178441
 ] 

Jeff Stein commented on LUCENE-6016:


Thanks for your response, Robert. I'm a new user running into the problem, not 
the person who opened the Elasticsearch plugin issue.

I disagree with your assessment. I'm working on an application with support for 
searching multiple languages, and the Polish stemmer is the only algorithmic 
stemmer that has this problem.

 Stempel converts trailing 1 (and prior character) to ć
 --

 Key: LUCENE-6016
 URL: https://issues.apache.org/jira/browse/LUCENE-6016
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 4.8.1
Reporter: Jeff Stein

 In the stempel analysis module, the StempelFilter TokenFilter converts a 
 trailing numeric one into a ć character, while also consuming the prior 
 character. This was also filed against the downstream 
 [elasticsearch-analysis-stempel 
 project|https://github.com/elasticsearch/elasticsearch-analysis-stempel/issues/31].
 I did not find any errors with other numbers in the trailing position.
 Example:
 || input || output ||
 | foo1 | foć |
 | foo11 | fooć |



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178443#comment-14178443
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1633385 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1633385 ]

LUCENE-5969: clean up constants

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
 LUCENE-5969_part2.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Unsubscribe

2014-10-21 Thread Erick Erickson
Please follow the instructions here:
https://wiki.apache.org/solr/Unsubscribing%20from%20mailing%20lists

Note you must use the exact same e-mail you registered with.

If you still have problems, let us know what they are.

Best,
Erick

On Tue, Oct 21, 2014 at 9:52 AM, Xavier Morera xav...@familiamorera.com wrote:



[jira] [Commented] (SOLR-6638) Create an off-heap implementation of Solr caches

2014-10-21 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178456#comment-14178456
 ] 

Noble Paul commented on SOLR-6638:
--

[~thetaphi], I was going through the Cassandra codebase and found them using 
sun.misc.Unsafe to do malloc/dealloc of memory. I will explore 
ByteBuffer.allocateDirect(). Is there any project using that?

[~ysee...@gmail.com], sure. I'll look forward to that.

 Create an off-heap implementation of Solr caches
 

 Key: SOLR-6638
 URL: https://issues.apache.org/jira/browse/SOLR-6638
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 There are various off-heap cache implementations; one such uses 
 sun.misc.Unsafe. The cache implementations are pluggable in Solr anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6638) Create an off-heap implementation of Solr caches

2014-10-21 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178459#comment-14178459
 ] 

Noble Paul commented on SOLR-6638:
--

[~teofili] Is that code relevant in this new age? 

 Create an off-heap implementation of Solr caches
 

 Key: SOLR-6638
 URL: https://issues.apache.org/jira/browse/SOLR-6638
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 There are various off-heap cache implementations; one such uses 
 sun.misc.Unsafe. The cache implementations are pluggable in Solr anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6638) Create an off-heap implementation of Solr caches

2014-10-21 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178467#comment-14178467
 ] 

Tommaso Teofili commented on SOLR-6638:
---

bq. Is that code relevant in this new age?

it would have to be updated to be used with Solr 4.x, but it shouldn't be too 
hard. If you plan to run some tests/benchmarks, I could update it so that you 
can try it (the interesting thing there is also that serialization would be 
pluggable, e.g. via Kryo or others).

 Create an off-heap implementation of Solr caches
 

 Key: SOLR-6638
 URL: https://issues.apache.org/jira/browse/SOLR-6638
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 There are various off-heap cache implementations; one such uses 
 sun.misc.Unsafe. The cache implementations are pluggable in Solr anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6633) let /update/json/docs store the source json as well

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178474#comment-14178474
 ] 

ASF subversion and git services commented on SOLR-6633:
---

Commit 1633390 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1633390 ]

SOLR-6633

 let /update/json/docs store the source json as well
 ---

 Key: SOLR-6633
 URL: https://issues.apache.org/jira/browse/SOLR-6633
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Attachments: SOLR-6633.patch, SOLR-6633.patch


 It is a common requirement to store the entire JSON as a field in Solr. 
 We can have an extra param srcField=field_name to specify the field name.
 The /update/json/docs handler is only useful when all the JSON fields are 
 predefined or in schemaless mode.
 The better option would be to store the content in a store-only field and 
 index the data in another field in other modes.
 The relevant section in solrconfig.xml:
 {code:xml}
 <initParams path="/update/json/docs">
   <lst name="defaults">
     <!-- this ensures that the entire JSON doc will be stored verbatim into
          one field -->
     <str name="srcField">_src</str>
     <!-- This means the uniqueKeyField will be extracted from the fields and
          all fields go into the 'df' field. In this config df is already
          configured to be 'text' -->
     <str name="mapUniqueKeyOnly">true</str>
   </lst>
 </initParams>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: svn commit: r1633390 - in /lucene/dev/trunk/solr: core/src/java/org/apache/solr/handler/loader/ core/src/java/org/apache/solr/util/xslt/ core/src/test/org/apache/solr/handler/ example/solr/collect

2014-10-21 Thread Uwe Schindler
Hi,

JSONLoader imports java.lang.reflect.Field, but it is unused in the code. I was 
afraid that this one uses reflection, so I stumbled on it. Could you remove the 
bogus import?

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: no...@apache.org [mailto:no...@apache.org]
 Sent: Tuesday, October 21, 2014 4:53 PM
 To: comm...@lucene.apache.org
 Subject: svn commit: r1633390 - in /lucene/dev/trunk/solr:
 core/src/java/org/apache/solr/handler/loader/
 core/src/java/org/apache/solr/util/xslt/
 core/src/test/org/apache/solr/handler/ example/solr/collection1/conf/
 solrj/src/test/org/apache/solr/client/solrj/e...
 
 Author: noble
 Date: Tue Oct 21 14:53:04 2014
 New Revision: 1633390
 
 URL: http://svn.apache.org/r1633390
 Log:
 SOLR-6633
 
 Added:
 lucene/dev/trunk/solr/core/src/java/org/apache/solr/util/xslt/RecordingJSONParser.java   (with props)
 Modified:
 lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java
 lucene/dev/trunk/solr/core/src/test/org/apache/solr/handler/JsonLoaderTest.java
 lucene/dev/trunk/solr/example/solr/collection1/conf/schema.xml
 lucene/dev/trunk/solr/example/solr/collection1/conf/solrconfig.xml
 lucene/dev/trunk/solr/solrj/src/test/org/apache/solr/client/solrj/embedded/SolrExampleJettyTest.java
 
 Modified: lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java
 URL: http://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java?rev=1633390&r1=1633389&r2=1633390&view=diff
 ==
 
 --- lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java (original)
 +++ lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java Tue Oct 21 14:53:04 2014
 @@ -16,21 +16,34 @@ package org.apache.solr.handler.loader;
   * limitations under the License.
   */
 
 +import java.io.FilterReader;
  import java.io.IOException;
  import java.io.Reader;
  import java.io.StringReader;
 +import java.lang.reflect.Field;
 +import java.nio.CharBuffer;
  import java.util.ArrayList;
  import java.util.Arrays;
  import java.util.HashMap;
  import java.util.Iterator;
 +import java.util.LinkedHashMap;
  import java.util.List;
 +import java.util.Locale;
  import java.util.Map;
 +import java.util.UUID;
  import java.util.concurrent.atomic.AtomicBoolean;
 +import java.util.concurrent.atomic.AtomicReference;
 
 +import com.ctc.wstx.stax.FilteredStreamReader;
  import org.apache.commons.io.IOUtils;
 +import org.apache.commons.io.input.TeeInputStream;
 +import org.apache.solr.common.params.CommonParams;
  import org.apache.solr.common.params.SolrParams;
  import org.apache.solr.common.params.UpdateParams;
  import org.apache.solr.common.util.JsonRecordReader;
 +import org.apache.solr.schema.SchemaField;
 +import org.apache.solr.util.xslt.RecordingJSONParser;
 +import org.noggit.CharArr;
  import org.noggit.JSONParser;
  import org.noggit.ObjectBuilder;
  import org.apache.solr.common.SolrException;
 @@ -50,50 +63,49 @@ import org.slf4j.Logger;  import
 org.slf4j.LoggerFactory;
 
 
 -
  /**
   * @since solr 4.0
   */
  public class JsonLoader extends ContentStreamLoader {
 -  final static Logger log = LoggerFactory.getLogger( JsonLoader.class );
 +  final static Logger log = LoggerFactory.getLogger(JsonLoader.class);
private static final String CHILD_DOC_KEY = "_childDocuments_";
 
@Override
public String getDefaultWT() {
  return "json";
}
 -
 +
@Override
public void load(SolrQueryRequest req, SolrQueryResponse rsp,
 -  ContentStream stream, UpdateRequestProcessor processor) throws
 Exception {
 -new SingleThreadedJsonLoader(req,rsp,processor).load(req, rsp, stream,
 processor);
 +   ContentStream stream, UpdateRequestProcessor processor)
 throws Exception {
 +new SingleThreadedJsonLoader(req, rsp, processor).load(req, rsp,
 + stream, processor);
}
 
 -
 +
static class SingleThreadedJsonLoader extends ContentStreamLoader {
 -
 +
  protected final UpdateRequestProcessor processor;
  protected final SolrQueryRequest req;
  protected SolrQueryResponse rsp;
  protected JSONParser parser;
  protected final int commitWithin;
  protected final boolean overwrite;
 -
 +
  public SingleThreadedJsonLoader(SolrQueryRequest req,
 SolrQueryResponse rsp, UpdateRequestProcessor processor) {
this.processor = processor;
this.req = req;
this.rsp = rsp;
 
commitWithin =
 req.getParams().getInt(UpdateParams.COMMIT_WITHIN, -1);
 -  overwrite = req.getParams().getBool(UpdateParams.OVERWRITE, true);
 +  overwrite = req.getParams().getBool(UpdateParams.OVERWRITE,
 + true);
  }
 -
 +
  @Override
 -public void load(SolrQueryRequest req,
 -

[jira] [Commented] (SOLR-6633) let /update/json/docs store the source json as well

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178482#comment-14178482
 ] 

ASF subversion and git services commented on SOLR-6633:
---

Commit 1633391 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1633391 ]

SOLR-6633

 let /update/json/docs store the source json as well
 ---

 Key: SOLR-6633
 URL: https://issues.apache.org/jira/browse/SOLR-6633
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Attachments: SOLR-6633.patch, SOLR-6633.patch


 It is a common requirement to store the entire JSON as a field in Solr. 
 We can have an extra param srcField=field_name to specify the field name.
 The /update/json/docs handler is only useful when all the JSON fields are predefined 
 or in schemaless mode.
 The better option would be to store the content in a store-only field and 
 index the data in another field in the other modes.
 The relevant section in solrconfig.xml:
 {code:xml}
 <initParams path="/update/json/docs">
   <lst name="defaults">
     <!-- this ensures that the entire json doc will be stored verbatim into one field -->
     <str name="srcField">_src</str>
     <!-- This means the uniqueKeyField will be extracted from the fields and
          all fields go into the 'df' field. In this config df is already configured to be 'text' -->
     <str name="mapUniqueKeyOnly">true</str>
   </lst>
 </initParams>
 {code}
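
For illustration, here is a minimal SolrJ sketch of posting a raw JSON document to this handler once the defaults above are in place. The endpoint and param names come from this issue; the server URL, document content, and class name are made up for the example.

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.ContentStreamBase;

public class JsonDocsExample {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

    // any JSON document; with srcField=_src the verbatim text is kept in the _src field
    String json = "{\"id\":\"1\",\"title\":\"hello\"}";

    ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/json/docs");
    up.addContentStream(new ContentStreamBase.StringStream(json));
    up.setParam("srcField", "_src");  // redundant if already set as an initParams default
    up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
    server.request(up);
    server.shutdown();
  }
}
{code}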



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 11496 - Still Failing!

2014-10-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11496/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.core.TestNonNRTOpen.testReaderIsNotNRT

Error Message:
SOLR-5815? : wrong maxDoc: core=org.apache.solr.core.SolrCore@33014107 
searcher=Searcher@4623d493[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_4(6.0.0):C1)
 Uninverting(_5(6.0.0):C1)))} expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: SOLR-5815? : wrong maxDoc: 
core=org.apache.solr.core.SolrCore@33014107 
searcher=Searcher@4623d493[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_4(6.0.0):C1)
 Uninverting(_5(6.0.0):C1)))} expected:<3> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([1E0FE1A0BB85BF00:AB89802704440DF4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.core.TestNonNRTOpen.assertNotNRT(TestNonNRTOpen.java:142)
at 
org.apache.solr.core.TestNonNRTOpen.testReaderIsNotNRT(TestNonNRTOpen.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Commented] (SOLR-6633) let /update/json/docs store the source json as well

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178485#comment-14178485
 ] 

ASF subversion and git services commented on SOLR-6633:
---

Commit 1633392 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1633392 ]

SOLR-6633

 let /update/json/docs store the source json as well
 ---

 Key: SOLR-6633
 URL: https://issues.apache.org/jira/browse/SOLR-6633
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Attachments: SOLR-6633.patch, SOLR-6633.patch


 It is a common requirement to store the entire JSON as a field in Solr. 
 We can have an extra param srcField=field_name to specify the field name.
 The /update/json/docs handler is only useful when all the JSON fields are predefined 
 or in schemaless mode.
 The better option would be to store the content in a store-only field and 
 index the data in another field in the other modes.
 The relevant section in solrconfig.xml:
 {code:xml}
 <initParams path="/update/json/docs">
   <lst name="defaults">
     <!-- this ensures that the entire json doc will be stored verbatim into one field -->
     <str name="srcField">_src</str>
     <!-- This means the uniqueKeyField will be extracted from the fields and
          all fields go into the 'df' field. In this config df is already configured to be 'text' -->
     <str name="mapUniqueKeyOnly">true</str>
   </lst>
 </initParams>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6573) Query elevation fails when localParams are used

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178502#comment-14178502
 ] 

ASF subversion and git services commented on SOLR-6573:
---

Commit 1633395 from jan...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1633395 ]

SOLR-6573: Updated after comments from Hoss, now also supports value parsed 
from localParam v=, using rb.getQparser().getLocalParams()
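
For readers following along, here is a rough sketch of the pattern that commit message describes, as it might look inside a component's prepare() method. This is simplified, not the actual patch:

{code:java}
import java.io.IOException;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QueryParsing;

// sketch: prefer the local-param v= value over the raw q= string when present
public abstract class LocalParamsAwareComponent extends SearchComponent {
  @Override
  public void prepare(ResponseBuilder rb) throws IOException {
    String queryText = rb.getQueryString();
    QParser parser = rb.getQparser();
    SolrParams local = (parser == null) ? null : parser.getLocalParams();
    if (local != null && local.get(QueryParsing.V) != null) {
      queryText = local.get(QueryParsing.V);  // query was passed as {!... v=...}
    }
    // ... match queryText against elevation rules instead of the raw q= ...
  }
}
{code}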

 Query elevation fails when localParams are used
 ---

 Key: SOLR-6573
 URL: https://issues.apache.org/jira/browse/SOLR-6573
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.1
Reporter: Radek Urbas
Assignee: Jan Høydahl
 Fix For: 5.0

 Attachments: SOLR-6573.patch, SOLR-6573.patch


 Elevation does not work when localParams are specified.
 In the example collection1 shipped with Solr, a query like this one 
 {code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code}
 properly returns elevated documents on top.
 If localParams are specified, e.g. {!q.op=AND}, in a query like 
 {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code}
 documents are not elevated anymore.
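
For reference, the same two queries expressed through SolrJ. This is a sketch only; the handler path and field list are taken from the URLs above, and the server URL is assumed:

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class ElevateExample {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

    SolrQuery plain = new SolrQuery("ipod");                // elevated docs come back on top
    plain.setRequestHandler("/elevate");
    plain.setFields("id", "title", "[elevated]");
    System.out.println(server.query(plain).getResults());

    SolrQuery withLocal = new SolrQuery("{!q.op=AND}ipod"); // before the fix: no elevation
    withLocal.setRequestHandler("/elevate");
    withLocal.setFields("id", "title", "[elevated]");
    System.out.println(server.query(withLocal).getResults());

    server.shutdown();
  }
}
{code}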



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6633) let /update/json/docs store the source json as well

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178501#comment-14178501
 ] 

ASF subversion and git services commented on SOLR-6633:
---

Commit 1633394 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1633394 ]

SOLR-6633 changed package

 let /update/json/docs store the source json as well
 ---

 Key: SOLR-6633
 URL: https://issues.apache.org/jira/browse/SOLR-6633
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Attachments: SOLR-6633.patch, SOLR-6633.patch


 It is a common requirement to store the entire JSON as a field in Solr. 
 We can have an extra param srcField=field_name to specify the field name.
 The /update/json/docs handler is only useful when all the JSON fields are predefined 
 or in schemaless mode.
 The better option would be to store the content in a store-only field and 
 index the data in another field in the other modes.
 The relevant section in solrconfig.xml:
 {code:xml}
 <initParams path="/update/json/docs">
   <lst name="defaults">
     <!-- this ensures that the entire json doc will be stored verbatim into one field -->
     <str name="srcField">_src</str>
     <!-- This means the uniqueKeyField will be extracted from the fields and
          all fields go into the 'df' field. In this config df is already configured to be 'text' -->
     <str name="mapUniqueKeyOnly">true</str>
   </lst>
 </initParams>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6633) let /update/json/docs store the source json as well

2014-10-21 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-6633.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

 let /update/json/docs store the source json as well
 ---

 Key: SOLR-6633
 URL: https://issues.apache.org/jira/browse/SOLR-6633
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Fix For: 5.0, Trunk

 Attachments: SOLR-6633.patch, SOLR-6633.patch


 It is a common requirement to store the entire JSON as a field in Solr. 
 We can have an extra param srcField=field_name to specify the field name.
 The /update/json/docs handler is only useful when all the JSON fields are predefined 
 or in schemaless mode.
 The better option would be to store the content in a store-only field and 
 index the data in another field in the other modes.
 The relevant section in solrconfig.xml:
 {code:xml}
 <initParams path="/update/json/docs">
   <lst name="defaults">
     <!-- this ensures that the entire json doc will be stored verbatim into one field -->
     <str name="srcField">_src</str>
     <!-- This means the uniqueKeyField will be extracted from the fields and
          all fields go into the 'df' field. In this config df is already configured to be 'text' -->
     <str name="mapUniqueKeyOnly">true</str>
   </lst>
 </initParams>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6633) let /update/json/docs store the source json as well

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178507#comment-14178507
 ] 

ASF subversion and git services commented on SOLR-6633:
---

Commit 1633398 from [~noble.paul] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1633398 ]

SOLR-6633 changed package

 let /update/json/docs store the source json as well
 ---

 Key: SOLR-6633
 URL: https://issues.apache.org/jira/browse/SOLR-6633
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Attachments: SOLR-6633.patch, SOLR-6633.patch


 It is a common requirement to store the entire JSON as a field in Solr. 
 We can have an extra param srcField=field_name to specify the field name.
 The /update/json/docs handler is only useful when all the JSON fields are predefined 
 or in schemaless mode.
 The better option would be to store the content in a store-only field and 
 index the data in another field in the other modes.
 The relevant section in solrconfig.xml:
 {code:xml}
 <initParams path="/update/json/docs">
   <lst name="defaults">
     <!-- this ensures that the entire json doc will be stored verbatim into one field -->
     <str name="srcField">_src</str>
     <!-- This means the uniqueKeyField will be extracted from the fields and
          all fields go into the 'df' field. In this config df is already configured to be 'text' -->
     <str name="mapUniqueKeyOnly">true</str>
   </lst>
 </initParams>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6633) let /update/json/docs store the source json as well

2014-10-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178536#comment-14178536
 ] 

Steve Rowe commented on SOLR-6633:
--

It should be documented that atomic updates over documents indexed using this 
feature will cause the source field to become out of sync with the rest of the 
doc.
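
To make the caveat concrete, here is a hedged SolrJ sketch of the kind of atomic update that triggers it. It rewrites only the title field, so a previously stored _src value would still hold the old JSON; the field names and values are illustrative:

{code:java}
import java.util.Collections;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateExample {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    // atomic "set" rewrites title in the index, but nothing re-generates _src,
    // so the stored source JSON still describes the pre-update document
    doc.addField("title", Collections.singletonMap("set", "new title"));
    server.add(doc);
    server.commit();
    server.shutdown();
  }
}
{code}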

 let /update/json/docs store the source json as well
 ---

 Key: SOLR-6633
 URL: https://issues.apache.org/jira/browse/SOLR-6633
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Fix For: 5.0, Trunk

 Attachments: SOLR-6633.patch, SOLR-6633.patch


 It is a common requirement to store the entire JSON as a field in Solr. 
 We can have an extra param srcField=field_name to specify the field name.
 The /update/json/docs handler is only useful when all the JSON fields are predefined 
 or in schemaless mode.
 The better option would be to store the content in a store-only field and 
 index the data in another field in the other modes.
 The relevant section in solrconfig.xml:
 {code:xml}
 <initParams path="/update/json/docs">
   <lst name="defaults">
     <!-- this ensures that the entire json doc will be stored verbatim into one field -->
     <str name="srcField">_src</str>
     <!-- This means the uniqueKeyField will be extracted from the fields and
          all fields go into the 'df' field. In this config df is already configured to be 'text' -->
     <str name="mapUniqueKeyOnly">true</str>
   </lst>
 </initParams>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6573) Query elevation fails when localParams are used

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178543#comment-14178543
 ] 

ASF subversion and git services commented on SOLR-6573:
---

Commit 1633402 from jan...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1633402 ]

SOLR-6573: Updated after comments from Hoss, now also supports value parsed 
from localParam v=, using rb.getQparser().getLocalParams()

 Query elevation fails when localParams are used
 ---

 Key: SOLR-6573
 URL: https://issues.apache.org/jira/browse/SOLR-6573
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.1
Reporter: Radek Urbas
Assignee: Jan Høydahl
 Fix For: 5.0

 Attachments: SOLR-6573.patch, SOLR-6573.patch


 Elevation does not work when localParams are specified.
 In the example collection1 shipped with Solr, a query like this one 
 {code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code}
 properly returns elevated documents on top.
 If localParams are specified, e.g. {!q.op=AND}, in a query like 
 {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code}
 documents are not elevated anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6633) let /update/json/docs store the source json as well

2014-10-21 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178548#comment-14178548
 ] 

Noble Paul commented on SOLR-6633:
--

sure [~sar...@syr.edu]

 let /update/json/docs store the source json as well
 ---

 Key: SOLR-6633
 URL: https://issues.apache.org/jira/browse/SOLR-6633
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: EaseOfUse
 Fix For: 5.0, Trunk

 Attachments: SOLR-6633.patch, SOLR-6633.patch


 It is a common requirement to store the entire JSON as a field in Solr. 
 We can have an extra param srcField=field_name to specify the field name.
 The /update/json/docs handler is only useful when all the JSON fields are predefined 
 or in schemaless mode.
 The better option would be to store the content in a store-only field and 
 index the data in another field in the other modes.
 The relevant section in solrconfig.xml:
 {code:xml}
 <initParams path="/update/json/docs">
   <lst name="defaults">
     <!-- this ensures that the entire json doc will be stored verbatim into one field -->
     <str name="srcField">_src</str>
     <!-- This means the uniqueKeyField will be extracted from the fields and
          all fields go into the 'df' field. In this config df is already configured to be 'text' -->
     <str name="mapUniqueKeyOnly">true</str>
   </lst>
 </initParams>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6266) Couchbase plug-in for Solr

2014-10-21 Thread Karol Abramczyk (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178553#comment-14178553
 ] 

Karol Abramczyk commented on SOLR-6266:
---

I made this plugin's repository public. It's here: 
https://github.com/LucidWorks/solr-couchbase-plugin. I think it would be much 
more convenient to provide new features via pull requests.

 Couchbase plug-in for Solr
 --

 Key: SOLR-6266
 URL: https://issues.apache.org/jira/browse/SOLR-6266
 Project: Solr
  Issue Type: New Feature
Reporter: Varun
Assignee: Joel Bernstein
 Attachments: solr-couchbase-plugin-0.0.3-SNAPSHOT.tar.gz, 
 solr-couchbase-plugin-0.0.5-SNAPSHOT.tar.gz, 
 solr-couchbase-plugin-0.0.5.1-SNAPSHOT.tar.gz, solr-couchbase-plugin.tar.gz, 
 solr-couchbase-plugin.tar.gz


 It would be great if users could connect Couchbase and Solr so that updates 
 to Couchbase can automatically flow to Solr. Couchbase provides some very 
 nice APIs which allow applications to mimic the behavior of a Couchbase 
 server so that they can receive updates via Couchbase's normal cross data 
 center replication (XDCR).
 One possible design for this is to create a CouchbaseLoader that extends 
 ContentStreamLoader. This new loader would embed the Couchbase APIs that 
 listen for incoming updates from Couchbase, then marshal the Couchbase 
 updates into the normal Solr update process. 
 Instead of marshaling Couchbase updates into the normal Solr update process, 
 we could also embed a SolrJ client to relay the requests through the HTTP 
 interfaces. This may be necessary if we have to handle mapping Couchbase 
 buckets to Solr collections on the Solr side. 
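
A purely illustrative skeleton of the first design described above; the class name and the comments are assumptions, while the load() signature matches ContentStreamLoader:

{code:java}
import org.apache.solr.common.util.ContentStream;
import org.apache.solr.handler.loader.ContentStreamLoader;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// hypothetical skeleton: receive XDCR traffic and marshal it into Solr's update chain
public class CouchbaseLoader extends ContentStreamLoader {
  @Override
  public void load(SolrQueryRequest req, SolrQueryResponse rsp,
                   ContentStream stream, UpdateRequestProcessor processor) throws Exception {
    // 1. register this node with Couchbase as an XDCR replication target
    // 2. for each incoming mutation, build the corresponding update command
    // 3. hand the command to 'processor', like any other update
  }
}
{code}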



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-10-21 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5441:
-
Attachment: LUCENE-5441.patch

I think this is an important change so I tried to iterate on Shai and Uwe's 
work. A few things have changed since the last patch: OpenBitSet was removed and 
FixedBitSet got a new, thinner brother, SparseFixedBitSet. Here is a summary of what 
this new patch changes:
 - FixedBitSet renamed to IntBitSet
 - SparseFixedBitSet renamed to SparseIntBitSet
 - IntBitSet and SparseIntBitSet do not extend DocIdSet anymore; you need to 
wrap the bit set with (Sparse)IntBitDocIdSet
 - IntBitDocIdSet and SparseIntBitDocIdSet require the {{cost}} to be provided 
explicitly, so that you can use the actual set cardinality if you already know 
it, or use cardinality() if it makes sense. This should help make better 
decisions when there are bitsets involved.

The major difference compared to the previous patch is that the 
or/and/andNot/xor methods are still on FixedBitSet. The reasoning here is that 
by having IntBitDocIdSet not expose mutators, it makes sense to cache the 
cost. I think these mutator methods would make more sense on something like 
oal.util.DocIdSetBuilder.
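
To illustrate the explicit-cost idea on the existing 4.x API (a sketch only; the class name is a hypothetical stand-in, not the patch):

{code:java}
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.FixedBitSet;

// hypothetical wrapper: the cost is supplied once, instead of being re-derived
// lazily (and often badly) by every consumer of the set
public final class ExplicitCostBitDocIdSet extends DocIdSet {
  private final FixedBitSet bits;
  private final long cost;

  public ExplicitCostBitDocIdSet(FixedBitSet bits, long cost) {
    this.bits = bits;
    this.cost = cost;  // e.g. the cardinality, if the caller already knows it
  }

  @Override
  public DocIdSetIterator iterator() {
    return bits.iterator();
  }

  @Override
  public Bits bits() {
    return bits;
  }

  public long cost() {
    return cost;
  }
}
{code}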

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: Trunk

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch, 
 LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" in the code. So lots 
 of Filters return just FixedBitSet, because DocIdSet is the superclass (ideally 
 interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 by FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, just because they were always returning BitSets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new Iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on or check whether the implementation behind it is a FixedBitSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-10-21 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5441:
-
Attachment: (was: LUCENE-5441.patch)

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: Trunk

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch, 
 LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" in the code. So lots 
 of Filters return just FixedBitSet, because DocIdSet is the superclass (ideally 
 interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 by FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, just because they were always returning BitSets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new Iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on or check whether the implementation behind it is a FixedBitSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-10-21 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5441:
-
Attachment: LUCENE-5441.patch

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: Trunk

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch, 
 LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" in the code. So lots 
 of Filters return just FixedBitSet, because DocIdSet is the superclass (ideally 
 interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 by FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, just because they were always returning BitSets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new Iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on or check whether the implementation behind it is a FixedBitSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178597#comment-14178597
 ] 

ASF subversion and git services commented on SOLR-6058:
---

Commit 1633413 from [~gsingers] in branch 'cms/branches/solr_6058'
[ https://svn.apache.org/r1633413 ]

SOLR-6058: small tweak to feature language

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
 Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
 Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
 Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6587) Misleading exception when creating collections in SolrCloud with bad configuration

2014-10-21 Thread Tomás Fernández Löbbe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178600#comment-14178600
 ] 

Tomás Fernández Löbbe commented on SOLR-6587:
-

Yes, I'll merge it today

 Misleading exception when creating collections in SolrCloud with bad 
 configuration
 --

 Key: SOLR-6587
 URL: https://issues.apache.org/jira/browse/SOLR-6587
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.1, 5.0, Trunk
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6587.patch


 I uploaded a configuration in bad shape to Zookeeper, then tried to create a 
 collection and I was getting: 
 {noformat}
 ERROR - 2014-10-03 16:48:25.712; org.apache.solr.core.CoreContainer; Error 
 creating core [tflobbe_collection1_shard2_replica2]: Could not load conf for 
 core tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not 
 support getConfigDir() - likely, what you are trying to do is not supported 
 in ZooKeeper mode
 org.apache.solr.common.SolrException: Could not load conf for core 
 tflobbe_collection1_shard2_replica2: ZkSolrResourceLoader does not support 
 getConfigDir() - likely, what you are trying to do is not supported in 
 ZooKeeper mode
 at 
 org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:66)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:489)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:729)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:258)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
 at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
 at 
 org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
 at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
 at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
 ZkSolrResourceLoader does not support getConfigDir() - likely, what you are 
 

[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-21 Thread Francis Lukesh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178631#comment-14178631
 ] 

Francis Lukesh commented on SOLR-6058:
--

Per [this 
comment|https://issues.apache.org/jira/browse/SOLR-6058?focusedCommentId=14178344&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14178344],
 the site running on [http://lucene.lukesh.com/solr] now updates and builds 
from the SOLR-6058 branch every minute.

 Solr needs a new website
 

 Key: SOLR-6058
 URL: https://issues.apache.org/jira/browse/SOLR-6058
 Project: Solr
  Issue Type: Task
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
 Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
 Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
 Solr_Styleguide.pdf


 Solr needs a new website:  better organization of content, less verbose, more 
 pleasing graphics, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-10-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178639#comment-14178639
 ] 

Yonik Seeley commented on LUCENE-5441:
--

We should really try to pick good names and then stick to them!

bq. FixedBitSet renamed to IntBitSet

The IntBitSet name may actually be more confusing, and thus likely to be renamed 
yet again in the future. It suggests that it's a bitset implementation backed 
by ints... but the implementation actually uses longs.

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: Trunk

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch, 
 LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" in the code. So lots 
 of Filters return just FixedBitSet, because DocIdSet is the superclass (ideally 
 interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 by FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, just because they were always returning BitSets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new Iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on or check whether the implementation behind it is a FixedBitSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178653#comment-14178653
 ] 

Robert Muir commented on LUCENE-5441:
-

{quote}
require the cost to be provided explicitly, so that you can use the actual set 
cardinality if you already know it, or use cardinality() if it makes sense. 
This should help make better decisions when there are bitsets involved.
{quote}

+1 to fixing this. I would even make a default/sugar ctor that just forwards 
cardinality(). 

Currently, people are making the wrong tradeoffs: they are so afraid to call 
cardinality() up front a single time that they instead pay the price of bad 
execution over and over again, e.g. for cached filters.
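
In code, the sugar constructor being suggested would amount to this (hypothetical, continuing the sketch from the earlier comment):

{code:java}
// hypothetical sugar ctor: forwards to the explicit-cost ctor,
// paying cardinality() exactly once up front
public ExplicitCostBitDocIdSet(FixedBitSet bits) {
  this(bits, bits.cardinality());
}
{code}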


 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: Trunk

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch, 
 LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" in the code. So lots 
 of Filters return just FixedBitSet, because DocIdSet is the superclass (ideally 
 interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 by FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, just because they were always returning BitSets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new Iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on or check whether the implementation behind it is a FixedBitSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6573) Query elevation fails when localParams are used

2014-10-21 Thread Jan Høydahl (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-6573:
--
Fix Version/s: Trunk
   4.10.2

 Query elevation fails when localParams are used
 ---

 Key: SOLR-6573
 URL: https://issues.apache.org/jira/browse/SOLR-6573
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.1
Reporter: Radek Urbas
Assignee: Jan Høydahl
 Fix For: 4.10.2, 5.0, Trunk

 Attachments: SOLR-6573.patch, SOLR-6573.patch


 Elevation does not work when localParams are specified.
 In example collection1 shipped with Solr query like this one 
 {code}http://localhost:8983/solr/collection1/elevate?q=ipodfl=id,title,[elevated]{code}
  properly returns elevated documents on top.
 If localParams are specified e.g. {!q.op=AND} in query like 
 {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipodfl=id,title,[elevated]{code}
 documents are not elevated anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178771#comment-14178771
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1633429 from [~rjernst] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1633429 ]

LUCENE-5969: Copy block tree to backward codecs for 4.0-4.10, remove 
conditionals in 50 version, add getStats() TermsEnum API to remove need to 
expose Stats impl

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
 LUCENE-5969_part2.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6638) Create an off-heap implementation of Solr caches

2014-10-21 Thread Shinichiro Abe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178784#comment-14178784
 ] 

Shinichiro Abe commented on SOLR-6638:
--

FYI, Apache DirectMemory uses ByteBuffer.allocateDirect(). A year ago, I tried 
to upgrade it to Solr 4.4; a very small patch is on DIRECTMEMORY-134.
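
For context, ByteBuffer.allocateDirect() is the JDK's built-in route to memory outside the Java heap; a trivial sketch:

{code:java}
import java.nio.ByteBuffer;

public class OffHeapExample {
  public static void main(String[] args) {
    // 64 MB allocated outside the Java heap: not moved or scanned by the GC
    ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
    buf.putLong(0, 42L);                // absolute write at byte offset 0
    System.out.println(buf.getLong(0)); // prints 42
  }
}
{code}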

 Create an off-heap implementation of Solr caches
 

 Key: SOLR-6638
 URL: https://issues.apache.org/jira/browse/SOLR-6638
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul

 There are various off-heap implementations of caches. One 
 such uses sun.misc.Unsafe. The cache implementations are pluggable in 
 Solr anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 663 - Still Failing

2014-10-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/663/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178930#comment-14178930
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1633441 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1633441 ]

LUCENE-5969: move blocktree stats - stats api

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
 LUCENE-5969_part2.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178934#comment-14178934
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1633442 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1633442 ]

LUCENE-5969: add missing @Override

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
 LUCENE-5969_part2.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-10-21 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5441:
-
Attachment: LUCENE-5441.patch

bq. The IntBitSet name actually may be more confusing

I tried to fold in feedback from previous comments, but since it's 
controversial, I'll remove this change from the patch.

bq.  I would even make a default/sugar ctor that just forwards cardinality(). 

Done in the new patch.

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: Trunk

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch, 
 LUCENE-5441.patch, LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the stupid "filters can return a BitSet directly" in the code. So lots 
 of Filters return just FixedBitSet, because DocIdSet is the superclass (ideally 
 interface) of FixedBitSet.
 We should decouple that and *not* implement that abstract interface directly 
 by FixedBitSet. This leads to bugs, e.g. in BlockJoin, because it used Filters 
 in a wrong way, just because they were always returning BitSets. But some 
 filters actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that's a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new Iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on or check whether the implementation behind it is a FixedBitSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5441) Decouple DocIdSet from OpenBitSet and FixedBitSet

2014-10-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178983#comment-14178983
 ] 

Yonik Seeley commented on LUCENE-5441:
--

bq.  The IntBitSet name actually may be more confusing
bq. I tried to fold in feedback from previous comments, but since it's 
controversial, I'll remove this change from the patch.

Hmmm, I had missed the earlier comments about that.  Perhaps a poll of a larger 
audience might be in order?

 Decouple DocIdSet from OpenBitSet and FixedBitSet
 -

 Key: LUCENE-5441
 URL: https://issues.apache.org/jira/browse/LUCENE-5441
 Project: Lucene - Core
  Issue Type: Task
  Components: core/other
Affects Versions: 4.6.1
Reporter: Uwe Schindler
 Fix For: Trunk

 Attachments: LUCENE-5441.patch, LUCENE-5441.patch, LUCENE-5441.patch, 
 LUCENE-5441.patch, LUCENE-5441.patch


 Back from the times of Lucene 2.4, when DocIdSet was introduced, we somehow 
 kept the "stupid filters can return a BitSet directly" idea in the code. So 
 lots of Filters return just FixedBitSet, because DocIdSet is the superclass 
 (ideally it would be an interface) of FixedBitSet.
 We should decouple that and *not* have FixedBitSet implement that abstract 
 class directly. This leads to bugs, e.g. in BlockJoin, which used Filters in a 
 wrong way just because they were always returning bitsets -- but some filters 
 actually don't do this.
 I propose to let FixedBitSet (only in trunk, because that is a major backwards 
 break) just have a method {{asDocIdSet()}} that returns an anonymous 
 instance of DocIdSet: bits() returns the FixedBitSet itself, iterator() 
 returns a new Iterator (like it always did), and the cost/cacheable methods 
 return static values.
 Filters in trunk would need to be changed like this:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits;
 {code}
 gets:
 {code:java}
 FixedBitSet bits = 
 ...
 return bits.asDocIdSet();
 {code}
 As this method returns an anonymous DocIdSet, calling code can no longer 
 rely on (or check) whether the implementation behind it is a FixedBitSet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6573) Query elevation fails when localParams are used

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178987#comment-14178987
 ] 

ASF subversion and git services commented on SOLR-6573:
---

Commit 1633446 from jan...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1633446 ]

SOLR-6573: QueryElevationComponent now works with localParams in the query - 
backport from trunk

 Query elevation fails when localParams are used
 ---

 Key: SOLR-6573
 URL: https://issues.apache.org/jira/browse/SOLR-6573
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.1
Reporter: Radek Urbas
Assignee: Jan Høydahl
 Fix For: 4.10.2, 5.0, Trunk

 Attachments: SOLR-6573.patch, SOLR-6573.patch


 Elevation does not work when localParams are specified.
 In the example collection1 shipped with Solr, a query like this one 
 {code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code}
 properly returns elevated documents on top.
 If localParams are specified, e.g. {!q.op=AND}, in a query like 
 {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code}
 documents are not elevated anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6573) Query elevation fails when localParams are used

2014-10-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-6573.
---
Resolution: Fixed

 Query elevation fails when localParams are used
 ---

 Key: SOLR-6573
 URL: https://issues.apache.org/jira/browse/SOLR-6573
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.1
Reporter: Radek Urbas
Assignee: Jan Høydahl
 Fix For: 4.10.2, 5.0, Trunk

 Attachments: SOLR-6573.patch, SOLR-6573.patch


 Elevation does not work when localParams are specified.
 In the example collection1 shipped with Solr, a query like this one 
 {code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code}
 properly returns elevated documents on top.
 If localParams are specified, e.g. {!q.op=AND}, in a query like 
 {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code}
 documents are not elevated anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6639) Failed CREATE leaves behind on-disk cruft that DELETE does not remove

2014-10-21 Thread Jessica Cheng Mallet (JIRA)
Jessica Cheng Mallet created SOLR-6639:
--

 Summary: Failed CREATE leaves behind on-disk cruft that DELETE 
does not remove
 Key: SOLR-6639
 URL: https://issues.apache.org/jira/browse/SOLR-6639
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Jessica Cheng Mallet


When a CREATE API call fails (due to bad config etc.), it leaves behind on-disk 
core directories that a DELETE API call does not delete, which causes future 
CREATE calls with the same collection name to fail. The only way to get around 
this is to go onto the host and manually remove the orphaned directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179033#comment-14179033
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1633451 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1633451 ]

LUCENE-5969: fail precommit on typo'ed TODO

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
 LUCENE-5969_part2.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.
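For the first bullet, a minimal illustrative sketch of the "three ints" idea 
(plain java.io types here, purely for illustration; the actual change lives in 
the .si format code):
{code:java}
import java.io.*;

class VersionIO {
  // write the version as three ints instead of a string...
  static void writeVersion(DataOutput out, int major, int minor, int bugfix)
      throws IOException {
    out.writeInt(major);
    out.writeInt(minor);
    out.writeInt(bugfix);
  }

  // ...so no string parsing is needed at read time
  static int[] readVersion(DataInput in) throws IOException {
    return new int[] { in.readInt(), in.readInt(), in.readInt() };
  }
}
{code}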



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6573) Query elevation fails when localParams are used

2014-10-21 Thread Sheri Watkins (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179039#comment-14179039
 ] 

Sheri Watkins commented on SOLR-6573:
-

Will this fix be included in the 4.10.2 version?

 Query elevation fails when localParams are used
 ---

 Key: SOLR-6573
 URL: https://issues.apache.org/jira/browse/SOLR-6573
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.1
Reporter: Radek Urbas
Assignee: Jan Høydahl
 Fix For: 4.10.2, 5.0, Trunk

 Attachments: SOLR-6573.patch, SOLR-6573.patch


 Elevation does not work when localParams are specified.
 In the example collection1 shipped with Solr, a query like this one 
 {code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code}
 properly returns elevated documents on top.
 If localParams are specified, e.g. {!q.op=AND}, in a query like 
 {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code}
 documents are not elevated anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6573) Query elevation fails when localParams are used

2014-10-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179049#comment-14179049
 ] 

Jan Høydahl commented on SOLR-6573:
---

Yes, this is now included in the (possible) 4.10.2 release

 Query elevation fails when localParams are used
 ---

 Key: SOLR-6573
 URL: https://issues.apache.org/jira/browse/SOLR-6573
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.1
Reporter: Radek Urbas
Assignee: Jan Høydahl
 Fix For: 4.10.2, 5.0, Trunk

 Attachments: SOLR-6573.patch, SOLR-6573.patch


 Elevation does not work when localParams are specified.
 In the example collection1 shipped with Solr, a query like this one 
 {code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code}
 properly returns elevated documents on top.
 If localParams are specified, e.g. {!q.op=AND}, in a query like 
 {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code}
 documents are not elevated anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179052#comment-14179052
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1633453 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1633453 ]

LUCENE-5969: fix TOODs and add missing license header

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
 LUCENE-5969_part2.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6640) ChaosMonkeySafeLeaderTest failure with CorruptIndexException

2014-10-21 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6640:

Attachment: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt

The test logs from jenkins are attached.

 ChaosMonkeySafeLeaderTest failure with CorruptIndexException
 

 Key: SOLR-6640
 URL: https://issues.apache.org/jira/browse/SOLR-6640
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 5.0
Reporter: Shalin Shekhar Mangar
 Fix For: 5.0

 Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt


 Test failure found on jenkins:
 http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 Stack Trace:
 java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 at 
 __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 {code}
 Cause of inconsistency is:
 {code}
 Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
 expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
 (resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
 [junit4]   2> at 
  org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
 [junit4]   2> at 
  org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
 [junit4]   2> at 
  org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
 [junit4]   2> at 
  org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:102)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6640) ChaosMonkeySafeLeaderTest failure with CorruptIndexException

2014-10-21 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6640:
---

 Summary: ChaosMonkeySafeLeaderTest failure with 
CorruptIndexException
 Key: SOLR-6640
 URL: https://issues.apache.org/jira/browse/SOLR-6640
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 5.0
Reporter: Shalin Shekhar Mangar
 Fix For: 5.0
 Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt

Test failure found on jenkins:
http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/

{code}
1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
shard2 is not consistent.  Got 62 from 
http://127.0.0.1:57436/collection1lastClient and got 24 from 
http://127.0.0.1:53065/collection1

Stack Trace:
java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
http://127.0.0.1:57436/collection1lastClient and got 24 from 
http://127.0.0.1:53065/collection1
at 
__randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
{code}

Cause of inconsistency is:
{code}
Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
(resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
   [junit4]   2> at 
org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
   [junit4]   2> at 
org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
   [junit4]   2> at 
org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
   [junit4]   2> at 
org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:102)
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179109#comment-14179109
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1633465 from [~rjernst] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1633465 ]

LUCENE-5969: Move bwc blocktree writer to test module, update 50 writer to use 
new header reading/writing

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
 LUCENE-5969_part2.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179167#comment-14179167
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1633471 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1633471 ]

LUCENE-5969: test that default codec uses segmentheader for all files. change 
.si to write its ID the same way for consistency

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
 LUCENE-5969_part2.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6641) SystemInfoHandler should include the zkHost the node is using (when running in solrcloud mode)

2014-10-21 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-6641:


 Summary: SystemInfoHandler should include the zkHost the node is 
using (when running in solrcloud mode)
 Key: SOLR-6641
 URL: https://issues.apache.org/jira/browse/SOLR-6641
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Timothy Potter
Assignee: Timothy Potter
Priority: Minor


There's no good way to get the zkHost from a running server. I've resorted to 
parsing the ZooKeeper host out of the command-line args, but that doesn't help 
when users set the zkHost in solr.xml. So it should be exposed in the 
/solr/admin/info/system metadata.
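A minimal sketch of the kind of addition being proposed (hypothetical names; 
assuming the handler has access to the node's CoreContainer):
{code:java}
import org.apache.solr.core.CoreContainer;
import org.apache.solr.response.SolrQueryResponse;

// hypothetical sketch: expose zkHost in the /solr/admin/info/system
// response when the node is running in SolrCloud mode
class ZkHostInfo {
  static void addZkHost(CoreContainer cores, SolrQueryResponse rsp) {
    if (cores != null && cores.isZooKeeperAware()) {
      rsp.add("zkHost", cores.getZkController().getZkServerAddress());
    }
  }
}
{code}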



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-21 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6351:
---
Attachment: SOLR-6351.patch


bq. It looks like Vitaliy's new code doesn't account for stats returned by a 
shard in response to refinement requests.

I updated DistributedFacetPivotLongTailTest to check stats on a request where 
refinement was required for correct results, and was able to reliably reproduce 
the problem.

Looking at the logs i realized that "stats=false" was explicitly set in the 
refinement requests, and traced this to an optimization in StatsComponent -- it 
was assuming that only the initial (PURPOSE_GET_TOP_IDS) shard request needed 
stats computed, so i modified that to recognize (PURPOSE_REFINE_PIVOT_FACETS) 
as another situation where we need to leave stats=true.
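For concreteness, a rough sketch of the kind of check this adds around 
StatsComponent's modifyRequest (illustrative only; the names follow the purpose 
flags mentioned above, not necessarily the final patch):
{code:java}
import org.apache.solr.common.params.StatsParams;
import org.apache.solr.handler.component.ShardRequest;

class StatsRefinementSketch {
  // sketch: only ask shards for stats on the initial top-ids request
  // or on pivot refinement requests (the case this patch adds)
  void modifyRequest(ShardRequest sreq) {
    boolean needsStats =
        (sreq.purpose & ShardRequest.PURPOSE_GET_TOP_IDS) != 0
        || (sreq.purpose & ShardRequest.PURPOSE_REFINE_PIVOT_FACETS) != 0;
    sreq.params.set(StatsParams.STATS, Boolean.toString(needsStats));
  }
}
{code}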

This gets the tests to pass (still hammering on TestCloudPivot, but so far 
looks good) but i'm not really liking this solution, for reasons i noted in a 
StatsComponent nocommit comment...

{noformat}
// nocommit: PURPOSE_REFINE_PIVOT_FACETS by itself shouldn't be enough for 
// this...
//
// we need to check if the pivots actually have stats hanging off of them,
// if they don't then we should still suppress the stats param
// (no need to compute the top level stats over and over)
//
// actually ... even if we do have stats hanging off of pivots,
// we need to make stats component smart enough not to waste time re-computing
// top level stats on every refinement request.
//
// so maybe StatsComponent should be left alone, and FacetComponent's prepare 
// method should be modified so that *if* isShard && there are pivot refine 
// params && those pivots have stats, then set some variable so that stats 
// logic happens even if stats=false?
{noformat}

I also made a few other changes as i was reviewing the test (noted below).

My plan is to move forward and continue reviewing more of the patch, starting 
with the other tests, and then dig into the code changes -- writing additional 
test cases if/when i notice things that look like they may not be adequately 
covered -- and then come back and revisit the question of the stats=false 
during refinement requests later (i'm certainly open to suggestions).

{panel:title=changes in this iteration of the patch}
* TestCloudPivots
** removed the bogus param cleanup vitaliy mentioned
** added some nocommits as reminders for the future
* DistributedFacetPivotLongTailTest
** refactored query + assertFieldStats into doTestDeepPivotStats 
*** only called once, and wasn't general in any way -- only usable for checking 
one specific query
** renamed "foo_i" to "stat_i" so it's a bit more obvious why that field is 
there
** added stat_i to some long tail docs & updated the existing stats assertions
** modified the existing query that required refinement to show the stats 
aren't correct
*** "bbb0" on shard2 only gets included with refinement, but the min stat 
trivially demonstrates that the -1 from shard2 isn't included 
* StatsComponent
** check for PURPOSE_REFINE_PIVOT_FACETS in modifyRequest.

*NOTE:* This patch is significantly smaller than the last one because i 
generated it using svn diff -x --ignore-all-space to suppress a bunch of 
small formatting changes from the previous patches.
{panel}


 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}.
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179307#comment-14179307
 ] 

Hoss Man commented on SOLR-6351:


bq. ...we need to make stats component smart enough not to waste time 
re-computing top level stats on every refinement request.

actually, thinking about it a bit more - with my change to StatsComponent, it's 
probably even worse than wasting work on the shards -- since i've set the 
purpose to include PURPOSE_GET_STATS, StatsComponent on the coordinator node is 
going to merge those top-level stats in again every time a pivot refinement 
response is returned to the coordinator, so the top level stats will be totally 
wrong. ... gotta make sure our final fix accounts for that as well.

 Let Stats Hang off of Pivots (via 'tag')
 

 Key: SOLR-6351
 URL: https://issues.apache.org/jira/browse/SOLR-6351
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
 SOLR-6351.patch, SOLR-6351.patch


 The goal here is basically to flip the notion of stats.facet on its head, so 
 that instead of asking the stats component to also do some faceting 
 (something that's never worked well with the variety of field types and has 
 never worked in distributed mode) we instead ask the PivotFacet code to 
 compute some stats X for each leaf in a pivot.  We'll do this with the 
 existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
 of the {{stats.field}} instances to be able to associate which stats we want 
 hanging off of which {{facet.pivot}}.
 Example...
 {noformat}
 facet.pivot={!stats=s1}category,manufacturer
 stats.field={!key=avg_price tag=s1 mean=true}price
 stats.field={!tag=s1 min=true max=true}user_rating
 {noformat}
 ...with the request above, in addition to computing the min/max user_rating 
 and mean price (labeled avg_price) over the entire result set, the 
 PivotFacet component will also include those stats for every node of the tree 
 it builds up when generating a pivot of the fields category,manufacturer



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6640) ChaosMonkeySafeLeaderTest failure with CorruptIndexException

2014-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179322#comment-14179322
 ] 

Robert Muir commented on SOLR-6640:
---

This exception translates to: "this file _1_2.liv is mismatched and does not 
belong to this segment _1; instead it belongs to another segment (likely _0)".

You can see via the suppressed checksum exception in the full stacktrace that 
the contents are themselves consistent and not corrupt (the checksum passes); 
it's just mismatched.
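For reference, a conceptual sketch of the per-file segment-ID check that trips 
here (simplified and purely illustrative; the real logic lives in 
CodecUtil.checkSegmentHeader, and the exception constructor shown is an 
assumption):
{code:java}
import java.util.Arrays;
import org.apache.lucene.index.CorruptIndexException;

class SegmentIdCheck {
  // each per-segment file carries the segment's unique ID in its header;
  // readers verify it matches the ID recorded in the segment's metadata
  static void checkSegmentId(byte[] expected, byte[] actual, String resource)
      throws CorruptIndexException {
    if (!Arrays.equals(expected, actual)) {
      throw new CorruptIndexException("file mismatch, expected segment id="
          + new String(expected) + ", got=" + new String(actual), resource);
    }
  }
}
{code}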



 ChaosMonkeySafeLeaderTest failure with CorruptIndexException
 

 Key: SOLR-6640
 URL: https://issues.apache.org/jira/browse/SOLR-6640
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Affects Versions: 5.0
Reporter: Shalin Shekhar Mangar
 Fix For: 5.0

 Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt


 Test failure found on jenkins:
 http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 Stack Trace:
 java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
 http://127.0.0.1:57436/collection1lastClient and got 24 from 
 http://127.0.0.1:53065/collection1
 at 
 __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
 {code}
 Cause of inconsistency is:
 {code}
 Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
 expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
 (resource=BufferedChecksumIndexInput(MMapIndexInput(path=/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv)))
 [junit4]   2> at 
  org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
 [junit4]   2> at 
  org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
 [junit4]   2> at 
  org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
 [junit4]   2> at 
  org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:102)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



precommit/validate failing in lucene_solr_4_10 branch?

2014-10-21 Thread Tomás Fernández Löbbe
I just ran "ant precommit" and "ant validate" on the lucene_solr_4_10 branch 
and both are failing for me -- is it the same for everyone? There is no 
jenkins job running on this branch, right?

...
check-lib-versions:
 [echo] Lib versions check under:
/Users/tflobbe/Documents/apache/lucene-solr-branch-4-10-commit/lucene/..
[libversions] :: loading settings :: file =
/Users/tflobbe/Documents/apache/lucene-solr-branch-4-10-commit/lucene/ivy-settings.xml
[libversions]
[libversions] :: problems summary ::
[libversions]  WARNINGS
[libversions] circular dependency found:
dom4j#dom4j;1.6.1->jaxen#jaxen;1.1-beta-6->dom4j#dom4j;1.5.2
[libversions]
[libversions] circular dependency found:
jaxen#jaxen;1.1-beta-6->jdom#jdom;1.0->jaxen#jaxen;1.0-FCS
[libversions]
[libversions]
[libversions] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
[libversions]
[libversions] :: problems summary ::
[libversions]  WARNINGS
[libversions] module not found:
org.seasar.container#s2-framework;[2.4,2.5)
[libversions]
[libversions]  local: tried
[libversions]
[libversions]
/Users/tflobbe/.ivy2/local/org.seasar.container/s2-framework/[revision]/ivys/ivy.xml
[libversions]
[libversions]   -- artifact
org.seasar.container#s2-framework;[2.4,2.5)!s2-framework.jar:
[libversions]
[libversions]
/Users/tflobbe/.ivy2/local/org.seasar.container/s2-framework/[revision]/jars/s2-framework.jar
[libversions]
[libversions]  shared: tried
[libversions]
[libversions]
/Users/tflobbe/.ivy2/shared/org.seasar.container/s2-framework/[revision]/ivys/ivy.xml
[libversions]
[libversions]   -- artifact
org.seasar.container#s2-framework;[2.4,2.5)!s2-framework.jar:
[libversions]
[libversions]
/Users/tflobbe/.ivy2/shared/org.seasar.container/s2-framework/[revision]/jars/s2-framework.jar
[libversions]
[libversions]  public: tried
[libversions]
[libversions]
http://repo1.maven.org/maven2/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].pom
[libversions]
[libversions]   -- artifact
org.seasar.container#s2-framework;[2.4,2.5)!s2-framework.jar:
[libversions]
[libversions]
http://repo1.maven.org/maven2/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].jar
[libversions]
[libversions]  cloudera: tried
[libversions]
[libversions]
http://repository.cloudera.com/artifactory/repo/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].pom
[libversions]
[libversions]   -- artifact
org.seasar.container#s2-framework;[2.4,2.5)!s2-framework.jar:
[libversions]
[libversions]
http://repository.cloudera.com/artifactory/repo/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].jar
[libversions]
[libversions]  releases.cloudera.com: tried
[libversions]
[libversions]
http://repository.cloudera.com/content/repositories/releases/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].pom
[libversions]
[libversions]   -- artifact
org.seasar.container#s2-framework;[2.4,2.5)!s2-framework.jar:
[libversions]
[libversions]
http://repository.cloudera.com/content/repositories/releases/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].jar
[libversions]
[libversions]  sonatype-releases: tried
[libversions]
[libversions]
https://oss.sonatype.org/content/repositories/releases/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].pom
[libversions]
[libversions]   -- artifact
org.seasar.container#s2-framework;[2.4,2.5)!s2-framework.jar:
[libversions]
[libversions]
https://oss.sonatype.org/content/repositories/releases/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].jar
[libversions]
[libversions]  maven.restlet.org: tried
[libversions]
[libversions]
http://maven.restlet.org/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].pom
[libversions]
[libversions]   -- artifact
org.seasar.container#s2-framework;[2.4,2.5)!s2-framework.jar:
[libversions]
[libversions]
http://maven.restlet.org/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].jar
[libversions]
[libversions]  working-chinese-mirror: tried
[libversions]
[libversions]
http://uk.maven.org/maven2/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].pom
[libversions]
[libversions]   -- artifact
org.seasar.container#s2-framework;[2.4,2.5)!s2-framework.jar:
[libversions]
[libversions]
http://uk.maven.org/maven2/org/seasar/container/s2-framework/[revision]/s2-framework-[revision].jar
[libversions]
[libversions]
[libversions] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
[libversions]
[libversions] :: problems summary ::
[libversions]  WARNINGS
[libversions] circular dependency found:
dom4j#dom4j;1.6.1->jaxen#jaxen;1.1-beta-6->dom4j#dom4j;1.5.2
[libversions]
[libversions]
[libversions] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
[libversions] VERSION CONFLICT: transitive dependency in module(s) langid:
[libversions] /net.arnx/jsonic=1.2.7

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b34) - Build # 11340 - Failure!

2014-10-21 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11340/
Java: 64bit/jdk1.9.0-ea-b34 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:38541/_/n, https://127.0.0.1:37621/_/n, 
https://127.0.0.1:51800/_/n, https://127.0.0.1:50192/_/n, 
https://127.0.0.1:48124/_/n]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:38541/_/n, 
https://127.0.0.1:37621/_/n, https://127.0.0.1:51800/_/n, 
https://127.0.0.1:50192/_/n, https://127.0.0.1:48124/_/n]
at 
__randomizedtesting.SeedInfo.seed([1021C7E6BD723E8B:91C749FECA2D5EB7]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:333)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:1015)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:736)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-6642) ArrayIndexOutOfBoundException when searching by specific keywords

2014-10-21 Thread Mohmed (JIRA)
Mohmed created SOLR-6642:


 Summary: ArrayIndexOutOfBoundException when searching by specific 
keywords
 Key: SOLR-6642
 URL: https://issues.apache.org/jira/browse/SOLR-6642
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.9.1
 Environment: ALL
Reporter: Mohmed
 Fix For: 4.9.1


Recently we migrated from 4.7.0 to 4.9.1 and are hitting this issue in 
production, where some keyword searches fail with the following stack trace. 
I'll update the issue with the exact query used to search.

Caused by: org.apache.solr.client.solrj.SolrServerException: 
java.lang.ArrayIndexOutOfBoundsException: 31
at 
com.sabax.datastore.impl.DatastoreImpl$EmbeddedSolrServer.request(DatastoreImpl.java:607)
 ~[sabasearch.jar:na]
... 196 common frames omitted
Caused by: java.lang.ArrayIndexOutOfBoundsException: 31
at org.apache.lucene.util.FixedBitSet.nextSetBit(FixedBitSet.java:294) 
~[sabasearch.jar:na]
at org.apache.solr.search.DocSetBase$1$1$1.advance(DocSetBase.java:202) 
~[sabasearch.jar:na]
at 
org.apache.lucene.search.ConstantScoreQuery$ConstantScorer.advance(ConstantScoreQuery.java:278)
 ~[sabasearch.jar:na]
at 
org.apache.lucene.search.ConjunctionScorer.doNext(ConjunctionScorer.java:69) 
~[sabasearch.jar:na]
at 
org.apache.lucene.search.ConjunctionScorer.nextDoc(ConjunctionScorer.java:100) 
~[sabasearch.jar:na]
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) 
~[sabasearch.jar:na]
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) 
~[sabasearch.jar:na]
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) 
~[sabasearch.jar:na]
at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) 
~[sabasearch.jar:na]
at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) 
~[sabasearch.jar:na]
at 
org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2040) 
~[sabasearch.jar:na]
at 
org.apache.solr.request.SimpleFacets.rangeCount(SimpleFacets.java:1338) 
~[sabasearch.jar:na]
at 
org.apache.solr.request.SimpleFacets.getFacetRangeCounts(SimpleFacets.java:1262)
 ~[sabasearch.jar:na]
at 
org.apache.solr.request.SimpleFacets.getFacetRangeCounts(SimpleFacets.java:1197)
 ~[sabasearch.jar:na]
at 
org.apache.solr.request.SimpleFacets.getFacetRangeCounts(SimpleFacets.java:1141)
 ~[sabasearch.jar:na]
at 
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:262) 
~[sabasearch.jar:na]
at 
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:84)
 ~[sabasearch.jar:na]
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
 ~[sabasearch.jar:na]
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 ~[sabasearch.jar:na]
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1962) 
~[sabasearch.jar:na]
at 
com.sabax.datastore.impl.DatastoreImpl$EmbeddedSolrServer.request(DatastoreImpl.java:602)
 ~[sabasearch.jar:na]
... 196 common frames omitted



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6349) LocalParams for enabling/disabling individual stats

2014-10-21 Thread Xu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Zhang updated SOLR-6349:
---
Attachment: SOLR-6349-xu.patch

Made some adjustments based on Hoss's comments.

1. Only use EnumSet now.
2. Move calcDistinct into the Enum.  
 But the question is: do we consider calcDistinct a local parameter like 
min/max, or should it be a top-level parameter? (The latter keeps the existing 
behavior for things like stats.field=foo&stats.calcDistinct=true.)

3. Support distributed requests; add a test faking them. 
4. Remove the static dependsOn(Stat) methods and use an Enum property instead 
(see the sketch below).
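A rough sketch, with illustrative names, of what folding the dependency 
information into the enum itself might look like (an assumption-laden sketch, 
not the attached patch):
{code:java}
import java.util.Arrays;
import java.util.EnumSet;

public enum Stat {
  min, max, sum, count, missing, sumOfSquares,
  // derived stats declare the stats they depend on for distributed merging
  mean(sum, count),
  stddev(sum, count, sumOfSquares),
  calcDistinct;

  private final Stat[] deps;

  Stat(Stat... deps) {
    this.deps = deps;
  }

  // this stat plus everything shards must also compute so it can be merged
  public EnumSet<Stat> getDistribDeps() {
    EnumSet<Stat> all = EnumSet.of(this);
    all.addAll(Arrays.asList(deps));
    return all;
  }
}
{code}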



 LocalParams for enabling/disabling individual stats
 ---

 Key: SOLR-6349
 URL: https://issues.apache.org/jira/browse/SOLR-6349
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
 Attachments: SOLR-6349-tflobbe.patch, SOLR-6349-tflobbe.patch, 
 SOLR-6349-tflobbe.patch, SOLR-6349-xu.patch, SOLR-6349-xu.patch, 
 SOLR-6349___bad_idea_broken.patch


 Stats component currently computes all stats (except for one) every time 
 because they are relatively cheap, and in some cases dependent on each other 
 for distrib computation -- but if we start layering stats on other things it 
 becomes unnecessarily expensive to compute all the stats when users just want 
 the sum (and it will definitely become excessively verbose in the 
 responses).  
 The plan here is to use local params to make this configurable.  All of the 
 existing stat options could be modeled as a simple boolean param, but future 
 params (like percentiles) might take in a more complex param value...
 Example:
 {noformat}
 stats.field={!min=true max=true percentiles='99,99.999'}price
 stats.field={!mean=true}weight
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


