[jira] [Resolved] (SOLR-6259) Performance issue with large number of fields and values when using copyFields

2014-07-19 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6259.
-

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

Thanks Steven.

Committed r1611852 on trunk and r1611853 on branch_4x.

 Performance issue with large number of fields and values when using copyFields
 --

 Key: SOLR-6259
 URL: https://issues.apache.org/jira/browse/SOLR-6259
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Steven Bower
Assignee: Shalin Shekhar Mangar
Priority: Critical
 Fix For: 5.0, 4.10

 Attachments: SOLR-6259.patch


 When you have a schema with a large enough number of fields (in my case 
 around 250) and you use copyFields to populate a number of fields (very few 
 in my case, 3-4), you see a severe degradation in ingestion performance.
 Tracking this down with a profiler showed that Lucene's Document.getField() 
 was using 87% of all CPU time. As it turns out, getField() iterates over the 
 list of fields in the Document, returning the field if the name matches. In 
 the case of copyFields with lots of values, getField() gets called a lot.
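A toy sketch of the problem (illustrative names only, not the actual Lucene/Solr classes): a per-lookup linear scan versus indexing the fields by name once so repeated lookups are O(1):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the issue: linear scan per lookup vs. a map-based index.
public class FieldLookup {

    // Mimics Document.getField(): O(n) scan over all fields per call.
    static String getFieldLinear(List<String[]> fields, String name) {
        for (String[] f : fields) {
            if (f[0].equals(name)) return f[1];
        }
        return null;
    }

    // The fix direction: index fields by name once, then look up in O(1).
    static Map<String, String> indexByName(List<String[]> fields) {
        Map<String, String> byName = new HashMap<>();
        for (String[] f : fields) {
            byName.putIfAbsent(f[0], f[1]);  // keep first value, like getField()
        }
        return byName;
    }

    public static void main(String[] args) {
        List<String[]> fields = new ArrayList<>();
        for (int i = 0; i < 250; i++) {
            fields.add(new String[] {"field" + i, "value" + i});
        }
        // With ~250 fields, every copyField lookup scans up to 250 entries...
        System.out.println(getFieldLinear(fields, "field249"));
        // ...while a one-time map build makes each lookup constant-time.
        System.out.println(indexByName(fields).get("field249"));
    }
}
```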



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6121) cursorMark should accept sort without the uniqueKey

2014-07-19 Thread ayush mittal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067430#comment-14067430
 ] 

ayush mittal commented on SOLR-6121:


Hello, I am a newbie; could you please elaborate?

 cursorMark should accept sort without the uniqueKey
 ---

 Key: SOLR-6121
 URL: https://issues.apache.org/jira/browse/SOLR-6121
 Project: Solr
  Issue Type: Improvement
Reporter: David Smiley
Priority: Minor

 If you are using the cursorMark (deep paging) feature, you shouldn't *have* 
 to add the uniqueKey to the sort parameter.  If the user doesn't do it, the 
 user obviously doesn't care about the uniqueKey order relative to whatever 
 other sort parameters they may or may not have provided.  So if sort doesn't 
 have it, then Solr should simply tack it on at the end instead of returning 
 an error and potentially confusing the user. This would be more 
 user-friendly.
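The proposed behavior can be sketched as follows (a hypothetical string-level sketch; real Solr works on parsed SortSpec objects, and the field names here are illustrative):

```java
// Sketch of the proposal: if the sort spec doesn't mention the uniqueKey,
// append it as a final tie-breaker instead of returning an error.
public class CursorSort {

    static String ensureUniqueKeyInSort(String sort, String uniqueKey) {
        for (String clause : sort.split(",")) {
            String field = clause.trim().split("\\s+")[0];
            if (field.equals(uniqueKey)) {
                return sort;  // already present, nothing to do
            }
        }
        return sort + ", " + uniqueKey + " asc";  // tack it on at the end
    }

    public static void main(String[] args) {
        System.out.println(ensureUniqueKeyInSort("price asc", "id"));
        System.out.println(ensureUniqueKeyInSort("price asc, id desc", "id"));
    }
}
```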






[jira] [Commented] (SOLR-6257) More than two !-s in a doc ID throws an ArrayIndexOutOfBoundsException when using the composite id router

2014-07-19 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067471#comment-14067471
 ] 

Anshum Gupta commented on SOLR-6257:


+1. LGTM!

 More than two !-s in a doc ID throws an ArrayIndexOutOfBoundsException when 
 using the composite id router
 ---

 Key: SOLR-6257
 URL: https://issues.apache.org/jira/browse/SOLR-6257
 Project: Solr
  Issue Type: Bug
Reporter: Steve Rowe
Assignee: Steve Rowe
 Attachments: SOLR-6257.patch, SOLR-6257.patch


 Since {{CompositeIdRouter}} is the default router, it has to be able to deal 
 with *any* ID string without throwing an exception.
 The following test (added to {{TestHashPartitioner}}) currently fails:
 {code:java}
   public void testNonConformingCompositeId() throws Exception {
     DocRouter router = DocRouter.getDocRouter(CompositeIdRouter.NAME);
     DocCollection coll = createCollection(4, router);
     Slice targetSlice = coll.getRouter().getTargetSlice("A!B!C!D", null, null, coll);
     assertNotNull(targetSlice);
   }
 {code}
 with the following output: 
 {noformat}
[junit4] Suite: org.apache.solr.cloud.TestHashPartitioner
[junit4]   2 log4j:WARN No such property [conversionPattern] in 
 org.apache.solr.util.SolrLogLayout.
[junit4]   2 Creating dataDir: 
 /Users/sarowe/svn/lucene/dev/trunk/solr/build/solr-core/test/J0/./temp/solr.cloud.TestHashPartitioner-19514036FB5C5E56-001/init-core-data-001
[junit4]   2 1233 T11 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
 (false) and clientAuth (false)
[junit4]   2 1296 T11 oas.SolrTestCaseJ4.setUp ###Starting 
 testNonConformingCompositeId
[junit4] Throwable #1: java.lang.ArrayIndexOutOfBoundsException: 2
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([19514036FB5C5E56:3A131EC016F531A4]:0)
[junit4]  at 
 org.apache.solr.common.cloud.CompositeIdRouter$KeyParser.getHash(CompositeIdRouter.java:296)
[junit4]  at 
 org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:58)
[junit4]  at 
 org.apache.solr.common.cloud.HashBasedRouter.getTargetSlice(HashBasedRouter.java:33)
[junit4]  at 
 org.apache.solr.cloud.TestHashPartitioner.testNonConformingCompositeId(TestHashPartitioner.java:205)
[junit4]  at java.lang.Thread.run(Thread.java:745)
 {noformat}
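The composite-id scheme routes on at most two "!"-separated prefixes, so a tolerant parser has to treat any further "!"s as plain ID content. A hypothetical sketch of that idea (not the actual CompositeIdRouter code):

```java
// Hypothetical sketch: extract at most two routing prefixes from a composite
// ID, treating any extra '!' separators as part of the document ID itself,
// so IDs like "A!B!C!D" can never index past the end of the parts array.
public class CompositeIdParts {

    static String[] routingParts(String id) {
        // Limit 3: first routing level, second routing level, and "the rest".
        return id.split("!", 3);
    }

    public static void main(String[] args) {
        String[] parts = routingParts("A!B!C!D");
        // Two routing levels ("A", "B"); "C!D" stays opaque ID content.
        System.out.println(parts[0] + " | " + parts[1] + " | " + parts[2]);
    }
}
```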






[jira] [Created] (LUCENE-5834) Make empty doc values impls singletons

2014-07-19 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5834:


 Summary: Make empty doc values impls singletons
 Key: LUCENE-5834
 URL: https://issues.apache.org/jira/browse/LUCENE-5834
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 5.0, 4.10


Making these empty instances singletons would allow using {{unwrapSingleton}} 
to check if they are single-valued.
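The underlying pattern is the usual shared-immutable-instance one; a minimal sketch with hypothetical names (not the actual Lucene doc values classes):

```java
// Sketch of the singleton idea: one shared empty instance makes an identity
// check ("is this the empty impl?") possible, analogous to how
// unwrapSingleton can detect single-valued doc values by instance identity.
public class EmptyDocValues {

    static final EmptyDocValues EMPTY = new EmptyDocValues();

    private EmptyDocValues() {}  // no other instances can exist

    long get(int docID) {
        return 0;  // empty: every document reads as the default value
    }

    public static void main(String[] args) {
        EmptyDocValues a = EmptyDocValues.EMPTY;
        EmptyDocValues b = EmptyDocValues.EMPTY;
        System.out.println(a == b);  // identity comparison now works
    }
}
```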






[jira] [Updated] (LUCENE-5834) Make empty doc values impls singletons

2014-07-19 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5834:
-

Attachment: LUCENE-5834.patch

Patch.

 Make empty doc values impls singletons
 --

 Key: LUCENE-5834
 URL: https://issues.apache.org/jira/browse/LUCENE-5834
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5834.patch


 Making these empty instances singletons would allow to use 
 {{unwrapSingleton}} to check if they are single-valued.






[jira] [Created] (LUCENE-5835) Add sortMissingLast support to TermValComparator

2014-07-19 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5835:


 Summary: Add sortMissingLast support to TermValComparator
 Key: LUCENE-5835
 URL: https://issues.apache.org/jira/browse/LUCENE-5835
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10


It would be nice to allow configuring the behavior on missing values for this 
comparator, similarly to what TermOrdValComparator does.






[jira] [Updated] (LUCENE-5835) Add sortMissingLast support to TermValComparator

2014-07-19 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5835:
-

Attachment: LUCENE-5835.patch

Here is a patch. I removed the notes about removing TermValComparator as it is 
the only way to sort a field that has binary doc values.

Other than that:
 - it can now sort missing values last
 - you can override the terms and docsWithField that are used for comparison
 - you can override the detection for null values. This is typically useful if 
there is a sentinel value that represents null.

I didn't add support for custom missing values as I'm not sure it is a common 
need for binary/string content, but it is easy to implement on top of this 
comparator by overriding {{getDocsWithField}} to return a Bits.MatchAllBits set 
and wrapping the binary dv returned by {{getBinaryDocValues}}.
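The core of sortMissingLast can be sketched independently of the Lucene APIs: a comparator that sends missing (null) values to one end or the other, driven by a missingLast flag (all names here are hypothetical, not the actual TermValComparator code):

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of sortMissingLast semantics for binary/string values: documents
// without a value compare as greater (missingLast) or smaller (missingFirst)
// than any real value.
public class MissingAwareComparator implements Comparator<String> {

    private final boolean missingLast;

    MissingAwareComparator(boolean missingLast) {
        this.missingLast = missingLast;
    }

    @Override
    public int compare(String a, String b) {
        if (a == null) {
            return b == null ? 0 : (missingLast ? 1 : -1);
        }
        if (b == null) {
            return missingLast ? -1 : 1;
        }
        return a.compareTo(b);  // both present: ordinary byte/char order
    }

    public static void main(String[] args) {
        String[] values = {"b", null, "a"};
        Arrays.sort(values, new MissingAwareComparator(true));
        System.out.println(Arrays.toString(values));  // missing value sorts last
    }
}
```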

 Add sortMissingLast support to TermValComparator
 

 Key: LUCENE-5835
 URL: https://issues.apache.org/jira/browse/LUCENE-5835
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5835.patch


 It would be nice to allow to configure the behavior on missing values for 
 this comparator, similarly to what TermOrdValComparator does.






[jira] [Created] (LUCENE-5836) BytesRef.copyBytes and copyChars don't oversize

2014-07-19 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5836:


 Summary: BytesRef.copyBytes and copyChars don't oversize
 Key: LUCENE-5836
 URL: https://issues.apache.org/jira/browse/LUCENE-5836
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand


When copying data from another BytesRef/CharSequence, these methods don't 
oversize. This is not an issue if this method is used only once per BytesRef 
instance but I just reviewed the usage of these methods and they are very 
frequently used in loops to do things like:
 - keep track of the top values in comparators
 - keep track of the previous terms in various loops over a terms enum 
(lucene49 DV consumer, BlockTreeTermsWriter)
 - etc.

Although unlikely, it might be possible to hit a worst case where the 
underlying byte[] is resized on every call to copyBytes. Should we oversize 
the underlying array in these methods?
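The trade-off is the classic amortized-growth one. A simplified sketch (in the spirit of ArrayUtil.oversize, but hypothetical, with an allocation counter added for illustration):

```java
// Sketch: an exact-fit copy reallocates whenever the source grows even by one
// byte, while an oversizing copy grows geometrically so repeated copies in a
// loop are amortized O(1) per byte. Not the actual BytesRef/ArrayUtil code.
public class OversizedCopy {

    byte[] bytes = new byte[0];
    int length;
    int allocations;  // counts reallocations, for illustration only

    void copyExact(byte[] src, int len) {
        if (bytes.length < len) {
            bytes = new byte[len];  // exact fit: the next larger copy reallocates again
            allocations++;
        }
        System.arraycopy(src, 0, bytes, 0, len);
        length = len;
    }

    void copyOversized(byte[] src, int len) {
        if (bytes.length < len) {
            // Grow by ~1.5x (plus a small constant) instead of exact fit.
            bytes = new byte[Math.max(len, bytes.length + (bytes.length >> 1) + 8)];
            allocations++;
        }
        System.arraycopy(src, 0, bytes, 0, len);
        length = len;
    }

    public static void main(String[] args) {
        byte[] src = new byte[1024];
        OversizedCopy exact = new OversizedCopy();
        OversizedCopy grown = new OversizedCopy();
        for (int len = 1; len <= 1024; len++) {  // worst case: length grows by 1
            exact.copyExact(src, len);
            grown.copyOversized(src, len);
        }
        // Exact-fit reallocates on every iteration; oversizing only a handful of times.
        System.out.println(exact.allocations + " vs " + grown.allocations);
    }
}
```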






[jira] [Commented] (LUCENE-5836) BytesRef.copyBytes and copyChars don't oversize

2014-07-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067509#comment-14067509
 ] 

Robert Muir commented on LUCENE-5836:
-

I'm not sure we should encourage the stringbuffer usage of these things. Maybe 
copyBytes should just be a front-end for System.arraycopy, and the user has to 
ensure allocation themselves.

 BytesRef.copyBytes and copyChars don't oversize
 ---

 Key: LUCENE-5836
 URL: https://issues.apache.org/jira/browse/LUCENE-5836
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand

 When copying data from another BytesRef/CharSequence, these methods don't 
 oversize. This is not an issue if this method is used only once per BytesRef 
 instance but I just reviewed the usage of these methods and they are very 
 frequently used in loops to do things like:
  - keep track of the top values in comparators
  - keep track of the previous terms in various loops over a terms enum 
 (lucene49 DV consumer, BlockTreeTermsWriter)
  - etc.
 Although unlikely, it might be possible to hit a worst-case and to resize the 
 underlying byte[] on every call to copyBytes? Should we oversize the 
 underlying array in these methods?






[jira] [Commented] (LUCENE-5834) Make empty doc values impls singletons

2014-07-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067510#comment-14067510
 ] 

Robert Muir commented on LUCENE-5834:
-

+1

 Make empty doc values impls singletons
 --

 Key: LUCENE-5834
 URL: https://issues.apache.org/jira/browse/LUCENE-5834
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Trivial
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5834.patch


 Making these empty instances singletons would allow to use 
 {{unwrapSingleton}} to check if they are single-valued.






[jira] [Commented] (LUCENE-5836) BytesRef.copyBytes and copyChars don't oversize

2014-07-19 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067516#comment-14067516
 ] 

Adrien Grand commented on LUCENE-5836:
--

So the idea would be to change copyBytes to just copy bytes and fix call sites 
to call BytesRef.grow before copyBytes if necessary?
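That call-site pattern can be sketched like this (a hypothetical BytesRef-like type, not the actual class): the caller grows the destination first, then copyBytes only copies.

```java
// Sketch of the "grow then copy" division of labor under discussion:
// allocation is the caller's responsibility, and copyBytes is a pure
// front-end for System.arraycopy.
public class GrowThenCopy {

    byte[] bytes = new byte[0];
    int length;

    void grow(int minCapacity) {  // caller-controlled (possibly oversized) allocation
        if (bytes.length < minCapacity) {
            bytes = java.util.Arrays.copyOf(bytes, Math.max(minCapacity, bytes.length * 2));
        }
    }

    void copyBytes(byte[] src, int len) {  // copies only; never reallocates
        System.arraycopy(src, 0, bytes, 0, len);
        length = len;
    }

    public static void main(String[] args) {
        GrowThenCopy dst = new GrowThenCopy();
        byte[] src = {1, 2, 3};
        dst.grow(src.length);            // caller ensures capacity first...
        dst.copyBytes(src, src.length);  // ...so the copy itself is just arraycopy
        System.out.println(dst.length);
    }
}
```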

 BytesRef.copyBytes and copyChars don't oversize
 ---

 Key: LUCENE-5836
 URL: https://issues.apache.org/jira/browse/LUCENE-5836
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand

 When copying data from another BytesRef/CharSequence, these methods don't 
 oversize. This is not an issue if this method is used only once per BytesRef 
 instance but I just reviewed the usage of these methods and they are very 
 frequently used in loops to do things like:
  - keep track of the top values in comparators
  - keep track of the previous terms in various loops over a terms enum 
 (lucene49 DV consumer, BlockTreeTermsWriter)
  - etc.
 Although unlikely, it might be possible to hit a worst-case and to resize the 
 underlying byte[] on every call to copyBytes? Should we oversize the 
 underlying array in these methods?






[jira] [Commented] (LUCENE-5835) Add sortMissingLast support to TermValComparator

2014-07-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067515#comment-14067515
 ] 

Robert Muir commented on LUCENE-5835:
-

+1

 Add sortMissingLast support to TermValComparator
 

 Key: LUCENE-5835
 URL: https://issues.apache.org/jira/browse/LUCENE-5835
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5835.patch


 It would be nice to allow to configure the behavior on missing values for 
 this comparator, similarly to what TermOrdValComparator does.






[jira] [Commented] (LUCENE-5836) BytesRef.copyBytes and copyChars don't oversize

2014-07-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067517#comment-14067517
 ] 

Robert Muir commented on LUCENE-5836:
-

The problem is not unique to copyBytes.

copyBytes/append/grow/copyChars are all stringbuffer-type methods. 

I really think we should remove/discourage these, because BytesRef is *really 
crap* as a stringbuffer-type object since it has no safety. You can't be a 
reference to an array with an offset and also be this: it's just horrible 
software design.

 BytesRef.copyBytes and copyChars don't oversize
 ---

 Key: LUCENE-5836
 URL: https://issues.apache.org/jira/browse/LUCENE-5836
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand

 When copying data from another BytesRef/CharSequence, these methods don't 
 oversize. This is not an issue if this method is used only once per BytesRef 
 instance but I just reviewed the usage of these methods and they are very 
 frequently used in loops to do things like:
  - keep track of the top values in comparators
  - keep track of the previous terms in various loops over a terms enum 
 (lucene49 DV consumer, BlockTreeTermsWriter)
  - etc.
 Although unlikely, it might be possible to hit a worst-case and to resize the 
 underlying byte[] on every call to copyBytes? Should we oversize the 
 underlying array in these methods?






[GitHub] lucene-solr pull request: Lucene 5825

2014-07-19 Thread shenoyvvarun
GitHub user shenoyvvarun opened a pull request:

https://github.com/apache/lucene-solr/pull/65

Lucene 5825



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shenoyvvarun/lucene-solr lucene-5825

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/65.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #65


commit d3d356c0855321d4f570a919664d1855f1802cf1
Author: Varun Shenoy shenoyvva...@gmail.com
Date:   2014-07-16T19:20:19Z

Supports configurable PostingsFormat though codec.PostingsFormat param

commit 0effc0959ef089aef42309dbddcf5a8510d45dab
Author: Varun Shenoy shenoyvva...@gmail.com
Date:   2014-07-16T19:25:42Z

Supports configurable PostingsFormat though codec.PostingsFormat param

commit 212dd46fa0b572837281f4fb652179c52353792d
Author: Varun Shenoy shenoyvva...@gmail.com
Date:   2014-07-19T13:38:21Z

Merge branch 'trunk' of https://github.com/apache/lucene-solr into 
lucene-5825
Pulled upstream




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Created] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5837:


 Summary: Only check docsWithField when necessary in numeric 
comparators
 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10


Our numeric comparators have branches to deal with missing values. However, 
there are some cases where checking which docs have the field is not useful:
 - if all docs have a value
 - if no docs have a value
 - if the missing value is 0
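The branch in question can be sketched as follows (simplified and hypothetical; plain arrays here stand in for the real Bits/doc values APIs). When any of the three conditions above holds, the per-document docsWithField check can be skipped:

```java
// Sketch of the per-document missing-value branch in a numeric comparator,
// and the cases where it can be skipped entirely.
public class MissingValueBranch {

    // General case: substitute missingValue for docs without the field.
    static long valueWithCheck(long[] values, boolean[] docsWithField,
                               int doc, long missingValue) {
        return docsWithField[doc] ? values[doc] : missingValue;
    }

    // Specialized case: if all docs have a value, if no docs do, or if the
    // missing value is 0 (absent docs read back as 0 from doc values),
    // reading the array directly gives the same answer with no branch.
    static long valueNoCheck(long[] values, int doc) {
        return values[doc];
    }

    public static void main(String[] args) {
        long[] values = {7, 0, 42};            // doc 1 has no value, stored as 0
        boolean[] hasField = {true, false, true};
        for (int doc = 0; doc < 3; doc++) {
            // With missingValue == 0 the two paths agree on every document.
            System.out.println(valueWithCheck(values, hasField, doc, 0L)
                               + " == " + valueNoCheck(values, doc));
        }
    }
}
```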






[jira] [Updated] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5837:
-

Attachment: LUCENE-5837.patch

Here is a patch.

Do we have a benchmark that could be used to validate this change? I just 
checked out luceneutil but it only seems to have tasks for queries, not sorting?

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Commented] (LUCENE-5825) Allowing the benchmarking algorithm to choose PostingsFormat

2014-07-19 Thread Varun V Shenoy (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067531#comment-14067531
 ] 

Varun  V Shenoy commented on LUCENE-5825:
-

I have pushed my branch upstream and also sent a pull request.

The url for my branch
https://github.com/shenoyvvarun/lucene-solr/tree/lucene-5825

 Allowing the benchmarking algorithm to choose PostingsFormat
 

 Key: LUCENE-5825
 URL: https://issues.apache.org/jira/browse/LUCENE-5825
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/benchmark
Affects Versions: 5.0
Reporter: Varun  V Shenoy
Priority: Minor
 Fix For: 5.0

 Attachments: patch_17_Jul_2014


 The algorithm file for benchmarking should allow PostingsFormat to be 
 configurable.






[jira] [Commented] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067533#comment-14067533
 ] 

Robert Muir commented on LUCENE-5837:
-

I don't understand how the MatchNoBits case is safe.

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Commented] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067535#comment-14067535
 ] 

Adrien Grand commented on LUCENE-5837:
--

If no document has values, then they will all return the missing value?

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Updated] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5837:
-

Attachment: LUCENE-5837.patch

Oops, I understand your question now, I didn't upload the latest version of my 
patch. :-)

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch, LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Commented] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067543#comment-14067543
 ] 

Robert Muir commented on LUCENE-5837:
-

I benchmarked the first version of the patch with the little benchmark in 
luceneutil, but saw no improvement.

I think the current null check is effective? It has to be handled anyway. And 
personally I would be wary of overspecialization here...

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch, LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067556#comment-14067556
 ] 

Littlestar commented on LUCENE-5801:


4.6.1 works well, but 4.9.0 fails.

In 4.6.1, I use CategoryPath.
In 4.9.0, I use FacetField.

4.9.0 is missing OrdinalMappingAtomicReader, so I took it from the 4.10 trunk.
I use it for merging indexes with taxonomies.

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
Assignee: Shai Erera
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
 LUCENE-5801_1.patch, LUCENE-5801_2.patch


 From Lucene 4.6.1 the class 
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader 
 was removed; resurrect it because it is used when merging indexes with their 
 merged taxonomies.






[jira] [Commented] (SOLR-5843) No way to clear error state of a core that doesn't even exist any more

2014-07-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067565#comment-14067565
 ] 

Shalin Shekhar Mangar commented on SOLR-5843:
-

I think this is take care of by SOLR-6232

 No way to clear error state of a core that doesn't even exist any more
 --

 Key: SOLR-5843
 URL: https://issues.apache.org/jira/browse/SOLR-5843
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Nathan Neulinger
  Labels: cloud, failure, initialization

 Created collections with missing configs - this is known to create a problem 
 state. Those collections have all since been deleted -- but one of my nodes 
 still insists that there are initialization errors.
 There are no references to those 'failed' cores in any of the cloud tabs, or 
 in ZK, or in the directories on the server itself. 
 There should be some easy way to refresh this state or to clear them out 
 without having to restart the instance. 






[jira] [Comment Edited] (SOLR-5843) No way to clear error state of a core that doesn't even exist any more

2014-07-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067565#comment-14067565
 ] 

Shalin Shekhar Mangar edited comment on SOLR-5843 at 7/19/14 3:58 PM:
--

I think this is taken care of by SOLR-6232


was (Author: shalinmangar):
I think this is take care of by SOLR-6232

 No way to clear error state of a core that doesn't even exist any more
 --

 Key: SOLR-5843
 URL: https://issues.apache.org/jira/browse/SOLR-5843
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Nathan Neulinger
  Labels: cloud, failure, initialization

 Created collections with missing configs - this is known to create a problem 
 state. Those collections have all since been deleted -- but one of my nodes 
 still insists that there are initialization errors.
 There are no references to those 'failed' cores in any of the cloud tabs, or 
 in ZK, or in the directories on the server itself. 
 There should be some easy way to refresh this state or to clear them out 
 without having to restart the instance. 






[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-07-19 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067571#comment-14067571
 ] 

Paul Elschot commented on LUCENE-5205:
--

To have the GitHub pull request message show up here, IIRC one can add the 
Lucene issue identifier somewhere early in the message; see for example 
LUCENE-5627.

I did this locally:
git pull  https://github.com/tballison/lucene-solr lucene5205:lucene5205

which ended in git commit 4d95fb5b69e667c0ec5d51bbe92096fe23d88f9c .
Then, in directory lucene/queryparser, ant test failed to compile. This is the 
first error message:
... 
lucene/queryparser/src/test/org/apache/lucene/queryparser/util/QueryParserTestBase.java:51:
 error: cannot find symbol
[javac] import org.apache.lucene.util.automaton.BasicAutomata;
[javac]^
[javac]   symbol:   class BasicAutomata

Also I think it would be better not to use lucene5205 as the branch name, 
because it is used in the upstream repository.
Shall we use for example lucene5205-ta and lucene5205-pe as branch names in our 
github repositories?

 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.9

 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: jakarta apache
 * phrase with slop: jakarta apache~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require in order for phrases with slop with the \~ operator: 
 jakarta apache\~3
 * Can specify not near: fever bieber!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single-level phrasal queries instead of quotation 
 marks, as in: \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance = 1, 
 prefix = 2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 <= 2: (jakarta~1 (OSA) vs jakarta~1 (Levenshtein))
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.
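The "at least x number of hits at boolean level" clause above (apache AND (lucene solr tika)~2) behaves like a minimum-should-match constraint. A minimal plain-Java sketch of that counting logic, not the parser's actual implementation; all names here are illustrative:

```java
import java.util.List;
import java.util.Set;

public class MinShouldMatchSketch {
    // Returns true when at least minMatch of the given clauses occur in the doc's terms.
    static boolean matchesAtLeast(Set<String> docTerms, List<String> clauses, int minMatch) {
        int hits = 0;
        for (String clause : clauses) {
            if (docTerms.contains(clause)) {
                hits++;
            }
        }
        return hits >= minMatch;
    }

    public static void main(String[] args) {
        Set<String> doc = Set.of("apache", "lucene", "tika");
        // apache AND (lucene solr tika)~2 : "apache" is required,
        // plus at least 2 of the grouped clauses must match.
        boolean match = doc.contains("apache")
            && matchesAtLeast(doc, List.of("lucene", "solr", "tika"), 2);
        System.out.println(match);
    }
}
```

Here the doc matches "lucene" and "tika" (2 of 3), so the whole query matches.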



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: 

[jira] [Updated] (SOLR-6216) Better faceting for multiple intervals on DV fields

2014-07-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6216:


Attachment: SOLR-6216.patch

Attached a new patch with some more parsing unit tests (validating Erick's 
question of f.field.facet.interval.set vs facet.interval.set)
Added some more javadocs on SimpleFacets (removed that TODO)

 Better faceting for multiple intervals on DV fields
 ---

 Key: SOLR-6216
 URL: https://issues.apache.org/jira/browse/SOLR-6216
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe
Assignee: Erick Erickson
 Attachments: SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch, 
 SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch, 
 SOLR-6216.patch


 There are two ways to do faceting on value ranges in Solr right now: 
 “Range Faceting” and “Query Faceting” (doing range queries). They both end up 
 doing something similar:
 {code:java}
 searcher.numDocs(rangeQ , docs)
 {code}
 The good thing about this implementation is that it can benefit from caching. 
 The bad thing is that it may be slow with cold caches, and that there will be 
 a query for each of the ranges.
 A different implementation would be one that works similar to regular field 
 faceting, using doc values and validating ranges for each value of the 
 matching documents. This implementation would sometimes be faster than Range 
 Faceting / Query Faceting, especially in cases where caches are not very 
 effective, such as with a high update rate, or where ranges change frequently.
 Functionally, the result should be exactly the same as the one obtained by 
 doing a facet query for every interval.
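The doc-values approach described above can be sketched in plain Java: one pass over the matching documents' values, testing each value against every interval, instead of running one range query per interval. This is a simplified illustration, not Solr's implementation; the interval bounds and values are made up:

```java
import java.util.Arrays;

public class IntervalFacetSketch {
    // Illustrative intervals over a numeric doc-values field: [start, end] inclusive.
    static final long[][] INTERVALS = { {0, 9}, {10, 99}, {100, 999} };

    public static void main(String[] args) {
        // Per-document values, standing in for a doc-values column of matching docs.
        long[] docValues = {5, 42, 7, 250, 99, 1000};
        int[] counts = new int[INTERVALS.length];
        // Single pass: each value is validated against every interval, so no
        // per-interval query (and no reliance on the filter cache) is needed.
        for (long v : docValues) {
            for (int i = 0; i < INTERVALS.length; i++) {
                if (v >= INTERVALS[i][0] && v <= INTERVALS[i][1]) {
                    counts[i]++;
                }
            }
        }
        System.out.println(Arrays.toString(counts));
    }
}
```

With the sample values, two docs fall in each of the first two intervals and one in the third, matching what a facet query per interval would return.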



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6216) Better faceting for multiple intervals on DV fields

2014-07-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6216:


Attachment: SOLR-6216.patch

Added a couple of tests

 Better faceting for multiple intervals on DV fields
 ---

 Key: SOLR-6216
 URL: https://issues.apache.org/jira/browse/SOLR-6216
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe
Assignee: Erick Erickson
 Attachments: SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch, 
 SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch, SOLR-6216.patch, 
 SOLR-6216.patch, SOLR-6216.patch


 There are two ways to do faceting on value ranges in Solr right now: 
 “Range Faceting” and “Query Faceting” (doing range queries). They both end up 
 doing something similar:
 {code:java}
 searcher.numDocs(rangeQ , docs)
 {code}
 The good thing about this implementation is that it can benefit from caching. 
 The bad thing is that it may be slow with cold caches, and that there will be 
 a query for each of the ranges.
 A different implementation would be one that works similar to regular field 
 faceting, using doc values and validating ranges for each value of the 
 matching documents. This implementation would sometimes be faster than Range 
 Faceting / Query Faceting, especially in cases where caches are not very 
 effective, such as with a high update rate, or where ranges change frequently.
 Functionally, the result should be exactly the same as the one obtained by 
 doing a facet query for every interval.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067583#comment-14067583
 ] 

Littlestar commented on LUCENE-5801:


I tested again.

This bug only occurs when there is another BinaryDocValues field.
Two fields: FacetField + BinaryDocValuesField.

When I remove the BinaryDocValuesField, the test passes.

I checked OrdinalMappingAtomicReader; it does not distinguish between a normal 
BinaryDocValuesField and a FacetField?

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
Assignee: Shai Erera
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
 LUCENE-5801_1.patch, LUCENE-5801_2.patch


 from lucene > 4.6.1 the class:
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067583#comment-14067583
 ] 

Littlestar edited comment on LUCENE-5801 at 7/19/14 4:45 PM:
-

I tested again.

This bug only occurs when there is another BinaryDocValues field.
Two fields: FacetField + BinaryDocValuesField.

When I remove the BinaryDocValuesField, the test passes.

I checked OrdinalMappingAtomicReader; there is no check for FacetField?

I think getBinaryDocValues is wrong in 4.10.0.
In 4.6.1, it checks the field in dvFieldMap.

in 4.6.1
 @Override
  public BinaryDocValues getBinaryDocValues(String field) throws IOException {
BinaryDocValues inner = super.getBinaryDocValues(field);
if (inner == null) {
  return inner;
}

CategoryListParams clp = dvFieldMap.get(field);
if (clp == null) {
  return inner;
} else {
  return new OrdinalMappingBinaryDocValues(clp, inner);
}
  }



was (Author: cnstar9988):
I test again.

this bug only occur when there is another BinaryDocValue field.
two field: FacetField + BinaryDocValuesField 

when I remove the BinaryDocValuesField, tested ok.

I checked OrdinalMappingAtomicReader, there is no differ between normal 
BinaryDocValuesField  or FacetField ??

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
Assignee: Shai Erera
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
 LUCENE-5801_1.patch, LUCENE-5801_2.patch


 from lucene > 4.6.1 the class:
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067586#comment-14067586
 ] 

Littlestar commented on LUCENE-5801:


the following patch works well for me.

OrdinalMappingAtomicReader(4.10.0)
@Override
public BinaryDocValues getBinaryDocValues(String field) throws IOException {
  if (!field.equals(facetsConfig.getDimConfig(field).indexFieldName)) {
    return super.getBinaryDocValues(field);
  }

  final OrdinalsReader ordsReader = getOrdinalsReader(field);
  return new OrdinalMappingBinaryDocValues(ordsReader.getReader(in.getContext()));
}

 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
Assignee: Shai Erera
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
 LUCENE-5801_1.patch, LUCENE-5801_2.patch


 from lucene > 4.6.1 the class:
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067583#comment-14067583
 ] 

Littlestar edited comment on LUCENE-5801 at 7/19/14 4:55 PM:
-

I tested again.

This bug only occurs when there is another BinaryDocValues field.
Two fields: FacetField + BinaryDocValuesField.

When I remove the BinaryDocValuesField, the test passes.

I think getBinaryDocValues is wrong in 4.10.0:
it does not check whether the BinaryDocValuesField is a FacetField or not; it just 
wraps it in OrdinalMappingBinaryDocValues.

In 4.6.1, it checks the field in dvFieldMap.

in 4.6.1
 @Override
  public BinaryDocValues getBinaryDocValues(String field) throws IOException {
BinaryDocValues inner = super.getBinaryDocValues(field);
if (inner == null) {
  return inner;
}

CategoryListParams clp = dvFieldMap.get(field);
if (clp == null) {
  return inner;
} else {
  return new OrdinalMappingBinaryDocValues(clp, inner);
}
  }



was (Author: cnstar9988):
I test again.

this bug only occur when there is another BinaryDocValue field.
two field: FacetField + BinaryDocValuesField 

when I remove the BinaryDocValuesField, tested ok.

I checked OrdinalMappingAtomicReader, there is no check FacetField ?

I think getBinaryDocValues is wrong in 4.10.0
in 4.6.1, it check field in dvFieldMap.

in 4.6.1
 @Override
  public BinaryDocValues getBinaryDocValues(String field) throws IOException {
BinaryDocValues inner = super.getBinaryDocValues(field);
if (inner == null) {
  return inner;
}

CategoryListParams clp = dvFieldMap.get(field);
if (clp == null) {
  return inner;
} else {
  return new OrdinalMappingBinaryDocValues(clp, inner);
}
  }


 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
Assignee: Shai Erera
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
 LUCENE-5801_1.patch, LUCENE-5801_2.patch


 from lucene > 4.6.1 the class:
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5801) Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader

2014-07-19 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067583#comment-14067583
 ] 

Littlestar edited comment on LUCENE-5801 at 7/19/14 5:05 PM:
-

I tested again.

This bug only occurs when there is another BinaryDocValues field.
Two fields: FacetField + BinaryDocValuesField.

When I remove the BinaryDocValuesField, the test passes.

I think OrdinalMappingAtomicReader#getBinaryDocValues is wrong in 4.10.0:
it does not check whether the BinaryDocValuesField is a FacetField or not; it just 
wraps it in OrdinalMappingBinaryDocValues.

In 4.6.1, it checked whether the field exists in dvFieldMap or not.

in 4.6.1
 @Override
  public BinaryDocValues getBinaryDocValues(String field) throws IOException {
BinaryDocValues inner = super.getBinaryDocValues(field);
if (inner == null) {
  return inner;
}

CategoryListParams clp = dvFieldMap.get(field);
if (clp == null) {
  return inner;
} else {
  return new OrdinalMappingBinaryDocValues(clp, inner);
}
  }



was (Author: cnstar9988):
I test again.

this bug only occur when there is another BinaryDocValue field.
two field: FacetField + BinaryDocValuesField 

when I remove the BinaryDocValuesField, tested ok.

I think getBinaryDocValues is wrong in 4.10.0
it has no check the BinaryDocValuesField is  FacetField or not, just wrapper it 
to OrdinalMappingBinaryDocValues.

in 4.6.1, it check field in dvFieldMap.

in 4.6.1
 @Override
  public BinaryDocValues getBinaryDocValues(String field) throws IOException {
BinaryDocValues inner = super.getBinaryDocValues(field);
if (inner == null) {
  return inner;
}

CategoryListParams clp = dvFieldMap.get(field);
if (clp == null) {
  return inner;
} else {
  return new OrdinalMappingBinaryDocValues(clp, inner);
}
  }


 Resurrect org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 -

 Key: LUCENE-5801
 URL: https://issues.apache.org/jira/browse/LUCENE-5801
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
Reporter: Nicola Buso
Assignee: Shai Erera
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5801.patch, LUCENE-5801.patch, 
 LUCENE-5801_1.patch, LUCENE-5801_2.patch


 from lucene > 4.6.1 the class:
 org.apache.lucene.facet.util.OrdinalMappingAtomicReader
 was removed; resurrect it because it is used when merging indexes related to merged 
 taxonomies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6241) HttpPartitionTest.testRf3WithLeaderFailover fails sometimes

2014-07-19 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067592#comment-14067592
 ] 

Timothy Potter commented on SOLR-6241:
--

I went ahead and disabled this test on trunk & branch_4x using the AwaitsFix 
annotation. I'm digging into the failure as well, Shalin. Thanks for the help!

 HttpPartitionTest.testRf3WithLeaderFailover fails sometimes
 ---

 Key: SOLR-6241
 URL: https://issues.apache.org/jira/browse/SOLR-6241
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.10


 This test fails sometimes locally as well as on jenkins.
 {code}
 Expected 2 of 3 replicas to be active but only found 1
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at 
 org.apache.solr.cloud.HttpPartitionTest.testRf3WithLeaderFailover(HttpPartitionTest.java:367)
 at 
 org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:148)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:863)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-5831) ant precommit should depend on clean-jars

2014-07-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned LUCENE-5831:
--

Assignee: Steve Rowe

 ant precommit should depend on clean-jars
 -

 Key: LUCENE-5831
 URL: https://issues.apache.org/jira/browse/LUCENE-5831
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor

 Ivy's bug of failing to remove differently versioned dependencies in 
 (test-)lib/ dirs even though we set {{sync=true}} on {{ivy:retrieve}} (I 
 couldn't find a JIRA for this) continues to cause trouble/confusion (see 
 related LUCENE-5467).
 We should make the ant {{precommit}} target depend on {{clean-jars}}, so that 
 people won't think they need to run {{ant jar-checksums}} because of stale 
 jars Ivy leaves in {{lib/}} or {{test-lib/}} directories, which currently 
 causes {{ant precommit}} to bitch that there are missing checksums.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5831) ant precommit should remind people to run clean-jars when checksums are not right

2014-07-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-5831:
---

Summary: ant precommit should remind people to run clean-jars when 
checksums are not right  (was: ant precommit should depend on clean-jars)

 ant precommit should remind people to run clean-jars when checksums are not 
 right
 -

 Key: LUCENE-5831
 URL: https://issues.apache.org/jira/browse/LUCENE-5831
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor

 Ivy's bug of failing to remove differently versioned dependencies in 
 (test-)lib/ dirs even though we set {{sync=true}} on {{ivy:retrieve}} (I 
 couldn't find a JIRA for this) continues to cause trouble/confusion (see 
 related LUCENE-5467).
 We should make the ant {{precommit}} target depend on {{clean-jars}}, so that 
 people won't think they need to run {{ant jar-checksums}} because of stale 
 jars Ivy leaves in {{lib/}} or {{test-lib/}} directories, which currently 
 causes {{ant precommit}} to bitch that there are missing checksums.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5831) ant precommit should remind people to run clean-jars and jar-checksums when checksums are not right

2014-07-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-5831:
---

Summary: ant precommit should remind people to run clean-jars and 
jar-checksums when checksums are not right  (was: ant precommit should remind 
people to run clean-jars when checksums are not right)

 ant precommit should remind people to run clean-jars and jar-checksums when 
 checksums are not right
 ---

 Key: LUCENE-5831
 URL: https://issues.apache.org/jira/browse/LUCENE-5831
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor

 Ivy's bug of failing to remove differently versioned dependencies in 
 (test-)lib/ dirs even though we set {{sync=true}} on {{ivy:retrieve}} (I 
 couldn't find a JIRA for this) continues to cause trouble/confusion (see 
 related LUCENE-5467).
 We should make the ant {{precommit}} target depend on {{clean-jars}}, so that 
 people won't think they need to run {{ant jar-checksums}} because of stale 
 jars Ivy leaves in {{lib/}} or {{test-lib/}} directories, which currently 
 causes {{ant precommit}} to bitch that there are missing checksums.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-07-19 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067615#comment-14067615
 ] 

Paul Elschot commented on LUCENE-5205:
--

I wrote:

bq. Shall we use for example lucene5205-ta and lucene5205-pe as branch names in 
our github repositories?

Actually, once we're done here solving the earlier merge conflict between 
lucene5205 and trunk, we can move to LUCENE-5758.
So the branch names could be lucene5758-ta and lucene5758-pe .

 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Fix For: 4.9

 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: jakarta apache
 * phrase with slop: jakarta apache~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require in order for phrases with slop with the \~ operator: 
 jakarta apache\~3
 * Can specify not near: fever bieber!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single-level phrasal queries instead of quotation 
 marks, as in: \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: jakarta\~1 ap*che\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance = 1, 
 prefix = 2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
 <= 2: (jakarta~1 (OSA) vs jakarta~1 (Levenshtein))
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067618#comment-14067618
 ] 

Uwe Schindler commented on LUCENE-5837:
---

I don't think there is specialization needed. The null check Robert mentions 
was done like this to optimize missing values.

The null check has to be done anyway by the JVM, so removing it brings nothing.

See the original missing-values issue for discussion.

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch, LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0
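The branching the issue describes can be sketched in plain Java. This is an illustration of the idea, not Lucene's actual comparator code: the docsWithField lookup only matters when a stored 0 could mean "missing" and the substitute missing value is non-zero; in the three cases listed above the raw value is already correct. All names here are made up:

```java
import java.util.BitSet;

public class MissingValueSketch {
    // docsWithField marks which docs actually have a value; unset slots in
    // `values` default to 0, so 0 is ambiguous between "stored 0" and "missing".
    static long value(long[] values, BitSet docsWithField, long missingValue, int doc) {
        long v = values[doc];
        // The docsWithField check is only needed on this narrow path; when all
        // docs have a value, none do, or missingValue == 0, it can be skipped.
        if (v == 0 && missingValue != 0 && !docsWithField.get(doc)) {
            return missingValue;
        }
        return v;
    }

    public static void main(String[] args) {
        long[] values = {7, 0, 0};
        BitSet withField = new BitSet();
        withField.set(0); // doc 0 stores 7
        withField.set(1); // doc 1 genuinely stores 0
        // doc 2 has no value; substitute -1 for sorting purposes
        System.out.println(value(values, withField, -1, 0) + ","
            + value(values, withField, -1, 1) + ","
            + value(values, withField, -1, 2));
    }
}
```

Doc 1 and doc 2 both read as 0 from the values array; only the docsWithField bit tells them apart.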



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5831) ant precommit should remind people to run clean-jars and jar-checksums when checksums are not right

2014-07-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-5831.


   Resolution: Fixed
Fix Version/s: 4.10
   5.0

Thanks everybody, I added Hoss's text, with minor modifications, to the failure 
message printed by {{LicenseCheckTask}} (called by ant target 
{{check-licenses}} via {{validate}} via {{precommit}}.)

 ant precommit should remind people to run clean-jars and jar-checksums when 
 checksums are not right
 ---

 Key: LUCENE-5831
 URL: https://issues.apache.org/jira/browse/LUCENE-5831
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 5.0, 4.10


 Ivy's bug of failing to remove differently versioned dependencies in 
 (test-)lib/ dirs even though we set {{sync=true}} on {{ivy:retrieve}} (I 
 couldn't find a JIRA for this) continues to cause trouble/confusion (see 
 related LUCENE-5467).
 We should make the ant {{precommit}} target depend on {{clean-jars}}, so that 
 people won't think they need to run {{ant jar-checksums}} because of stale 
 jars Ivy leaves in {{lib/}} or {{test-lib/}} directories, which currently 
 causes {{ant precommit}} to bitch that there are missing checksums.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4999) Make the collections API consistent by using 'collection' instead of 'name'

2014-07-19 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067621#comment-14067621
 ] 

Anshum Gupta commented on SOLR-4999:


Thinking about all the APIs, the name/collection split already mostly makes 
sense (logically). This would just help standardize the APIs.
If we decide to do this (which I'm already having second thoughts about), it 
would make more sense to break back-compat with 5.0 and give users an 
option to use either for all the 4.x releases. Having both of them work in 5.x 
would only make things more confusing and also lead to messy code.

 Make the collections API consistent by using 'collection' instead of 'name'
 ---

 Key: SOLR-4999
 URL: https://issues.apache.org/jira/browse/SOLR-4999
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3.1
Reporter: Anshum Gupta

 The collections API as of now are split between using 'name' and 'collection' 
 parameter.
 We should add support to all APIs to work with 'collection', while 
 maintaining 'name' (where it already exists) until 5.0.






[jira] [Resolved] (SOLR-6257) More than two !-s in a doc ID throws an ArrayIndexOutOfBoundsException when using the composite id router

2014-07-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-6257.
--

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

Committed:
* trunk: r1611934
* branch_4x: r1611940

 More than two !-s in a doc ID throws an ArrayIndexOutOfBoundsException when 
 using the composite id router
 ---

 Key: SOLR-6257
 URL: https://issues.apache.org/jira/browse/SOLR-6257
 Project: Solr
  Issue Type: Bug
Reporter: Steve Rowe
Assignee: Steve Rowe
 Fix For: 5.0, 4.10

 Attachments: SOLR-6257.patch, SOLR-6257.patch


 Since {{CompositeIdRouter}} is the default router, it has to be able to deal 
 with *any* ID string without throwing an exception.
 The following test (added to {{TestHashPartitioner}}) currently fails:
 {code:java}
    public void testNonConformingCompositeId() throws Exception {
      DocRouter router = DocRouter.getDocRouter(CompositeIdRouter.NAME);
      DocCollection coll = createCollection(4, router);
      Slice targetSlice = coll.getRouter().getTargetSlice("A!B!C!D", null, null, coll);
      assertNotNull(targetSlice);
    }
 {code}
 with the following output: 
 {noformat}
[junit4] Suite: org.apache.solr.cloud.TestHashPartitioner
[junit4]   2 log4j:WARN No such property [conversionPattern] in 
 org.apache.solr.util.SolrLogLayout.
[junit4]   2 Creating dataDir: 
 /Users/sarowe/svn/lucene/dev/trunk/solr/build/solr-core/test/J0/./temp/solr.cloud.TestHashPartitioner-19514036FB5C5E56-001/init-core-data-001
[junit4]   2 1233 T11 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
 (false) and clientAuth (false)
[junit4]   2 1296 T11 oas.SolrTestCaseJ4.setUp ###Starting 
 testNonConformingCompositeId
[junit4] Throwable #1: java.lang.ArrayIndexOutOfBoundsException: 2
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([19514036FB5C5E56:3A131EC016F531A4]:0)
[junit4]  at 
 org.apache.solr.common.cloud.CompositeIdRouter$KeyParser.getHash(CompositeIdRouter.java:296)
[junit4]  at 
 org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:58)
[junit4]  at 
 org.apache.solr.common.cloud.HashBasedRouter.getTargetSlice(HashBasedRouter.java:33)
[junit4]  at 
 org.apache.solr.cloud.TestHashPartitioner.testNonConformingCompositeId(TestHashPartitioner.java:205)
[junit4]  at java.lang.Thread.run(Thread.java:745)
 {noformat}
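One way to make composite-id parsing tolerant of extra separators is to cap the split so anything past the second {{!}} folds into the last part. This is only an illustrative sketch; Solr's actual CompositeIdRouter fix (bit allocations, shard ranges) is more involved, and the class/method names below are hypothetical:

```java
// Hypothetical sketch: route any id without throwing, even non-conforming
// ids like "A!B!C!D" with more than two '!' separators.
public class TolerantIdRouter {
    static int sliceHash(String id) {
        // Limit of 3 means "A!B!C!D" splits into {"A", "B", "C!D"}:
        // extra separators never cause an out-of-bounds part index.
        String[] parts = id.split("!", 3);
        int high = parts[0].hashCode();
        int low = parts[parts.length - 1].hashCode();
        // Combine upper/lower 16 bits, loosely mirroring composite hashing.
        return (high & 0xFFFF0000) | (low & 0x0000FFFF);
    }

    public static void main(String[] args) {
        System.out.println(sliceHash("A!B!C!D")); // must not throw
        System.out.println(sliceHash("plain-id")); // no separator is fine too
    }
}
```

The split limit is the whole trick: without it, code that indexes a fixed number of parts hits exactly the ArrayIndexOutOfBoundsException shown in the test output above.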






[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-07-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067625#comment-14067625
 ] 

Mark Miller commented on SOLR-3619:
---

bq. Just to be fair and balanced, ES does not come with any examples at all. 

To me, this issue is not about being more like ES. It's about being like most 
database systems out there. Examples are good; calling your out-of-the-box 
experience an example is not good. 

There are lots of low-hanging ease-of-use issues we can address, but I still 
think the best way for this issue to move forward is to do what it says in the 
title and fight other battles in other issues.

bq. Rename 'example' dir to 'server' and pull examples into an 'examples' 
directory

Let's ship with a server dir where out of the box Solr lives. Let's move the 
example stuff mostly out of that.

We can work on other things in other issues, like removing this silly default 
collection1, and decide exactly how many examples we should have and how they 
should be laid out. Let's just get a server directory going without duplicating 
stuff from example. We don't want to drop examples; we want a simple and 
generic out-of-the-box experience and easy-to-use examples somewhere to 
the side.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
 Fix For: 4.9, 5.0

 Attachments: SOLR-3619.patch, server-name-layout.png









[jira] [Comment Edited] (SOLR-6257) More than two !-s in a doc ID throws an ArrayIndexOutOfBoundsException when using the composite id router

2014-07-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067624#comment-14067624
 ] 

Steve Rowe edited comment on SOLR-6257 at 7/19/14 6:32 PM:
---

Committed:
* trunk: [r1611934|http://svn.apache.org/r1611934]
* branch_4x: [r1611940|http://svn.apache.org/r1611940]


was (Author: steve_rowe):
Committed:
* trunk: r1611934
* branch_4x: r1611940

 More than two !-s in a doc ID throws an ArrayIndexOutOfBoundsException when 
 using the composite id router
 ---

 Key: SOLR-6257
 URL: https://issues.apache.org/jira/browse/SOLR-6257
 Project: Solr
  Issue Type: Bug
Reporter: Steve Rowe
Assignee: Steve Rowe
 Fix For: 5.0, 4.10

 Attachments: SOLR-6257.patch, SOLR-6257.patch


 Since {{CompositeIdRouter}} is the default router, it has to be able to deal 
 with *any* ID string without throwing an exception.
 The following test (added to {{TestHashPartitioner}}) currently fails:
 {code:java}
    public void testNonConformingCompositeId() throws Exception {
      DocRouter router = DocRouter.getDocRouter(CompositeIdRouter.NAME);
      DocCollection coll = createCollection(4, router);
      Slice targetSlice = coll.getRouter().getTargetSlice("A!B!C!D", null, null, coll);
      assertNotNull(targetSlice);
    }
 {code}
 with the following output: 
 {noformat}
[junit4] Suite: org.apache.solr.cloud.TestHashPartitioner
[junit4]   2 log4j:WARN No such property [conversionPattern] in 
 org.apache.solr.util.SolrLogLayout.
[junit4]   2 Creating dataDir: 
 /Users/sarowe/svn/lucene/dev/trunk/solr/build/solr-core/test/J0/./temp/solr.cloud.TestHashPartitioner-19514036FB5C5E56-001/init-core-data-001
[junit4]   2 1233 T11 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
 (false) and clientAuth (false)
[junit4]   2 1296 T11 oas.SolrTestCaseJ4.setUp ###Starting 
 testNonConformingCompositeId
[junit4] Throwable #1: java.lang.ArrayIndexOutOfBoundsException: 2
[junit4]  at 
 __randomizedtesting.SeedInfo.seed([19514036FB5C5E56:3A131EC016F531A4]:0)
[junit4]  at 
 org.apache.solr.common.cloud.CompositeIdRouter$KeyParser.getHash(CompositeIdRouter.java:296)
[junit4]  at 
 org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:58)
[junit4]  at 
 org.apache.solr.common.cloud.HashBasedRouter.getTargetSlice(HashBasedRouter.java:33)
[junit4]  at 
 org.apache.solr.cloud.TestHashPartitioner.testNonConformingCompositeId(TestHashPartitioner.java:205)
[junit4]  at java.lang.Thread.run(Thread.java:745)
 {noformat}






[jira] [Updated] (SOLR-6121) cursorMark should accept sort without the uniqueKey

2014-07-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6121:
-

Description: 
If you are using the cursorMark (deep paging) feature, you shouldn't *have* to 
add the uniqueKey to the sort parameter.  If the user doesn't do it, the user 
obviously doesn't care about the uniqueKey order relative to whatever other 
sort parameters they may or may not have provided.  So if sort doesn't have it, 
then Solr should simply tack it on at the end instead of providing an error and 
potentially confusing the user.  This would be more user friendly.

Quoting Hoss from 
[SOLR-5463|https://issues.apache.org/jira/browse/SOLR-5463?focusedCommentId=14011384page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14011384]:
{quote}
The reason the code currently throws an error was because i figured it was 
better to force the user to choose which tie breaker they wanted (asc vs desc) 
then to just magically pick one arbitrarily.

If folks think a magic default is better, I've got no serious objections – 
just open a new issue.
{quote}


  was:If you are using the cursorMark (deep paging) feature, you shouldn't 
*have* to add the uniqueKey to the sort parameter.  If the user doesn't do it, 
the user obviously doesn't care about the uniqueKey order relative to whatever 
other sort parameters they may or may not have provided.  So if sort doesn't 
have it, then Solr should simply tack it on at the end instead of providing an 
error and potentially confusing the user.  This would be more user friendly.


 cursorMark should accept sort without the uniqueKey
 ---

 Key: SOLR-6121
 URL: https://issues.apache.org/jira/browse/SOLR-6121
 Project: Solr
  Issue Type: Improvement
Reporter: David Smiley
Priority: Minor

 If you are using the cursorMark (deep paging) feature, you shouldn't *have* 
 to add the uniqueKey to the sort parameter.  If the user doesn't do it, the 
 user obviously doesn't care about the uniqueKey order relative to whatever 
 other sort parameters they may or may not have provided.  So if sort doesn't 
 have it, then Solr should simply tack it on at the end instead of providing 
 an error and potentially confusing the user.  This would be more user 
 friendly.
 Quoting Hoss from 
 [SOLR-5463|https://issues.apache.org/jira/browse/SOLR-5463?focusedCommentId=14011384page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14011384]:
 {quote}
 The reason the code currently throws an error was because i figured it was 
 better to force the user to choose which tie breaker they wanted (asc vs 
 desc) then to just magically pick one arbitrarily.
 If folks think a magic default is better, I've got no serious objections – 
 just open a new issue.
 {quote}
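The proposed "tack it on at the end" behavior amounts to a small sort-spec check. A minimal sketch of the idea, with hypothetical names (Solr's real implementation would operate on its parsed SortSpec, not raw strings):

```java
// Hypothetical sketch of the proposal: if the sort spec does not mention the
// uniqueKey field, append it ascending as a tie-breaker instead of erroring.
public class CursorSortFixer {
    static String ensureUniqueKeyTieBreaker(String sortSpec, String uniqueKey) {
        for (String clause : sortSpec.split(",")) {
            String field = clause.trim().split("\\s+")[0];
            if (field.equals(uniqueKey)) {
                return sortSpec; // user already chose a direction; keep it
            }
        }
        // "Magic default": arbitrarily pick ascending as the tie-breaker.
        return sortSpec + "," + uniqueKey + " asc";
    }

    public static void main(String[] args) {
        System.out.println(ensureUniqueKeyTieBreaker("score desc", "id"));
        System.out.println(ensureUniqueKeyTieBreaker("score desc,id desc", "id"));
    }
}
```

This is exactly the trade-off Hoss describes: the appended direction is arbitrary, but cursorMark only needs the total order to be stable, not any particular tie-break direction.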






Re: Solr checkIfIAmLeader usage from ZK event thread

2014-07-19 Thread Mark Miller
Put up a patch and let's take a look.

Most anywhere that holds up the zk processing thread for any decent amount of 
time is probably something waiting to be fixed.

-- 
Mark Miller
about.me/markrmiller

On July 15, 2014 at 10:09:56 AM, Ramkumar R. Aiyengar (andyetitmo...@gmail.com) 
wrote:
 Currently when a replica is watching the current leader's ephemeral node
 and the leader disappears, it runs the leadership check along with its two
 way peer sync, ZK update etc. on the ZK event thread where the watch was
 fired.
 
 What this means is that for instances with lots of cores, you would be
 serializing leadership elections and the last in the list could take a long
 time to have a replacement elected (during which you will have no leader).
 
 I did a quick change to make the checkIfIAmLeader call async, but Solr
 cloud tests being what they are (thanks Shalin for cleaning them up btw :)
 ), I wanted to check if I am doing something stupid. If not, I will raise a
 JIRA.
 
 One concern could be that you might end up with two elections for the same
 shard, but I can't see how that might happen.
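The async hand-off being discussed can be sketched as a watcher that submits the election work to an executor instead of running it inline. All names here are illustrative, not Solr's actual classes; the real change has to preserve ZK session/ordering semantics:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the proposal: run checkIfIAmLeader off the ZooKeeper event thread
// so watcher callbacks return immediately and elections for many cores are
// not serialized on the single ZK processing thread.
public class AsyncLeaderCheck {
    // A single worker keeps election steps for this replica ordered.
    final ExecutorService electionExecutor = Executors.newSingleThreadExecutor();

    // Called from the ZK watcher when the leader's ephemeral node disappears.
    void onLeaderGone(Runnable checkIfIAmLeader) {
        electionExecutor.submit(checkIfIAmLeader); // returns immediately
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncLeaderCheck c = new AsyncLeaderCheck();
        c.onLeaderGone(() -> System.out.println("leadership check off the ZK thread"));
        c.electionExecutor.shutdown();
        c.electionExecutor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The single-threaded executor is deliberate: it avoids the two-concurrent-elections concern for one shard while still unblocking the shared ZK event thread.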
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6227) ChaosMonkeySafeLeaderTest failures on jenkins

2014-07-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067650#comment-14067650
 ] 

Shalin Shekhar Mangar commented on SOLR-6227:
-

I haven't seen this test fail ever since SOLR-6235 was committed. It is 
possible that the underlying issue was the same in both failures. My local Jenkins 
is chugging along nicely but I haven't been able to reproduce this. I'll keep 
this open for a couple of days more and then close if I still can't reproduce 
the failure.

 ChaosMonkeySafeLeaderTest failures on jenkins
 -

 Key: SOLR-6227
 URL: https://issues.apache.org/jira/browse/SOLR-6227
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
 Fix For: 4.10


 This is happening very frequently.
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard1 is not consistent.  Got 143 from 
 https://127.0.0.1:36610/xvv/collection1lastClient and got 142 from 
 https://127.0.0.1:33168/xvv/collection1
 Stack Trace:
 java.lang.AssertionError: shard1 is not consistent.  Got 143 from 
 https://127.0.0.1:36610/xvv/collection1lastClient and got 142 from 
 https://127.0.0.1:33168/xvv/collection1
 at 
 __randomizedtesting.SeedInfo.seed([3C1FB6EAFE71:BDF938F2AA829E4D]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1139)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:150)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}






[jira] [Commented] (SOLR-4895) Throw an error when a rollback is attempted in SolrCloud mode.

2014-07-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067652#comment-14067652
 ] 

Mark Miller commented on SOLR-4895:
---

Usually, 
http://lucene.apache.org/solr/4_9_0/solr-core/org/apache/solr/core/CoreContainer.html#isZooKeeperAware()

 Throw an error when a rollback is attempted in SolrCloud mode.
 --

 Key: SOLR-4895
 URL: https://issues.apache.org/jira/browse/SOLR-4895
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.9, 5.0









[jira] [Commented] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067657#comment-14067657
 ] 

Adrien Grand commented on LUCENE-5837:
--

Uwe, please read the issue again: the goal was not to remove the null check, 
but the check for missing values.

The reason why I came up with this issue is that I'm writing a selector in 
order to sort based on the values of a block of documents. To make it work 
efficiently I need to write a NumericDocValues instance that already returns 
the missing value when there are either no child documents in the block or if 
none of them have a value. So there is no need to check the missing values in 
the comparator.

I'm surprised that you think of it as a specialization, as this is actually 
making things simpler. The handling of the missing value is done once and for 
all in setNextReader, and then the comparator only needs to care about the 
NumericDocValues instance. And it makes it easier (and potentially more 
efficient) to write selectors.
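The idea of folding missing-value handling into the values source, done once where setNextReader would run, can be sketched with simplified stand-in interfaces (these are not Lucene's actual NumericDocValues/FieldComparator APIs):

```java
import java.util.function.IntPredicate;

// Sketch: wrap raw per-doc values plus a docsWithField-style test into a
// source that already substitutes missingValue, so the comparator's inner
// compare loop never branches on missing values at all.
public class MissingValueSource {
    interface LongValues { long get(int doc); }

    static LongValues withMissing(LongValues raw, IntPredicate hasValue, long missingValue) {
        // One wrapper built per segment; callers see plain values afterwards.
        return doc -> hasValue.test(doc) ? raw.get(doc) : missingValue;
    }
}
```

A selector over a block of child documents can then return {{missingValue}} itself when the block is empty, and the comparator code stays identical.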

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch, LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Commented] (LUCENE-5836) BytesRef.copyBytes and copyChars don't oversize

2014-07-19 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067671#comment-14067671
 ] 

Adrien Grand commented on LUCENE-5836:
--

I'd like to fix it as well but this would be a very big change. :( In the 
meantime, would you agree to fix copyBytes to oversize the destination array to 
make sure that we don't hit the worst case?
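The oversizing being proposed amounts to growing the destination with some slack so repeated copies into the same BytesRef amortize. A simplified sketch (Lucene's real fix would go through ArrayUtil.oversize; the growth formula below is an illustrative stand-in):

```java
// Sketch of an oversizing copy target: grow with ~12% slack instead of
// allocating an exactly-sized array on every copy, so loops that repeatedly
// copy slightly-growing terms don't reallocate each iteration.
public class OversizingCopy {
    byte[] bytes = new byte[0];
    int length;

    void copyBytes(byte[] src, int off, int len) {
        if (bytes.length < len) {
            // Simplified stand-in for ArrayUtil.oversize(len, 1).
            bytes = new byte[len + (len >>> 3) + 3];
        }
        System.arraycopy(src, off, bytes, 0, len);
        length = len;
    }
}
```

Robert's counterpoint above applies: when the copy target is never reused, the slack is pure waste, which is why the committed change instead preallocated {{lastTerm}} to the known maximum at the call site.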

 BytesRef.copyBytes and copyChars don't oversize
 ---

 Key: LUCENE-5836
 URL: https://issues.apache.org/jira/browse/LUCENE-5836
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand

 When copying data from another BytesRef/CharSequence, these methods don't 
 oversize. This is not an issue if this method is used only once per BytesRef 
 instance but I just reviewed the usage of these methods and they are very 
 frequently used in loops to do things like:
  - keep track of the top values in comparators
  - keep track of the previous terms in various loops over a terms enum 
 (lucene49 DV consumer, BlockTreeTermsWriter)
  - etc.
 Although unlikely, it might be possible to hit a worst-case and to resize the 
 underlying byte[] on every call to copyBytes? Should we oversize the 
 underlying array in these methods?






[jira] [Commented] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067686#comment-14067686
 ] 

Uwe Schindler commented on LUCENE-5837:
---

Hi Adrien,

it is somehow a specialization - on the docvalues instance. You may be right - 
you can also see it as a simplification. In any case you have to take care that 
the v==0 optimization is still done, which you do (as I see).

What happens in case missingValue==null (which was possible in the old API)? 
I am not 100% sure if the current code does the right thing - but if tests pass 
I am fine.

I don't like the crazy API around the missingValue declared as long in the 
abstract base class. This is very confusing. Especially because the generics 
enfore a real type which is removed here. At least make the constructor of the 
abstract base class hidden - or hide the whole abstract base class 
(NumericComparator). I am not sure if it needs to be public at all. 

If this does not slow down, go for it!

How to handle that in Lucene 4.x? The API still uses FieldCache.DEFAULT there 
and the order of calls for getDocsWithField() is important.

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch, LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Commented] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067689#comment-14067689
 ] 

Uwe Schindler commented on LUCENE-5837:
---

bq. What happens in case missingValue==null (which was possible in the old 
API)? I am not 100% sure if the current code does the right thing - but if 
tests pass I am fine.

Your code does the right thing. You set the long missingValue to 0L and the 
specialized NumericDocValues instance returns missingValue - which is exactly 
the same as if the missingValue was null in old code (the if check then 
returned the original value, which was also 0).

One thing: I would remove the variable assignment in the compareXxx methods and 
make them one-liners.

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch, LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Commented] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067690#comment-14067690
 ] 

Robert Muir commented on LUCENE-5837:
-

{quote}
I'm surprised that you think of it as a specialization as this is actually 
making things simpler? The handling of the missing value is done once for all 
in setNextReader and then the comparator only needs to care about the 
NumericDocValues instance. And it makes it easier (and potentially more 
efficient) to write selectors.
{quote}

It is a specialization, because instead of a branch for null, you have a branch 
checking the class of the NumericDocValues. And if this one fails, the whole 
thing gets deoptimized and hotspot goes crazy.

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch, LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Commented] (LUCENE-5836) BytesRef.copyBytes and copyChars don't oversize

2014-07-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067693#comment-14067693
 ] 

Robert Muir commented on LUCENE-5836:
-

No, because there is no indication it would ever be reused: it could just be 
creating waste.

 BytesRef.copyBytes and copyChars don't oversize
 ---

 Key: LUCENE-5836
 URL: https://issues.apache.org/jira/browse/LUCENE-5836
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand

 When copying data from another BytesRef/CharSequence, these methods don't 
 oversize. This is not an issue if this method is used only once per BytesRef 
 instance but I just reviewed the usage of these methods and they are very 
 frequently used in loops to do things like:
  - keep track of the top values in comparators
  - keep track of the previous terms in various loops over a terms enum 
 (lucene49 DV consumer, BlockTreeTermsWriter)
  - etc.
 Although unlikely, it might be possible to hit a worst-case and to resize the 
 underlying byte[] on every call to copyBytes? Should we oversize the 
 underlying array in these methods?






[jira] [Commented] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067695#comment-14067695
 ] 

Uwe Schindler commented on LUCENE-5837:
---

bq. the goal was not to remove the null check, but the check for missing values.

In fact you are removing the null check, which is the extra branch to check for 
missing values - just look at the old code (this was my trick). It was done 
exactly like this to not slow down - hotspot can optimize that away, if it 
finds out that it is null - it does this very fast. We checked this at the time 
I added this to Lucene 3.5 or like that. We compared the two implementations 
and they were exactly the same speed. The same that Robert discovered here, too.

In fact your patch would only work in Lucene trunk, in 4.x this cannot be done 
like that.

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch, LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Comment Edited] (LUCENE-5837) Only check docsWithField when necessary in numeric comparators

2014-07-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067695#comment-14067695
 ] 

Uwe Schindler edited comment on LUCENE-5837 at 7/19/14 9:28 PM:


bq. the goal was not to remove the null check, but the check for missing values.

In fact you are removing the null check, which is the extra branch to check for 
missing values - just look at the old code (this was my trick). It was done 
exactly like this to not slow down - hotspot can optimize that away, if it 
finds out that it is null - it does this very fast. We checked this at the time 
I added this to Lucene 3.5 or like that. We compared the two implementations - 
without missing values and the new one with missing values - and they were 
exactly the same speed. The same that Robert discovered here, too.

In fact your patch would only work in Lucene trunk, in 4.x this cannot be done 
like that.


was (Author: thetaphi):
bq. the goal was not to remove the null check, but the check for missing values.

In fact you are removing the null check, which is the extra branch to check for 
missing values - just look at the old code (this was my trick). It was done 
exactly like this to not slow down - hotspot can optimize that away, if it 
finds out that it is null - it does this very fast. We checked this at the time 
I added this to Lucene 3.5 or like that. We compared the two implementations 
and they were exactly the same speed. The same that Robert discovered here, too.

In fact your patch would only work in Lucene trunk, in 4.x this cannot be done 
like that.

 Only check docsWithField when necessary in numeric comparators
 --

 Key: LUCENE-5837
 URL: https://issues.apache.org/jira/browse/LUCENE-5837
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5837.patch, LUCENE-5837.patch


 Our numeric comparators have branches to deal with missing values. However 
 there are some cases when checking docs that have a field is not useful:
  - if all docs have a value
  - if no docs have a value
  - if the missing value is 0






[jira] [Commented] (LUCENE-5836) BytesRef.copyBytes and copyChars don't oversize

2014-07-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067697#comment-14067697
 ] 

ASF subversion and git services commented on LUCENE-5836:
-

Commit 1611970 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1611970 ]

LUCENE-5836: when prefix-coding variable length terms, preallocate lastTerm to 
the correct size

 BytesRef.copyBytes and copyChars don't oversize
 ---

 Key: LUCENE-5836
 URL: https://issues.apache.org/jira/browse/LUCENE-5836
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand

 When copying data from another BytesRef/CharSequence, these methods don't 
 oversize. This is not an issue if this method is used only once per BytesRef 
 instance but I just reviewed the usage of these methods and they are very 
 frequently used in loops to do things like:
  - keep track of the top values in comparators
  - keep track of the previous terms in various loops over a terms enum 
 (lucene49 DV consumer, BlockTreeTermsWriter)
  - etc.
 Although unlikely, it might be possible to hit a worst-case and to resize the 
 underlying byte[] on every call to copyBytes? Should we oversize the 
 underlying array in these methods?






[jira] [Commented] (LUCENE-5836) BytesRef.copyBytes and copyChars don't oversize

2014-07-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067700#comment-14067700
 ] 

ASF subversion and git services commented on LUCENE-5836:
-

Commit 1611971 from [~rcmuir] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1611971 ]

LUCENE-5836: when prefix-coding variable length terms, preallocate lastTerm to 
the correct size

 BytesRef.copyBytes and copyChars don't oversize
 ---

 Key: LUCENE-5836
 URL: https://issues.apache.org/jira/browse/LUCENE-5836
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand

 When copying data from another BytesRef/CharSequence, these methods don't 
 oversize. This is not an issue if a method is used only once per BytesRef 
 instance, but I just reviewed the usage of these methods and they are very 
 frequently used in loops to do things like:
  - keep track of the top values in comparators
  - keep track of the previous terms in various loops over a terms enum 
 (lucene49 DV consumer, BlockTreeTermsWriter)
  - etc.
 Although unlikely, it might be possible to hit a worst case and resize the 
 underlying byte[] on every call to copyBytes. Should we oversize the 
 underlying array in these methods?






[jira] [Commented] (LUCENE-5836) BytesRef.copyBytes and copyChars don't oversize

2014-07-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067701#comment-14067701
 ] 

Robert Muir commented on LUCENE-5836:
-

{quote}
keep track of the previous terms in various loops over a terms enum (lucene49 
DV consumer, 
{quote}

Thanks for pointing this out; I fixed this one (it knows the maximum size 
before the loop).

 BytesRef.copyBytes and copyChars don't oversize
 ---

 Key: LUCENE-5836
 URL: https://issues.apache.org/jira/browse/LUCENE-5836
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand

 When copying data from another BytesRef/CharSequence, these methods don't 
 oversize. This is not an issue if a method is used only once per BytesRef 
 instance, but I just reviewed the usage of these methods and they are very 
 frequently used in loops to do things like:
  - keep track of the top values in comparators
  - keep track of the previous terms in various loops over a terms enum 
 (lucene49 DV consumer, BlockTreeTermsWriter)
  - etc.
 Although unlikely, it might be possible to hit a worst case and resize the 
 underlying byte[] on every call to copyBytes. Should we oversize the 
 underlying array in these methods?






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 587 - Failure

2014-07-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/587/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestReplicationHandlerBackup: 1) Thread[id=21595, 
name=Thread-8810, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]  
   at java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=21595, name=Thread-8810, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)
at __randomizedtesting.SeedInfo.seed([714C93886BFCB771]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=21595, name=Thread-8810, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=21595, name=Thread-8810, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at 

[jira] [Commented] (SOLR-6227) ChaosMonkeySafeLeaderTest failures on jenkins

2014-07-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067715#comment-14067715
 ] 

Mark Miller commented on SOLR-6227:
---

I've had my Jenkins running all week as well (with some ChaosMonkey-specific 
jobs, too); I just haven't checked up on them yet. I'll look and report back soon.

 ChaosMonkeySafeLeaderTest failures on jenkins
 -

 Key: SOLR-6227
 URL: https://issues.apache.org/jira/browse/SOLR-6227
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
 Fix For: 4.10


 This is happening very frequently.
 {code}
 1 tests failed.
 REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
 Error Message:
 shard1 is not consistent.  Got 143 from 
 https://127.0.0.1:36610/xvv/collection1lastClient and got 142 from 
 https://127.0.0.1:33168/xvv/collection1
 Stack Trace:
 java.lang.AssertionError: shard1 is not consistent.  Got 143 from 
 https://127.0.0.1:36610/xvv/collection1lastClient and got 142 from 
 https://127.0.0.1:33168/xvv/collection1
 at 
 __randomizedtesting.SeedInfo.seed([3C1FB6EAFE71:BDF938F2AA829E4D]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1139)
 at 
 org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
 at 
 org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:150)
 at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
 {code}






[jira] [Closed] (SOLR-5843) No way to clear error state of a core that doesn't even exist any more

2014-07-19 Thread Nathan Neulinger (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Neulinger closed SOLR-5843.
--

Resolution: Fixed

 No way to clear error state of a core that doesn't even exist any more
 --

 Key: SOLR-5843
 URL: https://issues.apache.org/jira/browse/SOLR-5843
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: Nathan Neulinger
  Labels: cloud, failure, initialization

 Created collections with missing configs - this is known to create a problem 
 state. Those collections have all since been deleted -- but one of my nodes 
 still insists that there are initialization errors.
 There are no references to those 'failed' cores in any of the cloud tabs, or 
 in ZK, or in the directories on the server itself. 
 There should be some easy way to refresh this state or to clear them out 
 without having to restart the instance. 






[jira] [Assigned] (SOLR-3345) BaseDistributedSearchTestCase should always ignore QTime

2014-07-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-3345:
-

Assignee: Mark Miller

 BaseDistributedSearchTestCase should always ignore QTime
 

 Key: SOLR-3345
 URL: https://issues.apache.org/jira/browse/SOLR-3345
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-ALPHA
Reporter: Benson Margulies
Assignee: Mark Miller
 Attachments: SOLR-3345.patch


 The existing subclasses of BaseDistributedSearchTestCase all skip QTime. I 
 can't see any way in which those numbers will ever match. Why not make this 
 the default, or only, behavior?
 (This is really a question, in that I will provide a patch if no one tells me 
 that it is a bad idea.)






[jira] [Commented] (SOLR-6165) DataImportHandler writes BigInteger and BigDecimal as-is which causes errors in SolrCloud replication

2014-07-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067743#comment-14067743
 ] 

ASF subversion and git services commented on SOLR-6165:
---

Commit 1611985 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1611985 ]

SOLR-6165: Add tests for convertType and BigDecimal implicit conversion
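The kind of implicit conversion being tested can be sketched roughly as follows. This is a hypothetical illustration, not the actual DIH patch; the class name is invented. The idea is to narrow BigDecimal/BigInteger values to types the schema and codec handle natively before the document is distributed.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// Hypothetical sketch: narrow arbitrary-precision numbers coming from the DB
// to primitives matching the schema (e.g. a "double" field), instead of
// letting them reach the replica as an unparsable string such as
// "java.math.BigDecimal:40.7607793000".
public class FieldValueConverter {
    public static Object convert(Object value) {
        if (value instanceof BigDecimal) {
            return ((BigDecimal) value).doubleValue(); // schema declares double
        }
        if (value instanceof BigInteger) {
            return ((BigInteger) value).longValue();
        }
        return value; // already a natively supported type
    }
}
```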

 DataImportHandler writes BigInteger and BigDecimal as-is which causes errors 
 in SolrCloud replication
 -

 Key: SOLR-6165
 URL: https://issues.apache.org/jira/browse/SOLR-6165
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: anand sengamalai
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.10


 We are trying to migrate from 4.1 to 4.8. After setting up the new SolrCloud, 
 when we try to do a data import using the DataImportHandler we have issues 
 with replication. The following is the error we are getting in the log. The 
 field from the DB is a numeric field, and in the Solr schema file it has been 
 declared as double.
 603215 [qtp280884709-15] INFO 
 org.apache.solr.update.processor.LogUpdateProcessor ? [locations] 
 webapp=/solr path=/update 
 params={update.distrib=FROMLEADERdistrib.from=http://servername:8983/solr/locations/wt=javabinversion=2}
  {} 0 0
 603216 [qtp280884709-15] ERROR org.apache.solr.core.SolrCore ? 
 org.apache.solr.common.SolrException: ERROR: [doc=SALT LAKE CITY-UT-84127] 
 Error adding field 'city_lat'='java.math.BigDecimal:40.7607793000' msg=For 
 input string: java.math.BigDecimal:40.7607793000
 at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
 at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:703)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:857)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:556)
 at 
 org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
 at 
 org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:96)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
 at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
 at 
 org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
 at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
 at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 

[jira] [Commented] (SOLR-6165) DataImportHandler writes BigInteger and BigDecimal as-is which causes errors in SolrCloud replication

2014-07-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067748#comment-14067748
 ] 

ASF subversion and git services commented on SOLR-6165:
---

Commit 1611987 from [~ehatcher] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1611987 ]

SOLR-6165: Add tests for convertType and BigDecimal implicit conversion

 DataImportHandler writes BigInteger and BigDecimal as-is which causes errors 
 in SolrCloud replication
 -

 Key: SOLR-6165
 URL: https://issues.apache.org/jira/browse/SOLR-6165
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: anand sengamalai
Assignee: Shalin Shekhar Mangar
 Fix For: 5.0, 4.10


 We are trying to migrate from 4.1 to 4.8. After setting up the new SolrCloud, 
 when we try to do a data import using the DataImportHandler we have issues 
 with replication. The following is the error we are getting in the log. The 
 field from the DB is a numeric field, and in the Solr schema file it has been 
 declared as double.
 603215 [qtp280884709-15] INFO 
 org.apache.solr.update.processor.LogUpdateProcessor ? [locations] 
 webapp=/solr path=/update 
 params={update.distrib=FROMLEADERdistrib.from=http://servername:8983/solr/locations/wt=javabinversion=2}
  {} 0 0
 603216 [qtp280884709-15] ERROR org.apache.solr.core.SolrCore ? 
 org.apache.solr.common.SolrException: ERROR: [doc=SALT LAKE CITY-UT-84127] 
 Error adding field 'city_lat'='java.math.BigDecimal:40.7607793000' msg=For 
 input string: java.math.BigDecimal:40.7607793000
 at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:167)
 at 
 org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:77)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:234)
 at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:703)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:857)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:556)
 at 
 org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
 at 
 org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:96)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
 at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
 at 
 org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
 at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
 at 
 org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 

[jira] [Commented] (SOLR-6252) Simplify UnInvertedField#getUnInvertedField synchronization module

2014-07-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067750#comment-14067750
 ] 

Mark Miller commented on SOLR-6252:
---

Hmm...now I'm looking at this beyond the correctness of how it was taken out...

Wasn't the intent to pull the creation of the UnInvertedField out of the sync 
block on cache so that more of them can be constructed in parallel rather than 
sequentially? 
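The parallel-construction intent described here can be sketched roughly like this. This is a hypothetical illustration, not Solr's actual UnInvertedField code; the class, method, and sentinel names are all invented.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the placeholder pattern under discussion: a static
// sentinel is put into the cache while holding the lock, so other threads see
// that an entry is being built, but the expensive construction itself runs
// outside the synchronized block -- letting several entries be built in
// parallel rather than sequentially.
public class PlaceholderCache {
    private static final Object PLACEHOLDER = new Object();
    private final Map<String, Object> cache = new HashMap<>();

    public Object get(String key) {
        synchronized (cache) {
            Object v = cache.get(key);
            // identity check against the static sentinel
            while (v == PLACEHOLDER) {
                try {
                    cache.wait(); // another thread is building this entry
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(e);
                }
                v = cache.get(key);
            }
            if (v != null) {
                return v; // already built
            }
            cache.put(key, PLACEHOLDER); // claim the slot, then drop the lock
        }
        Object built = expensiveBuild(key); // concurrent with other builds
        synchronized (cache) {
            cache.put(key, built);
            cache.notifyAll(); // wake any threads waiting on the placeholder
        }
        return built;
    }

    Object expensiveBuild(String key) { // stand-in for the costly construction
        return "uninverted:" + key;
    }
}
```

The point of the placeholder is that the expensive build happens with the lock released, so two threads building different entries proceed concurrently, while a thread asking for an entry that is mid-build waits instead of building it twice.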

 Simplify UnInvertedField#getUnInvertedField synchronization module
 --

 Key: SOLR-6252
 URL: https://issues.apache.org/jira/browse/SOLR-6252
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 5.0
Reporter: Vamsee Yarlagadda
Assignee: Mark Miller
Priority: Minor
 Attachments: SOLR-6252.patch, SOLR-6252v2.patch


 Looks like UnInvertedField#getUnInvertedField implements a bit more 
 synchronization than is required, thereby increasing the complexity.
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/request/UnInvertedField.java#L667
 As pointed out in the above link, since the synchronization is performed on 
 the cache variable (which itself keeps other threads from accessing the 
 cache), we can safely remove all the placeholder flags. As long as 
 cache.get() is in a synchronized block, we can simply populate the cache with 
 new entries and other threads will be able to see the changes.
 This change was introduced in 
 https://issues.apache.org/jira/browse/SOLR-2548 (Multithreaded faceting)






[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-07-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067756#comment-14067756
 ] 

Alexandre Rafalovitch commented on SOLR-3619:
-

+1 on splitting the issue if that gets something in here done. 

Just to clarify, though: when a server is started, it still needs to point at 
something, right? To keep the download, unzip, run experience.

Is that going to be some collection1? What is that going to be? A new basic 
example? Or still the original one, just with a different path and invocation 
command line?

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
 Fix For: 4.9, 5.0

 Attachments: SOLR-3619.patch, server-name-layout.png









[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-07-19 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067757#comment-14067757
 ] 

Yonik Seeley commented on SOLR-3619:


bq. Just to clarify though, when a server is started, it still needs to point 
at something, right? To keep the download, unzip, run experience.

Yeah, it feels like one should still be able to start the server and then index 
a document (as they can do now) without any other mandatory steps.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
 Fix For: 4.9, 5.0

 Attachments: SOLR-3619.patch, server-name-layout.png









[jira] [Commented] (SOLR-6252) Simplify UnInvertedField#getUnInvertedField synchronization module

2014-07-19 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067760#comment-14067760
 ] 

Gregory Chanan commented on SOLR-6252:
--

Good point, Mark. Maybe a comment to that effect, and replacing the 
isPlaceholder checks with == checks against the static variable, is the right 
way to go here? [I'm assuming == on references vs. booleans has similar 
performance, though I haven't actually checked that myself...]

 Simplify UnInvertedField#getUnInvertedField synchronization module
 --

 Key: SOLR-6252
 URL: https://issues.apache.org/jira/browse/SOLR-6252
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 5.0
Reporter: Vamsee Yarlagadda
Assignee: Mark Miller
Priority: Minor
 Attachments: SOLR-6252.patch, SOLR-6252v2.patch


 Looks like UnInvertedField#getUnInvertedField implements a bit more 
 synchronization than is required, thereby increasing the complexity.
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/request/UnInvertedField.java#L667
 As pointed out in the above link, since the synchronization is performed on 
 the cache variable (which itself keeps other threads from accessing the 
 cache), we can safely remove all the placeholder flags. As long as 
 cache.get() is in a synchronized block, we can simply populate the cache with 
 new entries and other threads will be able to see the changes.
 This change was introduced in 
 https://issues.apache.org/jira/browse/SOLR-2548 (Multithreaded faceting)






[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-07-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067779#comment-14067779
 ] 

Mark Miller commented on SOLR-3619:
---

I don't think so. Having this 'collection1' is just odd. You want to rename it 
right away.

I think the issue is, it just has to be a single command to get a collection1 
named what you want. And it should be easy to get a sensible starting schema or 
go schemaless. A flag change at most.

So you have:

* download
* run
* create mycollection
* index a doc

Making it so that any real system first has to delete collection1, going 
against the norm for systems like this, doesn't seem worth it.

create mycollection just has to be about that simple, and then they know how to 
create mycollection2.

Having a default collection1 is just a band aid for the amount of work we put 
you through to create a collection. It's pretty undesirable other than that.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
 Fix For: 4.9, 5.0

 Attachments: SOLR-3619.patch, server-name-layout.png









[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-07-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067787#comment-14067787
 ] 

Alexandre Rafalovitch commented on SOLR-3619:
-

Ok, so what does this mean on the practical level: **create mycollection**. 
Especially for people who are still trying Solr as opposed to doing production 
setup (two different distributions?).

The way ES handles it now is by basically baking the templates and defaults 
into the core jars. So, the first time you issue a URL against a non-existing 
collection, it's created and inherits all the defaults (ids, endpoints, default 
types (including multilingual), etc.).

Solr requires everything explicitly. And you cannot expect people to create a 
schema.xml and solrconfig.xml from scratch to get their first collection to 
work. So, **something** has to pre-exist. And that something should be easy to 
run as step 3 (worst case 4). 

If it's not a pre-built collection, then we need some sort of wizard to create 
the directory structure. How do you see that? Maybe with global configsets or 
something similar? I would love that, but I am not sure that starting to talk 
about the wizards in this JIRA is any smarter than rebuilding the whole set of 
examples.


 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
 Fix For: 4.9, 5.0

 Attachments: SOLR-3619.patch, server-name-layout.png









[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-07-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067791#comment-14067791
 ] 

Alexandre Rafalovitch commented on SOLR-3619:
-

Rereading the comment. Did you mean the following sequence of steps?
1) Download
2) Unzip
3) Start server
4) Visit Admin UI
5) Create new collection (using one of provided configsets, which is basically 
renamed examples)
6) Index?

That could work. And a good differentiation from ES, as we ship AdminUI in the 
box (theirs is separate and is free for development only). 

We'd need UI support for configsets (and I don't actually know how extensive 
they are), but that's probably a good idea anyway. On the other hand, asking 
users to create a directory in the right location, copy schema and solrconfig 
files there (from where?), and then make an Admin UI (or command line) call to 
create the collection is NOT going to work.




[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-07-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067794#comment-14067794
 ] 

Mark Miller commented on SOLR-3619:
---

It would be just like it is now. People don't start from scratch; they pull the 
so-called example config and build from that. You should be able to say "create 
me a collection" and get that at a minimum. Except it should be better: a 
schema-guessing option, a dynamic-field option, a production-schema option, etc. 
Less example, and more practical. It's all common sense, and it's basically how 
things have worked anyway; it's just that automating it and simplifying it 
hasn't happened yet. When I was at Lucid we did a lot of this with config 
'templates' for their enterprise search product. You can get a default starting 
template, a built-in template, a user-supplied template. Probably lots of ways 
to hide the currently exposed complexity and not sacrifice power or control. We 
should discuss all this in other issues though. This issue won't remove 
collection1.




[jira] [Created] (SOLR-6260) Rename DirectUpdateHandler2

2014-07-19 Thread JIRA
Tomás Fernández Löbbe created SOLR-6260:
---

 Summary: Rename DirectUpdateHandler2
 Key: SOLR-6260
 URL: https://issues.apache.org/jira/browse/SOLR-6260
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.0
Reporter: Tomás Fernández Löbbe
Priority: Minor


DirectUpdateHandler was removed, I think in Solr 4. DirectUpdateHandler2 
should be renamed, at least to remove that "2". I don't really know what "direct" 
means here. Maybe it could be renamed to DefaultUpdateHandler, or 
UpdateHandlerDefaultImpl, or other good suggestions.






[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-07-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067797#comment-14067797
 ] 

Alexandre Rafalovitch commented on SOLR-3619:
-

I do not disagree with a single thing you say there. All I am asking is to 
clarify/visualize the exact user experience at the end of *this* JIRA before 
you start moving things around. Step-by-step UX, not a generalized "create a 
collection" bit. I think it is important not to diminish the usability we 
already have, and I am happy for the actual improvements to be discussed 
separately. 

I am assuming, of course, that we are still talking about releasing this 
specific improvement as part of Solr 4.x and not just as a small part of the 
big 5.0 overhaul.




[jira] [Updated] (SOLR-6260) Rename DirectUpdateHandler2

2014-07-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6260:


Attachment: SOLR-6260.patch

A simple search-and-replace to DirectUpdateHandler. Tests are passing. 
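A repo-wide search-and-replace of this kind can be sketched as follows. This is only an illustration of the technique, not the attached patch: it works on a stand-in file in /tmp rather than the Solr source tree, and the real rename would also touch configs and docs:

```shell
# Sketch of a repo-wide class rename (illustrative, not the actual patch).
mkdir -p /tmp/rename-demo && cd /tmp/rename-demo

# Stand-in file; in the real tree this would be the class and its references.
printf 'public class DirectUpdateHandler2 {}\n' > DirectUpdateHandler2.java

# Replace the identifier everywhere it occurs; the \b word boundary
# keeps unrelated identifiers that merely contain the name untouched.
grep -rl 'DirectUpdateHandler2' . \
  | xargs sed -i 's/\bDirectUpdateHandler2\b/DirectUpdateHandler/g'

# Rename the file to match the new class name (required for Java).
mv DirectUpdateHandler2.java DirectUpdateHandler.java
```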




[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.7.0_65) - Build # 10734 - Failure!

2014-07-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10734/
Java: 64bit/jdk1.7.0_65 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 59785 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:406: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:87: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:179: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* 
./solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestJdbcDataSourceConvertType.java

Total time: 106 minutes 20 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0_65 
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6260) Rename DirectUpdateHandler2

2014-07-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067805#comment-14067805
 ] 

Mark Miller commented on SOLR-6260:
---

Perhaps SolrUpdateHandler. It should really be our only one, and it goes along 
with names like SolrIndexWriter, SolrCore, and SolrIndexSearcher. We only 
really let you override it as an advanced, unsupported thing, and we really 
don't want to have to maintain more than one implementation ourselves.




[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #660: POMs out of sync

2014-07-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/660/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.OverseerTest.testOverseerFailure

Error Message:
KeeperErrorCode = NoNode for 
/collections/collection1/leader_elect/shard1/election

Stack Trace:
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /collections/collection1/leader_elect/shard1/election
at 
__randomizedtesting.SeedInfo.seed([FD101888579BC18B:F918977B453E2EAA]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
at 
org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:260)
at 
org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:257)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
at 
org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:257)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:94)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:155)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
at 
org.apache.solr.cloud.OverseerTest$MockZKController.publishState(OverseerTest.java:155)
at 
org.apache.solr.cloud.OverseerTest.testOverseerFailure(OverseerTest.java:660)


FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
Task 3002 did not complete, final state: running

Stack Trace:
java.lang.AssertionError: Task 3002 did not complete, final state: running
at 
__randomizedtesting.SeedInfo.seed([5159783DC1610AE:84F3199BAB497092]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testDeduplicationOfSubmittedTasks(MultiThreadedOCPTest.java:162)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:71)




Build Log:
[...truncated 55192 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:490: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:182: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/extra-targets.xml:77:
 Java returned: 1

Total time: 244 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6260) Rename DirectUpdateHandler2

2014-07-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067808#comment-14067808
 ] 

Tomás Fernández Löbbe commented on SOLR-6260:
-

I think that sounds good.




[jira] [Commented] (SOLR-6260) Rename DirectUpdateHandler2

2014-07-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067809#comment-14067809
 ] 

Shalin Shekhar Mangar commented on SOLR-6260:
-

+1 for SolrUpdateHandler
