[jira] [Resolved] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5245.
---

Resolution: Fixed

Thanks Nik!

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5245.patch, LUCENE-5245.patch, LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778526#comment-13778526
 ] 

ASF subversion and git services commented on LUCENE-5245:
-

Commit 1526401 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1526401 ]

Merged revision(s) 1526399 from lucene/dev/trunk:
LUCENE-5245: Fix MultiTermQuery's constant score rewrites to always return a 
ConstantScoreQuery to make scoring consistent. Previously it returned an empty 
unwrapped BooleanQuery, if no terms were available, which has a different query 
norm

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5245.patch, LUCENE-5245.patch, LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778524#comment-13778524
 ] 

ASF subversion and git services commented on LUCENE-5245:
-

Commit 1526399 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1526399 ]

LUCENE-5245: Fix MultiTermQuery's constant score rewrites to always return a 
ConstantScoreQuery to make scoring consistent. Previously it returned an empty 
unwrapped BooleanQuery, if no terms were available, which has a different query 
norm

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5245.patch, LUCENE-5245.patch, LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: 4.5 Solr RefGuide release plan: looking to cut an RC0 tomorow ~ 1PM GMT-6

2013-09-25 Thread Uwe Schindler
Hi Chris, Hi Cassandra,

Just unrelated:

> FYI: I'm moving forward as RM for this doc release since Cassandra ran into
> SVN authorization problems trying to commit to the "dist.apache.org" repo
> to update the KEYS file and to post the RC in the "dev" directory.

The KEYS file should be the automatically generated one provided by the ASF. When checking 
this automatic file at http://people.apache.org/keys/group/lucene.asc I noticed 
that Cassandra is missing from it (she is also missing from the committers list: 
http://people.apache.org/keys/committer/). To make the key available, Cassandra 
should upload her public key to https://id.apache.org . It may take up to a day 
until the key appears in the autogenerated files, so she should upload 
her key as soon as possible.

The scripts uploading the release artifacts to the DIST SVN repo should use 
this URL to fetch the most recent KEYS file - see buildAndPushRelease.py in the 
dev-tools/scripts folder. Please don't edit the KEYS file manually; it should 
be consistent with the "official" one.

Uwe


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4221) Custom sharding

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778509#comment-13778509
 ] 

ASF subversion and git services commented on SOLR-4221:
---

Commit 1526395 from [~noble.paul] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1526395 ]

SOLR-4221 make new solrj client/router able to read old clusterstate

> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778368#comment-13778368
 ] 

Littlestar commented on LUCENE-5218:


Patch tested OK.
Please submit to trunk/trunk4x/trunk45, thanks.

Checking only these segments: _d8:
  44 of 54: name=_d8 docCount=19599
codec=hybaseStd42x
compound=true
numFiles=3
size (MB)=9.559
diagnostics = {timestamp=1379167874407, mergeFactor=22, 
os.version=2.6.32-358.el6.x86_64, os=Linux, lucene.version=4.4.0 1504776 - 
sarowe - 2013-07-19 02:49:47, source=merge, os.arch=amd64, 
mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
no deletions
test: open reader.........OK
test: fields..............OK [29 fields]
test: field norms.........OK [4 fields]
test: terms, freq, prox...OK [289268 terms; 3096641 terms/docs pairs; 689694 tokens]
test: stored fields.......OK [408046 total field count; avg 20.82 fields per doc]
test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
test: docvalues...........OK [0 total doc count; 13 docvalues fields]

No problems were detected with this index.
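
For reference, a report like the one above can also be produced programmatically; below is a minimal, hedged sketch assuming the Lucene 4.x CheckIndex API and a placeholder index path, checking only the _d8 segment:

{code:java}
import java.io.File;
import java.util.Collections;

import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class CheckSingleSegment {
  public static void main(String[] args) throws Exception {
    // "/path/to/index" is a placeholder; point it at the affected index directory.
    Directory dir = FSDirectory.open(new File("/path/to/index"));
    CheckIndex checker = new CheckIndex(dir);
    checker.setInfoStream(System.out); // print the per-segment report shown above
    CheckIndex.Status status = checker.checkIndex(Collections.singletonList("_d8"));
    System.out.println(status.clean
        ? "No problems were detected with this index."
        : "Problems were detected with this index.");
    dir.close();
  }
}
{code}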

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>Assignee: Michael McCandless
> Attachments: lucene44-LUCENE-5218.zip, LUCENE-5218.patch
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5274) Updating org.apache.httpcomponents above 4.2.2 causes tests using SSL to fail.

2013-09-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5274:
--

Attachment: SOLR-5274.patch

> Updating org.apache.httpcomponents above 4.2.2 causes tests using SSL to fail.
> --
>
> Key: SOLR-5274
> URL: https://issues.apache.org/jira/browse/SOLR-5274
> Project: Solr
>  Issue Type: Test
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.6
>
> Attachments: SOLR-5274.patch
>
>
> It seems like the system properties are no longer being cleaned up properly, 
> or some such test contamination. Tests run fine in isolation. To get around it, 
> I've added the ability to add specific settings rather than use System 
> properties - at some point I'd like to be able to load jetties in parallel, 
> and this is a required step for that anyway.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5274) Updating org.apache.httpcomponents above 4.2.2 causes tests using SSL to fail.

2013-09-25 Thread Mark Miller (JIRA)
Mark Miller created SOLR-5274:
-

 Summary: Updating org.apache.httpcomponents above 4.2.2 causes 
tests using SSL to fail.
 Key: SOLR-5274
 URL: https://issues.apache.org/jira/browse/SOLR-5274
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.6


It seems like the system properties are no longer being cleaned up properly, or 
some such test contamination. Tests run fine in isolation. To get around it, I've 
added the ability to add specific settings rather than use System properties - 
at some point I'd like to be able to load jetties in parallel, and this is a 
required step for that anyway.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5273) Update org.apache.httpcomponents from 4.2.2.

2013-09-25 Thread Mark Miller (JIRA)
Mark Miller created SOLR-5273:
-

 Summary: Update org.apache.httpcomponents from 4.2.2.
 Key: SOLR-5273
 URL: https://issues.apache.org/jira/browse/SOLR-5273
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.6


There have been a few releases in the stable line since.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5245:
--

Attachment: LUCENE-5245.patch

New patch including a test case that compares all 3 constant-score rewrites, and also 
all 3 constant-score rewrites with a non-matching MTQ (using a SHOULD clause with a dummy 
term, so the query norm can be checked to be identical).

I will commit this tomorrow.
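
Not the actual test from the patch — just a rough, self-contained sketch of the comparison described above, assuming Lucene 4.x APIs: a SHOULD clause on a dummy term that matches one document plus a prefix query that matches nothing, scored under each of the three constant-score rewrite methods so the resulting scores (and hence query norms) can be compared.

{code:java}
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MultiTermQuery;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class NonMatchingRewriteCheck {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir,
        new IndexWriterConfig(Version.LUCENE_44, new WhitespaceAnalyzer(Version.LUCENE_44)));
    Document doc = new Document();
    doc.add(new StringField("field", "dummy", Store.NO));
    writer.addDocument(doc);
    writer.close();

    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
    MultiTermQuery.RewriteMethod[] rewrites = {
        MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE,
        MultiTermQuery.CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE,
        MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT };
    for (MultiTermQuery.RewriteMethod rewrite : rewrites) {
      PrefixQuery nonMatching = new PrefixQuery(new Term("field", "nomatch"));
      nonMatching.setRewriteMethod(rewrite);
      BooleanQuery query = new BooleanQuery();
      query.add(new TermQuery(new Term("field", "dummy")), Occur.SHOULD); // dummy term that matches
      query.add(nonMatching, Occur.SHOULD);                               // prefix that matches nothing
      float topScore = searcher.search(query, 1).scoreDocs[0].score;
      // After the fix, all three rewrites should produce the same top score here.
      System.out.println(rewrite + " -> " + topScore);
    }
    dir.close();
  }
}
{code}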

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5245.patch, LUCENE-5245.patch, LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5245:
--

Attachment: (was: LUCENE-5245.patch)

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5245.patch, LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5245:
--

Attachment: LUCENE-5245.patch

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5245.patch, LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5245:
--

Attachment: LUCENE-5245.patch

Here is a patch that fixes both issues!

[~mikemccand]: The issue only affects rewrites with 0 terms, so our 
shortcut is too aggressive. We return an empty BooleanQuery(true) in that case, 
which has a different query norm than ConstantScoreQuery, resulting in different 
scores. To be consistent we should return the same query type 
(ConstantScoreQuery for the constant rewrites). This has no speed impact, as 
the scorer is always empty.
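
Roughly, the change can be pictured like this (a hedged sketch of the idea using Lucene 4.x query classes, not the actual patch):

{code:java}
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;

// Sketch only: the shape of the "zero terms" rewrite result before and after the fix.
final class EmptyRewriteSketch {

  // Before: a non-matching MultiTermQuery rewrote to an empty, unwrapped BooleanQuery,
  // whose weight contributes differently to the query norm than a ConstantScoreQuery.
  static Query before() {
    return new BooleanQuery(true);
  }

  // After: always return a ConstantScoreQuery (here wrapping an empty BooleanQuery)
  // and carry over the boost, so the query norm is the same whether or not any
  // terms matched.
  static Query after(float boost) {
    Query result = new ConstantScoreQuery(new BooleanQuery(true));
    result.setBoost(boost);
    return result;
  }
}
{code}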

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5245.patch, LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reassigned LUCENE-5245:
-

Assignee: Uwe Schindler

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>Assignee: Uwe Schindler
> Attachments: LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5245:
--

Fix Version/s: 4.6
   5.0

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778116#comment-13778116
 ] 

Uwe Schindler commented on LUCENE-5245:
---

ScoringRewrite#CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE has the same problem.

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
> Attachments: LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778112#comment-13778112
 ] 

Uwe Schindler commented on LUCENE-5245:
---

Ah sorry. It only applies to the case where no term is found. Yes, in that 
case the boost is missing and affects the query norm!

Thanks for opening the issue.

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
> Attachments: LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778104#comment-13778104
 ] 

Uwe Schindler commented on LUCENE-5245:
---

Your patch applies the constant scoring twice and also multiplies the boost 
twice.

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
> Attachments: LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Nik Everett (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nik Everett updated LUCENE-5245:


Attachment: LUCENE-5245.patch

This fixes my problem but I'm not sure how to set up unit tests in Lucene.

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
> Attachments: LUCENE-5245.patch
>
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-2844) benchmark geospatial performance based on geonames.org

2013-09-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-2844:
-

Component/s: modules/spatial

> benchmark geospatial performance based on geonames.org
> --
>
> Key: LUCENE-2844
> URL: https://issues.apache.org/jira/browse/LUCENE-2844
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/benchmark, modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 5.0, 4.6
>
> Attachments: benchmark-geo.patch, benchmark-geo.patch, 
> LUCENE-2844_spatial_benchmark.patch
>
>
> See comments for details.
> In particular, the original patch "benchmark-geo.patch" is fairly different 
> than LUCENE-2844.patch

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Nik Everett (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778093#comment-13778093
 ] 

Nik Everett commented on LUCENE-5245:
-

The query norm applied to the constant score query changes.  Say I had a query 
string like "foo:findm*^20 bar:findm*" and only foo had a result on shard 1 and 
only bar had a result on shard 2.  Both end up with the same score because on 
shard 1 the query is rewritten to "foo:findm*^20" (norm = .05) and on shard 2 to 
"bar:findm*" (norm = 1).

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-2844) benchmark geospatial performance based on geonames.org

2013-09-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-2844:
-

Attachment: LUCENE-2844_spatial_benchmark.patch

I completely re-did this with a summer intern, Liviy Ambrose. It's similar to, 
but simpler than, the first approach; it isn't based on it.  Unlike the first patch, 
it does *not* modify any of the existing benchmark code (aside from the 
build.xml of course). I intend to enhance the benchmark code under separate 
issues, so that this patch can focus on just spatial benchmarking.

h3. Test data
The build.xml grabs a tab-separated values file from geonames.org, which 
contains millions of latitude/longitude-based points. I want to take a 
snapshot (for reproducible tests), randomize the line order, and put it on 
http://people.apache.org/~dsmiley/.  Additionally, Spatial4j's test suite has a file 
containing WKT-formatted polygons for many countries. I want to host that as 
well in a format readable by LineDocSource.

h3. Source files (only 3):
* GeonamesLineParser.java: This is designed for use with LineDocSource.  
Geonames.org data comes in a tab-separated value file.
* SpatialDocMaker.java: This class is key.
** It holds a reference to the Lucene SpatialStrategy which it configures from 
the algorithm file, mostly via factories. It's possible to test quite a variety 
of spatial configurations, although it does assume RecursivePrefixTree.
** This DocMaker has the specialization to convert the shape-formatted string 
in the body field to a Shape object to be indexed.  It also has a configurable 
ShapeConverter to optionally convert a point to a circle or bounding box.
* SpatialFileQueryMaker.java: Instead of hard-coded queries (as seen in other 
non-spatial tests), it configures a private LineDocSource instance and it reads 
the shapes off that to use as spatial queries. For now you'd use it with 
GeonamesLineParser. Furthermore, it re-uses SpatialDocMaker's ShapeConverter so 
that the points can then become circle or rectangle queries.

The provided spatial.alg shows how to use it. 

Notes:
* The spatial data is placed into the "body" field of a standard benchmark 
DocData class as a string. Originally I experimented with a custom 
SpatialDocData but I determined it was needless to do that since the existing 
class can work. And after all, if you're testing spatial, you don't need to be 
simultaneously testing text. I didn't put it in DocData's attached Properties 
instance because that seems kinda heavyweight or at least medium-weight ;-)  

The patch is *not* ready -- I need to add documentation, pending input on this 
approach.

> benchmark geospatial performance based on geonames.org
> --
>
> Key: LUCENE-2844
> URL: https://issues.apache.org/jira/browse/LUCENE-2844
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/benchmark
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 5.0, 4.6
>
> Attachments: benchmark-geo.patch, benchmark-geo.patch, 
> LUCENE-2844_spatial_benchmark.patch
>
>
> See comments for details.
> In particular, the original patch "benchmark-geo.patch" is fairly different 
> than LUCENE-2844.patch

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5264) New method on NamedList to return one or many config arguments as collection

2013-09-25 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-5264:
---

Attachment: SOLR-5264.patch

New patch.  Adds methods to NamedList: public methods removeAll and removeArgs, 
private method killAll.  Removes the oneOrMany method from 
FieldMutatingUpdateProcessorFactory.  Adds tests for new methods, improves one 
existing test.  I am seeing a replication stress test failure, but I don't 
think it's related to these changes.  The patch has a conflict on branch_4x, so 
I haven't done any testing there yet.

All this so I can use removeArgs in my own custom update processors!

Please let me know if there is any objection to committing this after making 
sure branch_4x is good.


> New method on NamedList to return one or many config arguments as collection
> 
>
> Key: SOLR-5264
> URL: https://issues.apache.org/jira/browse/SOLR-5264
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.5
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 5.0, 4.6
>
> Attachments: SOLR-5264.patch, SOLR-5264.patch, SOLR-5264.patch
>
>
> In the FieldMutatingUpdateProcessorFactory is a method called "oneOrMany" 
> that takes all of the entries in a NamedList and pulls them out into a 
> Collection.  I'd like to use that in a custom update processor I'm building.
> It seems as though this functionality would be right at home as part of 
> NamedList itself.  Here's a patch that moves the method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778064#comment-13778064
 ] 

Uwe Schindler commented on LUCENE-5245:
---

The query is constant score, so the score is always the same (the boost 
factor). What is the problem?

> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated
> -
>
> Key: LUCENE-5245
> URL: https://issues.apache.org/jira/browse/LUCENE-5245
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Nik Everett
>
> ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
> before query weight is calculated.  This dramatically changes the resulting 
> score which is bad when comparing scores across different Lucene 
> indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5245) ConstantScoreAutoRewrite rewrites prefix queries that don't match anything before query weight is calculated

2013-09-25 Thread Nik Everett (JIRA)
Nik Everett created LUCENE-5245:
---

 Summary: ConstantScoreAutoRewrite rewrites prefix queries that 
don't match anything before query weight is calculated
 Key: LUCENE-5245
 URL: https://issues.apache.org/jira/browse/LUCENE-5245
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Nik Everett


ConstantScoreAutoRewrite rewrites prefix queries that don't match anything 
before query weight is calculated.  This dramatically changes the resulting 
score which is bad when comparing scores across different Lucene 
indexes/shards/whatever.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2844) benchmark geospatial performance based on geonames.org

2013-09-25 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778035#comment-13778035
 ] 

David Smiley commented on LUCENE-2844:
--

h2. benchmark-geo.patch (2011-01)

Until now (with this patch), the benchmark contrib module did not include a 
means to test geospatial data.  This patch includes some new files and changes 
to existing ones.  Here is a summary of what is being added in this patch per 
file (all files below are within the benchmark contrib module) along with my 
notes:

Changes:
* build.xml -- Add dependency on Lucene's spatial module and Solr.
** It was a real pain to figure out the convoluted ant build system to make 
this work, and I doubt I did it the proper way.  
** Rob Muir thought it would be a good idea to make the benchmark contrib 
module a top-level module (i.e. alongside analysis) so that it can depend 
on everything.  
http://lucene.472066.n3.nabble.com/Re-Geospatial-search-in-Lucene-Solr-tp2157146p2157824.html
  I agree 
* ReadTask.java -- Added a search.useHitTotal boolean option that will use the 
total hits number for reporting purposes, instead of the existing behavior.
** The existing behavior (i.e. when search.useHitTotal=false) doesn't look very 
useful since the response integer is the sum of several things instead of just 
one thing.  I don't see how anyone makes use of it.

Note that on my local system, I also changed ReportTask & RepSelectByPrefTask 
to not include the '-' every other line, and also changed Format.java to not 
use commas in the numbers.  These changes are to make copy-pasting into excel 
more streamlined.

New Files:
* geoname-spatial.alg -- my algorithm file.
**  Note the ":0" trailing the Populate sequence.  This is a trick I use to 
skip building the index, since it takes a while to build and I'm not interested 
in benchmarking index construction.  You'll want to set this to :1 and then 
subsequently put it back for further runs as long as you keep the 
doc.geo.schemaField or any other configuration elements affecting the index the 
same.
** In the patch, doc.geo.schemaField=geohash but unless you're tinkering with 
SOLR-2155, you'll probably want to set this to "latlon"
* GeoNamesContentSource.java -- a ContentSource for a geonames.org data file 
(either a single country like US.txt or allCountries.txt).
** Uses a subclass of DocData to store all the fields.  The existing DocData 
wasn't very applicable to data that is not composed of a title and body.
** Doesn't reuse the docdata parameter to getNextDocData(); a new one is 
created every time.
** Only supports content.source.forever=false
* GeoNamesDocMaker.java -- a subclass of DocMaker that works very differently 
than the existing DocMaker.
** Instead of assuming that each line from geonames.org will correspond to one 
Lucene document, this implementation supports, via configuration, creating a 
variable number of documents, each with a variable number of points taken 
randomly from a GeoNamesContentSource.
** doc.geo.docsToGenerate:  The number of documents to generate.  If blank it 
defaults to the number of rows in GeoNamesContentSource.
** doc.geo.avgPlacesPerDoc: The average number of places to be added to a 
document.  A random number between 0 and one less than twice this amount is 
chosen on a per document basis.  If this is set to 1, then exactly one is 
always used.  In order to support a value greater than 1, use the geohash field 
type and incorporate SOLR-2155 (geohash prefix technique).
** doc.geo.oneDocPerPlace: Whether at most one document should use the same 
place.  In other words, can more than one document have the same place?  If so, 
set this to false.
** doc.geo.schemaField: references a field name in schema.xml.  The field 
should implement SpatialQueryable.
* GeoPerfData.java: This class is a singleton storing data in memory that is 
shared by GeoNamesDocMaker.java and GeoQueryMaker.java.
** content.geo.zeroPopSubst: if a population is encountered that is <= 0, then 
use this population value instead.  Default is 100.
** content.geo.maxPlaces: A limit on the number of rows read in from 
GeoNamesContentSource.java can be set here.  Defaults to Integer.MAX_VALUE.
** GeoPerfData is primarily responsible for reading in data from 
GeoNamesContentSource into memory to store the lat, lon, and population.  When 
a random place is asked for, you get one weighted according to population.  The 
idea is to skew the data towards more referenced places, and a population 
number is a decent way of doing it.
* GeoQueryMaker.java -- returns random queries from GeoPerfData by taking a 
random point and using a particular configured radius.  A pure lat-lon bounding 
box query is ultimately done.
** query.geo.radiuskm: The radius of the query in kilometers.
* schema.xml -- a Solr schema file to configure SpatialQueriable fields 
referenced by doc.geo.schemaField.

Wh

[jira] [Updated] (LUCENE-2844) benchmark geospatial performance based on geonames.org

2013-09-25 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-2844:
-

Description: 
See comments for details.
In particular, the original patch "benchmark-geo.patch" is fairly different 
than LUCENE-2844.patch

  was:
Until now (with this patch), the benchmark contrib module did not include a 
means to test geospatial data.  This patch includes some new files and changes 
to existing ones.  Here is a summary of what is being added in this patch per 
file (all files below are within the benchmark contrib module) along with my 
notes:

Changes:
* build.xml -- Add dependency on Lucene's spatial module and Solr.
** It was a real pain to figure out the convoluted ant build system to make 
this work, and I doubt I did it the proper way.  
** Rob Muir thought it would be a good idea to make the benchmark contrib 
module a top-level module (i.e. alongside analysis) so that it can depend 
on everything.  
http://lucene.472066.n3.nabble.com/Re-Geospatial-search-in-Lucene-Solr-tp2157146p2157824.html
  I agree 
* ReadTask.java -- Added a search.useHitTotal boolean option that will use the 
total hits number for reporting purposes, instead of the existing behavior.
** The existing behavior (i.e. when search.useHitTotal=false) doesn't look very 
useful since the response integer is the sum of several things instead of just 
one thing.  I don't see how anyone makes use of it.

Note that on my local system, I also changed ReportTask & RepSelectByPrefTask 
to not include the '-' every other line, and also changed Format.java to not 
use commas in the numbers.  These changes are to make copy-pasting into excel 
more streamlined.

New Files:
* geoname-spatial.alg -- my algorithm file.
**  Note the ":0" trailing the Populate sequence.  This is a trick I use to 
skip building the index, since it takes a while to build and I'm not interested 
in benchmarking index construction.  You'll want to set this to :1 and then 
subsequently put it back for further runs as long as you keep the 
doc.geo.schemaField or any other configuration elements affecting the index the 
same.
** In the patch, doc.geo.schemaField=geohash but unless you're tinkering with 
SOLR-2155, you'll probably want to set this to "latlon"
* GeoNamesContentSource.java -- a ContentSource for a geonames.org data file 
(either a single country like US.txt or allCountries.txt).
** Uses a subclass of DocData to store all the fields.  The existing DocData 
wasn't very applicable to data that is not composed of a title and body.
** Doesn't reuse the docdata parameter to getNextDocData(); a new one is 
created every time.
** Only supports content.source.forever=false
* GeoNamesDocMaker.java -- a subclass of DocMaker that works very differently 
than the existing DocMaker.
** Instead of assuming that each line from geonames.org will correspond to one 
Lucene document, this implementation supports, via configuration, creating a 
variable number of documents, each with a variable number of points taken 
randomly from a GeoNamesContentSource.
** doc.geo.docsToGenerate:  The number of documents to generate.  If blank it 
defaults to the number of rows in GeoNamesContentSource.
** doc.geo.avgPlacesPerDoc: The average number of places to be added to a 
document.  A random number between 0 and one less than twice this amount is 
chosen on a per document basis.  If this is set to 1, then exactly one is 
always used.  In order to support a value greater than 1, use the geohash field 
type and incorporate SOLR-2155 (geohash prefix technique).
** doc.geo.oneDocPerPlace: Whether at most one document should use the same 
place.  In other words, can more than one document have the same place?  If so, 
set this to false.
** doc.geo.schemaField: references a field name in schema.xml.  The field 
should implement SpatialQueryable.
* GeoPerfData.java: This class is a singleton storing data in memory that is 
shared by GeoNamesDocMaker.java and GeoQueryMaker.java.
** content.geo.zeroPopSubst: if a population is encountered that is <= 0, then 
use this population value instead.  Default is 100.
** content.geo.maxPlaces: A limit on the number of rows read in from 
GeoNamesContentSource.java can be set here.  Defaults to Integer.MAX_VALUE.
** GeoPerfData is primarily responsible for reading in data from 
GeoNamesContentSource into memory to store the lat, lon, and population.  When 
a random place is asked for, you get one weighted according to population.  The 
idea is to skew the data towards more referenced places, and a population 
number is a decent way of doing it.
* GeoQueryMaker.java -- returns random queries from GeoPerfData by taking a 
random point and using a particular configured radius.  A pure lat-lon bounding 
box query is ultimately done.
** query.geo.radiuskm: The radius of the query in kilometers.
* schema.xml -- a Solr schema file to configure Spa

[jira] [Commented] (SOLR-5264) New method on NamedList to return one or many config arguments as collection

2013-09-25 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13778011#comment-13778011
 ] 

Shawn Heisey commented on SOLR-5264:


I've thought of a new approach.  There is already a getAll method.  I can 
implement a removeAll method that's very similar to getAll.  I can then re-do 
the method I've created here, renaming it to removeArgs:  Start with getAll.  
Go through that list and build a collection, throwing an exception if any found 
values are not Strings.  Finally, delete the matching values and return the 
collection.
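
A generic sketch of that algorithm, using a plain list of name/value pairs as a stand-in for NamedList (the real Solr class and its methods are not reproduced here):

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

final class RemoveArgsSketch {

  static Collection<String> removeArgs(List<Map.Entry<String, Object>> args, String name) {
    Collection<String> result = new ArrayList<String>();
    // Pass 1: collect every value stored under the name, failing fast on non-Strings.
    for (Map.Entry<String, Object> entry : args) {
      if (name.equals(entry.getKey())) {
        Object val = entry.getValue();
        if (!(val instanceof String)) {
          throw new IllegalArgumentException(name + " must only contain String values");
        }
        result.add((String) val);
      }
    }
    // Pass 2: delete the matching entries, then return the collected values.
    for (Iterator<Map.Entry<String, Object>> it = args.iterator(); it.hasNext();) {
      if (name.equals(it.next().getKey())) {
        it.remove();
      }
    }
    return result;
  }
}
{code}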


> New method on NamedList to return one or many config arguments as collection
> 
>
> Key: SOLR-5264
> URL: https://issues.apache.org/jira/browse/SOLR-5264
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.5
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 5.0, 4.6
>
> Attachments: SOLR-5264.patch, SOLR-5264.patch
>
>
> In the FieldMutatingUpdateProcessorFactory is a method called "oneOrMany" 
> that takes all of the entries in a NamedList and pulls them out into a 
> Collection.  I'd like to use that in a custom update processor I'm building.
> It seems as though this functionality would be right at home as part of 
> NamedList itself.  Here's a patch that moves the method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 4.5 Solr RefGuide release plan: looking to cut an RC0 tomorow ~ 1PM GMT-6

2013-09-25 Thread Chris Hostetter

: Cassandra has volunteered to be my guinea pig and try acting as RM for the 4.5
: guide, to help sanity check that the process I write up makes sense & can be
: executed by someone who isn't me.

FYI: I'm moving forward as RM for this doc release since Cassandra ran into 
SVN authorization problems trying to commit to the "dist.apache.org" repo 
to update the KEYS file and to post the RC in the "dev" directory.
 
Based on my poking around, it appears that the way the dist repo is 
set up, only PMC members are allowed to commit to it unless they are 
explicitly included in a "committers_may_release" whitelist.  (I guess 
all of our release managers since dist got migrated to using
svnpubsub have been PMC members, so we haven't noticed before?  I'm 
fairly certain the old rsync dir on people.apache.org was just set up to 
allow anyone in the lucene unix group to write to it.)

I've got a question out to the infra team to sanity check that my 
speculations are correct -- if I get confirmation, I'll start a DISCUSS 
thread to figure out whether we as a project want to request a change so all 
committers can be release managers.



-Hoss

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



VOTE RC0 Release apache-solr-ref-guide-4.5.pdf

2013-09-25 Thread Chris Hostetter


Please vote to release the following artifacts as the Apache Solr 
Reference Guide for 4.5...


https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-4.5-RC0/

$ cat apache-solr-ref-guide-4.5-RC0/apache-solr-ref-guide-4.5.pdf.sha1
ee40215d30f264d663f723ea2196b72b8cc5effc  apache-solr-ref-guide-4.5.pdf

(When reviewing the PDF, please don't hesitate to point out any typos 
or formatting glitches or any other problems of subject matter. 
Re-spinning a new RC is trivial, so in my opinion the bar is very low in 
terms of what things are worth fixing before release.)






-Hoss

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5085) update ref guide release process to account for SHA1 checksum, PGP signing, and KEYS files

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777981#comment-13777981
 ] 

ASF subversion and git services commented on SOLR-5085:
---

Commit 1526272 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1526272 ]

SOLR-5085: fix script to use proper pdf file name in sha1 checksum file (should 
not include directory path)

> update ref guide release process to account for SHA1 checksum, PGP signing, 
> and KEYS files
> --
>
> Key: SOLR-5085
> URL: https://issues.apache.org/jira/browse/SOLR-5085
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> https://cwiki.apache.org/confluence/display/solr/Internal+-+How+To+Publish+This+Documentation
> * section on generating the .pdf needs to also generate a .asc and .sha1
> ** ideally script this, borrow from buildAndPushRelease.py
> * make corresponding updates to the post-vote publish section
> ** might as well update process to take advantage of 
> https://dist.apache.org/repos/dist/dev and using "svn mv" to publish
> ** https://www.apache.org/dev/release.html#upload-ci
> * need to figure out what KEYS files (if any) the process should mention keeping 
> up to date before PGP signing files

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5272) Schema REST API not returning correct Content-Type for JSON

2013-09-25 Thread Stefan Matheis (steffkes) (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777970#comment-13777970
 ] 

Stefan Matheis (steffkes) commented on SOLR-5272:
-

David, I guess you're running the provided example configuration, since you 
didn't say otherwise? If so, have a look at your [solrconfig.xml 
L1730-1736|http://svn.apache.org/viewvc/lucene/dev/trunk/solr/example/solr/collection1/conf/solrconfig.xml?view=markup#l1729]:

{code:xml}
<queryResponseWriter name="json" class="solr.JSONResponseWriter">
  <str name="content-type">text/plain; charset=UTF-8</str>
</queryResponseWriter>
{code}

That should do the trick?

btw: {{curl -I http://localhost}} does the same as your command, but is a bit 
shorter

> Schema REST API not returning correct Content-Type for JSON
> ---
>
> Key: SOLR-5272
> URL: https://issues.apache.org/jira/browse/SOLR-5272
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Affects Versions: 4.4
>Reporter: David Arthur
>
> The new Schema REST API is not returning application/json as the Content-Type 
> when wt=json (or when wt is omitted).
> Examples:
> $ curl -s -D - http://localhost:/solr/collection1/schema/fields -o 
> /dev/null
> {code}
> HTTP/1.1 200 OK
> Content-Type: text/plain; charset=UTF-8
> Date: Wed, 25 Sep 2013 17:29:24 GMT
> Accept-Ranges: bytes
> Transfer-Encoding: chunked
> {code}
> $ curl -s -D - http://localhost:/solr/collection1/schema/fields?wt=json 
> -o /dev/null
> {code}
> HTTP/1.1 200 OK
> Content-Type: text/plain; charset=UTF-8
> Date: Wed, 25 Sep 2013 17:30:59 GMT
> Accept-Ranges: bytes
> Transfer-Encoding: chunked
> {code}
> $ curl -s -D - http://localhost:/solr/collection1/schema/fields?wt=xml -o 
> /dev/null
> {code}
> HTTP/1.1 200 OK
> Content-Type: application/xml; charset=UTF-8
> Date: Wed, 25 Sep 2013 17:31:13 GMT
> Accept-Ranges: bytes
> Transfer-Encoding: chunked
> {code}
> $ curl -s -D - 
> http://localhost:/solr/collection1/schema/fields?wt=javabin -o /dev/null
> {code}
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> Date: Wed, 25 Sep 2013 17:31:45 GMT
> Accept-Ranges: bytes
> Transfer-Encoding: chunked
> {code}
> This might be more than just a schema REST API problem - perhaps something to 
> do with the Restlet/Solr writer bridge? I peeked in the code but saw nothing 
> obvious.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5218:
---

Attachment: LUCENE-5218.patch

Patch w/ test and fix; I fixed it slightly differently, just returning 
immediately when length == 0.
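
A minimal sketch of that approach (based only on the description here, not the actual committed patch) against the fillSlice method quoted later in this thread:

{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  b.length = length;
  if (length == 0) {
    // Return immediately: a zero-length slice whose start sits exactly on a
    // block boundary would otherwise compute an index one past the last block.
    return;
  }
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  if (blockSize - offset >= length) {
    // Within block
    b.bytes = blocks[index];
    b.offset = offset;
  } else {
    // Split
    b.bytes = new byte[length];
    b.offset = 0;
    System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
    System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset,
        length-(blockSize-offset));
  }
}
{noformat}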

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>Assignee: Michael McCandless
> Attachments: lucene44-LUCENE-5218.zip, LUCENE-5218.patch
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5085) update ref guide release process to account for SHA1 checksum, PGP signing, and KEYS files

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777965#comment-13777965
 ] 

ASF subversion and git services commented on SOLR-5085:
---

Commit 1526271 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1526271 ]

SOLR-5085: the point of the subdir is so the files wouldn't have 'rc' in their 
name, and could be copied as is ... fix script to split X.Y-RCZ to build final 
file names, and vet that args look correct

> update ref guide release process to account for SHA1 checksum, PGP signing, 
> and KEYS files
> --
>
> Key: SOLR-5085
> URL: https://issues.apache.org/jira/browse/SOLR-5085
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> https://cwiki.apache.org/confluence/display/solr/Internal+-+How+To+Publish+This+Documentation
> * section on generating the .pdf needs to also generate a .asc and .sha1
> ** ideally script this, borrow from buildAndPushRelease.py
> * make corresponding updates to the post-vote publish section
> ** might as well update process to take advantage of 
> https://dist.apache.org/repos/dist/dev and using "svn mv" to publish
> ** https://www.apache.org/dev/release.html#upload-ci
> * need to figure out what KEYS files (if any) process should mention keeping 
> up to date before PGP signing files

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777956#comment-13777956
 ] 

Michael McCandless commented on LUCENE-5218:


Thanks Littlestar, I'm able to reproduce this with a small test case ... I'll 
add a patch shortly.

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>Assignee: Michael McCandless
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-5218:
--

Assignee: Michael McCandless

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
>Assignee: Michael McCandless
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5264) New method on NamedList to return one or many config arguments as collection

2013-09-25 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-5264:
---

Attachment: SOLR-5264.patch

Updated patch.  No compilation errors.  Running tests.

Still need to make a test for the new method.


> New method on NamedList to return one or many config arguments as collection
> 
>
> Key: SOLR-5264
> URL: https://issues.apache.org/jira/browse/SOLR-5264
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.5
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 5.0, 4.6
>
> Attachments: SOLR-5264.patch, SOLR-5264.patch
>
>
> In the FieldMutatingUpdateProcessorFactory there is a method called "oneOrMany" 
> that takes all of the entries in a NamedList and pulls them out into a 
> Collection.  I'd like to use that in a custom update processor I'm building.
> It seems as though this functionality would be right at home as part of 
> NamedList itself.  Here's a patch that moves the method.
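
A hypothetical illustration of that behaviour (the method name, signature, and wrapper class below are assumptions made for the example, not the actual Solr API):

{noformat}
import java.util.ArrayList;
import java.util.Collection;
import org.apache.solr.common.util.NamedList;

public class OneOrManyExample {
  /** Collect every value stored under 'key', whether it was configured as a
   *  single value or as a list of values, into one flat collection. */
  static Collection<String> oneOrMany(NamedList<?> args, String key) {
    Collection<String> result = new ArrayList<String>();
    for (Object raw : args.getAll(key)) {        // every entry whose name matches 'key'
      if (raw instanceof Collection) {
        for (Object o : (Collection<?>) raw) {   // e.g. an <arr> of <str> values
          result.add(o.toString());
        }
      } else {
        result.add(raw.toString());              // a single <str> value
      }
    }
    return result;
  }
}
{noformat}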

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4221) Custom sharding

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777862#comment-13777862
 ] 

ASF subversion and git services commented on SOLR-4221:
---

Commit 1526255 from [~yo...@apache.org] in branch 'dev/branches/lucene_solr_4_5'
[ https://svn.apache.org/r1526255 ]

SOLR-4221: make new solrj client/router able to read old clusterstate

> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5241) Hotspot crash, SIGSEGV with Java 1.6u45

2013-09-25 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777834#comment-13777834
 ] 

Dawid Weiss commented on LUCENE-5241:
-

Cannot be the same as https://bugs.openjdk.java.net/browse/JDK-8024830, as 
pointed out by Vladimir (this time it's the client compiler vs. C2). Vladimir 
filed a separate issue:

https://bugs.openjdk.java.net/browse/JDK-8025460

> Hotspot crash, SIGSEGV with Java 1.6u45
> ---
>
> Key: LUCENE-5241
> URL: https://issues.apache.org/jira/browse/LUCENE-5241
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: consoleText.txt, event-files.zip, J0.ZIP, ubuntu_9.png
>
>
> First spotted here.
> http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/7486/
> I can reproduce this... sort of. On ubuntu it crashes about 1 time in
> 10.
> Reproduction steps are quite simple --
> {code}
> svn checkout -r 1525563
> http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x branch_4x_alt
> cd branch_4x_alt/lucene
> ant "-Dargs=-client -XX:+UseConcMarkSweepGC -Xmx512m" 
> -Dtests.disableHdfs=true -Dtests.multiplier=3 -Dtests.jvms=1 
> "-Dtests.class=*TestDirectory" -Dtests.seed=4B7F292A927C08A  test-core
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5241) Hotspot crash, SIGSEGV with Java 1.6u45

2013-09-25 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5241:


Description: 
First spotted here.
http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/7486/

I can reproduce this... sort of. On ubuntu it crashes about 1 time in
10.

Reproduction steps are quite simple --
{code}
svn checkout -r 1525563
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x branch_4x_alt
cd branch_4x_alt/lucene
ant "-Dargs=-client -XX:+UseConcMarkSweepGC -Xmx512m" -Dtests.disableHdfs=true 
-Dtests.multiplier=3 -Dtests.jvms=1 "-Dtests.class=*TestDirectory" 
-Dtests.seed=4B7F292A927C08A  test-core
{code}

  was:
First spotted here.
http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/7486/

I can reproduce this... sort of. On ubuntu it crashes about 1 time in
10. Could it be a backport of the same bug as 
https://bugs.openjdk.java.net/browse/JDK-8024830?

Reproduction steps are quite simple --
{code}
svn checkout -r 1525563
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x branch_4x_alt
cd branch_4x_alt/lucene
ant "-Dargs=-client -XX:+UseConcMarkSweepGC -Xmx512m" -Dtests.disableHdfs=true 
-Dtests.multiplier=3 -Dtests.jvms=1 "-Dtests.class=*TestDirectory" 
-Dtests.seed=4B7F292A927C08A  test-core
{code}


> Hotspot crash, SIGSEGV with Java 1.6u45
> ---
>
> Key: LUCENE-5241
> URL: https://issues.apache.org/jira/browse/LUCENE-5241
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: consoleText.txt, event-files.zip, J0.ZIP, ubuntu_9.png
>
>
> First spotted here.
> http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/7486/
> I can reproduce this... sort of. On ubuntu it crashes about 1 time in
> 10.
> Reproduction steps are quite simple --
> {code}
> svn checkout -r 1525563
> http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x branch_4x_alt
> cd branch_4x_alt/lucene
> ant "-Dargs=-client -XX:+UseConcMarkSweepGC -Xmx512m" 
> -Dtests.disableHdfs=true -Dtests.multiplier=3 -Dtests.jvms=1 
> "-Dtests.class=*TestDirectory" -Dtests.seed=4B7F292A927C08A  test-core
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5233) we should change the suggested search in the demo docs because the lucene code base is full of swear words

2013-09-25 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved LUCENE-5233.
--

   Resolution: Fixed
Fix Version/s: 4.6
   5.0
 Assignee: Hoss Man

I went with...

{noformat}"supercalifragilisticexpialidocious"{noformat}

> we should change the suggested search in the demo docs because the lucene 
> code base is full of swear words
> --
>
> Key: LUCENE-5233
> URL: https://issues.apache.org/jira/browse/LUCENE-5233
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Trivial
> Fix For: 5.0, 4.6
>
>
> the javadocs for the lucene demo say...
> bq. You'll be prompted for a query. Type in a swear word and press the enter 
> key. You'll see that the Lucene developers are very well mannered and get no 
> results. Now try entering the word "string". That should return a whole bunch 
> of documents. The results will page at every tenth result and ask you whether 
> you want more results.
> ...but thanks to files like "KStemData*.java" and "Top50KWiki.utf8" i was 
> *really* hard pressed to find an (english) swear word that didn't result in a 
> match in any of the files in the lucene code base (and i have a pretty 
> extensive breadth of knowledge of profanity)
> We should change this paragraph to refer to something that is total gibberish 
> ("supercalifragilisticexpialidocious")... or maybe just "nocommit"
> (side note: since this para exists in the javadoc package comments, it will 
> get picked up when they index the source -- so we should include an HTML 
> comment in the middle of whatever word is picked)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5233) we should change the suggested search in the demo docs because the lucene code base is full of swear words

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777820#comment-13777820
 ] 

ASF subversion and git services commented on LUCENE-5233:
-

Commit 1526248 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1526248 ]

LUCENE-5233: tweak demo example search string to something that isn't in the 
code base (merge r1526247)

> we should change the suggested search in the demo docs because the lucene 
> code base is full of swear words
> --
>
> Key: LUCENE-5233
> URL: https://issues.apache.org/jira/browse/LUCENE-5233
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Trivial
>
> the javadocs for the lucene demo say...
> bq. You'll be prompted for a query. Type in a swear word and press the enter 
> key. You'll see that the Lucene developers are very well mannered and get no 
> results. Now try entering the word "string". That should return a whole bunch 
> of documents. The results will page at every tenth result and ask you whether 
> you want more results.
> ...but thanks to files like "KStemData*.java" and "Top50KWiki.utf8" i was 
> *really* hard pressed to find an (english) swear word that didn't result in a 
> match in any of the files in the lucene code base (and i have a pretty 
> extensive breadth of knowledge of profanity)
> We should change this paragraph to refer to something that is total gibberish 
> ("supercalifragilisticexpialidocious")... or maybe just "nocommit"
> (side note: since this para exists in the javadoc package comments, it will 
> get picked up when they index the source -- so we should include an HTML 
> comment in the middle of whatever word is picked)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5233) we should change the suggested search in the demo docs because the lucene code base is full of swear words

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777812#comment-13777812
 ] 

ASF subversion and git services commented on LUCENE-5233:
-

Commit 1526247 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1526247 ]

LUCENE-5233: tweak demo example search string to something that isn't in the 
code base

> we should change the suggested search in the demo docs because the lucene 
> code base is full of swear words
> --
>
> Key: LUCENE-5233
> URL: https://issues.apache.org/jira/browse/LUCENE-5233
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Trivial
>
> the javadocs for the lucene demo say...
> bq. You'll be prompted for a query. Type in a swear word and press the enter 
> key. You'll see that the Lucene developers are very well mannered and get no 
> results. Now try entering the word "string". That should return a whole bunch 
> of documents. The results will page at every tenth result and ask you whether 
> you want more results.
> ...but thanks to files like "KStemData*.java" and "Top50KWiki.utf8" i was 
> *really* hard pressed to find an (english) swear word that didn't result in a 
> match in any of the files in the lucene code base (and i have a pretty 
> extensive breadth of knowledge of profanity)
> We should change this paragraph to refer to something that is total gibberish 
> ("supercalifragilisticexpialidocious")... or maybe just "nocommit"
> (side note: since this para exists in the javadoc package comments, it will 
> get picked up when they index the source -- so we should include an HTML 
> comment in the middle of whatever word is picked)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4221) Custom sharding

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777807#comment-13777807
 ] 

ASF subversion and git services commented on SOLR-4221:
---

Commit 1526244 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1526244 ]

SOLR-4221: make new solrj client/router able to read old clusterstate

> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5272) Schema REST API not returning correct Content-Type for JSON

2013-09-25 Thread David Arthur (JIRA)
David Arthur created SOLR-5272:
--

 Summary: Schema REST API not returning correct Content-Type for 
JSON
 Key: SOLR-5272
 URL: https://issues.apache.org/jira/browse/SOLR-5272
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Affects Versions: 4.4
Reporter: David Arthur


The new Schema REST API is not returning application/json as the Content-Type 
when wt=json (or when wt is omitted).

Examples:

$ curl -s -D - http://localhost:/solr/collection1/schema/fields -o /dev/null

{code}
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Date: Wed, 25 Sep 2013 17:29:24 GMT
Accept-Ranges: bytes
Transfer-Encoding: chunked
{code}

$ curl -s -D - http://localhost:/solr/collection1/schema/fields?wt=json -o 
/dev/null

{code}
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Date: Wed, 25 Sep 2013 17:30:59 GMT
Accept-Ranges: bytes
Transfer-Encoding: chunked
{code}

$ curl -s -D - http://localhost:/solr/collection1/schema/fields?wt=xml -o 
/dev/null

{code}
HTTP/1.1 200 OK
Content-Type: application/xml; charset=UTF-8
Date: Wed, 25 Sep 2013 17:31:13 GMT
Accept-Ranges: bytes
Transfer-Encoding: chunked
{code}

$ curl -s -D - http://localhost:/solr/collection1/schema/fields?wt=javabin 
-o /dev/null

{code}
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Date: Wed, 25 Sep 2013 17:31:45 GMT
Accept-Ranges: bytes
Transfer-Encoding: chunked
{code}

This might be more than just a schema REST API problem - perhaps something to 
do with the Restlet/Solr writer bridge? I peeked in the code but saw nothing 
obvious.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2216) Highlighter query exceeds maxBooleanClause limit due to range query

2013-09-25 Thread Simon Endele (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777691#comment-13777691
 ] 

Simon Endele commented on SOLR-2216:


Am I right in assuming that this isn't a problem when using the 
FastVectorHighlighter or the PostingsHighlighter?

> Highlighter query exceeds maxBooleanClause limit due to range query
> ---
>
> Key: SOLR-2216
> URL: https://issues.apache.org/jira/browse/SOLR-2216
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 1.4.1
> Environment: Linux solr-2.bizjournals.int 2.6.18-194.3.1.el5 #1 SMP 
> Thu May 13 13:08:30 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.6.0_21"
> Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
> Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
> JAVA_OPTS="-client -Dcom.sun.management.jmxremote=true 
> -Dcom.sun.management.jmxremote.port= 
> -Dcom.sun.management.jmxremote.authenticate=true 
> -Dcom.sun.management.jmxremote.access.file=/root/.jmxaccess 
> -Dcom.sun.management.jmxremote.password.file=/root/.jmxpasswd 
> -Dcom.sun.management.jmxremote.ssl=false -XX:+UseCompressedOops 
> -XX:MaxPermSize=512M -Xms10240M -Xmx15360M -XX:+UseParallelGC 
> -XX:+AggressiveOpts -XX:NewRatio=5"
> top - 11:38:49 up 124 days, 22:37,  1 user,  load average: 5.20, 4.35, 3.90
> Tasks: 220 total,   1 running, 219 sleeping,   0 stopped,   0 zombie
> Cpu(s): 47.5%us,  2.9%sy,  0.0%ni, 49.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:  24679008k total, 18179980k used,  6499028k free,   125424k buffers
> Swap: 26738680k total,29276k used, 26709404k free,  8187444k cached
>Reporter: Ken Stanley
>
> For a full detail of the issue, please see the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201011.mbox/%3CAANLkTimE8z8yOni+u0Nsbgct1=ef7e+su0_waku2c...@mail.gmail.com%3E
> The nutshell version of the issue is that when I have a query that contains 
> ranges on a specific (non-highlighted) field, the highlighter component is 
> attempting to create a query that exceeds the value of maxBooleanClauses set 
> from solrconfig.xml. This is despite my explicit setting of hl.field, 
> hl.requireFieldMatch, and various other highlight options in the query. 
> As suggested by Koji in the follow-up response, I removed the range queries 
> from my main query, and SOLR and highlighting were happy to fulfill my 
> request. It was suggested that if removing the range queries worked that this 
> might potentially be a bug, hence my filing this JIRA ticket. For what it is 
> worth, if I move my range queries into an fq, I do not get the exception 
> about exceeding maxBooleanClauses, and I get the effect that I was looking 
> for. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777672#comment-13777672
 ] 

Littlestar commented on LUCENE-5218:


Maybe the binary doc value length is 0.
My app converts a string to byte[] and adds it to BinaryDocValues.

The above patch works for me.
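
For reference, a zero-length binary doc value like that can be produced with something along these lines (a minimal sketch of the indexing side only; the field name and surrounding setup are made up for the example):

{noformat}
import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.document.Document;
import org.apache.lucene.util.BytesRef;

// ...inside whatever code builds the documents:
Document doc = new Document();
String value = "";  // an empty string coming from the application
// new BytesRef("") has length 0, so the stored binary doc value is zero-length
doc.add(new BinaryDocValuesField("payload", new BytesRef(value)));
// writer.addDocument(doc);
// Merging segments that contain such zero-length values is what hits the
// ArrayIndexOutOfBoundsException shown in this issue.
{noformat}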

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5243) Update Clover

2013-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777640#comment-13777640
 ] 

Uwe Schindler commented on LUCENE-5243:
---

I'll keep this issue open (in progress) until Clover 3.2.0 is out.

> Update Clover
> -
>
> Key: LUCENE-5243
> URL: https://issues.apache.org/jira/browse/LUCENE-5243
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 5.0
> Environment: Jenkins build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.0
>
> Attachments: LUCENE-5243.patch
>
>
> Currently we sometimes get the following build error on the Clover builds 
> (only Java 7, so happens only in Lucene/Solr trunk):
> {noformat}
> BUILD FAILED
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:452:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:364:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:382:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/extra-targets.xml:41:
>  com.atlassian.clover.api.CloverException: 
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at 
> com.cenqua.clover.reporters.html.HtmlReporter.executeImpl(HtmlReporter.java:165)
>   at 
> com.cenqua.clover.reporters.CloverReporter.execute(CloverReporter.java:41)
>   at 
> com.cenqua.clover.tasks.CloverReportTask.generateReports(CloverReportTask.java:427)
>   at 
> com.cenqua.clover.tasks.CloverReportTask.cloverExecute(CloverReportTask.java:384)
>   at 
> com.cenqua.clover.tasks.AbstractCloverTask.execute(AbstractCloverTask.java:55)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
>   at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
>   at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
>   at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
>   at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
>   at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
>

[jira] [Commented] (LUCENE-5243) Update Clover

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777639#comment-13777639
 ] 

ASF subversion and git services commented on LUCENE-5243:
-

Commit 1526210 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1526210 ]

LUCENE-5243: Temporarily fix problem with Clover + Java 7 caused by broken 
comparator. Once Clover 3.2.0 is out we can remove the snapshot repository. 
This patch is needed on trunk only (because Java 7)

> Update Clover
> -
>
> Key: LUCENE-5243
> URL: https://issues.apache.org/jira/browse/LUCENE-5243
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 5.0
> Environment: Jenkins build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.0
>
> Attachments: LUCENE-5243.patch
>
>
> Currently we sometimes get the following build error on the Clover builds 
> (only Java 7, so happens only in Lucene/Solr trunk):
> {noformat}
> BUILD FAILED
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:452:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:364:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:382:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/extra-targets.xml:41:
>  com.atlassian.clover.api.CloverException: 
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at 
> com.cenqua.clover.reporters.html.HtmlReporter.executeImpl(HtmlReporter.java:165)
>   at 
> com.cenqua.clover.reporters.CloverReporter.execute(CloverReporter.java:41)
>   at 
> com.cenqua.clover.tasks.CloverReportTask.generateReports(CloverReportTask.java:427)
>   at 
> com.cenqua.clover.tasks.CloverReportTask.cloverExecute(CloverReportTask.java:384)
>   at 
> com.cenqua.clover.tasks.AbstractCloverTask.execute(AbstractCloverTask.java:55)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
>   at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
>   at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
>   at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
>   at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
>   at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:4

[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777637#comment-13777637
 ] 

Michael McCandless commented on LUCENE-5218:


Hmm are you adding length=0 binary doc values?  It sounds like this could be a 
bug in that case, when the start aligns with the block boundary.
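
With the values Littlestar reports elsewhere in this thread (start=131072, length=0, blockBits=16, blocks.length=2), the index computation in PagedBytes.fillSlice works out to exactly that boundary case:

{noformat}
index  = (int) (start >> blockBits)   // 131072 >> 16   = 2
offset = (int) (start & blockMask)    // 131072 & 65535 = 0
// blockSize - offset (65536) >= length (0), so the "within block" branch
// reads blocks[index] = blocks[2], one past the last allocated block
// (blocks.length == 2), hence the ArrayIndexOutOfBoundsException even though
// the requested slice is empty.
{noformat}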

> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 4.5.0 RC2

2013-09-25 Thread Jack Krupansky

+1 for Windows 7 with IE10.

-- Jack Krupansky

-Original Message- 
From: Adrien Grand

Sent: Wednesday, September 25, 2013 2:55 AM
To: dev@lucene.apache.org
Subject: [VOTE] Release Lucene/Solr 4.5.0 RC2

Here is a new release candidate that fixes some JavaBin codec backward
compatibility issues (SOLR-5261, SOLR-4221).

Please vote to release the following artifacts:
 
http://people.apache.org/~jpountz/staging_area/lucene-solr-4.5.0-RC2-rev1526012/

Smoke tester was happy on my end so here is my +1.

--
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777584#comment-13777584
 ] 

Littlestar edited comment on LUCENE-5218 at 9/25/13 3:07 PM:
-

{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];  //here is 
java.lang.ArrayIndexOutOfBoundsException
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blockSize=65536, blockBits=16, blockMask=65535, 
blocks.length=2

my patch:
{noformat}
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat}

  was (Author: cnstar9988):
{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];  //here is 
java.lang.ArrayIndexOutOfBoundsException
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debug into above code.
when java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072,lenth=0
index=2, offset=0, blockSize=65536, blocks.length=2

my patch:
{noformat}
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat}
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.

[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777584#comment-13777584
 ] 

Littlestar edited comment on LUCENE-5218 at 9/25/13 3:02 PM:
-

{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];  //here is 
java.lang.ArrayIndexOutOfBoundsException
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blockSize=65536, blocks.length=2

my patch:
{noformat}
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat}

  was (Author: cnstar9988):
{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];  //here is 
java.lang.ArrayIndexOutOfBoundsException
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When the java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blocks.length=2

my patch:
{noformat}
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat}
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexW

[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777584#comment-13777584
 ] 

Littlestar edited comment on LUCENE-5218 at 9/25/13 2:59 PM:
-

{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];  // java.lang.ArrayIndexOutOfBoundsException occurs here
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When the java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blocks.length=2

my patch:
{noformat}
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat}

  was (Author: cnstar9988):
{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When the java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blocks.length=2

my patch:
{noformat}
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat}
  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.

[jira] [Comment Edited] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777584#comment-13777584
 ] 

Littlestar edited comment on LUCENE-5218 at 9/25/13 2:58 PM:
-

{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When the java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blocks.length=2

my patch:
{noformat}
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}

{noformat}

  was (Author: cnstar9988):
{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When the java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blocks.length=2

my patch:
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
  

  
> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: j

[jira] [Commented] (LUCENE-5218) background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

2013-09-25 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777584#comment-13777584
 ] 

Littlestar commented on LUCENE-5218:


{noformat}
public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
{noformat}

I debugged into the above code.
When the java.lang.ArrayIndexOutOfBoundsException occurs:
b=[], start=131072, length=0
index=2, offset=0, blocks.length=2

my patch:
 public void fillSlice(BytesRef b, long start, int length) {
  assert length >= 0: "length=" + length;
  assert length <= blockSize+1: "length=" + length;
  final int index = (int) (start >> blockBits);
  final int offset = (int) (start & blockMask);
  b.length = length;
  
  +if (index >= blocks.length) {
  +   // Outside block
  +   b.bytes = EMPTY_BYTES;
  +   b.offset = b.length = 0;
  +} else if (blockSize - offset >= length) {
// Within block
b.bytes = blocks[index];
b.offset = offset;
  } else {
// Split
b.bytes = new byte[length];
b.offset = 0;
System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, 
length-(blockSize-offset));
  }
}
  


> background merge hit exception && Caused by: 
> java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: LUCENE-5218
> URL: https://issues.apache.org/jira/browse/LUCENE-5218
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.4
> Environment: Linux MMapDirectory.
>Reporter: Littlestar
> Attachments: lucene44-LUCENE-5218.zip
>
>
> forceMerge(80)
> ==
> Caused by: java.io.IOException: background merge hit exception: 
> _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 
> _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 
> _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt 
> [maxNumSegments=80]
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
>   at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
>   at 
> com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
>   ... 4 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
>   at 
> org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
>   at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
>   at 
> org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
>   at 
> org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
>   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> ===

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5215) Add support for FieldInfos generation

2013-09-25 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5215:
---

Attachment: LUCENE-5215.patch

Fixed some places which referenced Lucene45 (javadocs, comments etc.). Also 
made SR.readFieldInfos package-private, and added a deprecation comment to the 
new deprecated classes.

I still haven't changed PerField to use Long instead of Integer. Rob, if you 
think it's important, I'll do it; it should be easy. Otherwise, I think it's ready 
to go in. I'll run some tests first.
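
A minimal, hypothetical illustration of the bookkeeping described in the issue quoted below -- each field that receives a DocValues update gets its own dvGen, and the commit records the newest fieldInfosGen. JDK types only; this is not the attached patch:
{noformat}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Lucene code: per-field DocValues generations plus a
// per-commit FieldInfos generation, mirroring the proposal quoted below.
class FieldInfosGenSketch {
  long fieldInfosGen = -1;                         // -1 = original field infos, no generation yet
  final Map<String, Long> dvGenByField = new HashMap<String, Long>();

  void onDocValuesUpdate(String field) {
    fieldInfosGen++;                               // a new FieldInfos generation gets written
    dvGenByField.put(field, fieldInfosGen);        // the updated field points at that generation
  }
}
{noformat}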

> Add support for FieldInfos generation
> -
>
> Key: LUCENE-5215
> URL: https://issues.apache.org/jira/browse/LUCENE-5215
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch, 
> LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch
>
>
> In LUCENE-5189 we've identified a few reasons to do that:
> # If you want to update docs' values of field 'foo', where 'foo' exists in 
> the index, but not in a specific segment (sparse DV), we cannot allow that 
> and have to throw a late UOE. If we could rewrite FieldInfos (with 
> generation), this would be possible since we'd also write a new generation of 
> FIS.
> # When we apply NDV updates, we call DVF.fieldsConsumer. Currently the 
> consumer isn't allowed to change FI.attributes because we cannot modify the 
> existing FIS. This is implicit however, and we silently ignore any modified 
> attributes. FieldInfos.gen will allow that too.
> The idea is to add to SIPC fieldInfosGen, add to each FieldInfo a dvGen and 
> add support for FIS generation in FieldInfosFormat, SegReader etc., like we 
> now do for DocValues. I'll work on a patch.
> Also on LUCENE-5189, Rob raised a concern about SegmentInfo.attributes that 
> have the same limitation -- if a Codec modifies them, they are silently being 
> ignored, since we don't gen the .si files. I think we can easily solve that 
> by recording SI.attributes in SegmentInfos, so they are recorded per-commit. 
> But I think it should be handled in a separate issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5244) NPE in Japanese Analyzer

2013-09-25 Thread Benson Margulies (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benson Margulies resolved LUCENE-5244.
--

Resolution: Invalid

This was pilot error; I forgot to call reset().

> NPE in Japanese Analyzer
> 
>
> Key: LUCENE-5244
> URL: https://issues.apache.org/jira/browse/LUCENE-5244
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.4
>Reporter: Benson Margulies
>
> I've got a test case that shows an NPE with the Japanese analyzer.
> It's all available in https://github.com/benson-basis/kuromoji-npe, and I 
> explicitly grant a license to the Foundation.
> If anyone would prefer that I attach a tarball here, just let me know.
> {noformat}
> ---
>  T E S T S
> ---
> Running com.basistech.testcase.JapaneseNpeTest
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.298 sec <<< 
> FAILURE! - in com.basistech.testcase.JapaneseNpeTest
> japaneseNpe(com.basistech.testcase.JapaneseNpeTest)  Time elapsed: 0.282 sec  
> <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.lucene.analysis.util.RollingCharBuffer.get(RollingCharBuffer.java:86)
>   at 
> org.apache.lucene.analysis.ja.JapaneseTokenizer.parse(JapaneseTokenizer.java:618)
>   at 
> org.apache.lucene.analysis.ja.JapaneseTokenizer.incrementToken(JapaneseTokenizer.java:468)
>   at 
> com.basistech.testcase.JapaneseNpeTest.japaneseNpe(JapaneseNpeTest.java:28)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5244) NPE in Japanese Analyzer

2013-09-25 Thread Christian Moen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777560#comment-13777560
 ] 

Christian Moen commented on LUCENE-5244:


Hello Benson,

In your code on Github, try calling {{tokenStream.reset()}} before consumption.
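
A minimal consumption sketch for Lucene 4.x (field name and sample text are placeholders), showing where {{reset()}} belongs relative to {{incrementToken()}}, {{end()}} and {{close()}}:
{noformat}
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.ja.JapaneseAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ConsumeJapaneseAnalyzer {
  public static void main(String[] args) throws Exception {
    JapaneseAnalyzer analyzer = new JapaneseAnalyzer(Version.LUCENE_44);
    TokenStream ts = analyzer.tokenStream("body", new StringReader("日本語のテキスト"));
    try {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();                      // mandatory before the first incrementToken()
      while (ts.incrementToken()) {
        System.out.println(term.toString());
      }
      ts.end();                        // records the final offset state
    } finally {
      ts.close();
    }
  }
}
{noformat}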

> NPE in Japanese Analyzer
> 
>
> Key: LUCENE-5244
> URL: https://issues.apache.org/jira/browse/LUCENE-5244
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.4
>Reporter: Benson Margulies
>
> I've got a test case that shows an NPE with the Japanese analyzer.
> It's all available in https://github.com/benson-basis/kuromoji-npe, and I 
> explicitly grant a license to the Foundation.
> If anyone would prefer that I attach a tarball here, just let me know.
> {noformat}
> ---
>  T E S T S
> ---
> Running com.basistech.testcase.JapaneseNpeTest
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.298 sec <<< 
> FAILURE! - in com.basistech.testcase.JapaneseNpeTest
> japaneseNpe(com.basistech.testcase.JapaneseNpeTest)  Time elapsed: 0.282 sec  
> <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.lucene.analysis.util.RollingCharBuffer.get(RollingCharBuffer.java:86)
>   at 
> org.apache.lucene.analysis.ja.JapaneseTokenizer.parse(JapaneseTokenizer.java:618)
>   at 
> org.apache.lucene.analysis.ja.JapaneseTokenizer.incrementToken(JapaneseTokenizer.java:468)
>   at 
> com.basistech.testcase.JapaneseNpeTest.japaneseNpe(JapaneseNpeTest.java:28)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5244) NPE in Japanese Analyzer

2013-09-25 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777559#comment-13777559
 ] 

Robert Muir commented on LUCENE-5244:
-

Your code does not consume the TokenStream correctly; the NPE is intentional. In 
current SVN you get an IllegalStateException instead...


> NPE in Japanese Analyzer
> 
>
> Key: LUCENE-5244
> URL: https://issues.apache.org/jira/browse/LUCENE-5244
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.4
>Reporter: Benson Margulies
>
> I've got a test case that shows an NPE with the Japanese analyzer.
> It's all available in https://github.com/benson-basis/kuromoji-npe, and I 
> explicitly grant a license to the Foundation.
> If anyone would prefer that I attach a tarball here, just let me know.
> {noformat}
> ---
>  T E S T S
> ---
> Running com.basistech.testcase.JapaneseNpeTest
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.298 sec <<< 
> FAILURE! - in com.basistech.testcase.JapaneseNpeTest
> japaneseNpe(com.basistech.testcase.JapaneseNpeTest)  Time elapsed: 0.282 sec  
> <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.lucene.analysis.util.RollingCharBuffer.get(RollingCharBuffer.java:86)
>   at 
> org.apache.lucene.analysis.ja.JapaneseTokenizer.parse(JapaneseTokenizer.java:618)
>   at 
> org.apache.lucene.analysis.ja.JapaneseTokenizer.incrementToken(JapaneseTokenizer.java:468)
>   at 
> com.basistech.testcase.JapaneseNpeTest.japaneseNpe(JapaneseNpeTest.java:28)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5244) NPE in Japanese Analyzer

2013-09-25 Thread Benson Margulies (JIRA)
Benson Margulies created LUCENE-5244:


 Summary: NPE in Japanese Analyzer
 Key: LUCENE-5244
 URL: https://issues.apache.org/jira/browse/LUCENE-5244
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 4.4
Reporter: Benson Margulies


I've got a test case that shows an NPE with the Japanese analyzer.

It's all available in https://github.com/benson-basis/kuromoji-npe, and I 
explicitly grant a license to the Foundation.

If anyone would prefer that I attach a tarball here, just let me know.

{noformat}
---
 T E S T S
---
Running com.basistech.testcase.JapaneseNpeTest
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.298 sec <<< 
FAILURE! - in com.basistech.testcase.JapaneseNpeTest
japaneseNpe(com.basistech.testcase.JapaneseNpeTest)  Time elapsed: 0.282 sec  
<<< ERROR!
java.lang.NullPointerException: null
at 
org.apache.lucene.analysis.util.RollingCharBuffer.get(RollingCharBuffer.java:86)
at 
org.apache.lucene.analysis.ja.JapaneseTokenizer.parse(JapaneseTokenizer.java:618)
at 
org.apache.lucene.analysis.ja.JapaneseTokenizer.incrementToken(JapaneseTokenizer.java:468)
at 
com.basistech.testcase.JapaneseNpeTest.japaneseNpe(JapaneseNpeTest.java:28)
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5243) Update Clover

2013-09-25 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5243:
--

Attachment: LUCENE-5243.patch

Patch.

It took some time until I found out why ivy:cachepath did not use the snapshot 
repository: the dependency on ivy-configure was missing :(
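
For context on the "Comparison method violates its general contract!" failure quoted below: Java 7's TimSort throws it when a Comparator is inconsistent. A minimal illustration of that class of bug (illustrative only, unrelated to Clover's own code):
{noformat}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class BrokenComparatorDemo {
  public static void main(String[] args) {
    // int subtraction overflows for values far apart, yielding a non-transitive
    // ordering; Java 7's TimSort may detect the inconsistency and throw
    // IllegalArgumentException: "Comparison method violates its general contract!"
    Comparator<Integer> broken = new Comparator<Integer>() {
      @Override
      public int compare(Integer a, Integer b) {
        return a - b;                    // buggy: should be Integer.compare(a, b)
      }
    };
    List<Integer> values = new ArrayList<Integer>();
    Random rnd = new Random(42);
    for (int i = 0; i < 10000; i++) {
      values.add(rnd.nextInt());         // full int range makes overflow likely
    }
    Collections.sort(values, broken);    // may fail with the contract violation
  }
}
{noformat}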

> Update Clover
> -
>
> Key: LUCENE-5243
> URL: https://issues.apache.org/jira/browse/LUCENE-5243
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 5.0
> Environment: Jenkins build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.0
>
> Attachments: LUCENE-5243.patch
>
>
> Currently we sometimes get the following build error on the Clover builds 
> (only with Java 7, so it happens only in Lucene/Solr trunk):
> {noformat}
> BUILD FAILED
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:452:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:364:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:382:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/extra-targets.xml:41:
>  com.atlassian.clover.api.CloverException: 
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at 
> com.cenqua.clover.reporters.html.HtmlReporter.executeImpl(HtmlReporter.java:165)
>   at 
> com.cenqua.clover.reporters.CloverReporter.execute(CloverReporter.java:41)
>   at 
> com.cenqua.clover.tasks.CloverReportTask.generateReports(CloverReportTask.java:427)
>   at 
> com.cenqua.clover.tasks.CloverReportTask.cloverExecute(CloverReportTask.java:384)
>   at 
> com.cenqua.clover.tasks.AbstractCloverTask.execute(AbstractCloverTask.java:55)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
>   at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
>   at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
>   at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
>   at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
>   at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at o

[jira] [Commented] (LUCENE-5243) Update Clover

2013-09-25 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777556#comment-13777556
 ] 

Uwe Schindler commented on LUCENE-5243:
---

I have to wait for the new Clover License, until I can apply this patch to 
trunk-only.

> Update Clover
> -
>
> Key: LUCENE-5243
> URL: https://issues.apache.org/jira/browse/LUCENE-5243
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 5.0
> Environment: Jenkins build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.0
>
> Attachments: LUCENE-5243.patch
>
>
> Currently we sometimes get the following build error on the Clover builds 
> (only with Java 7, so it happens only in Lucene/Solr trunk):
> {noformat}
> BUILD FAILED
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:452:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:364:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:382:
>  The following error occurred while executing this line:
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/extra-targets.xml:41:
>  com.atlassian.clover.api.CloverException: 
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at 
> com.cenqua.clover.reporters.html.HtmlReporter.executeImpl(HtmlReporter.java:165)
>   at 
> com.cenqua.clover.reporters.CloverReporter.execute(CloverReporter.java:41)
>   at 
> com.cenqua.clover.tasks.CloverReportTask.generateReports(CloverReportTask.java:427)
>   at 
> com.cenqua.clover.tasks.CloverReportTask.cloverExecute(CloverReportTask.java:384)
>   at 
> com.cenqua.clover.tasks.AbstractCloverTask.execute(AbstractCloverTask.java:55)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
>   at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
>   at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
>   at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
>   at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
>   at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
>   at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
>   at org.apache.tools.ant.Task.perform(Task.java:348)
>   at org.apache.tools.ant.Target.execute(Target.java:390)
>   at org.apache.tools.ant.Target.performTasks(Target.java:411)
>   at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
>   at 
> org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
>   at org.apache.tools.ant.Project.executeTargets

[jira] [Created] (LUCENE-5243) Update Clover

2013-09-25 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-5243:
-

 Summary: Update Clover
 Key: LUCENE-5243
 URL: https://issues.apache.org/jira/browse/LUCENE-5243
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 5.0
 Environment: Jenkins build
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 5.0


Currently we sometimes get the following build error on the Clover builds (only 
with Java 7, so it happens only in Lucene/Solr trunk):

{noformat}
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:452: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:364: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/build.xml:382: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/extra-targets.xml:41:
 com.atlassian.clover.api.CloverException: java.lang.IllegalArgumentException: 
Comparison method violates its general contract!
at 
com.cenqua.clover.reporters.html.HtmlReporter.executeImpl(HtmlReporter.java:165)
at 
com.cenqua.clover.reporters.CloverReporter.execute(CloverReporter.java:41)
at 
com.cenqua.clover.tasks.CloverReportTask.generateReports(CloverReportTask.java:427)
at 
com.cenqua.clover.tasks.CloverReportTask.cloverExecute(CloverReportTask.java:384)
at 
com.cenqua.clover.tasks.AbstractCloverTask.execute(AbstractCloverTask.java:55)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at 
org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at 
org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at 
org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorIm

[jira] [Resolved] (SOLR-5271) Until JBoss is restarted, unable to get newly inserted/updated indexing data

2013-09-25 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5271.
-

Resolution: Not A Problem

Please ask questions on the solr-user mailing list before opening a bug report.

http://lucene.apache.org/solr/discussion.html

> Until JBoss is restarted, unable to get newly inserted/updated indexing data
> -
>
> Key: SOLR-5271
> URL: https://issues.apache.org/jira/browse/SOLR-5271
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 3.5
>Reporter: mohan raja reddy
>
> I have deployed the Solr.war file in JBoss. Before starting JBoss I had 10 
> entries of data in the Solr index. After starting JBoss, I inserted 5 more 
> entries into the Solr index, but I am unable to retrieve those newly inserted 
> entries through Solr.war (port 8080). Only after restarting JBoss am I able to 
> read all 15 records.
> However, the same 15 records can be retrieved without restarting JBoss when 
> using the Solr server (port 9090). Please reply if anyone has a proper 
> solution for this. I need to use Solr.war only.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5271) Until JBoss is restarted, unable to get newly inserted/updated indexing data

2013-09-25 Thread mohan raja reddy (JIRA)
mohan raja reddy created SOLR-5271:
--

 Summary: Until JBoss is restarted, unable to get newly 
inserted/updated indexing data
 Key: SOLR-5271
 URL: https://issues.apache.org/jira/browse/SOLR-5271
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 3.5
Reporter: mohan raja reddy


I have deployed the Solr.war file in JBoss. Before starting JBoss I had 10 entries 
of data in the Solr index. After starting JBoss, I inserted 5 more entries into 
the Solr index, but I am unable to retrieve those newly inserted entries through 
Solr.war (port 8080). Only after restarting JBoss am I able to read all 15 
records.

However, the same 15 records can be retrieved without restarting JBoss when using 
the Solr server (port 9090). Please reply if anyone has a proper solution for 
this. I need to use Solr.war only.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5270) lastModified not updating when selecting another core in Core Admin

2013-09-25 Thread Simon Endele (JIRA)
Simon Endele created SOLR-5270:
--

 Summary: lastModified not updating when selecting another core in 
Core Admin
 Key: SOLR-5270
 URL: https://issues.apache.org/jira/browse/SOLR-5270
 Project: Solr
  Issue Type: Bug
  Components: web gui
Reporter: Simon Endele
Priority: Minor


When selecting a core in the section "Core Admin" in the Solr Admin web UI, 
data like dataDir, version, numDocs, maxDoc are updated via JavaScript, but 
lastModified is not. A refresh of the page does the trick.

Had a look into the network traffic of my browser and it seems that the JSON 
fetched via AJAX contains the correct information.

This can be reproduced in different browsers with the Solr example by cloning 
collection1 into a collection2 and re-indexing collection2 by calling "java -jar 
post.jar *.xml" in the exampledocs directory.

Tested with Solr 4.4.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5258) router.field support for compositeId router

2013-09-25 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5258:
-

Attachment: SOLR-5258.patch

New clients will not be able to talk to old clusters, as reported by @Markus 
Jelsma in SOLR-4221.

> router.field support for compositeId router
> ---
>
> Key: SOLR-5258
> URL: https://issues.apache.org/jira/browse/SOLR-5258
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-5258.patch, SOLR-5258.patch
>
>
> Although there is code to support router.field for CompositeId, it only 
> calculates a simple (non-compound) hash, which isn't that useful unless you 
> don't use compound ids (this is why I changed the docs to say router.field is 
> only supported for the implicit router).  The field value should either
> - be used to calculate the full compound hash
> - be used to calculate the prefix bits, and the uniqueKey will still be used 
> for the lower bits.
> For consistency, I'd suggest the former.
> If we want to be able to specify a separate field that is only used for the 
> prefix bits, then perhaps that should be "router.prefixField"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4221) Custom sharding

2013-09-25 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777436#comment-13777436
 ] 

Noble Paul edited comment on SOLR-4221 at 9/25/13 1:15 PM:
---

Thanks [~markus17] good catch

I'll connect it to [SOLR-5258|https://issues.apache.org/jira/browse/SOLR-5258]


  was (Author: noble.paul):
Thanks [~markus17] good catch



  
> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4221) Custom sharding

2013-09-25 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777436#comment-13777436
 ] 

Noble Paul commented on SOLR-4221:
--

Thanks [~markus17] good catch




> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4221) Custom sharding

2013-09-25 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777407#comment-13777407
 ] 

Markus Jelsma commented on SOLR-4221:
-

Ah, I came here from SOLR-5261 and noticed the discussion, so I'll comment here 
as well. The stack trace I posted earlier is gone now and has been replaced by the 
trace below. This only happens if a new SolrJ attempts to talk to a slightly 
older cluster.

{code}
java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at 
org.apache.solr.common.cloud.DocRouter.getRouteField(DocRouter.java:54)
at 
org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:64)
at 
org.apache.solr.common.cloud.HashBasedRouter.getTargetSlice(HashBasedRouter.java:33)
at 
org.apache.solr.client.solrj.request.UpdateRequest.getRoutes(UpdateRequest.java:190)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.directUpdate(CloudSolrServer.java:313)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:506)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
{code}
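
The cast apparently fails because the older cluster state stores the "router" property as a plain String name, while newer clients expect a Map such as {"name":"compositeId", "field":"shop"} (field name made up here). A hedged sketch of the kind of defensive read that tolerates both shapes -- a hypothetical helper, not the actual SOLR-5258 patch:
{code}
import java.util.Map;

// Hypothetical helper, not Solr code: accept both the old cluster-state shape,
// where "router" is a plain String name, and the new one, where it is a Map.
final class RouterSpecReader {

  static String routeFieldOf(Object routerSpec) {
    if (routerSpec instanceof Map) {
      Object field = ((Map<?, ?>) routerSpec).get("field");
      return field == null ? null : field.toString();
    }
    return null;  // old-style String spec carries no route field
  }

  static String routerNameOf(Object routerSpec) {
    if (routerSpec instanceof Map) {
      Object name = ((Map<?, ?>) routerSpec).get("name");
      return name == null ? null : name.toString();
    }
    return routerSpec == null ? null : routerSpec.toString();  // old style: the value is the name
  }
}
{code}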


> Custom sharding
> ---
>
> Key: SOLR-4221
> URL: https://issues.apache.org/jira/browse/SOLR-4221
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, SOLR-4221.patch, 
> SOLR-4221.patch
>
>
> Features to let users control everything about sharding/routing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-25 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777375#comment-13777375
 ] 

Markus Jelsma commented on SOLR-5261:
-

The good news is that today's SolrJ can talk to today's cluster.

> can't update current trunk or 4x with 4.4 or earlier binary protocol
> 
>
> Key: SOLR-5261
> URL: https://issues.apache.org/jira/browse/SOLR-5261
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5261.patch
>
>
> Seems back compat in the binary protocol was broken sometime after 4.4

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5235) throw illegalstate from Tokenizer (instead of NPE/IIOBE) if reset not called

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777367#comment-13777367
 ] 

ASF subversion and git services commented on LUCENE-5235:
-

Commit 1526158 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1526158 ]

Merged revision(s) 1526155 from lucene/dev/trunk:
LUCENE-5235: This is according to Clover never hit, so be more strict, may help 
Robert

> throw illegalstate from Tokenizer (instead of NPE/IIOBE) if reset not called
> 
>
> Key: LUCENE-5235
> URL: https://issues.apache.org/jira/browse/LUCENE-5235
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Robert Muir
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5235.patch, LUCENE-5235.patch, LUCENE-5235.patch, 
> LUCENE-5235.patch, LUCENE-5235.patch, LUCENE-5235.patch, 
> LUCENE-5235_test.patch
>
>
> We added these best effort checks, but it would be much better if we somehow 
> gave a clear exception... this comes up often

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5235) throw illegalstate from Tokenizer (instead of NPE/IIOBE) if reset not called

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777365#comment-13777365
 ] 

ASF subversion and git services commented on LUCENE-5235:
-

Commit 1526155 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1526155 ]

LUCENE-5235: This is according to Clover never hit, so be more strict, may help 
Robert

> throw illegalstate from Tokenizer (instead of NPE/IIOBE) if reset not called
> 
>
> Key: LUCENE-5235
> URL: https://issues.apache.org/jira/browse/LUCENE-5235
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Robert Muir
>Assignee: Uwe Schindler
> Fix For: 5.0, 4.6
>
> Attachments: LUCENE-5235.patch, LUCENE-5235.patch, LUCENE-5235.patch, 
> LUCENE-5235.patch, LUCENE-5235.patch, LUCENE-5235.patch, 
> LUCENE-5235_test.patch
>
>
> We added these best effort checks, but it would be much better if we somehow 
> gave a clear exception... this comes up often

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5215) Add support for FieldInfos generation

2013-09-25 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777362#comment-13777362
 ] 

Michael McCandless commented on LUCENE-5215:


The RLD changes look great to me!

> Add support for FieldInfos generation
> -
>
> Key: LUCENE-5215
> URL: https://issues.apache.org/jira/browse/LUCENE-5215
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch, 
> LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch
>
>
> In LUCENE-5189 we've identified a few reasons to do that:
> # If you want to update docs' values of field 'foo', where 'foo' exists in 
> the index, but not in a specific segment (sparse DV), we cannot allow that 
> and have to throw a late UOE. If we could rewrite FieldInfos (with 
> generation), this would be possible since we'd also write a new generation of 
> FIS.
> # When we apply NDV updates, we call DVF.fieldsConsumer. Currently the 
> consumer isn't allowed to change FI.attributes because we cannot modify the 
> existing FIS. This is implicit however, and we silently ignore any modified 
> attributes. FieldInfos.gen will allow that too.
> The idea is to add to SIPC fieldInfosGen, add to each FieldInfo a dvGen and 
> add support for FIS generation in FieldInfosFormat, SegReader etc., like we 
> now do for DocValues. I'll work on a patch.
> Also on LUCENE-5189, Rob raised a concern about SegmentInfo.attributes that 
> have the same limitation -- if a Codec modifies them, they are silently being 
> ignored, since we don't gen the .si files. I think we can easily solve that 
> by recording SI.attributes in SegmentInfos, so they are recorded per-commit. 
> But I think it should be handled in a separate issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-25 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777356#comment-13777356
 ] 

Markus Jelsma commented on SOLR-5261:
-

Alright, I've hit a new issue here. Today's (updated just now) SolrJ won't talk 
to a slightly older cluster:

{code}
java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at 
org.apache.solr.common.cloud.DocRouter.getRouteField(DocRouter.java:54)
at 
org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:64)
at 
org.apache.solr.common.cloud.HashBasedRouter.getTargetSlice(HashBasedRouter.java:33)
at 
org.apache.solr.client.solrj.request.UpdateRequest.getRoutes(UpdateRequest.java:190)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.directUpdate(CloudSolrServer.java:313)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:506)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
{code}
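
For reference, here is a minimal SolrJ sketch of the client path shown in the trace above: CloudSolrServer routes the UpdateRequest via getRoutes()/directUpdate(), and that is where the cast fails, apparently because the older cluster's state carries the router as a plain string where the newer client expects a map. The ZooKeeper address, collection name and document are placeholders, not taken from this report.

{code}
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

// Sketch only: ZooKeeper address, collection name and document are placeholders.
public class CloudUpdateSketch {
  public static void main(String[] args) throws Exception {
    CloudSolrServer server = new CloudSolrServer("zkhost1:2181");
    server.setDefaultCollection("collection1");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");

    UpdateRequest req = new UpdateRequest();
    req.add(doc);
    // process() triggers getRoutes()/directUpdate(), the frames where the
    // ClassCastException above is thrown against the older cluster state.
    req.process(server);

    server.shutdown();
  }
}
{code}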

> can't update current trunk or 4x with 4.4 or earlier binary protocol
> 
>
> Key: SOLR-5261
> URL: https://issues.apache.org/jira/browse/SOLR-5261
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5261.patch
>
>
> Seems back compat in the binary protocol was broken sometime after 4.4

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5246) Shard splitting should support collections configured with a hash router and routeField.

2013-09-25 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5246:


Affects Version/s: 4.5

> Shard splitting should support collections configured with a hash router and 
> routeField.
> 
>
> Key: SOLR-5246
> URL: https://issues.apache.org/jira/browse/SOLR-5246
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.6
>
> Attachments: SOLR-5246.patch, SOLR-5246.patch
>
>
> Follow up with work done in SOLR-5017:
> Shard splitting doesn't support collections configured with a hash router and 
> routeField.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5246) Shard splitting should support collections configured with a hash router and routeField.

2013-09-25 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5246.
-

   Resolution: Fixed
Fix Version/s: 4.6
   5.0

> Shard splitting should support collections configured with a hash router and 
> routeField.
> 
>
> Key: SOLR-5246
> URL: https://issues.apache.org/jira/browse/SOLR-5246
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.6
>
> Attachments: SOLR-5246.patch, SOLR-5246.patch
>
>
> Follow up with work done in SOLR-5017:
> Shard splitting doesn't support collections configured with a hash router and 
> routeField.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5246) Shard splitting should support collections configured with a hash router and routeField.

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777342#comment-13777342
 ] 

ASF subversion and git services commented on SOLR-5246:
---

Commit 1526153 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1526153 ]

SOLR-5246: Shard splitting now supports collections configured with router.field

> Shard splitting should support collections configured with a hash router and 
> routeField.
> 
>
> Key: SOLR-5246
> URL: https://issues.apache.org/jira/browse/SOLR-5246
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5246.patch, SOLR-5246.patch
>
>
> Follow up with work done in SOLR-5017:
> Shard splitting doesn't support collections configured with a hash router and 
> routeField.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5246) Shard splitting should support collections configured with a hash router and routeField.

2013-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777339#comment-13777339
 ] 

ASF subversion and git services commented on SOLR-5246:
---

Commit 1526151 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1526151 ]

SOLR-5246: Shard splitting now supports collections configured with router.field

> Shard splitting should support collections configured with a hash router and 
> routeField.
> 
>
> Key: SOLR-5246
> URL: https://issues.apache.org/jira/browse/SOLR-5246
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5246.patch, SOLR-5246.patch
>
>
> Follow up with work done in SOLR-5017:
> Shard splitting doesn't support collections configured with a hash router and 
> routeField.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 4.5.0 RC2

2013-09-25 Thread Martijn v Groningen
+1 smoker is also happy on my end


On 25 September 2013 08:55, Adrien Grand  wrote:

> Here is a new release candidate that fixes some JavaBin codec backward
> compatibility issues (SOLR-5261, SOLR-4221).
>
> Please vote to release the following artifacts:
>
> http://people.apache.org/~jpountz/staging_area/lucene-solr-4.5.0-RC2-rev1526012/
>
> Smoke tester was happy on my end so here is my +1.
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Met vriendelijke groet,

Martijn van Groningen


[jira] [Resolved] (SOLR-5269) a field named text must be present in schema or an "undefined field text" will occur.

2013-09-25 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-5269.
--

Resolution: Not A Problem

I just coached another person through this issue. As Hoss says, the standard 
distro solrconfig.xml references the "text" field several times; if you remove 
the text field, you also have to remove the places where it's used.

John:

Please bring things like this up on the user's list first before raising a JIRA 
in case it's pilot error.

> a field named text must be present in schema or an "undefined field text" 
> will occur.
> -
>
> Key: SOLR-5269
> URL: https://issues.apache.org/jira/browse/SOLR-5269
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
> Environment: Ubuntu (13.10 beta, kernel 3.11.0-5-generic x64, Jetty 
> 9.05, java 1.7.0_25, solr 4.4.)
>Reporter: John Karr
> Fix For: 4.4
>
>
> I changed the name of my catchall field ("text" in the example schema), and 
> although there were no other references to a field named "text", every time 
> Solr started it complained about "undefined field text". 
> I confirmed the issue on my workstation with a locally installed copy of solr 
> (similar architecture, but using an older kernel and the embedded jetty).
> The error message never indicates a filename or line number where the 
> offending reference is. Renaming my catchall back to text permitted the 
> schema to load.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5027) Result Set Collapse and Expand Plugins

2013-09-25 Thread Simon Endele (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777303#comment-13777303
 ] 

Simon Endele edited comment on SOLR-5027 at 9/25/13 9:19 AM:
-

Sounds good.

I propose to add an additional parameter "expand.fq" to restrict the expanded 
documents to a certain filter query.
Sometimes the complete groups are very large and should only be expanded by one 
or a few representatives of that group (which can be addressed with a filter 
query). Other group members that are not hit by the main query are not 
interesting (at least in the first place).

Note that this is different from adding a basic filter query, since documents 
that are hit by the main query but not by expand.fq are kept.
Example: Group consisting of: representative "A", more group members "B" and 
"C".
Query hits "B", group is expanded by "A" (due to expand.fq), but not "C" => 
Result: "A", "B"
A filter query before expanding would filter out "B" and thus yield no results 
for this group.
A filter query after expanding would filter out "B" and "C" thus keep only "A".

Is that technically possible? Maybe this is worth a separate issue... 

  was (Author: simon.endele):
Sounds good.

I propose to add an additional parameter "expand.fq" to restrict the expanded 
documents to a certain filter query.
Sometimes the complete groups are very large and should only be expanded by one 
or a few representatives of that group. Other group members that are not hit by 
the main query are not interesting (at least in the first place).

Note that this is different from adding a basic filter query, since documents 
that are hit by the main query but not by expand.fq are kept.
Example: Group consisting of: representative "A", more group members "B" and 
"C".
Query hits "B", group is expanded by "A", but not "C" (due to expand.fq) => 
Result: "A", "B"
A filter query before expanding would filter out "B" and thus yield no results 
for this group.
A filter query after expanding would filter out "B" and "C" thus keep only "A".

Is that technically possible? Maybe this is worth a separate issue... 
  
> Result Set Collapse and Expand Plugins
> --
>
> Key: SOLR-5027
> URL: https://issues.apache.org/jira/browse/SOLR-5027
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 5.0
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch
>
>
> This ticket introduces two new Solr plugins, the *CollapsingQParserPlugin* 
> and the *ExpandComponent*.
> The *CollapsingQParserPlugin* is a PostFilter that performs field collapsing.
> This allows field collapsing to be done within the normal search flow.
> Initial syntax:
> fq={!collapse field=}
> All documents in a group will be collapsed to the highest ranking document in 
> the group.
> The *ExpandComponent* is a search component that takes the collapsed docList 
> and expands the groups for a single page based on parameters provided.
> Initial syntax:
> expand=true   - Turns on the expand component.
> expand.field= - Expands results for this field
> expand.limit=5 - Limits the documents for each expanded group.
> expand.sort= - The sort spec for the expanded documents. Default 
> is score.
> expand.rows=500 - The max number of expanded results to bring back. Default 
> is 500.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5027) Result Set Collapse and Expand Plugins

2013-09-25 Thread Simon Endele (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777303#comment-13777303
 ] 

Simon Endele commented on SOLR-5027:


Sounds good.

I propose to add an additional parameter "expand.fq" to restrict the expanded 
documents to a certain filter query.
Sometimes the complete groups are very large and should only be expanded by one 
or a few representatives of that group. Other group members that are not hit by 
the main query are not interesting (at least in the first place).

Note that this is different from adding a basic filter query, since documents 
that are hit by the main query but not by expand.fq are kept.
Example: Group consisting of: representative "A", more group members "B" and 
"C".
Query hits "B", group is expanded by "A", but not "C" (due to expand.fq) => 
Result: "A", "B"
A filter query before expanding would filter out "B" and thus yield no results 
for this group.
A filter query after expanding would filter out "B" and "C" thus keep only "A".

Is that technically possible? Maybe this is worth a separate issue... 
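
A hedged SolrJ sketch of what a request using the proposed expand.fq could look like, following the initial syntax from the ticket description below; expand.fq is only a proposal at this point, and the field names, filter value and URL are invented placeholders rather than anything from this issue.

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

// Hypothetical illustration: expand.fq does not exist yet, and groupId,
// representative:true and the URL are placeholders.
public class ExpandFqSketch {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

    SolrQuery q = new SolrQuery("B");                // main query hits group member "B"
    q.addFilterQuery("{!collapse field=groupId}");   // collapse each group to its top hit
    q.set("expand", "true");                         // turn on the ExpandComponent
    q.set("expand.field", "groupId");                // expand results for this field
    q.set("expand.fq", "representative:true");       // proposed: expand only by representatives
    QueryResponse rsp = server.query(q);
    System.out.println(rsp.getResults().getNumFound());

    server.shutdown();
  }
}
{code}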

> Result Set Collapse and Expand Plugins
> --
>
> Key: SOLR-5027
> URL: https://issues.apache.org/jira/browse/SOLR-5027
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 5.0
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch
>
>
> This ticket introduces two new Solr plugins, the *CollapsingQParserPlugin* 
> and the *ExpandComponent*.
> The *CollapsingQParserPlugin* is a PostFilter that performs field collapsing.
> This allows field collapsing to be done within the normal search flow.
> Initial syntax:
> fq={!collapse field=}
> All documents in a group will be collapsed to the highest ranking document in 
> the group.
> The *ExpandComponent* is a search component that takes the collapsed docList 
> and expands the groups for a single page based on parameters provided.
> Initial syntax:
> expand=true   - Turns on the expand component.
> expand.field= - Expands results for this field
> expand.limit=5 - Limits the documents for each expanded group.
> expand.sort= - The sort spec for the expanded documents. Default 
> is score.
> expand.rows=500 - The max number of expanded results to bring back. Default 
> is 500.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5215) Add support for FieldInfos generation

2013-09-25 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777279#comment-13777279
 ] 

Shai Erera commented on LUCENE-5215:


bq. e.g. search for 4.5, Lucene45, such strings in eclipse

Good idea. I searched for references to Lucene45Codec, and fixed them. I now 
searched for "4.5", "45" and "lucene45" and found few other places to fix.

bq. wherever you see @Deprecated (eg Lucene45Codec) ensure @deprecated  
in javadocs too

done.

bq. the SegmentReader.readFieldInfos seems an awkward place to me for this: 
must it really be public or can it be package-private?

I tried to find a good place for it too, and chose SegmentReader since it's 
mostly needed by it. As for package-private, it's also accessed by 
_TestUtil.getFieldInfos, but I see the only tests that call it are under 
o.a.l.index, so I think for now we can make it package-private and get rid of 
_TestUtil.getFieldInfos? Note that it's also marked @lucene.internal.

bq. In perFieldDocValuesFormat where we have suffixAtt = 
Integer.valueOf(suffixAtt);, do we have any concerns?

Isn't it increased per unique format? I don't mind changing it to a long, but 
do we really expect more than Integer.MAX_VAL formats!?

> Add support for FieldInfos generation
> -
>
> Key: LUCENE-5215
> URL: https://issues.apache.org/jira/browse/LUCENE-5215
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch, 
> LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch
>
>
> In LUCENE-5189 we've identified a few reasons to do that:
> # If you want to update docs' values of field 'foo', where 'foo' exists in 
> the index, but not in a specific segment (sparse DV), we cannot allow that 
> and have to throw a late UOE. If we could rewrite FieldInfos (with 
> generation), this would be possible since we'd also write a new generation of 
> FIS.
> # When we apply NDV updates, we call DVF.fieldsConsumer. Currently the 
> consumer isn't allowed to change FI.attributes because we cannot modify the 
> existing FIS. This is implicit however, and we silently ignore any modified 
> attributes. FieldInfos.gen will allow that too.
> The idea is to add to SIPC fieldInfosGen, add to each FieldInfo a dvGen and 
> add support for FIS generation in FieldInfosFormat, SegReader etc., like we 
> now do for DocValues. I'll work on a patch.
> Also on LUCENE-5189, Rob raised a concern about SegmentInfo.attributes that 
> have the same limitation -- if a Codec modifies them, they are silently being 
> ignored, since we don't gen the .si files. I think we can easily solve that 
> by recording SI.attributes in SegmentInfos, so they are recorded per-commit. 
> But I think it should be handled in a separate issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org