[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 63 - Still Failing

2012-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/63/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestBagOfPositions.test

Error Message:
Captured an uncaught exception in thread: Thread[id=644, name=Thread-561, 
state=RUNNABLE, group=TGRP-TestBagOfPositions]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=644, name=Thread-561, state=RUNNABLE, 
group=TGRP-TestBagOfPositions]
Caused by: java.lang.AssertionError: ram was 33879456 expected: 33851840 flush 
mem: 18092896 activeMem: 15786560 pendingMem: 0 flushingMem: 3 blockedMem: 0 
peakDeltaMem: 99136
at __randomizedtesting.SeedInfo.seed([11A534B74B63930E]:0)
        at org.apache.lucene.index.DocumentsWriterFlushControl.assertMemory(DocumentsWriterFlushControl.java:114)
        at org.apache.lucene.index.DocumentsWriterFlushControl.doAfterDocument(DocumentsWriterFlushControl.java:181)
        at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:384)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1443)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1122)
        at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:201)
        at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:160)
        at org.apache.lucene.index.TestBagOfPositions$1.run(TestBagOfPositions.java:110)




Build Log:
[...truncated 420 lines...]
[junit4:junit4] Suite: org.apache.lucene.index.TestBagOfPositions
[junit4:junit4]   2> NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene directory.
[junit4:junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestBagOfPositions -Dtests.method=test -Dtests.seed=11A534B74B63930E 
-Dtests.multiplier=3 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=fi -Dtests.timezone=Africa/Conakry 
-Dtests.file.encoding=ISO-8859-1
[junit4:junit4] ERROR   206s J0 | TestBagOfPositions.test 
[junit4:junit4] Throwable #1: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=644, name=Thread-561, state=RUNNABLE, 
group=TGRP-TestBagOfPositions]
[junit4:junit4] Caused by: java.lang.AssertionError: ram was 33879456 
expected: 33851840 flush mem: 18092896 activeMem: 15786560 pendingMem: 0 
flushingMem: 3 blockedMem: 0 peakDeltaMem: 99136
[junit4:junit4]    at __randomizedtesting.SeedInfo.seed([11A534B74B63930E]:0)
[junit4:junit4]    at org.apache.lucene.index.DocumentsWriterFlushControl.assertMemory(DocumentsWriterFlushControl.java:114)
[junit4:junit4]    at org.apache.lucene.index.DocumentsWriterFlushControl.doAfterDocument(DocumentsWriterFlushControl.java:181)
[junit4:junit4]    at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:384)
[junit4:junit4]    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1443)
[junit4:junit4]    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1122)
[junit4:junit4]    at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:201)
[junit4:junit4]    at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:160)
[junit4:junit4]    at org.apache.lucene.index.TestBagOfPositions$1.run(TestBagOfPositions.java:110)
[junit4:junit4] Throwable #2: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=643, name=Thread-560, state=RUNNABLE, 
group=TGRP-TestBagOfPositions]
[junit4:junit4] Caused by: java.lang.AssertionError: ram was 33879456 
expected: 33851840 flush mem: 18092896 activeMem: 15786560 pendingMem: 0 
flushingMem: 3 blockedMem: 0 peakDeltaMem: 99136
[junit4:junit4]    at __randomizedtesting.SeedInfo.seed([11A534B74B63930E]:0)
[junit4:junit4]    at org.apache.lucene.index.DocumentsWriterFlushControl.assertMemory(DocumentsWriterFlushControl.java:114)
[junit4:junit4]    at org.apache.lucene.index.DocumentsWriterFlushControl.doAfterDocument(DocumentsWriterFlushControl.java:181)
[junit4:junit4]    at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:384)
[junit4:junit4]    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1443)
[junit4:junit4]    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1122)
[junit4:junit4]    at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:201)
[junit4:junit4]    at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:160)
[junit4:junit4]    at org.apache.lucene.index.TestBagOfPositions$1.run(TestBagOfPositions.java:110)
[junit4:junit4]   2> NOTE: test params are: codec=Lucene41: 

[jira] [Created] (SOLR-3945) SolrJ SolrQuery.setFacet(false) does not add 'facet=false' to URL params

2012-10-15 Thread Sascha Szott (JIRA)
Sascha Szott created SOLR-3945:
--

 Summary: SolrJ SolrQuery.setFacet(false) does not add 
'facet=false' to URL params
 Key: SOLR-3945
 URL: https://issues.apache.org/jira/browse/SOLR-3945
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 3.6.1
Reporter: Sascha Szott


The standard request handler is configured with
{code}
<bool name="facet">true</bool>
{code}
in the {{default}} section.

If I do not want to retrieve (and compute) facets for a specific query, I can 
add 'facet=false' to the list of URL parameters. But SolrJ's 
SolrQuery.setFacet(false) does not add this parameter.
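A minimal, self-contained sketch of why the distinction matters (illustrative only, not SolrJ source; the class and method names here are hypothetical): silently removing the parameter lets the server-side facet=true default win, while emitting an explicit facet=false overrides it.

```java
// Illustrative model of the reported behavior, NOT SolrJ code.
// A correct setFacet(false) should emit an explicit facet=false parameter
// so it overrides a facet=true default configured on the request handler.
import java.util.LinkedHashMap;
import java.util.Map;

public class FacetParamSketch {
    private final Map<String, String> params = new LinkedHashMap<>();

    // Buggy variant: drops the parameter, so the server-side default wins.
    public void setFacetBuggy(boolean on) {
        if (on) params.put("facet", "true");
        else params.remove("facet");
    }

    // Fixed variant: always emits the parameter explicitly.
    public void setFacetFixed(boolean on) {
        params.put("facet", Boolean.toString(on));
    }

    public String toQueryString() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        FacetParamSketch q = new FacetParamSketch();
        q.setFacetBuggy(false);
        System.out.println("buggy: [" + q.toQueryString() + "]"); // empty: server default applies
        q.setFacetFixed(false);
        System.out.println("fixed: [" + q.toQueryString() + "]"); // facet=false
    }
}
```

Until this is fixed, a likely workaround in SolrJ is to set the parameter directly on the query (SolrQuery extends ModifiableSolrParams), which should always emit it in the URL.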

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b58) - Build # 1761 - Failure!

2012-10-15 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux/1761/
Java: 32bit/jdk1.8.0-ea-b58 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 22408 lines...]
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] warning: [options] bootstrap class path not set in conjunction with 
-source 1.7
  [javadoc] Loading source files for package org.apache.lucene...
  [javadoc] Loading source files for package org.apache.lucene.analysis...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.tokenattributes...
  [javadoc] Loading source files for package org.apache.lucene.codecs...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene3x...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene40.values...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene41...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.perfield...
  [javadoc] Loading source files for package org.apache.lucene.document...
  [javadoc] Loading source files for package org.apache.lucene.index...
  [javadoc] Loading source files for package org.apache.lucene.search...
  [javadoc] Loading source files for package 
org.apache.lucene.search.payloads...
  [javadoc] Loading source files for package 
org.apache.lucene.search.similarities...
  [javadoc] Loading source files for package org.apache.lucene.search.spans...
  [javadoc] Loading source files for package org.apache.lucene.store...
  [javadoc] Loading source files for package org.apache.lucene.util...
  [javadoc] Loading source files for package org.apache.lucene.util.automaton...
  [javadoc] Loading source files for package org.apache.lucene.util.fst...
  [javadoc] Loading source files for package org.apache.lucene.util.mutable...
  [javadoc] Loading source files for package org.apache.lucene.util.packed...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.8.0-ea
  [javadoc] Building tree for all the packages and classes...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/docs/core/help-doc.html...
  [javadoc] 1 warning

[...truncated 44 lines...]
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.lucene.analysis.ar...
  [javadoc] warning: [options] bootstrap class path not set in conjunction with 
-source 1.7
  [javadoc] Loading source files for package org.apache.lucene.analysis.bg...
  [javadoc] Loading source files for package org.apache.lucene.analysis.br...
  [javadoc] Loading source files for package org.apache.lucene.analysis.ca...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.charfilter...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cjk...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cn...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.commongrams...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.compound...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.compound.hyphenation...
  [javadoc] Loading source files for package org.apache.lucene.analysis.core...
  [javadoc] Loading source files for package org.apache.lucene.analysis.cz...
  [javadoc] Loading source files for package org.apache.lucene.analysis.da...
  [javadoc] Loading source files for package org.apache.lucene.analysis.de...
  [javadoc] Loading source files for package org.apache.lucene.analysis.el...
  [javadoc] Loading source files for package org.apache.lucene.analysis.en...
  [javadoc] Loading source files for package org.apache.lucene.analysis.es...
  [javadoc] Loading source files for package org.apache.lucene.analysis.eu...
  [javadoc] Loading source files for package org.apache.lucene.analysis.fa...
  [javadoc] Loading source files for package org.apache.lucene.analysis.fi...
  [javadoc] Loading source files for package org.apache.lucene.analysis.fr...
  [javadoc] Loading source files for package org.apache.lucene.analysis.ga...
  [javadoc] Loading source files for package org.apache.lucene.analysis.gl...
  [javadoc] Loading source files for package org.apache.lucene.analysis.hi...
  [javadoc] Loading source files for package org.apache.lucene.analysis.hu...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.hunspell...
  [javadoc] Loading source files for package org.apache.lucene.analysis.hy...
  [javadoc] Loading source files for package org.apache.lucene.analysis.id...
  [javadoc] Loading source files for package org.apache.lucene.analysis.in...
  [javadoc] Loading source files for package org.apache.lucene.analysis.it...
  [javadoc] Loading source files 

[jira] [Commented] (LUCENE-4467) SegmentReader.loadDeletedDocs FileNotFoundExceptio load _hko_7.del - corrupted index

2012-10-15 Thread B.Nicolotti (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476027#comment-13476027
 ] 

B.Nicolotti commented on LUCENE-4467:
-

The problem really seems to be that we had two web applications writing to the 
same index from one Linux process hosting the Tomcat application server.

I switched off indexing in one web application, leaving only the other one 
writing to the index, and it has worked without problems for 3 days.

We'll write some locking mechanism between the two web applications.

Many thanks

Regards


 SegmentReader.loadDeletedDocs FileNotFoundExceptio load _hko_7.del - 
 corrupted index
 

 Key: LUCENE-4467
 URL: https://issues.apache.org/jira/browse/LUCENE-4467
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6
 Environment: Currently using:
 java -version
 java version "1.5.0_13"
 Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_13-b05)
 Java HotSpot(TM) Client VM (build 1.5.0_13-b05, mixed mode, sharing)
 Tomcat 5.5
 lucene 3.6.0
Reporter: B.Nicolotti
 Attachments: index.zip


 We're using lucene to index XML. We've had it in test on a server for some 
 weeks with no problem, but today we've got the error below and the index 
 seems no longer usable.
 Could you please tell us 
 1) is there a way to recover the index?
 2) is there a way to avoid this error?
 I can supply the index if needed
 many thanks
 Tue Oct 09 17:41:02 CEST 2012:com.siap.WebServices.Utility.UtiIndexerLucene 
 caught an exception: 32225010 java.io.FileNotFoundException
  e.toString():java.io.FileNotFoundException: 
 /usr/local/WS_DynPkg/logs/index/_hko_7.del (No such file or directory),
  e.getMessage():/usr/local/WS_DynPkg/logs/index/_hko_7.del (No such file or 
 directory)
 java.io.RandomAccessFile.open(Native Method)
 java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
 org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:71)
 org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:98)
 org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:92)
 org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:79)
 org.apache.lucene.store.FSDirectory.openInput(FSDirectory.java:345)
 org.apache.lucene.util.BitVector.<init>(BitVector.java:266)
 org.apache.lucene.index.SegmentReader.loadDeletedDocs(SegmentReader.java:160)
 org.apache.lucene.index.SegmentReader.get(SegmentReader.java:120)
 org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:696)
 org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:671)
 org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:244)
 org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3608)
 org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3545)
 org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1852)
 org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1812)
 org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1776)
 com.siap.WebServices.Utility.UtiIndexerLucene.delete(UtiIndexerLucene.java:143)
 com.siap.WebServices.Utility.UtiIndexerLucene.indexFile(UtiIndexerLucene.java:221)




[jira] [Commented] (LUCENE-3772) Highlighter needs the whole text in memory to work

2012-10-15 Thread Mark Harwood (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476044#comment-13476044
 ] 

Mark Harwood commented on LUCENE-3772:
--

For bigger-than-memory docs, is it not possible to use nested documents to 
represent subsections (e.g. a child doc for each of the chapters in a book) and 
then use BlockJoinQuery to select the best child docs?
Highlighting can then be applied to a more manageable subset of the original 
content, and Lucene's ranking algorithms select the best fragment rather than 
the highlighter reproducing that logic itself.

Obviously this depends on the shape of your content/queries, but 
books-and-chapters is probably a good fit for this approach.
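The chapter-selection idea can be sketched without any Lucene types (all names below are hypothetical; a real implementation would index each chapter as a child document and select it with a block-join query): score each chapter against the query, and only pass the winner to the highlighter.

```java
// Self-contained sketch of "select the best child doc, then highlight only it".
// The scoring here is a crude stand-in (query-term occurrence counts) for
// Lucene's real ranking; the point is that highlighting never sees the full book.
import java.util.List;

public class ChapterSelectSketch {
    // Stand-in for child-doc scoring: count query-term occurrences.
    static int score(String chapter, List<String> queryTerms) {
        int hits = 0;
        String lower = chapter.toLowerCase();
        for (String t : queryTerms) {
            String term = t.toLowerCase();
            int from = 0;
            while ((from = lower.indexOf(term, from)) >= 0) {
                hits++;
                from += term.length();
            }
        }
        return hits;
    }

    // Pick the highest-scoring chapter; only this subset goes to highlighting.
    static String bestChapter(List<String> chapters, List<String> queryTerms) {
        String best = chapters.get(0);
        int bestScore = -1;
        for (String c : chapters) {
            int s = score(c, queryTerms);
            if (s > bestScore) { bestScore = s; best = c; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> chapters = List.of(
            "Chapter 1: the sea and its moods",
            "Chapter 2: harpoons, whales, whales, and Ahab");
        System.out.println(bestChapter(chapters, List.of("whales")));
    }
}
```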

 Highlighter needs the whole text in memory to work
 --

 Key: LUCENE-3772
 URL: https://issues.apache.org/jira/browse/LUCENE-3772
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/highlighter
Affects Versions: 3.5
 Environment: Windows 7 Enterprise x64, JRE 1.6.0_25
Reporter: Luis Filipe Nassif
  Labels: highlighter, improvement, memory

 Highlighter methods getBestFragment(s) and getBestTextFragments only accept a 
 String object representing the whole text to highlight. When dealing with 
 very large docs simultaneously, this can lead to heap consumption problems. It 
 would be better if the API could additionally accept a Reader object, as 
 Lucene Document Fields do.
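A rough sketch of what a Reader-based variant could look like (hypothetical names, not the Highlighter API): stream the text through a fixed-size window so only one window is resident in the heap at a time, keeping the best-scoring window as the fragment.

```java
// Sketch of a streaming "best fragment" search over a Reader, so the whole
// document never has to be materialized as a single String.
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class StreamingFragmentSketch {
    static String bestFragment(Reader reader, String term, int windowSize) throws IOException {
        char[] buf = new char[windowSize];
        String best = "";
        int bestHits = -1;
        int n;
        while ((n = reader.read(buf, 0, windowSize)) > 0) {
            String window = new String(buf, 0, n);       // only one window in memory
            int hits = 0, from = 0;
            while ((from = window.indexOf(term, from)) >= 0) {
                hits++;
                from += term.length();
            }
            if (hits > bestHits) { bestHits = hits; best = window; }
        }
        return best;
    }

    public static void main(String[] args) throws IOException {
        String text = "xxxx xxxx lucene lucene yyyy";
        System.out.println(bestFragment(new StringReader(text), "lucene", 14));
    }
}
```

One design caveat a real implementation would have to handle: a term split across a window boundary is missed, so the windows would need to overlap by at least the longest query term.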




[jira] [Commented] (LUCENE-4006) system requirements is duplicated across versioned/unversioned

2012-10-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476071#comment-13476071
 ] 

Uwe Schindler commented on LUCENE-4006:
---

In Lucene 4.0 we had no system requirements page inside the release package at 
all (it was lost when moving away from Forrest). I will attach a patch that 
adds a Markdown version.

 system requirements is duplicated across versioned/unversioned
 --

 Key: LUCENE-4006
 URL: https://issues.apache.org/jira/browse/LUCENE-4006
 Project: Lucene - Core
  Issue Type: Task
  Components: general/javadocs
Reporter: Robert Muir
Assignee: Uwe Schindler
 Fix For: 4.1


 Our System requirements page is located here on the unversioned site: 
 http://lucene.apache.org/core/systemreqs.html
 But its also in forrest under each release. Can we just nuke the forrested 
 one?




[jira] [Updated] (LUCENE-4006) system requirements is duplicated across versioned/unversioned

2012-10-15 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4006:
--

Fix Version/s: 5.0

 system requirements is duplicated across versioned/unversioned
 --

 Key: LUCENE-4006
 URL: https://issues.apache.org/jira/browse/LUCENE-4006
 Project: Lucene - Core
  Issue Type: Task
  Components: general/javadocs
Reporter: Robert Muir
Assignee: Uwe Schindler
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4006.patch






[jira] [Updated] (LUCENE-4006) system requirements is duplicated across versioned/unversioned

2012-10-15 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4006:
--

Attachment: LUCENE-4006.patch

Patch with a Markdown sysreq page.

I will set the fix version to 4.0.1, too, as this should be fixed ASAP. We can 
only remove the duplicated system requirements page from the web once this is 
fixed in *all* releases.

 system requirements is duplicated across versioned/unversioned
 --

 Key: LUCENE-4006
 URL: https://issues.apache.org/jira/browse/LUCENE-4006
 Project: Lucene - Core
  Issue Type: Task
  Components: general/javadocs
Reporter: Robert Muir
Assignee: Uwe Schindler
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4006.patch






[jira] [Updated] (LUCENE-4006) system requirements is duplicated across versioned/unversioned

2012-10-15 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4006:
--

Fix Version/s: 4.0.1

 system requirements is duplicated across versioned/unversioned
 --

 Key: LUCENE-4006
 URL: https://issues.apache.org/jira/browse/LUCENE-4006
 Project: Lucene - Core
  Issue Type: Task
  Components: general/javadocs
Reporter: Robert Muir
Assignee: Uwe Schindler
 Fix For: 4.1, 5.0, 4.0.1

 Attachments: LUCENE-4006.patch






[jira] [Commented] (LUCENE-4467) SegmentReader.loadDeletedDocs FileNotFoundExceptio load _hko_7.del - corrupted index

2012-10-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476086#comment-13476086
 ] 

Michael McCandless commented on LUCENE-4467:


Hmm, but Lucene has locking (write.lock file in the index) to prevent two 
IndexWriters from writing to the same index at the same time.

The 2nd IndexWriter should have hit a LockObtainFailedException.

Are you setting your own LockFactory on the directory? Or just using its 
default?  Which Directory implementation are you using...?
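The OS-level exclusivity that Lucene's write.lock relies on can be demonstrated with plain java.nio (no Lucene types; the class name is illustrative): a second attempt to lock the same file is refused rather than silently succeeding, which is why a second live IndexWriter should fail fast instead of corrupting the index.

```java
// Demo of exclusive file locking, the mechanism behind Lucene's write.lock:
// the first lock wins, the second attempt on the same file is refused.
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class WriteLockDemo {
    public static String tryTwice() throws IOException {
        Path lockFile = Files.createTempFile("write", ".lock");
        try (FileChannel first = FileChannel.open(lockFile, StandardOpenOption.WRITE);
             FileChannel second = FileChannel.open(lockFile, StandardOpenOption.WRITE)) {
            FileLock held = first.tryLock();   // first "IndexWriter" acquires the lock
            try {
                second.tryLock();              // second attempt must not succeed
                return "second lock acquired (unexpected)";
            } catch (OverlappingFileLockException e) {
                // Within one JVM, overlapping lock requests fail with this exception.
                return "second lock refused";
            } finally {
                held.release();
            }
        } finally {
            Files.deleteIfExists(lockFile);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tryTwice());
    }
}
```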

 SegmentReader.loadDeletedDocs FileNotFoundExceptio load _hko_7.del - 
 corrupted index
 

 Key: LUCENE-4467
 URL: https://issues.apache.org/jira/browse/LUCENE-4467
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6
 Environment: Currently using:
 java -version
 java version "1.5.0_13"
 Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_13-b05)
 Java HotSpot(TM) Client VM (build 1.5.0_13-b05, mixed mode, sharing)
 Tomcat 5.5
 lucene 3.6.0
Reporter: B.Nicolotti
 Attachments: index.zip






[jira] [Updated] (LUCENE-4472) Add setting that prevents merging on updateDocument

2012-10-15 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-4472:


Attachment: LUCENE-4472.patch

I played around with this and sketched out how it could look. I don't think we 
should entirely break backwards compatibility, but rather open up the context 
as I did in the patch. There are still rough edges in the patch, but some 
initial feedback would be great.

 Add setting that prevents merging on updateDocument
 ---

 Key: LUCENE-4472
 URL: https://issues.apache.org/jira/browse/LUCENE-4472
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 4.0
Reporter: Simon Willnauer
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4472.patch, LUCENE-4472.patch


 Currently we always call maybeMerge if a segment was flushed after 
 updateDocument. Some apps, ElasticSearch in particular, use hacky workarounds 
 to disable that, e.g. for merge throttling. It should be easier to enable 
 this kind of behavior.
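A toy sketch of the requested setting (names here are hypothetical and deliberately simplified; the actual patch opens up a context object rather than adding a flag): gate the post-flush merge check behind a writer-level switch so callers can throttle merges without workarounds.

```java
// Hypothetical sketch, NOT the LUCENE-4472 patch: a per-writer flag that
// decides whether a flush triggered by updateDocument may also trigger merges.
public class MergeOnFlushSketch {
    private boolean mergeOnFlush = true;
    private int mergesTriggered = 0;

    public void setMergeOnFlush(boolean v) { mergeOnFlush = v; }

    // Stand-in for updateDocument(): the flush happens either way,
    // but the merge check is now optional.
    public void updateDocument() {
        boolean segmentFlushed = true;   // pretend every update flushes a segment
        if (segmentFlushed && mergeOnFlush) {
            mergesTriggered++;           // maybeMerge() would run here
        }
    }

    public int getMergesTriggered() { return mergesTriggered; }
}
```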




Re: [JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 63 - Still Failing

2012-10-15 Thread Michael McCandless
Hmm spooky assert trip (something's wrong w/ DWPT's RAM tracking?)...
but it doesn't repro for me ...

Mike McCandless

http://blog.mikemccandless.com

On Mon, Oct 15, 2012 at 2:50 AM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/63/

 1 tests failed.
 REGRESSION:  org.apache.lucene.index.TestBagOfPositions.test

 Error Message:
 Captured an uncaught exception in thread: Thread[id=644, name=Thread-561, 
 state=RUNNABLE, group=TGRP-TestBagOfPositions]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=644, name=Thread-561, state=RUNNABLE, 
 group=TGRP-TestBagOfPositions]
 Caused by: java.lang.AssertionError: ram was 33879456 expected: 33851840 
 flush mem: 18092896 activeMem: 15786560 pendingMem: 0 flushingMem: 3 
 blockedMem: 0 peakDeltaMem: 99136
 at __randomizedtesting.SeedInfo.seed([11A534B74B63930E]:0)
 at 
 org.apache.lucene.index.DocumentsWriterFlushControl.assertMemory(DocumentsWriterFlushControl.java:114)
 at 
 org.apache.lucene.index.DocumentsWriterFlushControl.doAfterDocument(DocumentsWriterFlushControl.java:181)
 at 
 org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:384)
 at 
 org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1443)
 at 
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1122)
 at 
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:201)
 at 
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:160)
 at 
 org.apache.lucene.index.TestBagOfPositions$1.run(TestBagOfPositions.java:110)




 Build Log:
 [...truncated 420 lines...]
 [junit4:junit4] Suite: org.apache.lucene.index.TestBagOfPositions
 [junit4:junit4]   2> NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene directory.
 [junit4:junit4]   2> NOTE: reproduce with: ant test  
 -Dtestcase=TestBagOfPositions -Dtests.method=test 
 -Dtests.seed=11A534B74B63930E -Dtests.multiplier=3 -Dtests.nightly=true 
 -Dtests.slow=true 
 -Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt 
 -Dtests.locale=fi -Dtests.timezone=Africa/Conakry 
 -Dtests.file.encoding=ISO-8859-1
 [junit4:junit4] ERROR   206s J0 | TestBagOfPositions.test 
 [junit4:junit4] Throwable #1: 
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=644, name=Thread-561, state=RUNNABLE, 
 group=TGRP-TestBagOfPositions]
 [junit4:junit4] Caused by: java.lang.AssertionError: ram was 33879456 
 expected: 33851840 flush mem: 18092896 activeMem: 15786560 pendingMem: 0 
 flushingMem: 3 blockedMem: 0 peakDeltaMem: 99136
 [junit4:junit4]    at __randomizedtesting.SeedInfo.seed([11A534B74B63930E]:0)
 [junit4:junit4]    at org.apache.lucene.index.DocumentsWriterFlushControl.assertMemory(DocumentsWriterFlushControl.java:114)
 [junit4:junit4]    at org.apache.lucene.index.DocumentsWriterFlushControl.doAfterDocument(DocumentsWriterFlushControl.java:181)
 [junit4:junit4]    at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:384)
 [junit4:junit4]    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1443)
 [junit4:junit4]    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1122)
 [junit4:junit4]    at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:201)
 [junit4:junit4]    at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:160)
 [junit4:junit4]    at org.apache.lucene.index.TestBagOfPositions$1.run(TestBagOfPositions.java:110)
 [junit4:junit4] Throwable #2: 
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=643, name=Thread-560, state=RUNNABLE, 
 group=TGRP-TestBagOfPositions]
 [junit4:junit4] Caused by: java.lang.AssertionError: ram was 33879456 
 expected: 33851840 flush mem: 18092896 activeMem: 15786560 pendingMem: 0 
 flushingMem: 3 blockedMem: 0 peakDeltaMem: 99136
 [junit4:junit4]    at __randomizedtesting.SeedInfo.seed([11A534B74B63930E]:0)
 [junit4:junit4]    at org.apache.lucene.index.DocumentsWriterFlushControl.assertMemory(DocumentsWriterFlushControl.java:114)
 [junit4:junit4]    at org.apache.lucene.index.DocumentsWriterFlushControl.doAfterDocument(DocumentsWriterFlushControl.java:181)
 [junit4:junit4]    at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:384)
 [junit4:junit4]    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1443)
 [junit4:junit4]    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1122)
 [junit4:junit4]at 
 

[jira] [Commented] (LUCENE-4467) SegmentReader.loadDeletedDocs FileNotFoundExceptio load _hko_7.del - corrupted index

2012-10-15 Thread B.Nicolotti (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476118#comment-13476118
 ] 

B.Nicolotti commented on LUCENE-4467:
-

Hello,

No, we didn't implement any LockFactory class. Our doubt is that in this case
we have one Linux process that's trying to obtain a lock file. If the file is
already opened by that process, is the second one stopped?

Reviewing the Tomcat logs, we see this error:

Fri Oct 05 22:32:20 CEST 2012:com.siap.WebServices.Utility.UtiIndexerLucene 
caught an exception: 9290338 org.apache.lucene.store.LockObtainFailedException
 e.toString():org.apache.lucene.store.LockObtainFailedException: Lock obtain 
timed out: NativeFSLock@/usr/local/WS_DynPkg/logs/index/write.lock,
 e.getMessage():Lock obtain timed out: 
NativeFSLock@/usr/local/WS_DynPkg/logs/index/write.lock
org.apache.lucene.store.Lock.obtain(Lock.java:84)
org.apache.lucene.index.IndexWriter.&lt;init&gt;(IndexWriter.java:1098)
com.siap.WebServices.Utility.UtiIndexerLucene.delete(UtiIndexerLucene.java:139)
com.siap.WebServices.Utility.SerLogSearch.deleteIdList(SerLogSearch.java:720)

At the time, the lock timeout was set to 10s.
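For illustration: Lucene's NativeFSLock is built on java.nio file locks, where a second overlapping lock attempt fails while the first is still held. A minimal stdlib-only sketch of that behavior (the class and temp-file names are invented for the demo, not taken from the report):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockDemo {
    /** Returns {firstAcquired, secondAcquired} for two competing lock attempts. */
    public static boolean[] demo() throws IOException {
        Path lockFile = Files.createTempFile("write", ".lock");
        try (FileChannel c1 = FileChannel.open(lockFile, StandardOpenOption.WRITE);
             FileChannel c2 = FileChannel.open(lockFile, StandardOpenOption.WRITE)) {
            FileLock first = c1.tryLock();   // succeeds: nobody holds the lock yet
            boolean firstAcquired = first != null;
            boolean secondAcquired;
            try {
                // Within one JVM an overlapping attempt throws; from a second
                // process tryLock() would return null until the holder releases.
                secondAcquired = c2.tryLock() != null;
            } catch (OverlappingFileLockException e) {
                secondAcquired = false;
            }
            if (first != null) {
                first.release();
            }
            return new boolean[] { firstAcquired, secondAcquired };
        } finally {
            Files.deleteIfExists(lockFile);
        }
    }

    public static void main(String[] args) throws IOException {
        boolean[] r = demo();
        // prints: first acquired: true, second acquired: false
        System.out.println("first acquired: " + r[0] + ", second acquired: " + r[1]);
    }
}
```

This mirrors the reported situation: as long as one writer holds write.lock, a second open of the index times out rather than proceeding.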

We'll try to switch the 2 applications on again.

many thanks

Best regards

Bartolomeo

 SegmentReader.loadDeletedDocs FileNotFoundExceptio load _hko_7.del - 
 corrupted index
 

 Key: LUCENE-4467
 URL: https://issues.apache.org/jira/browse/LUCENE-4467
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 3.6
 Environment: Currently using:
 java -version
 java version "1.5.0_13"
 Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_13-b05)
 Java HotSpot(TM) Client VM (build 1.5.0_13-b05, mixed mode, sharing)
 Tomcat 5.5
 lucene 3.6.0
Reporter: B.Nicolotti
 Attachments: index.zip


 We're using Lucene to index XML. We've had it in test on a server for some 
 weeks with no problem, but today we got the error below and the index 
 seems no longer usable.
 Could you please tell us:
 1) is there a way to recover the index?
 2) is there a way to avoid this error?
 I can supply the index if needed.
 Many thanks.
 Tue Oct 09 17:41:02 CEST 2012:com.siap.WebServices.Utility.UtiIndexerLucene 
 caught an exception: 32225010 java.io.FileNotFoundException
  e.toString():java.io.FileNotFoundException: 
 /usr/local/WS_DynPkg/logs/index/_hko_7.del (No such file or directory),
  e.getMessage():/usr/local/WS_DynPkg/logs/index/_hko_7.del (No such file or 
 directory)
 java.io.RandomAccessFile.open(Native Method)
 java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:212)
 org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.&lt;init&gt;(SimpleFSDirectory.java:71)
 org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.&lt;init&gt;(SimpleFSDirectory.java:98)
 org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.&lt;init&gt;(NIOFSDirectory.java:92)
 org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:79)
 org.apache.lucene.store.FSDirectory.openInput(FSDirectory.java:345)
 org.apache.lucene.util.BitVector.&lt;init&gt;(BitVector.java:266)
 org.apache.lucene.index.SegmentReader.loadDeletedDocs(SegmentReader.java:160)
 org.apache.lucene.index.SegmentReader.get(SegmentReader.java:120)
 org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:696)
 org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:671)
 org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:244)
 org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3608)
 org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3545)
 org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1852)
 org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1812)
 org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1776)
 com.siap.WebServices.Utility.UtiIndexerLucene.delete(UtiIndexerLucene.java:143)
 com.siap.WebServices.Utility.UtiIndexerLucene.indexFile(UtiIndexerLucene.java:221)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3946) Support delta import in SolrEntityProcessor

2012-10-15 Thread yuanyun.cn (JIRA)
yuanyun.cn created SOLR-3946:


 Summary: Support delta import in SolrEntityProcessor
 Key: SOLR-3946
 URL: https://issues.apache.org/jira/browse/SOLR-3946
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0
Reporter: yuanyun.cn
Priority: Minor
 Fix For: 4.1


SolrEntityProcessor is very useful for copying part of an index from a central 
Solr server to another Solr server based on some query.
But its functionality is quite limited: it doesn't support delta import, which 
would be a quite useful feature. For example:

One central Solr server stores the index of all docs, and in the index we 
record information such as owner, last_modified, etc. We then create a local 
cache Solr server on the client side which contains only the index of docs 
created by this user, so the user can search his/her docs even when there is 
no internet connection. After the first full import copies the index of docs 
created by this user in the last several weeks (or months), we want to keep 
the index in the client's local Solr server consistent with the central 
server.

But right now we can't do this, as SolrEntityProcessor doesn't support 
delta import, which SqlEntityProcessor already supports: it uses deltaQuery 
and deltaImportQuery to do the delta import, and deletedPkQuery to remove 
deleted documents during a delta import.




[jira] [Commented] (SOLR-3946) Support delta import in SolrEntityProcessor

2012-10-15 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476237#comment-13476237
 ] 

James Dyer commented on SOLR-3946:
--

It might be difficult to make command=delta-import work with anything other 
than SqlEntityProcessor, as it seems to be designed around SQL and RDBMS 
concepts.  However, you might be able to do deltas with SolrEntityProcessor 
using command=full-import&clean=false.  Then, parameterize 
SolrEntityProcessor's query and/or fq parameters to retrieve just the 
documents that were added or changed since the last sync.  Of course deletes 
are going to be a problem, and you might need to invent some multiple-step 
process to find a way to do these. 

Given that you can do incremental updates on your index using 
command=full-import&clean=false, and that delta import is unsupported 
(indeed often cannot be supported) for anything other than SQL, I wonder if 
command=delta-import could just be removed entirely from DIH.  As DIH is 
slipping more and more towards death, it might someday be necessary to amputate 
the sickest parts to save the patient...
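A sketch of the workaround described above, as a hypothetical data-config.xml fragment. The entity name, host, core name, request parameter, and field names (owner, last_modified) are all assumptions for illustration, not from the original report:

```xml
<dataConfig>
  <document>
    <!-- Hypothetical delta-style sync via full-import with clean=false:
         pull only docs changed since the last run. DIH records the last
         run time in dataimport.properties, exposed as ${dih.last_index_time};
         note that timestamp may need reformatting to Solr's date syntax. -->
    <entity name="centralCopy"
            processor="SolrEntityProcessor"
            url="http://central-solr:8983/solr/docs"
            query="owner:${dataimporter.request.user}"
            fq="last_modified:[${dih.last_index_time} TO *]"/>
  </document>
</dataConfig>
```

As the comment notes, deletes are not covered by this; they would need a separate process on the client side.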

 Support delta import in SolrEntityProcessor
 ---

 Key: SOLR-3946
 URL: https://issues.apache.org/jira/browse/SOLR-3946
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.0
Reporter: yuanyun.cn
Priority: Minor
  Labels: SolrEntityProcessor, deltaimport
 Fix For: 4.1


 SolrEntityProcessor is very useful for copying part of an index from a central 
 Solr server to another Solr server based on some query.
 But its functionality is quite limited: it doesn't support delta import, which 
 would be a quite useful feature. For example:
 One central Solr server stores the index of all docs, and in the index we 
 record information such as owner, last_modified, etc. We then create a local 
 cache Solr server on the client side which contains only the index of docs 
 created by this user, so the user can search his/her docs even when there is 
 no internet connection. After the first full import copies the index of docs 
 created by this user in the last several weeks (or months), we want to keep 
 the index in the client's local Solr server consistent with the central server.
 But right now we can't do this, as SolrEntityProcessor doesn't support 
 delta import, which SqlEntityProcessor already supports: it uses deltaQuery 
 and deltaImportQuery to do the delta import, and deletedPkQuery to remove 
 deleted documents during a delta import.




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.6.0_35) - Build # 1180 - Failure!

2012-10-15 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1180/
Java: 32bit/jdk1.6.0_35 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 23520 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:342: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:65: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:512: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\common-build.xml:1910:
 java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:168)
at 
com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:331)
at 
com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:863)
at 
com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1203)
at 
com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1230)
at 
com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1214)
at 
sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:434)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:166)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:133)
at 
org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 50 minutes 19 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Description set: Java: 32bit/jdk1.6.0_35 -client -XX:+UseSerialGC
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Created] (SOLR-3947) Solr build should compile & include lucene codecs

2012-10-15 Thread Alan Woodward (JIRA)
Alan Woodward created SOLR-3947:
---

 Summary: Solr build should compile & include lucene codecs
 Key: SOLR-3947
 URL: https://issues.apache.org/jira/browse/SOLR-3947
 Project: Solr
  Issue Type: Bug
  Components: Build
Affects Versions: 4.0
Reporter: Alan Woodward
Priority: Minor
 Fix For: 4.1
 Attachments: SOLR-3947.patch

The lucene codecs classes were recently moved into a module and jar of their 
own, but the Solr build wasn't updated to compile/copy those over.




[jira] [Updated] (SOLR-3947) Solr build should compile & include lucene codecs

2012-10-15 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-3947:


Attachment: SOLR-3947.patch

 Solr build should compile & include lucene codecs
 -

 Key: SOLR-3947
 URL: https://issues.apache.org/jira/browse/SOLR-3947
 Project: Solr
  Issue Type: Bug
  Components: Build
Affects Versions: 4.0
Reporter: Alan Woodward
Priority: Minor
 Fix For: 4.1

 Attachments: SOLR-3947.patch


 The lucene codecs classes were recently moved into a module and jar of their 
 own, but the Solr build wasn't updated to compile/copy those over.




[jira] [Commented] (SOLR-3939) Solr Cloud recovery and leader election when unloading leader core

2012-10-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476297#comment-13476297
 ] 

Mark Miller commented on SOLR-3939:
---

is this with an empty index?

 Solr Cloud recovery and leader election when unloading leader core
 --

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time, but a large % of the time. 
 To set up a test, start up Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




[jira] [Commented] (SOLR-3939) Solr Cloud recovery and leader election when unloading leader core

2012-10-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476316#comment-13476316
 ] 

Joel Bernstein commented on SOLR-3939:
--

I tested with the exampledocs loaded. Step 3 in the test above. I loaded the 
shards before starting up the replica on the second solr instance.

 Solr Cloud recovery and leader election when unloading leader core
 --

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time, but a large % of the time. 
 To set up a test, start up Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




[jira] [Resolved] (SOLR-3947) Solr build should compile & include lucene codecs

2012-10-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-3947.


Resolution: Won't Fix

This was a deliberate decision, see SOLR-3843

 Solr build should compile & include lucene codecs
 -

 Key: SOLR-3947
 URL: https://issues.apache.org/jira/browse/SOLR-3947
 Project: Solr
  Issue Type: Bug
  Components: Build
Affects Versions: 4.0
Reporter: Alan Woodward
Priority: Minor
 Fix For: 4.1

 Attachments: SOLR-3947.patch


 The lucene codecs classes were recently moved into a module and jar of their 
 own, but the Solr build wasn't updated to compile/copy those over.




[jira] [Assigned] (LUCENE-4464) Intersects spatial query returns polygons it shouldn't

2012-10-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned LUCENE-4464:


Assignee: David Smiley

 Intersects spatial query returns polygons it shouldn't
 

 Key: LUCENE-4464
 URL: https://issues.apache.org/jira/browse/LUCENE-4464
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 3.6.1
 Environment: linux and windows
Reporter: solr-user
Assignee: David Smiley
Priority: Critical
  Labels: solr, spatial, spatialsearch

 full description, including sample schema and data, can be found at 
 http://lucene.472066.n3.nabble.com/quot-Intersects-quot-spatial-query-returns-polygons-it-shouldn-t-td4008646.html




[jira] [Commented] (SOLR-3843) Add lucene-codecs to Solr libs?

2012-10-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476345#comment-13476345
 ] 

Mark Miller commented on SOLR-3843:
---

bq. Also I had to turn off per-field codec support by default anyway because 
Solr keeps the IndexWriter open across core reloads (SOLR-3610).

We should probably consider that again. Some of my initial work around this 
area when this first came up was not really up to dealing with it well. Opening 
a new IndexWriter was kind of a hacky operation for replication. Things have 
changed though, and opening a new IndexWriter should be first-class now. I think 
it's probably fine to reopen it on core reloads.

 Add lucene-codecs to Solr libs?
 ---

 Key: SOLR-3843
 URL: https://issues.apache.org/jira/browse/SOLR-3843
 Project: Solr
  Issue Type: Wish
Reporter: Adrien Grand
Priority: Minor

 Solr gives its users the ability to select the postings format to use on a 
 per-field basis, but only Lucene40PostingsFormat is available by default 
 (unless users add lucene-codecs to the Solr lib directory). Maybe we should 
 add lucene-codecs to the Solr libs (I mean in the WAR file) so that people can 
 try our non-default postings formats with minimum effort?
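For context, per-field postings format selection in Solr 4.x is declared in schema.xml; a hypothetical fragment (the field type and field names are chosen for illustration, and this only works if a schema-aware codecFactory is enabled in solrconfig.xml and the named format's classes are on the classpath):

```xml
<!-- Hypothetical schema.xml sketch: a field type that opts into a
     non-default postings format. SimpleText ships in lucene-codecs,
     which is why that jar must be available to Solr. -->
<fieldType name="string_st" class="solr.StrField"
           postingsFormat="SimpleText"/>
<field name="id" type="string_st" indexed="true" stored="true"/>
```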




[jira] [Commented] (SOLR-3939) Solr Cloud recovery and leader election when unloading leader core

2012-10-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476400#comment-13476400
 ] 

Mark Miller commented on SOLR-3939:
---

Interesting - I can see an issue when I run the test with empty indexes, but my 
current test passes if I add some docs. The main reason I see for this at 
the moment is that a leader who tries to sync with his replicas will always 
fail with an empty tlog (no frame of reference).

I'll have to dig deeper for the 'docs in index' case.

 Solr Cloud recovery and leader election when unloading leader core
 --

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time, but a large % of the time. 
 To set up a test, start up Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




[jira] [Resolved] (SOLR-3930) eDismax Multivalued boost

2012-10-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-3930.


Resolution: Not A Problem

The boost param of edismax lets you boost by function.

If you want to use it to boost documents matching a query, then you need to use 
the query() function to generate a function that produces values: 

http://localhost:8983/solr/select?defType=edismax&debugQuery=true&q=foo&boost=query%28{!lucene%20v=%27foo_ss:bar%27}%29


 eDismax Multivalued boost
 -

 Key: SOLR-3930
 URL: https://issues.apache.org/jira/browse/SOLR-3930
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Bill Bell

 Want to replace bq with boost, but we
 get the multi-valued field issue when we try to do the equivalent queries…
 HTTP ERROR 400
 Problem accessing /solr/providersearch/select. Reason:
 can not use FieldCache on multivalued field: specialties_ids
 q=*:*&bq=multi_field:87^2&defType=dismax
 How do you do this using boost?
 q=*:*&boost=multi_field:87&defType=edismax
 We know we can use bq with edismax, but we like the multiply feature of
 boost.




[jira] [Commented] (SOLR-3939) Solr Cloud recovery and leader election when unloading leader core

2012-10-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476422#comment-13476422
 ] 

Joel Bernstein commented on SOLR-3939:
--

Not sure if this helps. Here is a stack trace from my second Solr instance. This 
is the instance that would be the leader after the leader core was unloaded on 
the first instance.

SEVERE: There was a problem finding the leader in 
zk:org.apache.solr.common.SolrException: Could not get leader props
at 
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:709)
at 
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:673)
at 
org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1070)
at 
org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:273)
at org.apache.solr.cloud.ZkController.access$100(ZkController.java:82)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:190)
at 
org.apache.solr.common.cloud.ConnectionManager$1.update(ConnectionManager.java:116)
at 
org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:46)
at 
org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:90)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:526)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
KeeperErrorCode = NoNode for /collections/collection1/leaders/shard1
at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:927)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:244)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:241)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:63)
at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:241)
at 
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:687)
... 10 more

Oct 15, 2012 3:39:18 PM org.apache.solr.common.SolrException log
SEVERE: :org.apache.solr.common.SolrException: There was a problem finding the 
leader in zk
at 
org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1080)
at 
org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:273)
at org.apache.solr.cloud.ZkController.access$100(ZkController.java:82)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:190)
at 
org.apache.solr.common.cloud.ConnectionManager$1.update(ConnectionManager.java:116)
at 
org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:46)
at 
org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:90)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:526)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)

 Solr Cloud recovery and leader election when unloading leader core
 --

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time, but a large % of the time. 
 To set up a test, start up Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




[jira] [Commented] (SOLR-3939) Solr Cloud recovery and leader election when unloading leader core

2012-10-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476449#comment-13476449
 ] 

Joel Bernstein commented on SOLR-3939:
--

I restarted the second solr instance and it came up as the leader for shard1, 
with no errors. 

I'll try to re-produce again.


 Solr Cloud recovery and leader election when unloading leader core
 --

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time, but a large % of the time. 
 To set up a test, start up Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




[jira] [Commented] (SOLR-3939) Solr Cloud recovery and leader election when unloading leader core

2012-10-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476487#comment-13476487
 ] 

Joel Bernstein commented on SOLR-3939:
--

I reproduced it again. I pulled again from the top of the 4x branch. I didn't 
apply the patch because it was committed, I believe.

Same exact steps as described above. Attached is part of the log file from the 
second Solr instance that shows the replica going into recovery. It's looking 
for the collection1 core that was unloaded from the first solr instance.

 Solr Cloud recovery and leader election when unloading leader core
 --

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: cloud.log, SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time, but a large % of the time. 
 To set up a test, start up Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




[jira] [Updated] (SOLR-3939) Solr Cloud recovery and leader election when unloading leader core

2012-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-3939:
-

Attachment: cloud.log

The log output from Solr.

 Solr Cloud recovery and leader election when unloading leader core
 --

 Key: SOLR-3939
 URL: https://issues.apache.org/jira/browse/SOLR-3939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Joel Bernstein
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: cloud.log, SOLR-3939.patch


 When a leader core is unloaded using the core admin api, the followers in the 
 shard go into recovery but do not come out. Leader election doesn't take 
 place and the shard goes down.
 This affects the ability to move a micro-shard from one Solr instance to 
 another Solr instance.
 The problem does not occur 100% of the time but a large % of the time. 
 To set up a test, start up Solr Cloud with a single shard. Add cores to that 
 shard as replicas using core admin. Then unload the leader core using core 
 admin. 




Re: svn commit: r1398564 - in /lucene/dev/trunk: lucene/SYSTEM_REQUIREMENTS.txt lucene/build.xml lucene/site/xsl/index.xsl solr/SYSTEM_REQUIREMENTS.txt solr/build.xml solr/site/xsl/index.xsl

2012-10-15 Thread Robert Muir
I think you put the Solr sysreqs in the lucene/ directory, and vice versa!

On Mon, Oct 15, 2012 at 4:06 PM,  uschind...@apache.org wrote:
 Author: uschindler
 Date: Mon Oct 15 23:06:34 2012
 New Revision: 1398564

 URL: http://svn.apache.org/viewvc?rev=1398564&view=rev
 Log:
 LUCENE-4006: Add system requirements page (markdown)

 Added:
 lucene/dev/trunk/lucene/SYSTEM_REQUIREMENTS.txt   (with props)
 lucene/dev/trunk/solr/SYSTEM_REQUIREMENTS.txt   (with props)
 Modified:
 lucene/dev/trunk/lucene/build.xml
 lucene/dev/trunk/lucene/site/xsl/index.xsl
 lucene/dev/trunk/solr/build.xml
 lucene/dev/trunk/solr/site/xsl/index.xsl

 Added: lucene/dev/trunk/lucene/SYSTEM_REQUIREMENTS.txt
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/SYSTEM_REQUIREMENTS.txt?rev=1398564&view=auto
 ==
 --- lucene/dev/trunk/lucene/SYSTEM_REQUIREMENTS.txt (added)
 +++ lucene/dev/trunk/lucene/SYSTEM_REQUIREMENTS.txt Mon Oct 15 23:06:34 2012
 @@ -0,0 +1,16 @@
 +# System Requirements
 +
 +Apache Solr runs on Java 6 or greater. When using Java 7, be sure to
 +install at least Update 1! With all Java versions it is strongly
 +recommended to not use experimental `-XX` JVM options. It is also
 +recommended to always use the latest update version of your Java VM,
 +because bugs may affect Solr. An overview of known JVM bugs can be
 +found on http://wiki.apache.org/lucene-java/SunJavaBugs.
 +
 +CPU, disk and memory requirements are based on the many choices made in
 +implementing Solr (document size, number of documents, and number of
 +hits retrieved to name a few). The benchmarks page has some information
 +related to performance on particular platforms.
 +
 +*To build Apache Solr from source, refer to the `BUILD.txt` file in
 +the distribution directory.*

 Modified: lucene/dev/trunk/lucene/build.xml
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/build.xml?rev=1398564&r1=1398563&r2=1398564&view=diff
 ==
 --- lucene/dev/trunk/lucene/build.xml (original)
 +++ lucene/dev/trunk/lucene/build.xml Mon Oct 15 23:06:34 2012
 @@ -33,6 +33,7 @@
   <patternset id="binary.root.dist.patterns"
   includes="LICENSE.txt,NOTICE.txt,README.txt,
     MIGRATE.txt,JRE_VERSION_MIGRATION.txt,
 +   SYSTEM_REQUIREMENTS.txt,
     CHANGES.txt,
     **/lib/*.jar,
     licenses/**,
 @@ -297,7 +298,7 @@
  /xslt

   <pegdown todir="${javadoc.dir}">
 -   <fileset dir="." includes="MIGRATE.txt,JRE_VERSION_MIGRATION.txt"/>
 +   <fileset dir="." includes="MIGRATE.txt,JRE_VERSION_MIGRATION.txt,SYSTEM_REQUIREMENTS.txt"/>
     <globmapper from="*.txt" to="*.html"/>
   </pegdown>


 Modified: lucene/dev/trunk/lucene/site/xsl/index.xsl
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/site/xsl/index.xsl?rev=1398564&r1=1398563&r2=1398564&view=diff
 ==
 --- lucene/dev/trunk/lucene/site/xsl/index.xsl (original)
 +++ lucene/dev/trunk/lucene/site/xsl/index.xsl Mon Oct 15 23:06:34 2012
 @@ -63,6 +63,7 @@
   <h2>Reference Documents</h2>
     <ul>
       <li><a href="changes/Changes.html">Changes</a>: List of changes in this release.</li>
 +     <li><a href="SYSTEM_REQUIREMENTS.html">System Requirements</a>: Minimum and supported Java versions.</li>
       <li><a href="MIGRATE.html">Migration Guide</a>: What changed in Lucene 4; how to migrate code from Lucene 3.x.</li>
       <li><a href="JRE_VERSION_MIGRATION.html">JRE Version Migration</a>: Information about upgrading between major JRE versions.</li>
       <li><a href="core/org/apache/lucene/codecs/lucene41/package-summary.html#package_description">File Formats</a>: Guide to the supported index format used by Lucene. This can be customized by using <a href="core/org/apache/lucene/codecs/package-summary.html#package_description">an alternate codec</a>.</li>

 Added: lucene/dev/trunk/solr/SYSTEM_REQUIREMENTS.txt
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/solr/SYSTEM_REQUIREMENTS.txt?rev=1398564&view=auto
 ==
 --- lucene/dev/trunk/solr/SYSTEM_REQUIREMENTS.txt (added)
 +++ lucene/dev/trunk/solr/SYSTEM_REQUIREMENTS.txt Mon Oct 15 23:06:34 2012
 @@ -0,0 +1,16 @@
 +# System Requirements
 +
 +Apache Lucene runs on Java 6 or greater. When using Java 7, be sure to
 +install at least Update 1! With all Java versions it is strongly
 +recommended to not use experimental `-XX` JVM options. It is also
 +recommended to always use the latest update version of your Java VM,
 +because bugs may affect Lucene. An overview of known JVM bugs can be
 +found on http://wiki.apache.org/lucene-java/SunJavaBugs.
 +
 +CPU, disk and memory requirements are based on the many choices made 

AW: svn commit: r1398564 - in /lucene/dev/trunk: lucene/SYSTEM_REQUIREMENTS.txt lucene/build.xml lucene/site/xsl/index.xsl solr/SYSTEM_REQUIREMENTS.txt solr/build.xml solr/site/xsl/index.xsl

2012-10-15 Thread Uwe Schindler
Oh, I'll fix. That was my first commit with my new Fast-IO-Laptop :-)

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

-Ursprüngliche Nachricht-
Von: Robert Muir [mailto:rcm...@gmail.com] 
Gesendet: Dienstag, 16. Oktober 2012 01:11
An: dev@lucene.apache.org
Betreff: Re: svn commit: r1398564 - in /lucene/dev/trunk: 
lucene/SYSTEM_REQUIREMENTS.txt lucene/build.xml lucene/site/xsl/index.xsl 
solr/SYSTEM_REQUIREMENTS.txt solr/build.xml solr/site/xsl/index.xsl

I think you put the Solr sysreqs in the lucene/ directory, and vice versa!

On Mon, Oct 15, 2012 at 4:06 PM,  uschind...@apache.org wrote:
 Author: uschindler
 Date: Mon Oct 15 23:06:34 2012
 New Revision: 1398564

 URL: http://svn.apache.org/viewvc?rev=1398564&view=rev
 Log:
 LUCENE-4006: Add system requirements page (markdown)

 Added:
 lucene/dev/trunk/lucene/SYSTEM_REQUIREMENTS.txt   (with props)
 lucene/dev/trunk/solr/SYSTEM_REQUIREMENTS.txt   (with props)
 Modified:
 lucene/dev/trunk/lucene/build.xml
 lucene/dev/trunk/lucene/site/xsl/index.xsl
 lucene/dev/trunk/solr/build.xml
 lucene/dev/trunk/solr/site/xsl/index.xsl

 Added: lucene/dev/trunk/lucene/SYSTEM_REQUIREMENTS.txt
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/SYSTEM_REQUIREMEN
 TS.txt?rev=1398564&view=auto 
 ==
 
 --- lucene/dev/trunk/lucene/SYSTEM_REQUIREMENTS.txt (added)
 +++ lucene/dev/trunk/lucene/SYSTEM_REQUIREMENTS.txt Mon Oct 15 
 +++ 23:06:34 2012
 @@ -0,0 +1,16 @@
 +# System Requirements
 +
 +Apache Solr runs on Java 6 or greater. When using Java 7, be sure to 
 +install at least Update 1! With all Java versions it is strongly 
 +recommended to not use experimental `-XX` JVM options. It is also 
 +recommended to always use the latest update version of your Java VM, 
 +because bugs may affect Solr. An overview of known JVM bugs can be 
 +found on http://wiki.apache.org/lucene-java/SunJavaBugs.
 +
 +CPU, disk and memory requirements are based on the many choices made 
 +in implementing Solr (document size, number of documents, and number 
 +of hits retrieved to name a few). The benchmarks page has some 
 +information related to performance on particular platforms.
 +
 +*To build Apache Solr from source, refer to the `BUILD.txt` file in 
 +the distribution directory.*

 Modified: lucene/dev/trunk/lucene/build.xml
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/build.xml?rev=139
 8564&r1=1398563&r2=1398564&view=diff
 ==
 
 --- lucene/dev/trunk/lucene/build.xml (original)
 +++ lucene/dev/trunk/lucene/build.xml Mon Oct 15 23:06:34 2012
 @@ -33,6 +33,7 @@
   <patternset id="binary.root.dist.patterns"
   includes="LICENSE.txt,NOTICE.txt,README.txt,
     MIGRATE.txt,JRE_VERSION_MIGRATION.txt,
 +   SYSTEM_REQUIREMENTS.txt,
     CHANGES.txt,
     **/lib/*.jar,
     licenses/**,
 @@ -297,7 +298,7 @@
  /xslt

   <pegdown todir="${javadoc.dir}">
 -   <fileset dir="." includes="MIGRATE.txt,JRE_VERSION_MIGRATION.txt"/>
 +   <fileset dir="." includes="MIGRATE.txt,JRE_VERSION_MIGRATION.txt,SYSTEM_REQUIREMENTS.txt"/>
     <globmapper from="*.txt" to="*.html"/>
   </pegdown>


 Modified: lucene/dev/trunk/lucene/site/xsl/index.xsl
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/site/xsl/index.xs
 l?rev=1398564&r1=1398563&r2=1398564&view=diff
 ==
 
 --- lucene/dev/trunk/lucene/site/xsl/index.xsl (original)
 +++ lucene/dev/trunk/lucene/site/xsl/index.xsl Mon Oct 15 23:06:34 
 +++ 2012
 @@ -63,6 +63,7 @@
   <h2>Reference Documents</h2>
     <ul>
       <li><a href="changes/Changes.html">Changes</a>: List of changes in this release.</li>
 +     <li><a href="SYSTEM_REQUIREMENTS.html">System Requirements</a>: Minimum and supported Java versions.</li>
       <li><a href="MIGRATE.html">Migration Guide</a>: What changed in Lucene 4; how to migrate code from Lucene 3.x.</li>
       <li><a href="JRE_VERSION_MIGRATION.html">JRE Version Migration</a>: Information about upgrading between major JRE versions.</li>
       <li><a href="core/org/apache/lucene/codecs/lucene41/package-summary.html#package_description">File Formats</a>: Guide to the supported index format used by Lucene. This can be customized by using <a href="core/org/apache/lucene/codecs/package-summary.html#package_description">an alternate codec</a>.</li>

 Added: lucene/dev/trunk/solr/SYSTEM_REQUIREMENTS.txt
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/solr/SYSTEM_REQUIREMENTS
 .txt?rev=1398564&view=auto 
 ==
 
 --- lucene/dev/trunk/solr/SYSTEM_REQUIREMENTS.txt (added)
 +++ 

[jira] [Resolved] (LUCENE-4006) system requirements is duplicated across versioned/unversioned

2012-10-15 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-4006.
---

Resolution: Fixed

Committed trunk revision: 1398564
Committed 4.x revision: 1398565
Committed 4.0 revision: 1398567

 system requirements is duplicated across versioned/unversioned
 --

 Key: LUCENE-4006
 URL: https://issues.apache.org/jira/browse/LUCENE-4006
 Project: Lucene - Core
  Issue Type: Task
  Components: general/javadocs
Reporter: Robert Muir
Assignee: Uwe Schindler
 Fix For: 4.1, 5.0, 4.0.1

 Attachments: LUCENE-4006.patch


 Our System requirements page is located here on the unversioned site: 
 http://lucene.apache.org/core/systemreqs.html
 But its also in forrest under each release. Can we just nuke the forrested 
 one?




Re: AW: svn commit: r1398564 - in /lucene/dev/trunk: lucene/SYSTEM_REQUIREMENTS.txt lucene/build.xml lucene/site/xsl/index.xsl solr/SYSTEM_REQUIREMENTS.txt solr/build.xml solr/site/xsl/index.xsl

2012-10-15 Thread Chris Hostetter

: Oh, I'll fix. That was my first commit with my new Fast-IO-Laptop :-)

Is the IO so fast that the files bounced and hydroplaned past their 
intended directories?



-Hoss




AW: AW: svn commit: r1398564 - in /lucene/dev/trunk: lucene/SYSTEM_REQUIREMENTS.txt lucene/build.xml lucene/site/xsl/index.xsl solr/SYSTEM_REQUIREMENTS.txt solr/build.xml solr/site/xsl/index.xsl

2012-10-15 Thread Uwe Schindler
Exactly!

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


-Ursprüngliche Nachricht-
Von: Chris Hostetter [mailto:hossman_luc...@fucit.org] 
Gesendet: Dienstag, 16. Oktober 2012 01:17
An: dev@lucene.apache.org
Betreff: Re: AW: svn commit: r1398564 - in /lucene/dev/trunk:
lucene/SYSTEM_REQUIREMENTS.txt lucene/build.xml lucene/site/xsl/index.xsl
solr/SYSTEM_REQUIREMENTS.txt solr/build.xml solr/site/xsl/index.xsl


: Oh, I'll fix. That was my first commit with my new Fast-IO-Laptop :-)

Is the IO so fast that the files bounced and hydroplaned past their intended
directories?



-Hoss




[jira] [Created] (LUCENE-4484) NRTCachingDir can't handle large files

2012-10-15 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-4484:
--

 Summary: NRTCachingDir can't handle large files
 Key: LUCENE-4484
 URL: https://issues.apache.org/jira/browse/LUCENE-4484
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless


I dug into this OOME, which easily repros for me on rev 1398268:
{noformat}
ant test  -Dtestcase=Test4GBStoredFields -Dtests.method=test 
-Dtests.seed=2D89DD229CD304F5 -Dtests.multiplier=3 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=ru -Dtests.timezone=Asia/Vladivostok -Dtests.file.encoding=UTF-8 
-Dtests.verbose=true
{noformat}

The problem is the test got NRTCachingDir ... which cannot handle large files 
because it decides up front (when createOutput is called) whether the file will 
be in RAMDir vs wrapped dir ... so if that file turns out to be immense (which 
this test does since stored fields files can grow arbitrarily huge w/o any 
flush happening) then it takes unbounded RAM.
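The failure mode can be sketched in a few lines. This is not Lucene's actual NRTCachingDirectory code; the method name and the size cap below are assumptions for illustration. The point is that the cache-or-bypass decision is made once, at createOutput() time, from an estimate, so a file that later balloons never leaves RAM:

```java
// Hypothetical sketch of the up-front caching decision; the real class
// consults merge/flush size estimates, never the file's eventual size.
public class NRTCacheDecisionSketch {

    // Assumed cap on what may be cached in RAM (illustrative only).
    static final long MAX_CACHED_BYTES = 60L * 1024 * 1024;

    // Decided before a single byte of the file has been written:
    static boolean cacheInRAM(long estimatedSizeBytes) {
        return estimatedSizeBytes <= MAX_CACHED_BYTES;
    }

    public static void main(String[] args) {
        // A stored-fields file looks small when createOutput() is called...
        boolean cached = cacheInRAM(1024 * 1024);
        System.out.println(cached); // true
        // ...and the decision is never revisited, so if the file then grows
        // to several GB without a flush, it all accumulates on the heap: OOME.
    }
}
```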




[jira] [Commented] (LUCENE-4484) NRTCachingDir can't handle large files

2012-10-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476565#comment-13476565
 ] 

Robert Muir commented on LUCENE-4484:
-

Can uncache() be changed to return the still-open, newly created IndexOutput? 
This way you could uncache() in writeBytes or wherever you want, and it would be 
seamless...


 NRTCachingDir can't handle large files
 --

 Key: LUCENE-4484
 URL: https://issues.apache.org/jira/browse/LUCENE-4484
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless

 I dug into this OOME, which easily repros for me on rev 1398268:
 {noformat}
 ant test  -Dtestcase=Test4GBStoredFields -Dtests.method=test 
 -Dtests.seed=2D89DD229CD304F5 -Dtests.multiplier=3 -Dtests.nightly=true 
 -Dtests.slow=true 
 -Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt 
 -Dtests.locale=ru -Dtests.timezone=Asia/Vladivostok 
 -Dtests.file.encoding=UTF-8 -Dtests.verbose=true
 {noformat}
 The problem is the test got NRTCachingDir ... which cannot handle large files 
 because it decides up front (when createOutput is called) whether the file 
 will be in RAMDir vs wrapped dir ... so if that file turns out to be immense 
 (which this test does since stored fields files can grow arbitrarily huge w/o 
 any flush happening) then it takes unbounded RAM.




[JENKINS] Lucene-trunk-Linux-Java6-64-test-only - Build # 9932 - Failure!

2012-10-15 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java6-64-test-only/9932/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestNRTThreads.testNRTThreads

Error Message:
saw non-zero open-but-deleted count

Stack Trace:
java.lang.AssertionError: saw non-zero open-but-deleted count
at 
__randomizedtesting.SeedInfo.seed([447148DE18F87BA8:DFA85CC559036DC3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.lucene.index.TestNRTThreads.doSearching(TestNRTThreads.java:89)
at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runTest(ThreadedIndexingAndSearchingTestCase.java:507)
at 
org.apache.lucene.index.TestNRTThreads.testNRTThreads(TestNRTThreads.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:662)




Build Log:
[...truncated 335 lines...]
[junit4:junit4] Suite: org.apache.lucene.index.TestNRTThreads
[junit4:junit4]   1 OBD files: [_8.cfs]
[junit4:junit4]   1 OBD files: [_8.cfs]
[junit4:junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestNRTThreads 

[jira] [Commented] (SOLR-3881) frequent OOM in LanguageIdentifierUpdateProcessor

2012-10-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476584#comment-13476584
 ] 

Hoss Man commented on SOLR-3881:


bq. One possible solution is to limit the size of the string that is selected 
for concatenation.

I don't know if there is any way to make LanguageIdentifierUpdateProcessor more 
memory efficient (in particular, I'm not sure why it needs to concat the field 
values instead of operating on them directly), but if you want to give langId 
just the first N characters of another field, that should already be possible 
w/o code changes by wiring together the CloneFieldUpdateProcessorFactory with 
the TruncateFieldUpdateProcessorFactory.

Something like this should work...

{code}
 ...
 <processor class="solr.CloneFieldUpdateProcessorFactory">
   <str name="source">GIANT_HONKING_STRING_FIELD</str>
   <str name="dest">truncated_string_field_for_lang_detect</str>
 </processor>
 <processor class="solr.TruncateFieldUpdateProcessorFactory">
   <str name="fieldName">truncated_string_field_for_lang_detect</str>
   <int name="maxLength">65536</int>
 </processor>
 <processor class="solr.LangDetectLanguageIdentifierUpdateProcessorFactory">
   <!-- <str name="langid.fl">title,subject,GIANT_HONKING_STRING_FIELD</str> -->
   <str name="langid.fl">title,subject,truncated_string_field_for_lang_detect</str>
   ...
 </processor>
 <processor class="solr.IgnoreFieldUpdateProcessorFactory">
   <str name="fieldName">truncated_string_field_for_lang_detect</str>
 </processor>
 ...
{code}

Neither CloneFieldUpdateProcessorFactory nor 
TruncateFieldUpdateProcessorFactory will make a full copy of the original 
String value, and TruncateFieldUpdateProcessorFactory will only make a 
truncated copy if the source is longer than the configured max (and even then, 
whether any copy is actually made really just depends on how the JVM implements 
substring). IgnoreFieldUpdateProcessorFactory will ensure that the truncated 
copy is freed up for GC as soon as you are done with LangId.
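The copy-on-truncate point can be sketched in plain Java. The helper below is a hypothetical stand-in for what a truncating processor might do, not Solr's actual processor code; note that nothing at all is copied when the value is already under the cap:

```java
public class TruncateSketch {

    // Assumed processor behavior: only touch values longer than the cap.
    static String truncate(String value, int maxLength) {
        return value.length() <= maxLength ? value : value.substring(0, maxLength);
    }

    public static void main(String[] args) {
        String small = "title text";
        // Under the cap, the very same String instance comes back: no copy.
        System.out.println(truncate(small, 65536) == small);  // true

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100000; i++) sb.append('x');
        String big = sb.toString();
        // Over the cap, only the 65536-char prefix is kept for langid.
        System.out.println(truncate(big, 65536).length());    // 65536
    }
}
```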

 frequent OOM in LanguageIdentifierUpdateProcessor
 -

 Key: SOLR-3881
 URL: https://issues.apache.org/jira/browse/SOLR-3881
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.0
 Environment: CentOS 6.x, JDK 1.6, (java -server -Xms2G -Xmx2G 
 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=)
Reporter: Rob Tulloh

 We are seeing frequent failures from Solr causing it to OOM. Here is the 
 stack trace we observe when this happens:
 {noformat}
 Caused by: java.lang.OutOfMemoryError: Java heap space
 at java.util.Arrays.copyOf(Arrays.java:2882)
 at 
 java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
 at 
 java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
 at java.lang.StringBuffer.append(StringBuffer.java:224)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.concatFields(LanguageIdentifierUpdateProcessor.java:286)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.process(LanguageIdentifierUpdateProcessor.java:189)
 at 
 org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:171)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler$2.update(BinaryUpdateRequestHandler.java:90)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:140)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:120)
 at 
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:221)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:105)
 at 
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:186)
 at 
 org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:112)
 at 
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:147)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler.parseAndLoadDocs(BinaryUpdateRequestHandler.java:100)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler.access$000(BinaryUpdateRequestHandler.java:47)
 at 
 org.apache.solr.handler.BinaryUpdateRequestHandler$1.load(BinaryUpdateRequestHandler.java:58)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1540)
 at 
 

[jira] [Created] (SOLR-3948) Calculate/display deleted documents in admin interface

2012-10-15 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-3948:
--

 Summary: Calculate/display deleted documents in admin interface
 Key: SOLR-3948
 URL: https://issues.apache.org/jira/browse/SOLR-3948
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 4.0
Reporter: Shawn Heisey
Priority: Minor
 Fix For: 4.1


The admin interface shows you two totals that let you infer how many deleted 
documents exist in the index by subtracting Num Docs from Max Doc.  It would 
make things much easier for novice users and for automated statistics gathering 
if the number of deleted documents were calculated for you and displayed.

Last Modified: 3 minutes ago
Num Docs: 12924551
Max Doc: 13011778
Version: 862
Segment Count: 23
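The requested figure is simple arithmetic over stats the admin page already exposes; with the values above:

```java
// Deleted-document count inferred from the admin-page stats quoted above.
public class DeletedDocsSketch {
    public static void main(String[] args) {
        long maxDoc  = 13011778L;  // "Max Doc" from the stats above
        long numDocs = 12924551L;  // "Num Docs" from the stats above
        long deletedDocs = maxDoc - numDocs;
        System.out.println("Deleted Docs: " + deletedDocs);  // Deleted Docs: 87227
    }
}
```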





[jira] [Created] (SOLR-3949) Query time Boosting

2012-10-15 Thread Pradeep (JIRA)
Pradeep created SOLR-3949:
-

 Summary: Query time Boosting 
 Key: SOLR-3949
 URL: https://issues.apache.org/jira/browse/SOLR-3949
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers, search
Affects Versions: 3.6
Reporter: Pradeep
Priority: Minor


Suppose I boost the query terms and give different weights to different words, 
e.g. TV repair can be converted to TV repair^0.001. When the query INFO is printed 
(by org.apache.solr.core.SolrCore execute), it does not print q=TV 
repair^0.001. This causes confusion about whether the boost factor is applied or not.




[jira] [Commented] (LUCENE-4464) Intersects spatial query returns polygons it shouldn't

2012-10-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13476690#comment-13476690
 ] 

David Smiley commented on LUCENE-4464:
--

I'm starting to diagnose this. One problem I see is that the 1st polygon has a 
self-intersection. I got this error when trying to generate a KML file 
depicting the geohash rectangles via the Solr-Spatial-Sandbox spatial-demo:

com.spatial4j.core.exception.InvalidShapeException: Ring Self-intersection at 
or near point (-92.81473397710002, 45.20993823293909, NaN)
at com.spatial4j.core.shape.jts.JtsGeometry.&lt;init&gt;(JtsGeometry.java:90)
at 
com.spatial4j.core.io.JtsShapeReadWriter.readShape(JtsShapeReadWriter.java:93)
at 
com.spatial4j.core.context.SpatialContext.readShape(SpatialContext.java:195)
at 
com.spatial4j.demo.servlet.GridInfoServlet.doPost(GridInfoServlet.java:113)

I also got this error when validating the polygon via the JTS 
TestBuilder (a GUI); I attached a screenshot.  It's very strange that I'm 
seeing this error yet you are not; you wouldn't have been able to index it 
without getting this error.
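For readers unfamiliar with the error, a "ring self-intersection" means two non-adjacent edges of the polygon boundary cross each other. The sketch below is not JTS's validator (which also handles touching and collinear cases); it is just a self-contained illustration of the geometric test:

```java
// Minimal ring self-intersection check: a closed ring is invalid if two
// non-adjacent edges properly cross (collinear overlaps are ignored here).
public class RingSelfIntersection {

    static double cross(double ax, double ay, double bx, double by) {
        return ax * by - ay * bx;
    }

    // Proper crossing test: each segment's endpoints straddle the other's line.
    static boolean segmentsCross(double[] p1, double[] p2, double[] q1, double[] q2) {
        double d1 = cross(q2[0]-q1[0], q2[1]-q1[1], p1[0]-q1[0], p1[1]-q1[1]);
        double d2 = cross(q2[0]-q1[0], q2[1]-q1[1], p2[0]-q1[0], p2[1]-q1[1]);
        double d3 = cross(p2[0]-p1[0], p2[1]-p1[1], q1[0]-p1[0], q1[1]-p1[1]);
        double d4 = cross(p2[0]-p1[0], p2[1]-p1[1], q2[0]-p1[0], q2[1]-p1[1]);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

    // ring is a closed sequence of points: ring[0] equals ring[ring.length-1].
    static boolean ringSelfIntersects(double[][] ring) {
        int n = ring.length;
        for (int i = 0; i < n - 1; i++) {
            for (int j = i + 2; j < n - 1; j++) {
                if (i == 0 && j == n - 2) continue; // first/last edges share a vertex
                if (segmentsCross(ring[i], ring[i+1], ring[j], ring[j+1])) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        double[][] bowtie = {{0,0},{2,2},{2,0},{0,2},{0,0}}; // edges cross
        double[][] square = {{0,0},{2,0},{2,2},{0,2},{0,0}}; // valid ring
        System.out.println(ringSelfIntersects(bowtie)); // true
        System.out.println(ringSelfIntersects(square)); // false
    }
}
```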



 Intersects spatial query returns polygons it shouldn't
 

 Key: LUCENE-4464
 URL: https://issues.apache.org/jira/browse/LUCENE-4464
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 3.6.1
 Environment: linux and windows
Reporter: solr-user
Assignee: David Smiley
Priority: Critical
  Labels: solr, spatial, spatialsearch
 Attachments: LUCENE-4464 self intersect.png


 full description, including sample schema and data, can be found at 
 http://lucene.472066.n3.nabble.com/quot-Intersects-quot-spatial-query-returns-polygons-it-shouldn-t-td4008646.html




[jira] [Updated] (LUCENE-4464) Intersects spatial query returns polygons it shouldn't

2012-10-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-4464:
-

Attachment: LUCENE-4464 self intersect.png

 Intersects spatial query returns polygons it shouldn't
 

 Key: LUCENE-4464
 URL: https://issues.apache.org/jira/browse/LUCENE-4464
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 3.6.1
 Environment: linux and windows
Reporter: solr-user
Assignee: David Smiley
Priority: Critical
  Labels: solr, spatial, spatialsearch
 Attachments: LUCENE-4464 self intersect.png


 full description, including sample schema and data, can be found at 
 http://lucene.472066.n3.nabble.com/quot-Intersects-quot-spatial-query-returns-polygons-it-shouldn-t-td4008646.html




[jira] [Commented] (LUCENE-4464) Intersects spatial query returns polygons it shouldn't

2012-10-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476703#comment-13476703
 ] 

David Smiley commented on LUCENE-4464:
--

Oh, I know why you didn't get that error.  You're using an older version of 
Spatial4j, from back when it was part of LSP.  Back then, JtsGeometry didn't 
ask JTS to validate the geometry, but it does now.





[jira] [Updated] (SOLR-3948) Calculate/display deleted documents in admin interface

2012-10-15 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-3948:
---

Description: 
The admin interface shows you two totals that let you infer how many deleted 
documents exist in the index by subtracting Num Docs from Max Doc.  It would 
make things much easier for novice users and for automated statistics gathering 
if the number of deleted documents were calculated for you and displayed.

Last Modified: 3 minutes ago
Num Docs: 12924551
Max Doc: 13011778
Version: 862
Segment Count: 23


  was:
The admin interface shows you two totals that let you infer how many deleted 
documents exist in the index by subtracting Num Docs from Max Doc.  It would 
make things much easier for novice users and for automated statistics gathering 
if the number of deleted documents were calculated for you and displayed.

Last Modified:
3 minutes ago
Num Docs:
12924551
Max Doc:
13011778
Version:
862
Segment Count:
23


Summary: Calculate/display deleted documents in admin interface  (was: 
Caculate/display deleted documents in admin interface)

 Calculate/display deleted documents in admin interface
 --

 Key: SOLR-3948
 URL: https://issues.apache.org/jira/browse/SOLR-3948
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 4.0
Reporter: Shawn Heisey
Priority: Minor
 Fix For: 4.1


 The admin interface shows you two totals that let you infer how many deleted 
 documents exist in the index by subtracting Num Docs from Max Doc.  It would 
 make things much easier for novice users and for automated statistics 
 gathering if the number of deleted documents were calculated for you and 
 displayed.
 Last Modified: 3 minutes ago
 Num Docs: 12924551
 Max Doc: 13011778
 Version: 862
 Segment Count: 23
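For reference, the requested number is just a subtraction of the two totals the 
admin interface already shows.  A minimal sketch using the figures from the 
description above:

```python
# Deleted documents can be inferred from the two admin-UI totals.
# Figures taken from the issue description above.
max_doc = 13011778   # Max Doc: all docs, including deleted ones
num_docs = 12924551  # Num Docs: live (non-deleted) docs

deleted_docs = max_doc - num_docs
print(deleted_docs)  # 87227
```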




[jira] [Updated] (LUCENE-4464) Intersects spatial query returns polygons it shouldn't

2012-10-15 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-4464:
-

Attachment: LUCENE-4464_polygon_almost_touch_test.patch
LUCENE-4464 google maps geohashes.png

I attached another screenshot of Google Earth with KML loaded for the first 
indexed polygon and for the query shape.  It shows the lines almost touch, but 
not quite -- there is roughly 28.4 meters in between.  The KML files were 
generated via the spatial-demo app, with 0.01 distErrPct.  I was able to load 
the indexed polygon by adjusting the data near the self-intersection error.

I also attached a new test, but I was not able to reproduce the problem you 
report, even with the default 2.5% distErrPct.  I had to raise it to about 6% 
before I saw a false intersection.  The fact that you see an intersection and I 
don't could very well be related to small improvements in the interpretation of 
distErrPct / distErr / maxDistErr that were made a couple of months ago.

I'm going to commit this patch tomorrow.  It does an assume call to check 
whether JTS is on the classpath.  The test has no compile-time dependency on 
JTS, just a runtime one.
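As a rough illustration of how a distErrPct translates into an absolute 
distance tolerance (a planar sketch only -- Lucene spatial derives the error 
from the shape's bounding box in the spatial context's units, and the exact 
formula may differ; the helper name here is hypothetical):

```python
import math

def dist_err_from_pct(min_x, min_y, max_x, max_y, dist_err_pct):
    """Sketch: allowable error as a fraction of half the bounding-box
    diagonal (planar approximation; hypothetical helper)."""
    diagonal = math.hypot(max_x - min_x, max_y - min_y)
    return dist_err_pct * diagonal / 2.0

# For a ~1 x 1 degree query shape, the default 2.5% tolerates far less
# error than the ~6% that was needed to trigger a false intersection.
print(round(dist_err_from_pct(0, 0, 1, 1, 0.025), 4))  # 0.0177
print(round(dist_err_from_pct(0, 0, 1, 1, 0.06), 4))   # 0.0424
```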

 Intersects spatial query returns polygons it shouldn't
 

 Key: LUCENE-4464
 URL: https://issues.apache.org/jira/browse/LUCENE-4464
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 3.6.1
 Environment: linux and windows
Reporter: solr-user
Assignee: David Smiley
Priority: Critical
  Labels: solr, spatial, spatialsearch
 Attachments: LUCENE-4464 google maps geohashes.png, 
 LUCENE-4464_polygon_almost_touch_test.patch, LUCENE-4464 self intersect.png


 full description, including sample schema and data, can be found at 
 http://lucene.472066.n3.nabble.com/quot-Intersects-quot-spatial-query-returns-polygons-it-shouldn-t-td4008646.html




[jira] [Commented] (SOLR-3843) Add lucene-codecs to Solr libs?

2012-10-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476748#comment-13476748
 ] 

Robert Muir commented on SOLR-3843:
---

+1. My approach so far was to disable this (currently expert) stuff because of 
the problems you get if you add new fields to the schema and reload.  But it 
seems bad not to allow anything passed to IndexWriter to interact with 
IndexSchema: if we can do a better job, we can make things easier.

 Add lucene-codecs to Solr libs?
 ---

 Key: SOLR-3843
 URL: https://issues.apache.org/jira/browse/SOLR-3843
 Project: Solr
  Issue Type: Wish
Reporter: Adrien Grand
Priority: Minor

 Solr gives the ability to its users to select the postings format to use on a 
 per-field basis but only Lucene40PostingsFormat is available by default 
 (unless users add lucene-codecs to the Solr lib directory). Maybe we should 
 add lucene-codecs to Solr libs (I mean in the WAR file) so that people can 
 try our non-default postings formats with minimum effort?




[jira] [Commented] (LUCENE-4472) Add setting that prevents merging on updateDocument

2012-10-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476753#comment-13476753
 ] 

Robert Muir commented on LUCENE-4472:
-

I like this patch much better than the first one.

As far as back compat goes, I'm not sure we should try to do anything tricky. 
The current patch isn't really a break; it just allows the MergePolicy to 
handle this stuff at a more fine-grained level, so I think it's fine.

p.s. UNKOWN and EMPTY_FOCED_SEGMENTS look like typos :)


 Add setting that prevents merging on updateDocument
 ---

 Key: LUCENE-4472
 URL: https://issues.apache.org/jira/browse/LUCENE-4472
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 4.0
Reporter: Simon Willnauer
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4472.patch, LUCENE-4472.patch


 Currently we always call maybeMerge if a segment was flushed after 
 updateDocument.  Some apps, and in particular ElasticSearch, use hacky 
 workarounds to disable that, e.g. for merge throttling.  It should be easier 
 to enable this kind of behavior. 




SOLR-3947, SOLR-3843 - lucene-codecs and Solr

2012-10-15 Thread Shawn Heisey
SOLR-3947 was filed today and later resolved as a duplicate of 
SOLR-3843, which is also resolved as 'won't fix.'  These issues are about 
putting the lucene-codecs jar into the solr.war.  I understand the 
reasoning behind not doing that, but I do believe that when you run 'ant 
dist', the lucene-codecs jar should be created and placed somewhere 
convenient so it can be easily copied to an appropriate lib directory.


Currently the only way I've found to create lucene-codecs is to go up to 
the root of the checkout and run 'ant generate-maven-artifacts', which 
takes about ten minutes and generates a lot of things that are not very 
useful to a Solr admin.


A possible worry (no idea how likely it would be): this might start a 
precedent where people want another jar, then another, and so on until 
most of the Lucene universe is being created for Solr.


Thanks,
Shawn





[jira] [Created] (SOLR-3950) Attempting postings=BloomFilter results in UnsupportedOperationException

2012-10-15 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-3950:
--

 Summary: Attempting postings=BloomFilter results in 
UnsupportedOperationException
 Key: SOLR-3950
 URL: https://issues.apache.org/jira/browse/SOLR-3950
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1
 Environment: Linux bigindy5 2.6.32-279.9.1.el6.centos.plus.x86_64 #1 
SMP Wed Sep 26 03:52:55 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@bigindy5 ~]# java -version
java version "1.7.0_07"
Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)

Reporter: Shawn Heisey
 Fix For: 4.1


Tested on branch_4x, checked out after BlockPostingsFormat was made the default 
by LUCENE-4446.

I used 'ant generate-maven-artifacts' to create the lucene-codecs jar, and 
copied it into my sharedLib directory.  When I subsequently tried 
postings=BloomFilter I got the following exception in the log:

{code}
Oct 15, 2012 11:14:02 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.UnsupportedOperationException: Error - 
org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat has been 
constructed without a choice of PostingsFormat
{code}
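The error itself is a delegate problem: the Bloom format wraps another postings 
format, and the no-argument constructor that SPI uses has nothing to delegate 
writes to.  A language-neutral sketch of that failure mode (hypothetical names, 
not the actual Lucene classes):

```python
class BloomWrappedFormat:
    """Sketch of a wrapper format that needs a delegate to write.

    Loosely mirrors how a no-arg, SPI-constructed wrapper can exist on
    the classpath yet be unable to create new segments without a
    concrete wrapped format.
    """
    def __init__(self, delegate=None):
        self.delegate = delegate

    def fields_consumer(self):
        # Writing requires a concrete delegate format.
        if self.delegate is None:
            raise NotImplementedError(
                "constructed without a choice of PostingsFormat")
        return f"writing via {self.delegate}"

print(BloomWrappedFormat("Lucene41").fields_consumer())  # writing via Lucene41
```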





[jira] [Commented] (SOLR-3950) Attempting postings=BloomFilter results in UnsupportedOperationException

2012-10-15 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476763#comment-13476763
 ] 

Shawn Heisey commented on SOLR-3950:


Full stacktrace:
{code}
Oct 15, 2012 11:14:02 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.UnsupportedOperationException: Error - 
org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat has been 
constructed without a choice of PostingsFormat
at 
org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat.fieldsConsumer(BloomFilteringPostingsFormat.java:139)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.addField(PerFieldPostingsFormat.java:130)
at 
org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:335)
at 
org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:117)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
at 
org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:82)
at 
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:483)
at 
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
at 
org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:559)
at 
org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:2656)
at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2792)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2772)
at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:525)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:87)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1007)
at 
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1750)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
at org.eclipse.jetty.server.Server.handle(Server.java:351)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:900)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:954)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:857)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
{code}

[jira] [Commented] (SOLR-3950) Attempting postings=BloomFilter results in UnsupportedOperationException

2012-10-15 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476764#comment-13476764
 ] 

Shawn Heisey commented on SOLR-3950:


I don't know if this affects 4.0, as I have only tried it on 4.1.  I did add 
codecFactory to solrconfig.xml.  I'm fairly sure that I've got at least part of 
it right, because I got the following beforehand when I was using the wrong 
format name:

{code}
Oct 15, 2012 11:13:01 AM org.apache.solr.common.SolrException log
SEVERE: null:java.lang.IllegalArgumentException: A SPI class of type 
org.apache.lucene.codecs.PostingsFormat with name 'Bloom' does not exist. You 
need to add the corresponding JAR file supporting this SPI to your 
classpath.The current classpath supports the following names: [Lucene40, 
Lucene41, Pulsing41, SimpleText, Memory, BloomFilter, Direct]
{code}
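That earlier error is normal SPI behavior: postings formats are looked up by 
registered name, and an unknown name reports the names actually on the 
classpath.  A minimal sketch of that lookup (hypothetical registry, not 
Lucene's real SPI loader):

```python
# Names taken from the error message above.
REGISTERED = ["Lucene40", "Lucene41", "Pulsing41", "SimpleText",
              "Memory", "BloomFilter", "Direct"]

def postings_format_for_name(name):
    """Sketch of an SPI-style lookup: resolve a format by registered name."""
    if name not in REGISTERED:
        raise ValueError(
            f"A SPI class with name '{name}' does not exist. "
            f"The current classpath supports: {REGISTERED}")
    return name

print(postings_format_for_name("BloomFilter"))  # BloomFilter
# postings_format_for_name("Bloom") would raise ValueError
```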

 Attempting postings=BloomFilter results in UnsupportedOperationException
 --

 Key: SOLR-3950
 URL: https://issues.apache.org/jira/browse/SOLR-3950
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.1
 Environment: Linux bigindy5 2.6.32-279.9.1.el6.centos.plus.x86_64 #1 
 SMP Wed Sep 26 03:52:55 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
 [root@bigindy5 ~]# java -version
 java version "1.7.0_07"
 Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
 Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)
Reporter: Shawn Heisey
 Fix For: 4.1


 Tested on branch_4x, checked out after BlockPostingsFormat was made the 
 default by LUCENE-4446.
 I used 'ant generate-maven-artifacts' to create the lucene-codecs jar, and 
 copied it into my sharedLib directory.  When I subsequently tried 
 postings=BloomFilter I got the following exception in the log:
 {code}
 Oct 15, 2012 11:14:02 AM org.apache.solr.common.SolrException log
 SEVERE: java.lang.UnsupportedOperationException: Error - 
 org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat has been 
 constructed without a choice of PostingsFormat
 {code}




[jira] [Commented] (SOLR-3950) Attempting postings=BloomFilter results in UnsupportedOperationException

2012-10-15 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13476767#comment-13476767
 ] 

Shawn Heisey commented on SOLR-3950:


One more bit of info:

solr-spec 4.1.0.2012.10.14.17.26.04
solr-impl 4.1-SNAPSHOT 1398145 - ncindex - 2012-10-14 17:26:04
lucene-spec 4.1-SNAPSHOT
lucene-impl 4.1-SNAPSHOT 1398145 - ncindex - 2012-10-14 17:09:00


