Re: JCC linux patch

2012-10-01 Thread Caleb Burns
On Mon, Oct 1, 2012 at 12:55 AM, Andi Vajda va...@apache.org wrote:

 That would be great !
 Could you please make your monkey patch detect the version of
 setuptools/distribute used and issue the same error message as is
 currently emitted by the JCC linux setup.py code when the version is not
 supported by your monkey patch, ie, when manual patching is still needed.
 Thanks !

 Andi.,

Right now I'm assuming that future versions of distribute will be
supported because 0.6.1 through 0.6.28 (the latest in pypi) all work.
Do you mean issue the same error for any version newer than 0.6.28?

Thanks,
Caleb Burns


Re: JCC linux patch

2012-10-01 Thread Andi Vajda

On Oct 1, 2012, at 16:37, Caleb Burns cpbu...@gmail.com wrote:

 On Mon, Oct 1, 2012 at 12:55 AM, Andi Vajda va...@apache.org wrote:
 
 That would be great !
 Could you please make your monkey patch detect the version of
 setuptools/distribute used and issue the same error message as is
 currently emitted by the JCC linux setup.py code when the version is not
 supported by your monkey patch, ie, when manual patching is still needed.
 Thanks !
 
 Andi.,
 
 Right now I'm assuming that future versions of distribute will be
 supported because 0.6.1 through 0.6.28 (the latest in pypi) all work.
 Do you mean issue the same error for any version newer than 0.6.28?

I mean, in particular, all the setuptools versions out there.

Andi..

 
 Thanks,
 Caleb Burns


Re: JCC linux patch

2012-10-01 Thread Andi Vajda


On Tue, 2 Oct 2012, Caleb Burns wrote:


On Mon, Oct 1, 2012 at 7:46 PM, Andi Vajda va...@apache.org wrote:


On Oct 1, 2012, at 16:37, Caleb Burns cpbu...@gmail.com wrote:

I mean, in particular, all the setuptools versions out there.

Andi..


I added a condition for supported versions of the monkey patch while
leaving the previous conditions to display the info for manual
patching. Let me know if it's not right.


Great. Now, in order to get the patch to me, you need to either:
  - open a bug in JIRA and attach it there (PYLUCENE project)
  - or send it to me directly

If you send patches to the list, they get eaten by the list processor.
Sorry.

Thanks !

Andi..


[jira] [Commented] (LUCENE-4451) Memory leak per unique thread caused by RandomizedContext.contexts static map

2012-10-01 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466691#comment-13466691
 ] 

Dawid Weiss commented on LUCENE-4451:
-

I looked at the code and I don't have an easy fix yet. The problem is that 
circular references are needed between Threads, Randoms and the runner so that 
we can assert that Random instances issued for a thread are not reused on other 
threads (or outside of a given test's lifespan). This leads to a cyclic 
dependency Thread -> PerThreadContext -> AssertingRandom -> Thread, so even if 
you use a weak hash map for Thread -> PerThreadContext it'll still keep a hard 
reference to the thread it's bound to.

I could make AssertingRandom store a weak/soft reference to the thread it's 
bound to but I'm kind of afraid it'll affect the performance.

Could we temporarily make a pool of threads for this test and reuse these? I'll 
be thinking of a workaround but it's going to take me some time.
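
To make the cycle concrete, here is a small standalone sketch (the class and 
field names are made up, this is not the randomizedtesting code) of why a 
WeakHashMap keyed by Thread cannot reclaim anything when the value holds a 
hard reference back to its key:

{code:java}
import java.util.Map;
import java.util.WeakHashMap;

// Standalone sketch: a WeakHashMap keyed by Thread cannot drop an entry whose
// value reaches back to the key, because the value is strongly reachable from
// the map and the key is strongly reachable from the value.
public class WeakKeyCycleSketch {

  // Hypothetical stand-ins for PerThreadContext / AssertingRandom.
  static final class PerThreadContext {
    final AssertingRandom random;
    PerThreadContext(Thread owner) { this.random = new AssertingRandom(owner); }
  }

  static final class AssertingRandom {
    final Thread owner; // the hard reference that closes the cycle
    AssertingRandom(Thread owner) { this.owner = owner; }
  }

  public static void main(String[] args) throws Exception {
    Map<Thread, PerThreadContext> contexts = new WeakHashMap<>();
    for (int i = 0; i < 1000; i++) {
      Thread t = new Thread();      // short-lived thread, as in the test
      contexts.put(t, new PerThreadContext(t));
      t.start();
      t.join();
      // t is dead, but the map's value still pins it, so the weak key never clears.
    }
    System.gc();
    System.out.println("entries still retained: " + contexts.size()); // stays at 1000
  }
}
{code}

The WeakHashMap javadoc warns about exactly this pattern: values that strongly 
refer to their own keys keep the entries alive.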




 Memory leak per unique thread caused by RandomizedContext.contexts static map
 -

 Key: LUCENE-4451
 URL: https://issues.apache.org/jira/browse/LUCENE-4451
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Dawid Weiss

 In digging on the hard-to-understand OOMEs with
 TestDirectPostingsFormat ... I found (thank you YourKit) that
 RandomizedContext (in randomizedtesting JAR) seems to be holding onto
 all threads created by the test.  The test does create many very short
 lived threads (testing the thread safety of the postings format), in
 BasePostingsFormatTestCase.testTerms), and somehow these seem to tie
 up a lot (~100 MB) of RAM in RandomizedContext.contexts static map.
 For now I've disabled all thread testing (committed {{false }} inside
 {{BPFTC.testTerms}}), but hopefully we can fix the root cause here, eg
 when a thread exits can we clear it from that map?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4451) Memory leak per unique thread caused by RandomizedContext.contexts static map

2012-10-01 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466695#comment-13466695
 ] 

Dawid Weiss commented on LUCENE-4451:
-

I pushed a tentative fix for this (includes a test case).
https://github.com/carrotsearch/randomizedtesting/issues/127

I'd still like to hold off for some time to make sure it's the best way to solve it.

 Memory leak per unique thread caused by RandomizedContext.contexts static map
 -

 Key: LUCENE-4451
 URL: https://issues.apache.org/jira/browse/LUCENE-4451
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Dawid Weiss

 In digging on the hard-to-understand OOMEs with
 TestDirectPostingsFormat ... I found (thank you YourKit) that
 RandomizedContext (in randomizedtesting JAR) seems to be holding onto
 all threads created by the test.  The test does create many very short
 lived threads (testing the thread safety of the postings format), in
 BasePostingsFormatTestCase.testTerms), and somehow these seem to tie
 up a lot (~100 MB) of RAM in RandomizedContext.contexts static map.
 For now I've disabled all thread testing (committed {{false }} inside
 {{BPFTC.testTerms}}), but hopefully we can fix the root cause here, eg
 when a thread exits can we clear it from that map?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b51) - Build # 1466 - Failure!

2012-10-01 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux/1466/
Java: 64bit/jdk1.8.0-ea-b51 -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.lucene.codecs.memory.TestMemoryPostingsFormat.testRandom

Error Message:
Captured an uncaught exception in thread: Thread[id=924, name=Thread-904, 
state=RUNNABLE, group=TGRP-TestMemoryPostingsFormat]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=924, name=Thread-904, state=RUNNABLE, 
group=TGRP-TestMemoryPostingsFormat]
Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap 
space
at __randomizedtesting.SeedInfo.seed([70D34E362FE62C54]:0)
at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:826)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:338)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTDocsAndPositionsEnum.reset(MemoryPostingsFormat.java:474)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTTermsEnum.docsAndPositions(MemoryPostingsFormat.java:720)
at 
org.apache.lucene.index.BasePostingsFormatTestCase.verifyEnum(BasePostingsFormatTestCase.java:605)
at 
org.apache.lucene.index.BasePostingsFormatTestCase.testTermsOneThread(BasePostingsFormatTestCase.java:907)
at 
org.apache.lucene.index.BasePostingsFormatTestCase.access$200(BasePostingsFormatTestCase.java:80)
at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:824)




Build Log:
[...truncated 6013 lines...]
[junit4:junit4] Suite: org.apache.lucene.codecs.memory.TestMemoryPostingsFormat
[junit4:junit4]   2 thg 9 30, 2012 11:53:04 CH 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
[junit4:junit4]   2 WARNING: Uncaught exception in thread: 
Thread[Thread-904,5,TGRP-TestMemoryPostingsFormat]
[junit4:junit4]   2 java.lang.RuntimeException: java.lang.OutOfMemoryError: 
Java heap space
[junit4:junit4]   2at 
__randomizedtesting.SeedInfo.seed([70D34E362FE62C54]:0)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:826)
[junit4:junit4]   2 Caused by: java.lang.OutOfMemoryError: Java heap space
[junit4:junit4]   2at 
org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:338)
[junit4:junit4]   2at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTDocsAndPositionsEnum.reset(MemoryPostingsFormat.java:474)
[junit4:junit4]   2at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTTermsEnum.docsAndPositions(MemoryPostingsFormat.java:720)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase.verifyEnum(BasePostingsFormatTestCase.java:605)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase.testTermsOneThread(BasePostingsFormatTestCase.java:907)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase.access$200(BasePostingsFormatTestCase.java:80)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:824)
[junit4:junit4]   2 
[junit4:junit4]   2 thg 9 30, 2012 11:53:06 CH 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
[junit4:junit4]   2 WARNING: Uncaught exception in thread: 
Thread[Thread-906,5,TGRP-TestMemoryPostingsFormat]
[junit4:junit4]   2 java.lang.RuntimeException: java.lang.OutOfMemoryError: 
Java heap space
[junit4:junit4]   2at 
__randomizedtesting.SeedInfo.seed([70D34E362FE62C54]:0)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:826)
[junit4:junit4]   2 Caused by: java.lang.OutOfMemoryError: Java heap space
[junit4:junit4]   2at 
org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:338)
[junit4:junit4]   2at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTDocsAndPositionsEnum.reset(MemoryPostingsFormat.java:474)
[junit4:junit4]   2at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTTermsEnum.docsAndPositions(MemoryPostingsFormat.java:720)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase.verifyEnum(BasePostingsFormatTestCase.java:605)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase.testTermsOneThread(BasePostingsFormatTestCase.java:907)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase.access$200(BasePostingsFormatTestCase.java:80)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:824)
[junit4:junit4]   2 
[junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestMemoryPostingsFormat -Dtests.method=testRandom 

Re: [JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b51) - Build # 1466 - Failure!

2012-10-01 Thread Michael McCandless
I'll dig.

Mike McCandless

http://blog.mikemccandless.com


On Mon, Oct 1, 2012 at 4:53 AM, Policeman Jenkins Server
jenk...@sd-datasolutions.de wrote:
 Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux/1466/
 Java: 64bit/jdk1.8.0-ea-b51 -XX:+UseParallelGC

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.codecs.memory.TestMemoryPostingsFormat.testRandom

 Error Message:
 Captured an uncaught exception in thread: Thread[id=924, name=Thread-904, 
 state=RUNNABLE, group=TGRP-TestMemoryPostingsFormat]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=924, name=Thread-904, state=RUNNABLE, 
 group=TGRP-TestMemoryPostingsFormat]
 Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap 
 space
 at __randomizedtesting.SeedInfo.seed([70D34E362FE62C54]:0)
 at 
 org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:826)
 Caused by: java.lang.OutOfMemoryError: Java heap space
 at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:338)
 at 
 org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTDocsAndPositionsEnum.reset(MemoryPostingsFormat.java:474)
 at 
 org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTTermsEnum.docsAndPositions(MemoryPostingsFormat.java:720)
 at 
 org.apache.lucene.index.BasePostingsFormatTestCase.verifyEnum(BasePostingsFormatTestCase.java:605)
 at 
 org.apache.lucene.index.BasePostingsFormatTestCase.testTermsOneThread(BasePostingsFormatTestCase.java:907)
 at 
 org.apache.lucene.index.BasePostingsFormatTestCase.access$200(BasePostingsFormatTestCase.java:80)
 at 
 org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:824)




 Build Log:
 [...truncated 6013 lines...]
 [junit4:junit4] Suite: 
 org.apache.lucene.codecs.memory.TestMemoryPostingsFormat
 [junit4:junit4]   2 thg 9 30, 2012 11:53:04 CH 
 com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
  uncaughtException
 [junit4:junit4]   2 WARNING: Uncaught exception in thread: 
 Thread[Thread-904,5,TGRP-TestMemoryPostingsFormat]
 [junit4:junit4]   2 java.lang.RuntimeException: java.lang.OutOfMemoryError: 
 Java heap space
 [junit4:junit4]   2at 
 __randomizedtesting.SeedInfo.seed([70D34E362FE62C54]:0)
 [junit4:junit4]   2at 
 org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:826)
 [junit4:junit4]   2 Caused by: java.lang.OutOfMemoryError: Java heap space
 [junit4:junit4]   2at 
 org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:338)
 [junit4:junit4]   2at 
 org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTDocsAndPositionsEnum.reset(MemoryPostingsFormat.java:474)
 [junit4:junit4]   2at 
 org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTTermsEnum.docsAndPositions(MemoryPostingsFormat.java:720)
 [junit4:junit4]   2at 
 org.apache.lucene.index.BasePostingsFormatTestCase.verifyEnum(BasePostingsFormatTestCase.java:605)
 [junit4:junit4]   2at 
 org.apache.lucene.index.BasePostingsFormatTestCase.testTermsOneThread(BasePostingsFormatTestCase.java:907)
 [junit4:junit4]   2at 
 org.apache.lucene.index.BasePostingsFormatTestCase.access$200(BasePostingsFormatTestCase.java:80)
 [junit4:junit4]   2at 
 org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:824)
 [junit4:junit4]   2
 [junit4:junit4]   2 thg 9 30, 2012 11:53:06 CH 
 com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
  uncaughtException
 [junit4:junit4]   2 WARNING: Uncaught exception in thread: 
 Thread[Thread-906,5,TGRP-TestMemoryPostingsFormat]
 [junit4:junit4]   2 java.lang.RuntimeException: java.lang.OutOfMemoryError: 
 Java heap space
 [junit4:junit4]   2at 
 __randomizedtesting.SeedInfo.seed([70D34E362FE62C54]:0)
 [junit4:junit4]   2at 
 org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:826)
 [junit4:junit4]   2 Caused by: java.lang.OutOfMemoryError: Java heap space
 [junit4:junit4]   2at 
 org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:338)
 [junit4:junit4]   2at 
 org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTDocsAndPositionsEnum.reset(MemoryPostingsFormat.java:474)
 [junit4:junit4]   2at 
 org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTTermsEnum.docsAndPositions(MemoryPostingsFormat.java:720)
 [junit4:junit4]   2at 
 org.apache.lucene.index.BasePostingsFormatTestCase.verifyEnum(BasePostingsFormatTestCase.java:605)
 [junit4:junit4]   2at 
 org.apache.lucene.index.BasePostingsFormatTestCase.testTermsOneThread(BasePostingsFormatTestCase.java:907)
 [junit4:junit4]   2at 
 org.apache.lucene.index.BasePostingsFormatTestCase.access$200(BasePostingsFormatTestCase.java:80)
 [junit4:junit4]   2at 
 

[jira] [Commented] (LUCENE-4451) Memory leak per unique thread caused by RandomizedContext.contexts static map

2012-10-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466727#comment-13466727
 ] 

Michael McCandless commented on LUCENE-4451:


bq. Could we temporarily make a pool of threads for this test and reuse these? 

I already committed a fix (using a static Thread subclass, and nulling out the 
heavy stuff after the thread is done), and it seemed to work around the issue 
in my testing ... however, I don't re-use (pool) the threads.
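
Roughly the shape of that workaround, as a sketch only (the names and sizes are 
placeholders, not the committed BasePostingsFormatTestCase code):

{code:java}
// Sketch only; the committed test code may differ. The subclass is static (no
// implicit reference to an enclosing instance) and nulls its heavy state when
// run() finishes, so a framework that retains the Thread object does not also
// retain the large per-thread data.
public class NullOutAfterRunSketch {

  static class TestThread extends Thread {
    private byte[] heavyState = new byte[16 * 1024 * 1024]; // placeholder for the big stuff

    @Override
    public void run() {
      try {
        long checksum = 0;                       // stand-in for the real per-thread work
        for (byte b : heavyState) checksum += b;
      } finally {
        heavyState = null; // even if this Thread instance is kept around, the data is gone
      }
    }
  }

  public static void main(String[] args) throws Exception {
    TestThread t = new TestThread();
    t.start();
    t.join();
  }
}
{code}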

 Memory leak per unique thread caused by RandomizedContext.contexts static map
 -

 Key: LUCENE-4451
 URL: https://issues.apache.org/jira/browse/LUCENE-4451
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Dawid Weiss

 In digging on the hard-to-understand OOMEs with
 TestDirectPostingsFormat ... I found (thank you YourKit) that
 RandomizedContext (in randomizedtesting JAR) seems to be holding onto
 all threads created by the test.  The test does create many very short
 lived threads (testing the thread safety of the postings format), in
 BasePostingsFormatTestCase.testTerms), and somehow these seem to tie
 up a lot (~100 MB) of RAM in RandomizedContext.contexts static map.
 For now I've disabled all thread testing (committed {{false }} inside
 {{BPFTC.testTerms}}), but hopefully we can fix the root cause here, eg
 when a thread exits can we clear it from that map?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4451) Memory leak per unique thread caused by RandomizedContext.contexts static map

2012-10-01 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466732#comment-13466732
 ] 

Dawid Weiss commented on LUCENE-4451:
-

It'll still collect references to all these threads (and whatever they may be 
holding onto), so eventually it'll OOM if you create a really large number of 
them. I'll push the fix above in the next release; holding on to Thread 
instances seems to be doing more evil than good.
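
One possible shape of such a fix, sketched under the assumption that the 
per-thread random keeps only a weak reference to its owning thread (this is 
illustrative, not the code behind issue #127):

{code:java}
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;

// Illustrative only: if the per-thread random holds just a WeakReference to its
// owning thread, the Thread -> context -> random -> Thread cycle has no hard
// edge, so the WeakHashMap entry can be reclaimed once the thread dies.
public class WeakOwnerSketch {

  static final class AssertingRandom {
    private final WeakReference<Thread> owner;
    private final java.util.Random delegate = new java.util.Random();

    AssertingRandom(Thread owner) { this.owner = new WeakReference<>(owner); }

    int nextInt() {
      Thread t = owner.get();
      if (t != null && t != Thread.currentThread()) {
        throw new IllegalStateException("Random used outside its owning thread");
      }
      return delegate.nextInt();
    }
  }

  public static void main(String[] args) throws Exception {
    Map<Thread, AssertingRandom> contexts = new WeakHashMap<>();
    Thread t = new Thread();
    contexts.put(t, new AssertingRandom(t));
    t.start();
    t.join();
    t = null;           // drop the last hard reference to the dead thread
    System.gc();
    Thread.sleep(100);  // best effort: give GC a chance to clear the weak key
    System.out.println("entries left: " + contexts.size()); // typically 0 now
  }
}
{code}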

 Memory leak per unique thread caused by RandomizedContext.contexts static map
 -

 Key: LUCENE-4451
 URL: https://issues.apache.org/jira/browse/LUCENE-4451
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Dawid Weiss

 In digging on the hard-to-understand OOMEs with
 TestDirectPostingsFormat ... I found (thank you YourKit) that
 RandomizedContext (in randomizedtesting JAR) seems to be holding onto
 all threads created by the test.  The test does create many very short
 lived threads (testing the thread safety of the postings format), in
 BasePostingsFormatTestCase.testTerms), and somehow these seem to tie
 up a lot (~100 MB) of RAM in RandomizedContext.contexts static map.
 For now I've disabled all thread testing (committed {{false }} inside
 {{BPFTC.testTerms}}), but hopefully we can fix the root cause here, eg
 when a thread exits can we clear it from that map?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3908) I have a solr issue when i run it on tomcate

2012-10-01 Thread bhavesh jogi (JIRA)
bhavesh jogi created SOLR-3908:
--

 Summary: I have a solr issue when i run it on tomcate
 Key: SOLR-3908
 URL: https://issues.apache.org/jira/browse/SOLR-3908
 Project: Solr
  Issue Type: Bug
Reporter: bhavesh jogi


Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler [http-apr-8082]
Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler [ajp-apr-8009]
Oct 1, 2012 6:04:48 PM org.apache.catalina.core.StandardService stopInternal
INFO: Stopping service Catalina
Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@da1515
Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
INFO: closing DirectUpdateHandler2{commits=6,autocommit 
maxDocs=1,autocommit 
maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
INFO: closed DirectUpdateHandler2{commits=6,autocommit maxDocs=1,autocommit 
maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore closeSearcher
INFO: [] Closing main searcher on request.
Oct 1, 2012 6:04:48 PM org.apache.solr.search.SolrIndexSearcher close
INFO: Closing Searcher@1b0d2d0 main

fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}

filterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}

queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}

documentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
clearReferencesJdbc
SEVERE: The web application [/Solr_Search] registered the JDBC driver 
[com.mysql.jdbc.Driver] but failed to unregister it when the web application 
was stopped. To prevent a memory leak, the JDBC Driver has been forcibly 
unregistered.
Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
clearReferencesThreads
SEVERE: The web application [/Solr_Search] appears to have started a thread 
named [MySQL Statement Cancellation Timer] but has failed to stop it. This is 
very likely to create a memory leak.
Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
clearReferencesThreads
SEVERE: The web application [/Solr_Search] appears to have started a thread 
named [MultiThreadedHttpConnectionManager cleanup] but has failed to stop it. 
This is very likely to create a memory leak.
Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@41a12f
Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
INFO: Failed to unregister mbean: org.apache.solr.highlight.RegexFragmenter 
because it was not registered
Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
INFO: Failed to unregister mbean: /admin/plugins because it was not registered
Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
INFO: Failed to unregister mbean: /admin/system because it was not registered
Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
INFO: Failed to unregister mbean: queryResultCache because it was not registered
Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
INFO: Failed to unregister mbean: 
org.apache.solr.highlight.BreakIteratorBoundaryScanner because it was not 
registered
Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
INFO: Failed to unregister mbean: org.apache.solr.highlight.HtmlFormatter 
because it was not registered
Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
INFO: Failed to unregister mbean: org.apache.solr.highlight.GapFragmenter 
because it was not registered
Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
INFO: Failed to unregister mbean: /admin/file because it was not registered
Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
INFO: 

[jira] [Updated] (SOLR-3908) I have a solr issue when i run it on tomcate

2012-10-01 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-3908:


Labels:   (was: the entire Tomcat/Solr shutdown log from the description, split into individual label words)

 I have a solr issue when i run it on tomcate
 

 Key: SOLR-3908
 URL: https://issues.apache.org/jira/browse/SOLR-3908
 Project: Solr
  Issue Type: Bug
Reporter: bhavesh jogi

 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler [http-apr-8082]
 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler 

[jira] [Commented] (SOLR-3908) I have a solr issue when i run it on tomcate

2012-10-01 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466768#comment-13466768
 ] 

Uwe Schindler commented on SOLR-3908:
-

What is the problem?

 I have a solr issue when i run it on tomcate
 

 Key: SOLR-3908
 URL: https://issues.apache.org/jira/browse/SOLR-3908
 Project: Solr
  Issue Type: Bug
Reporter: bhavesh jogi

 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler [http-apr-8082]
 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler [ajp-apr-8009]
 Oct 1, 2012 6:04:48 PM org.apache.catalina.core.StandardService stopInternal
 INFO: Stopping service Catalina
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
 INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@da1515
 Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
 INFO: closing DirectUpdateHandler2{commits=6,autocommit 
 maxDocs=1,autocommit 
 maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
 Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
 INFO: closed DirectUpdateHandler2{commits=6,autocommit 
 maxDocs=1,autocommit 
 maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore closeSearcher
 INFO: [] Closing main searcher on request.
 Oct 1, 2012 6:04:48 PM org.apache.solr.search.SolrIndexSearcher close
 INFO: Closing Searcher@1b0d2d0 main
   
 fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 filterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 documentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesJdbc
 SEVERE: The web application [/Solr_Search] registered the JDBC driver 
 [com.mysql.jdbc.Driver] but failed to unregister it when the web application 
 was stopped. To prevent a memory leak, the JDBC Driver has been forcibly 
 unregistered.
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesThreads
 SEVERE: The web application [/Solr_Search] appears to have started a thread 
 named [MySQL Statement Cancellation Timer] but has failed to stop it. This is 
 very likely to create a memory leak.
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesThreads
 SEVERE: The web application [/Solr_Search] appears to have started a thread 
 named [MultiThreadedHttpConnectionManager cleanup] but has failed to stop it. 
 This is very likely to create a memory leak.
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
 INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@41a12f
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: org.apache.solr.highlight.RegexFragmenter 
 because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: /admin/plugins because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: /admin/system because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: queryResultCache because it was not 
 registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: 
 org.apache.solr.highlight.BreakIteratorBoundaryScanner because it was not 
 registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: org.apache.solr.highlight.HtmlFormatter 
 because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister 

Re: VOTE: release 4.0 (take two)

2012-10-01 Thread Robert Muir
NOTE: vote stays open until Tuesday (since it was over a weekend).

On Thu, Sep 27, 2012 at 3:15 PM, Robert Muir rcm...@gmail.com wrote:
 artifacts are here: http://s.apache.org/lusolr40rc1

 By the way, thanks for all the help improving smoketesting and
 packaging and so on. This will pay off in the future!

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3734) Forever loop in schema browser

2012-10-01 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) resolved SOLR-3734.
-

   Resolution: Fixed
Fix Version/s: 4.1

Committed revision 1392318. trunk
Committed revision 1392320. branch_4x

 Forever loop in schema browser
 --

 Key: SOLR-3734
 URL: https://issues.apache.org/jira/browse/SOLR-3734
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis, web gui
Reporter: Lance Norskog
Assignee: Stefan Matheis (steffkes)
 Fix For: 4.1

 Attachments: SOLR-3734.patch, SOLR-3734.patch, 
 SOLR-3734_schema_browser_blocks_solr_conf_dir.zip


 When I start Solr with the attached conf directory, and hit the Schema 
 Browser, the loading circle spins permanently. 
 I don't know if the problem is in the UI or in Solr. The UI does not display 
 the Ajax solr calls, and I don't have a debugging proxy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3637) The commit status of a core is allways as false at the core admin page

2012-10-01 Thread Stefan Matheis (steffkes) (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Matheis (steffkes) resolved SOLR-3637.
-

   Resolution: Fixed
Fix Version/s: 4.1

Committed revision 1392327. trunk
Committed revision 1392335. branch_4x

 The commit status of a core is allways as false at the core admin page 
 ---

 Key: SOLR-3637
 URL: https://issues.apache.org/jira/browse/SOLR-3637
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0-ALPHA
 Environment: Solaris11, Java7 and newly downloaded 4.0-Alpha (jetty)
Reporter: Uwe Reh
Assignee: Stefan Matheis (steffkes)
Priority: Trivial
  Labels: admin, gui
 Fix For: 4.1

 Attachments: SOLR-3637.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 Using the admin gui, the page 'Core Admin' (...solr/#/~cores/coreX) says 
 always that the selected core isn't optimized. The main page of the core's 
 submenu (solr/#/coreX) shows the correct state.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3908) I have a solr issue when i run it on tomcate

2012-10-01 Thread bhavesh jogi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466818#comment-13466818
 ] 

bhavesh jogi commented on SOLR-3908:


When I deploy my WAR file in Tomcat and run it in the browser, it gives me an 
error like this.

When I run it in Eclipse, it runs fine.

Can you help me figure out how to solve this?

 I have a solr issue when i run it on tomcate
 

 Key: SOLR-3908
 URL: https://issues.apache.org/jira/browse/SOLR-3908
 Project: Solr
  Issue Type: Bug
Reporter: bhavesh jogi

 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler [http-apr-8082]
 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler [ajp-apr-8009]
 Oct 1, 2012 6:04:48 PM org.apache.catalina.core.StandardService stopInternal
 INFO: Stopping service Catalina
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
 INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@da1515
 Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
 INFO: closing DirectUpdateHandler2{commits=6,autocommit 
 maxDocs=1,autocommit 
 maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
 Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
 INFO: closed DirectUpdateHandler2{commits=6,autocommit 
 maxDocs=1,autocommit 
 maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore closeSearcher
 INFO: [] Closing main searcher on request.
 Oct 1, 2012 6:04:48 PM org.apache.solr.search.SolrIndexSearcher close
 INFO: Closing Searcher@1b0d2d0 main
   
 fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 filterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 documentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesJdbc
 SEVERE: The web application [/Solr_Search] registered the JDBC driver 
 [com.mysql.jdbc.Driver] but failed to unregister it when the web application 
 was stopped. To prevent a memory leak, the JDBC Driver has been forcibly 
 unregistered.
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesThreads
 SEVERE: The web application [/Solr_Search] appears to have started a thread 
 named [MySQL Statement Cancellation Timer] but has failed to stop it. This is 
 very likely to create a memory leak.
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesThreads
 SEVERE: The web application [/Solr_Search] appears to have started a thread 
 named [MultiThreadedHttpConnectionManager cleanup] but has failed to stop it. 
 This is very likely to create a memory leak.
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
 INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@41a12f
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: org.apache.solr.highlight.RegexFragmenter 
 because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: /admin/plugins because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: /admin/system because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: queryResultCache because it was not 
 registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: 
 org.apache.solr.highlight.BreakIteratorBoundaryScanner because it was not 
 registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: 

[jira] [Updated] (SOLR-3861) Refactor SolrCoreState so that it's managed by SolrCore .

2012-10-01 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3861:
--

Affects Version/s: (was: 4.0-BETA)
   (was: 4.0-ALPHA)
  Summary: Refactor SolrCoreState so that it's managed by SolrCore 
.  (was: regresion of SOLR-2008 - updateHandler should be closed before 
searcherExecutor)

 Refactor SolrCoreState so that it's managed by SolrCore .
 -

 Key: SOLR-3861
 URL: https://issues.apache.org/jira/browse/SOLR-3861
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Mark Miller
Priority: Blocker
 Fix For: 4.1, 5.0

 Attachments: SOLR-3861.patch, SOLR-3861.patch, SOLR-3861.patch


 SOLR-2008 fixed a possible RejectedExecutionException by ensuring that 
 SolrCore closed the updateHandler before the searcherExecutor.
 [~markrmil...@gmail.com] re-flipped this logic in r1159378, which is 
 annotated as fixing both SOLR-2654 and SOLR-2654 (dup typo i guess) but it's 
 not clear why - pretty sure this means that the risk of a Rejected exception 
 is back in 4.0-BETA...
 https://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrCore.java?r1=1146905&r2=1159378
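
A minimal, self-contained illustration of the failure mode described above, 
using plain java.util.concurrent rather than Solr code: shutting the executor 
down before the component that still hands it work produces the 
RejectedExecutionException.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

// Minimal illustration (not Solr code): if the executor is shut down before the
// component that still submits work to it, the late submission is rejected.
public class CloseOrderSketch {
  public static void main(String[] args) {
    ExecutorService searcherExecutor = Executors.newSingleThreadExecutor();

    // Wrong order: stop the executor first, then let the "update handler" try to use it.
    searcherExecutor.shutdown();
    try {
      searcherExecutor.execute(new Runnable() {
        public void run() { System.out.println("warm a new searcher"); }
      });
    } catch (RejectedExecutionException e) {
      System.out.println("rejected, as described in SOLR-2008: " + e);
    }
    // Right order: close whatever submits work first, then shut the executor down.
  }
}
{code}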

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3861) Refactor SolrCoreState so that it's managed by SolrCore .

2012-10-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466832#comment-13466832
 ] 

Mark Miller commented on SOLR-3861:
---

I'll commit this refactor shortly.

 Refactor SolrCoreState so that it's managed by SolrCore .
 -

 Key: SOLR-3861
 URL: https://issues.apache.org/jira/browse/SOLR-3861
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Mark Miller
Priority: Blocker
 Fix For: 4.1, 5.0

 Attachments: SOLR-3861.patch, SOLR-3861.patch, SOLR-3861.patch


 SOLR-2008 fixed a possible RejectedExecutionException by ensuring that 
 SolrCore closed the updateHandler before the searcherExecutor.
 [~markrmil...@gmail.com] re-flipped this logic in r1159378, which is 
 annotated as fixing both SOLR-2654 and SOLR-2654 (dup typo i guess) but it's 
 not clear why - pretty sure this means that the risk of a Rejected exception 
 is back in 4.0-BETA...
 https://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrCore.java?r1=1146905&r2=1159378

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4452) Need to test BlockPostings when payloads/offsets are indexed, but DPEnum flags=0

2012-10-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-4452.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.1

I added a test for this.

 Need to test BlockPostings when payloads/offsets are indexed, but DPEnum 
 flags=0
 -

 Key: LUCENE-4452
 URL: https://issues.apache.org/jira/browse/LUCENE-4452
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: core/codecs
Reporter: Robert Muir
 Fix For: 4.1, 5.0


 In this case we get a BlockDocsAndPositionsEnum just reading positions and 
 ignoring the stuff in the .pay: but this is untested.
 see BlockDocsAndPositionsEnum.refillPositions in 
 https://builds.apache.org/job/Lucene-Solr-Clover-4.x/34/clover-report/org/apache/lucene/codecs/block/BlockPostingsReader.html#BlockPostingsReader

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3861) Refactor SolrCoreState so that it's managed by SolrCore .

2012-10-01 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3861.
---

Resolution: Fixed

 Refactor SolrCoreState so that it's managed by SolrCore .
 -

 Key: SOLR-3861
 URL: https://issues.apache.org/jira/browse/SOLR-3861
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Mark Miller
Priority: Blocker
 Fix For: 4.1, 5.0

 Attachments: SOLR-3861.patch, SOLR-3861.patch, SOLR-3861.patch


 SOLR-2008 fixed a possible RejectedExecutionException by ensuring that 
 SolrCore closed the updateHandler before the searcherExecutor.
 [~markrmil...@gmail.com] re-flipped this logic in r1159378, which is 
 annotated as fixing both SOLR-2654 and SOLR-2654 (dup typo i guess) but it's 
 not clear why - pretty sure this means that the risk of a Rejected exception 
 is back in 4.0-BETA...
 https://svn.apache.org/viewvc/lucene/dev/trunk/solr/core/src/java/org/apache/solr/core/SolrCore.java?r1=1146905&r2=1159378

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3906) Add support for AnalyzingSuggester / coerce it to work for Japanese

2012-10-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-3906:
--

Attachment: SOLR-3906_notestsyet.patch

 Add support for AnalyzingSuggester / coerce it to work for Japanese 
 

 Key: SOLR-3906
 URL: https://issues.apache.org/jira/browse/SOLR-3906
 Project: Solr
  Issue Type: New Feature
  Components: spellchecker
Reporter: Robert Muir
 Attachments: SOLR-3906_notestsyet.patch


 We should add a factory for this to solr, and try to add a test/example using 
 JapaneseReadingFormFilter, to see if we can at least get some basic 
 auto-suggest working for this language.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3897) Preserve multi-value fields during hit highlighting

2012-10-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-3897:
-

Attachment: SOLR-3897.patch

Added test case.

 Preserve multi-value fields during hit highlighting
 ---

 Key: SOLR-3897
 URL: https://issues.apache.org/jira/browse/SOLR-3897
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Affects Versions: 4.0-BETA
Reporter: Joel Bernstein
Priority: Critical
 Fix For: 4.0-BETA

 Attachments: SOLR-3897.patch, SOLR-3897.patch


 The behavior of the default Solr hit highlighter on multi-value fields is to 
 only return the values that have a hit and sort them by score.
 This ticket supplies a patch that adds a new highlight parameter called 
 preserveMulti which can be used on a field-by-field basis to return all of 
 the values in their original order. If this parameter is used, the values 
 that have a hit are highlighted and the ones that do not contain a hit are 
 returned un-highlighted.
 The preserveMulti parameter works with the default standard highlighter and 
 follows the standard highlighting conventions.
 Sample usage for a field called cat:
 f.cat.hl.preserveMulti=true
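
 For illustration only (assuming highlighting is already requested for the 
 field), the per-field parameter would ride along with the standard highlight 
 parameters, e.g. q=shoes&hl=true&hl.fl=cat&f.cat.hl.preserveMulti=true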

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4455) CheckIndex shows wrong segment size in 4.0 because SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for deletions is negated and results in wrong output

2012-10-01 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-4455:
-

 Summary: CheckIndex shows wrong segment size in 4.0 because 
SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for deletions 
is negated and results in wrong output
 Key: LUCENE-4455
 URL: https://issues.apache.org/jira/browse/LUCENE-4455
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0-BETA
Reporter: Uwe Schindler
Priority: Blocker
 Fix For: 4.0


I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 and 
3.6.1:
- The segment size is twice as big as reported by ls -lh. The reason is that 
SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems to be 
serious (it is just statistics), because MergePolicy chooses merges because of 
this. On the other hand if all segments are twice as big it should not affect 
merging behaviour (unless absolute sizes in megabytes are used). So we should 
really fix this - sorry for investigating this so late!
- The deletions in the segments are inverted. Segments that have no deletions 
are reported as those *with deletions* but delGen=-1, and those with deletions 
show no deletions, this is not serious, but should be fixed, too.

There is one bug in sizeInBytes (which we should NOT fix), is that for 3.x 
indexes, if they are from 3.0 and have shared doc stores they are 
overestimated. But that's fine. For this case, the index was a 3.6.1 segment 
and a 4.0 segment, both showed double size.
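
As a generic illustration of the arithmetic only (hypothetical names, not the 
SegmentInfoPerCommit code): summing file lengths over two overlapping file 
lists counts the shared files twice, while summing over the set of unique file 
names matches what ls -lh reports.

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Generic illustration with made-up names (not the SegmentInfoPerCommit code):
// summing over two file lists that share entries counts those files twice;
// summing over the set of unique file names gives the real on-disk size.
public class SizeInBytesSketch {

  static long doubleCountedSize(Map<String, Long> lengths,
                                Iterable<String> listA, Iterable<String> listB) {
    long sum = 0;
    for (String f : listA) sum += lengths.get(f);
    for (String f : listB) sum += lengths.get(f); // files present in both lists are added again
    return sum;
  }

  static long dedupedSize(Map<String, Long> lengths,
                          Iterable<String> listA, Iterable<String> listB) {
    Set<String> unique = new HashSet<>();
    for (String f : listA) unique.add(f);
    for (String f : listB) unique.add(f);
    long sum = 0;
    for (String f : unique) sum += lengths.get(f);
    return sum;
  }

  public static void main(String[] args) {
    Map<String, Long> lengths = new HashMap<>();
    lengths.put("_0.fdt", 100L);
    lengths.put("_0.tim", 50L);
    Set<String> a = lengths.keySet();
    Set<String> b = new HashSet<>(Arrays.asList("_0.fdt", "_0.tim")); // overlaps entirely
    System.out.println(doubleCountedSize(lengths, a, b)); // 300 -- twice the real size
    System.out.println(dedupedSize(lengths, a, b));       // 150 -- matches ls -lh
  }
}
{code}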


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-4455) CheckIndex shows wrong segment size in 4.0 because SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for deletions is negated and results in wrong outpu

2012-10-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-4455:
--

Assignee: Michael McCandless

 CheckIndex shows wrong segment size in 4.0 because 
 SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for 
 deletions is negated and results in wrong output
 -

 Key: LUCENE-4455
 URL: https://issues.apache.org/jira/browse/LUCENE-4455
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0-BETA
Reporter: Uwe Schindler
Assignee: Michael McCandless
Priority: Blocker
 Fix For: 4.0


 I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 
 and 3.6.1:
 - The segment size is twice as big as reported by ls -lh. The reason is 
 that SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems 
 to be serious (it is just statistics), because MergePolicy chooses merges 
 because of this. On the other hand if all segments are twice as big it should 
 not affect merging behaviour (unless absolute sizes in megabytes are used). 
 So we should really fix this - sorry for investigating this so late!
 - The deletions in the segments are inverted. Segments that have no 
 deletions are reported as those *with deletions* but delGen=-1, and those 
 with deletions show no deletions, this is not serious, but should be fixed, 
 too.
 There is one bug in sizeInBytes (which we should NOT fix), is that for 3.x 
 indexes, if they are from 3.0 and have shared doc stores they are 
 overestimated. But that's fine. For this case, the index was a 3.6.1 segment 
 and a 4.0 segment, both showed double size.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4455) CheckIndex shows wrong segment size in 4.0 because SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for deletions is negated and results in wrong output

2012-10-01 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-4455:
--

Description: 
I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 and 
3.6.1:
- The segment size is twice as big as reported by ls -lh. The reason is that 
SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems to be 
not so serious (it is just statistics), *but*: MergePolicy chooses merges 
because of this. On the other hand if all segments are twice as big it should 
not affect merging behaviour (unless absolute sizes in megabytes are used). So 
we should really fix this - sorry for investigating this so late!
- The deletions in the segments are inverted. Segments that have no deletions 
are reported as those *with deletions* but delGen=-1, and those with deletions 
show no deletions. This is not serious, but should be fixed, too.

There is one bug in sizeInBytes (which we should NOT fix): for 3.x 
indexes, if they are from 3.0 and have shared doc stores, they are 
overestimated. But that's fine. For this case, the index contained a 3.6.1 
segment and a 4.0 segment, and both showed double size.


  was:
I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 and 
3.6.1:
- The segment size is twice as big as reported by ls -lh. The reason is that 
SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems to be 
serious (it is just statistics), because MergePolicy chooses merges because of 
this. On the other hand if all segments are twice as big it should not affect 
merging behaviour (unless absolute sizes in megabytes are used). So we should 
really fix this - sorry for investigating this so late!
- The deletions in the segments are inverted. Segments that have no deletions 
are reported as those *with deletions* but delGen=-1, and those with deletions 
show no deletions. This is not serious, but should be fixed, too.

There is one bug in sizeInBytes (which we should NOT fix): for 3.x 
indexes, if they are from 3.0 and have shared doc stores, they are 
overestimated. But that's fine. For this case, the index contained a 3.6.1 
segment and a 4.0 segment, and both showed double size.



 CheckIndex shows wrong segment size in 4.0 because 
 SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for 
 deletions is negated and results in wrong output
 -

 Key: LUCENE-4455
 URL: https://issues.apache.org/jira/browse/LUCENE-4455
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0-BETA
Reporter: Uwe Schindler
Assignee: Michael McCandless
Priority: Blocker
 Fix For: 4.0


 I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 
 and 3.6.1:
 - The segment size is twice as big as reported by ls -lh. The reason is 
 that SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems 
 to be not so serious (it is just statistics), *but*: MergePolicy chooses 
 merges because of this. On the other hand if all segments are twice as big it 
 should not affect merging behaviour (unless absolute sizes in megabytes are 
 used). So we should really fix this - sorry for investigating this so late!
 - The deletions in the segments are inverted. Segments that have no 
 deletions are reported as those *with deletions* but delGen=-1, and those 
 with deletions show no deletions. This is not serious, but should be fixed, 
 too.
 There is one bug in sizeInBytes (which we should NOT fix): for 3.x 
 indexes, if they are from 3.0 and have shared doc stores, they are 
 overestimated. But that's fine. For this case, the index contained a 3.6.1 
 segment and a 4.0 segment, and both showed double size.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
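
For the other half of the report, the negated deletions check, here is a 
minimal sketch of the kind of inversion described (assumed field and method 
names, not the actual CheckIndex/SegmentInfoPerCommit code); the convention 
assumed here is that delGen == -1 means "no deletions", so the predicate must 
test for != -1.

// Assumed names; illustrates the inverted check only.
class DeletionsCheckSketch {
  long delGen = -1;              // assumption: -1 marks "no deletion generation"

  boolean hasDeletionsBuggy() {  // inverted: claims deletions exactly when there are none
    return delGen == -1;
  }

  boolean hasDeletionsFixed() {  // reports deletions only when a deletion generation exists
    return delGen != -1;
  }
}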



[jira] [Commented] (SOLR-2305) DataImportScheduler - Marko Bonaci

2012-10-01 Thread Billy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13466941#comment-13466941
 ] 

Billy commented on SOLR-2305:
-

Are there still plans to add this to the version 4 distro?  IMHO, I see 
great benefit in adding this; please consider it.  Thanks!


 DataImportScheduler -  Marko Bonaci
 ---

 Key: SOLR-2305
 URL: https://issues.apache.org/jira/browse/SOLR-2305
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.0-ALPHA
Reporter: Bill Bell
 Fix For: 4.1

 Attachments: patch.txt, SOLR-2305-1.diff


 Marko Bonaci has updated the WIKI page to add the DataImportScheduler, but I 
 cannot find a JIRA ticket for it?
 http://wiki.apache.org/solr/DataImportHandler
 Do we have a ticket so the code can be tracked?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-2305) DataImportScheduler - Marko Bonaci

2012-10-01 Thread Billy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13466941#comment-13466941
 ] 

Billy edited comment on SOLR-2305 at 10/2/12 4:02 AM:
--

Are there still plans to add this to the version 4 distro?  I don't see it 
in the 4.0.0-BETA distro yet. IMHO, I see great benefit in adding this; please 
consider it.  Thanks!


  was (Author: newmanw10):
Are there still plans to add this to the version 4 distro?  IMHO, I see 
great benefit in adding this; please consider it.  Thanks!

  
 DataImportScheduler -  Marko Bonaci
 ---

 Key: SOLR-2305
 URL: https://issues.apache.org/jira/browse/SOLR-2305
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.0-ALPHA
Reporter: Bill Bell
 Fix For: 4.1

 Attachments: patch.txt, SOLR-2305-1.diff


 Marko Bonaci has updated the WIKI page to add the DataImportScheduler, but I 
 cannot find a JIRA ticket for it?
 http://wiki.apache.org/solr/DataImportHandler
 Do we have a ticket so the code can be tracked?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4455) CheckIndex shows wrong segment size in 4.0 because SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for deletions is negated and results in wrong output

2012-10-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13466944#comment-13466944
 ] 

Robert Muir commented on LUCENE-4455:
-

Thanks Uwe for finding this!

 CheckIndex shows wrong segment size in 4.0 because 
 SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for 
 deletions is negated and results in wrong output
 -

 Key: LUCENE-4455
 URL: https://issues.apache.org/jira/browse/LUCENE-4455
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0-BETA
Reporter: Uwe Schindler
Assignee: Michael McCandless
Priority: Blocker
 Fix For: 4.0


 I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 
 and 3.6.1:
 - The segment size is twice as big as reported by ls -lh. The reason is 
 that SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems 
 to be not so serious (it is just statistics), *but*: MergePolicy chooses 
 merges because of this. On the other hand if all segments are twice as big it 
 should not affect merging behaviour (unless absolute sizes in megabytes are 
 used). So we should really fix this - sorry for investigating this so late!
 - The deletions in the segments are inverted. Segments that have no 
 deletions are reported as those *with deletions* but delGen=-1, and those 
 with deletions show no deletions. This is not serious, but should be fixed, 
 too.
 There is one bug in sizeInBytes (which we should NOT fix): for 3.x 
 indexes, if they are from 3.0 and have shared doc stores, they are 
 overestimated. But that's fine. For this case, the index contained a 3.6.1 
 segment and a 4.0 segment, and both showed double size.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: VOTE: release 4.0 (take two)

2012-10-01 Thread Uwe Schindler
Hi,

-1 to release those artifacts!

First the good thing: I ran the smoketester on an Ubuntu 12.04.1 Server with 2 
older Opteron CPUs (NUMA); it passed perfectly (1.6.0_33 and 1.7.0_07). I also 
inspected the artifacts and had a look at the JavaDocs. I found some typos in 
the READMEs, but nothing very problematic. I had no real Lucene 4.0 application 
ready to test with (I am still in the process of upgrading PANGAEA), but so far 
so good.

But I found a serious bug when comparing the output of checkindex on the same 
index in 4.0 and 3.6.1 (I used a 7 GiB PANGAEA index): 
https://issues.apache.org/jira/browse/LUCENE-4455; the deletions check is 
inverted (a segment with no deletions is reported to have them) and the size of 
the segments is doubled, causing MergePolicy to behave wrongly (depending on 
settings).

In any case: many thanks to Robert for making the release - whether we must 
respin depends on you!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, September 27, 2012 9:16 PM
 To: dev@lucene.apache.org
 Subject: VOTE: release 4.0 (take two)
 
 artifacts are here: http://s.apache.org/lusolr40rc1
 
 By the way, thanks for all the help improving smoketesting and packaging and
 so on. This will pay off in the future!
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: release 4.0 (take two)

2012-10-01 Thread Robert Muir
Let's fix this stuff. I don't like this sizeInBytes double-counting!

On Mon, Oct 1, 2012 at 1:05 PM, Uwe Schindler u...@thetaphi.de wrote:
 Hi,

 -1 to release those artifacts!

 First the good thing: I ran the smoketester on an Ubuntu 12.04.1 Server with 2 
 older Opteron CPUs (NUMA); it passed perfectly (1.6.0_33 and 1.7.0_07). I also 
 inspected the artifacts and had a look at the JavaDocs. I found some typos in 
 the READMEs, but nothing very problematic. I had no real Lucene 4.0 
 application ready to test with (I am still in the process of upgrading 
 PANGAEA), but so far so good.

 But I found a serious bug when comparing the output of checkindex on the same 
 index in 4.0 and 3.6.1 (I used a 7 GiB PANGAEA index): 
 https://issues.apache.org/jira/browse/LUCENE-4455; the deletions check is 
 inverted (a segment with no deletions is reported to have them) and the size 
 of the segments is doubled, causing MergePolicy to behave wrongly (depending 
 on settings).

 In any case: many thanks to Robert for making the release - whether we must 
 respin depends on you!

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, September 27, 2012 9:16 PM
 To: dev@lucene.apache.org
 Subject: VOTE: release 4.0 (take two)

 artifacts are here: http://s.apache.org/lusolr40rc1

 By the way, thanks for all the help improving smoketesting and packaging and
 so on. This will pay off in the future!

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-1875) per-segment single valued string faceting

2012-10-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-1875.


   Resolution: Fixed
Fix Version/s: (was: 4.1)
   4.0-ALPHA
   4.0-BETA
   5.0
   4.0

 per-segment single valued string faceting
 -

 Key: SOLR-1875
 URL: https://issues.apache.org/jira/browse/SOLR-1875
 Project: Solr
  Issue Type: New Feature
Reporter: Yonik Seeley
Assignee: Yonik Seeley
 Fix For: 4.0, 5.0, 4.0-BETA, 4.0-ALPHA

 Attachments: ASF.LICENSE.NOT.GRANTED--SOLR-1875.patch, 
 ASF.LICENSE.NOT.GRANTED--SOLR-1875.patch


 A little stepping stone to NRT:
 Per-segment single-valued string faceting using the Lucene FieldCache.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (SOLR-3908) I have a Solr issue when I run it on Tomcat

2012-10-01 Thread Erick Erickson
Have you followed the instructions on this page?
http://wiki.apache.org/solr/SolrTomcat

Erick

On Mon, Oct 1, 2012 at 10:08 AM, bhavesh jogi (JIRA) j...@apache.org wrote:

 [ 
 https://issues.apache.org/jira/browse/SOLR-3908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13466818#comment-13466818
  ]

 bhavesh jogi commented on SOLR-3908:
 

 When I deploy my WAR file in Tomcat and run it in the browser, it gives me an 
 error like this.

 When I run it in Eclipse, it runs fine.

 Can you help me solve this?

 I have a Solr issue when I run it on Tomcat
 

 Key: SOLR-3908
 URL: https://issues.apache.org/jira/browse/SOLR-3908
 Project: Solr
  Issue Type: Bug
Reporter: bhavesh jogi

 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler [http-apr-8082]
 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler [ajp-apr-8009]
 Oct 1, 2012 6:04:48 PM org.apache.catalina.core.StandardService stopInternal
 INFO: Stopping service Catalina
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
 INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@da1515
 Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
 INFO: closing DirectUpdateHandler2{commits=6,autocommit 
 maxDocs=1,autocommit 
 maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
 Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
 INFO: closed DirectUpdateHandler2{commits=6,autocommit 
 maxDocs=1,autocommit 
 maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore closeSearcher
 INFO: [] Closing main searcher on request.
 Oct 1, 2012 6:04:48 PM org.apache.solr.search.SolrIndexSearcher close
 INFO: Closing Searcher@1b0d2d0 main
   
 fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 filterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 documentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesJdbc
 SEVERE: The web application [/Solr_Search] registered the JDBC driver 
 [com.mysql.jdbc.Driver] but failed to unregister it when the web application 
 was stopped. To prevent a memory leak, the JDBC Driver has been forcibly 
 unregistered.
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesThreads
 SEVERE: The web application [/Solr_Search] appears to have started a thread 
 named [MySQL Statement Cancellation Timer] but has failed to stop it. This 
 is very likely to create a memory leak.
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesThreads
 SEVERE: The web application [/Solr_Search] appears to have started a thread 
 named [MultiThreadedHttpConnectionManager cleanup] but has failed to stop 
 it. This is very likely to create a memory leak.
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
 INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@41a12f
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: org.apache.solr.highlight.RegexFragmenter 
 because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: /admin/plugins because it was not 
 registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: /admin/system because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: queryResultCache because it was not 
 registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: 
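
One common way to address the "registered the JDBC driver ... but failed to 
unregister it" warning in the log above is a ServletContextListener in the web 
application that deregisters JDBC drivers at shutdown. A minimal sketch follows 
(the listener is an assumed addition to the reporter's webapp, not anything 
shipped with Solr):

import java.sql.*;
import java.util.Enumeration;
import javax.servlet.*;

// Deregisters JDBC drivers registered in this webapp when the context stops,
// so Tomcat does not have to forcibly unregister them.
public class JdbcCleanupListener implements ServletContextListener {
  @Override
  public void contextInitialized(ServletContextEvent sce) {}

  @Override
  public void contextDestroyed(ServletContextEvent sce) {
    Enumeration<Driver> drivers = DriverManager.getDrivers();
    while (drivers.hasMoreElements()) {
      Driver d = drivers.nextElement();
      try {
        DriverManager.deregisterDriver(d); // avoids the "forcibly unregistered" leak warning
      } catch (SQLException ignored) {
      }
    }
  }
}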
 

[jira] [Updated] (LUCENE-4455) CheckIndex shows wrong segment size in 4.0 because SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for deletions is negated and results in wrong output

2012-10-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4455:
---

Attachment: LUCENE-4455.patch

Patch w/ tests + fixes.

 CheckIndex shows wrong segment size in 4.0 because 
 SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for 
 deletions is negated and results in wrong output
 -

 Key: LUCENE-4455
 URL: https://issues.apache.org/jira/browse/LUCENE-4455
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0-BETA
Reporter: Uwe Schindler
Assignee: Michael McCandless
Priority: Blocker
 Fix For: 4.0

 Attachments: LUCENE-4455.patch


 I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 
 and 3.6.1:
 - The segment size is twice as big as reported by ls -lh. The reason is 
 that SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems 
 to be not so serious (it is just statistics), *but*: MergePolicy chooses 
 merges because of this. On the other hand if all segments are twice as big it 
 should not affect merging behaviour (unless absolute sizes in megabytes are 
 used). So we should really fix this - sorry for investigating this so late!
 - The deletions in the segments are inverted. Segments that have no 
 deletions are reported as those *with deletions* but delGen=-1, and those 
 with deletions show no deletions. This is not serious, but should be fixed, 
 too.
 There is one bug in sizeInBytes (which we should NOT fix): for 3.x 
 indexes, if they are from 3.0 and have shared doc stores, they are 
 overestimated. But that's fine. For this case, the index contained a 3.6.1 
 segment and a 4.0 segment, and both showed double size.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4455) CheckIndex shows wrong segment size in 4.0 because SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for deletions is negated and results in wrong output

2012-10-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13466987#comment-13466987
 ] 

Robert Muir commented on LUCENE-4455:
-

+1

 CheckIndex shows wrong segment size in 4.0 because 
 SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for 
 deletions is negated and results in wrong output
 -

 Key: LUCENE-4455
 URL: https://issues.apache.org/jira/browse/LUCENE-4455
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0-BETA
Reporter: Uwe Schindler
Assignee: Michael McCandless
Priority: Blocker
 Fix For: 4.0

 Attachments: LUCENE-4455.patch


 I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 
 and 3.6.1:
 - The segment size is twice as big as reported by ls -lh. The reason is 
 that SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems 
 to be not so serious (it is just statistics), *but*: MergePolicy chooses 
 merges because of this. On the other hand if all segments are twice as big it 
 should not affect merging behaviour (unless absolute sizes in megabytes are 
 used). So we should really fix this - sorry for investigating this so late!
 - The deletions in the segments are inverted. Segments that have no 
 deletions are reported as those *with deletions* but delGen=-1, and those 
 with deletions show no deletions. This is not serious, but should be fixed, 
 too.
 There is one bug in sizeInBytes (which we should NOT fix): for 3.x 
 indexes, if they are from 3.0 and have shared doc stores, they are 
 overestimated. But that's fine. For this case, the index contained a 3.6.1 
 segment and a 4.0 segment, and both showed double size.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4455) CheckIndex shows wrong segment size in 4.0 because SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for deletions is negated and results in wrong output

2012-10-01 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13466992#comment-13466992
 ] 

Uwe Schindler commented on LUCENE-4455:
---

Thanks for fixing!

 CheckIndex shows wrong segment size in 4.0 because 
 SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for 
 deletions is negated and results in wrong output
 -

 Key: LUCENE-4455
 URL: https://issues.apache.org/jira/browse/LUCENE-4455
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0-BETA
Reporter: Uwe Schindler
Assignee: Michael McCandless
Priority: Blocker
 Fix For: 4.0

 Attachments: LUCENE-4455.patch


 I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 
 and 3.6.1:
 - The segment size is twice as big as reported by ls -lh. The reason is 
 that SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems 
 to be not so serious (it is just statistics), *but*: MergePolicy chooses 
 merges because of this. On the other hand if all segments are twice as big it 
 should not affect merging behaviour (unless absolute sizes in megabytes are 
 used). So we should really fix this - sorry for investigating this so late!
 - The deletions in the segments are inverted. Segments that have no 
 deletions are reported as those *with deletions* but delGen=-1, and those 
 with deletions show no deletions. This is not serious, but should be fixed, 
 too.
 There is one bug in sizeInBytes (which we should NOT fix): for 3.x 
 indexes, if they are from 3.0 and have shared doc stores, they are 
 overestimated. But that's fine. For this case, the index contained a 3.6.1 
 segment and a 4.0 segment, and both showed double size.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: release 4.0 (take two)

2012-10-01 Thread David Smiley (@MITRE.org)
Does this mean a re-spin?

I have a low-risk but high-impact (in terms of features) bug fix I would
like to get into 4.0:  https://issues.apache.org/jira/browse/LUCENE-
but I did not want to put the brakes on any release that was being voted on.

~ David 



-
 Author: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
--
View this message in context: 
http://lucene.472066.n3.nabble.com/VOTE-release-4-0-take-two-tp4010808p4011255.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: release 4.0 (take two)

2012-10-01 Thread Robert Muir
Patch looks fine: though it should have a CHANGES entry (since it was
introduced after the beta release from what I can tell)

The only other things i see listed as bugs are
https://issues.apache.org/jira/browse/SOLR-3637 and
https://issues.apache.org/jira/browse/SOLR-3560

If these things are safe and useful to fix in 4.0 (the three patches
look safe to me), then do it asap (but of course run the proper
tests).

Thanks

On Mon, Oct 1, 2012 at 1:46 PM, David Smiley (@MITRE.org)
dsmi...@mitre.org wrote:
 Does this mean a re-spin?

 I have a low-risk but high-impact (in terms of features) bug fix I would
 like to get into 4.0:  https://issues.apache.org/jira/browse/LUCENE-
 but I did not want to put the brakes on any release that was being voted on.

 ~ David



 -
  Author: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/VOTE-release-4-0-take-two-tp4010808p4011255.html
 Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: release 4.0 (take two)

2012-10-01 Thread Robert Muir
Sorry, I had this backwards: if it makes 4.0 it needs no CHANGES entry :)

If it doesn't, then it should have one for 4.1

On Mon, Oct 1, 2012 at 1:50 PM, Robert Muir rcm...@gmail.com wrote:
 Patch looks fine: though it should have a CHANGES entry (since it was
 introduced after the beta release from what I can tell)

 The only other things i see listed as bugs are
 https://issues.apache.org/jira/browse/SOLR-3637 and
 https://issues.apache.org/jira/browse/SOLR-3560

 If these things are safe and useful to fix in 4.0 (the three patches
 look safe to me), then do it asap (but of course run the proper
 tests).

 Thanks

 On Mon, Oct 1, 2012 at 1:46 PM, David Smiley (@MITRE.org)
 dsmi...@mitre.org wrote:
 Does this mean a re-spin?

 I have a low-risk but high-impact (in terms of features) bug fix I would
 like to get into 4.0:  https://issues.apache.org/jira/browse/LUCENE-
 but I did not want to put the brakes on any release that was being voted on.

 ~ David



 -
  Author: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/VOTE-release-4-0-take-two-tp4010808p4011255.html
 Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4455) CheckIndex shows wrong segment size in 4.0 because SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for deletions is negated and results in wrong output

2012-10-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-4455.


   Resolution: Fixed
Fix Version/s: 5.0

Thanks Uwe!  Keep testing :)

 CheckIndex shows wrong segment size in 4.0 because 
 SegmentInfoPerCommit.sizeInBytes counts every file 2 times; check for 
 deletions is negated and results in wrong output
 -

 Key: LUCENE-4455
 URL: https://issues.apache.org/jira/browse/LUCENE-4455
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.0-BETA
Reporter: Uwe Schindler
Assignee: Michael McCandless
Priority: Blocker
 Fix For: 5.0, 4.0

 Attachments: LUCENE-4455.patch


 I found this bug in 4.0-RC1 when I compared the checkindex outputs for 4.0 
 and 3.6.1:
 - The segment size is twice as big as reported by ls -lh. The reason is 
 that SegmentInfoPerCommit.sizeInBytes counts every file 2 times. This seems 
 to be not so serious (it is just statistics), *but*: MergePolicy chooses 
 merges because of this. On the other hand if all segments are twice as big it 
 should not affect merging behaviour (unless absolute sizes in megabytes are 
 used). So we should really fix this - sorry for investigating this so late!
 - The deletions in the segments are inverted. Segments that have no 
 deletions are reported as those *with deletions* but delGen=-1, and those 
 with deletions show no deletions. This is not serious, but should be fixed, 
 too.
 There is one bug in sizeInBytes (which we should NOT fix): for 3.x 
 indexes, if they are from 3.0 and have shared doc stores, they are 
 overestimated. But that's fine. For this case, the index contained a 3.6.1 
 segment and a 4.0 segment, and both showed double size.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3908) I have a Solr issue when I run it on Tomcat

2012-10-01 Thread Otis Gospodnetic (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otis Gospodnetic resolved SOLR-3908.


Resolution: Invalid

Please ask on the Solr user mailing list.

 I have a Solr issue when I run it on Tomcat
 

 Key: SOLR-3908
 URL: https://issues.apache.org/jira/browse/SOLR-3908
 Project: Solr
  Issue Type: Bug
Reporter: bhavesh jogi

 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler [http-apr-8082]
 Oct 1, 2012 6:04:48 PM org.apache.coyote.AbstractProtocol pause
 INFO: Pausing ProtocolHandler [ajp-apr-8009]
 Oct 1, 2012 6:04:48 PM org.apache.catalina.core.StandardService stopInternal
 INFO: Stopping service Catalina
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
 INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@da1515
 Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
 INFO: closing DirectUpdateHandler2{commits=6,autocommit 
 maxDocs=1,autocommit 
 maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
 Oct 1, 2012 6:04:48 PM org.apache.solr.update.DirectUpdateHandler2 close
 INFO: closed DirectUpdateHandler2{commits=6,autocommit 
 maxDocs=1,autocommit 
 maxTime=1000ms,autocommits=3,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=9,cumulative_deletesById=0,cumulative_deletesByQuery=3,cumulative_errors=0}
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore closeSearcher
 INFO: [] Closing main searcher on request.
 Oct 1, 2012 6:04:48 PM org.apache.solr.search.SolrIndexSearcher close
 INFO: Closing Searcher@1b0d2d0 main
   
 fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 filterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
   
 documentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesJdbc
 SEVERE: The web application [/Solr_Search] registered the JDBC driver 
 [com.mysql.jdbc.Driver] but failed to unregister it when the web application 
 was stopped. To prevent a memory leak, the JDBC Driver has been forcibly 
 unregistered.
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesThreads
 SEVERE: The web application [/Solr_Search] appears to have started a thread 
 named [MySQL Statement Cancellation Timer] but has failed to stop it. This is 
 very likely to create a memory leak.
 Oct 1, 2012 6:04:48 PM org.apache.catalina.loader.WebappClassLoader 
 clearReferencesThreads
 SEVERE: The web application [/Solr_Search] appears to have started a thread 
 named [MultiThreadedHttpConnectionManager cleanup] but has failed to stop it. 
 This is very likely to create a memory leak.
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.SolrCore close
 INFO: []  CLOSING SolrCore org.apache.solr.core.SolrCore@41a12f
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: org.apache.solr.highlight.RegexFragmenter 
 because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: /admin/plugins because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: /admin/system because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: queryResultCache because it was not 
 registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: 
 org.apache.solr.highlight.BreakIteratorBoundaryScanner because it was not 
 registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister mbean: org.apache.solr.highlight.HtmlFormatter 
 because it was not registered
 Oct 1, 2012 6:04:48 PM org.apache.solr.core.JmxMonitoredMap unregister
 INFO: Failed to unregister 

[jira] [Updated] (LUCENE-4451) Memory leak per unique thread caused by RandomizedContext.contexts static map

2012-10-01 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-4451:


Attachment: LUCENE-4451.patch

Patch against the trunk updating rr to 2.0.2. I tested very quickly and at 
least one seed that was failing with an OOM now passes. I commented out the 
GC helpers Mike added to make it even harder (and un-ignored 
TestDirectPostingsFormat).

Mike, could you take a look and maybe beast it a bit? It's getting late on my 
side -- feel free to commit if everything is all right.

 Memory leak per unique thread caused by RandomizedContext.contexts static map
 -

 Key: LUCENE-4451
 URL: https://issues.apache.org/jira/browse/LUCENE-4451
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Dawid Weiss
 Attachments: LUCENE-4451.patch


 In digging on the hard-to-understand OOMEs with
 TestDirectPostingsFormat ... I found (thank you YourKit) that
 RandomizedContext (in the randomizedtesting JAR) seems to be holding onto
 all threads created by the test.  The test does create many very
 short-lived threads (testing the thread safety of the postings format,
 in BasePostingsFormatTestCase.testTerms), and somehow these seem to tie
 up a lot (~100 MB) of RAM in the RandomizedContext.contexts static map.
 For now I've disabled all thread testing (committed {{false }} inside
 {{BPFTC.testTerms}}), but hopefully we can fix the root cause here, e.g.
 when a thread exits can we clear it from that map?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
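
A minimal sketch of the retention pattern this issue describes (class and 
field names are made up; the real runner is more involved): a static map that 
strongly references Thread keys keeps every dead test thread and its 
per-thread state reachable until the entry is removed explicitly or the map 
uses weak keys.

import java.util.*;

// Made-up names; illustrates the retention pattern only.
class ContextRegistrySketch {
  static final class PerThreadState { final byte[] buf = new byte[1 << 20]; }

  // leaks: dead Threads (keys) and their state (values) stay strongly reachable
  static final Map<Thread, PerThreadState> CONTEXTS = new HashMap<>();

  // one mitigation: weak keys let collected threads drop out of the map,
  // provided the value does not itself strongly reference the key thread
  static final Map<Thread, PerThreadState> WEAK_CONTEXTS =
      Collections.synchronizedMap(new WeakHashMap<Thread, PerThreadState>());

  static void register() {
    CONTEXTS.put(Thread.currentThread(), new PerThreadState());
  }

  static void unregister() {
    CONTEXTS.remove(Thread.currentThread()); // explicit cleanup on thread exit
  }
}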



[jira] [Updated] (LUCENE-4444) SpatialArgsParser should let the context parse the shape string

2012-10-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-:
-

Fix Version/s: (was: 4.1)
   4.0

I committed to 4.0 in r1392506  since 4.0 is being re-spun and this is fairly 
important.

http://lucene.472066.n3.nabble.com/VOTE-release-4-0-take-two-tp4010808p4011255.html

 SpatialArgsParser should let the context parse the shape string
 ---

 Key: LUCENE-
 URL: https://issues.apache.org/jira/browse/LUCENE-
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 4.0

 Attachments: 
 LUCENE-_Use_SpatialContext_to_read_shape_strings.patch


 SpatialArgsParser is not letting the SpatialContext read the shape string 
 (via readShape()); instead it's using new 
 SpatialArgsParser(ctx).readShape(...shapestring...).  For the standard 
 SpatialContext there is no difference.  But the JTS extension has its own 
 which parses WKT for polygon support, etc.
 Quick fix of course but this really sucks if 4.0 won't have the ability to 
 plug in alternative shapes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
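
A minimal sketch of the delegation being asked for (all names here are 
simplified stand-ins, not the real Lucene spatial or Spatial4j API): the 
parser hands the shape string to the context, so a JTS-aware context subclass 
can parse WKT polygons without any change to the parser.

// Simplified stand-ins, not the real spatial module classes.
interface Shape {}

abstract class SpatialContextSketch {
  // a JTS-style subclass could override this to accept WKT polygons, etc.
  abstract Shape readShape(String shapeString);
}

class SpatialArgsParserSketch {
  private final SpatialContextSketch ctx;

  SpatialArgsParserSketch(SpatialContextSketch ctx) {
    this.ctx = ctx;
  }

  Shape parseShape(String shapeString) {
    return ctx.readShape(shapeString); // delegate instead of parsing locally
  }
}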



Re: VOTE: release 4.0 (take two)

2012-10-01 Thread David Smiley (@MITRE.org)
I just got it committed to the 4.0 release branch, after I ran tests.

I didn't add a CHANGES.txt entry because this is basically a bug fix for an
existing entry that is already post-beta (SOLR-3304).

~ David



-
 Author: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
--
View this message in context: 
http://lucene.472066.n3.nabble.com/VOTE-release-4-0-take-two-tp4010808p4011273.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4451) Memory leak per unique thread caused by RandomizedContext.contexts static map

2012-10-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13467125#comment-13467125
 ] 

Michael McCandless commented on LUCENE-4451:


+1, seems to work great!  TestDirectPF -mult 3 -nightly quickly OOMEs on trunk 
if I comment out the GC helper nullings, but w/ the patch I ran for 24 iters 
before OOME (this test separately has OOME problems).  So this seems like good 
progress!

Thanks Dawid, I'll commit!

 Memory leak per unique thread caused by RandomizedContext.contexts static map
 -

 Key: LUCENE-4451
 URL: https://issues.apache.org/jira/browse/LUCENE-4451
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Dawid Weiss
 Attachments: LUCENE-4451.patch


 In digging on the hard-to-understand OOMEs with
 TestDirectPostingsFormat ... I found (thank you YourKit) that
 RandomizedContext (in the randomizedtesting JAR) seems to be holding onto
 all threads created by the test.  The test does create many very
 short-lived threads (testing the thread safety of the postings format,
 in BasePostingsFormatTestCase.testTerms), and somehow these seem to tie
 up a lot (~100 MB) of RAM in the RandomizedContext.contexts static map.
 For now I've disabled all thread testing (committed {{false }} inside
 {{BPFTC.testTerms}}), but hopefully we can fix the root cause here, e.g.
 when a thread exits can we clear it from that map?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4451) Memory leak per unique thread caused by RandomizedContext.contexts static map

2012-10-01 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-4451.


   Resolution: Fixed
Fix Version/s: 5.0
   4.1

Thanks Dawid!

 Memory leak per unique thread caused by RandomizedContext.contexts static map
 -

 Key: LUCENE-4451
 URL: https://issues.apache.org/jira/browse/LUCENE-4451
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Dawid Weiss
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4451.patch


 In digging on the hard-to-understand OOMEs with
 TestDirectPostingsFormat ... I found (thank you YourKit) that
 RandomizedContext (in the randomizedtesting JAR) seems to be holding onto
 all threads created by the test.  The test does create many very
 short-lived threads (testing the thread safety of the postings format,
 in BasePostingsFormatTestCase.testTerms), and somehow these seem to tie
 up a lot (~100 MB) of RAM in the RandomizedContext.contexts static map.
 For now I've disabled all thread testing (committed {{false }} inside
 {{BPFTC.testTerms}}), but hopefully we can fix the root cause here, e.g.
 when a thread exits can we clear it from that map?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4451) Memory leak per unique thread caused by RandomizedContext.contexts static map

2012-10-01 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13467129#comment-13467129
 ] 

Dawid Weiss commented on LUCENE-4451:
-

Thanks Mike. If you're looking into OOMs with YourKit then try to save a 
differential snapshot - this helps greatly in analysis typically. Also, keep 
those snapshots if you think something in the runner may be the cause (I have 
YourKit as well).

 Memory leak per unique thread caused by RandomizedContext.contexts static map
 -

 Key: LUCENE-4451
 URL: https://issues.apache.org/jira/browse/LUCENE-4451
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Dawid Weiss
 Fix For: 4.1, 5.0

 Attachments: LUCENE-4451.patch


 In digging on the hard-to-understand OOMEs with
 TestDirectPostingsFormat ... I found (thank you YourKit) that
 RandomizedContext (in the randomizedtesting JAR) seems to be holding onto
 all threads created by the test.  The test does create many very
 short-lived threads (testing the thread safety of the postings format,
 in BasePostingsFormatTestCase.testTerms), and somehow these seem to tie
 up a lot (~100 MB) of RAM in the RandomizedContext.contexts static map.
 For now I've disabled all thread testing (committed {{false }} inside
 {{BPFTC.testTerms}}), but hopefully we can fix the root cause here, e.g.
 when a thread exits can we clear it from that map?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-java7 - Build # 3264 - Failure

2012-10-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-java7/3264/

All tests passed

Build Log:
[...truncated 13420 lines...]
check-licenses:
 [echo] License check under: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-java7/lucene
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-java7/lucene/test-framework/lib/junit4-ant-2.0.2.jar
 [licenses] MISSING sha1 checksum file for: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-java7/lucene/test-framework/lib/randomizedtesting-runner-2.0.2.jar

[...truncated 1 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-java7/build.xml:73:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-java7/lucene/build.xml:156:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-java7/lucene/tools/custom-tasks.xml:44:
 License check failed. Check the logs.

Total time: 40 minutes 55 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
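
The failures above come from the license check missing sha1 checksum files for 
the upgraded 2.0.2 jars. A minimal sketch of producing such a file (assumption: 
the check expects a sibling "<jar-name>.sha1" containing the hex SHA-1 digest; 
the real build may use its own ant task and licenses/ layout):

import java.nio.file.*;
import java.security.MessageDigest;

// Writes <jar>.sha1 next to the given jar; assumed file layout, see note above.
class Sha1ChecksumSketch {
  public static void main(String[] args) throws Exception {
    Path jar = Paths.get(args[0]);  // e.g. junit4-ant-2.0.2.jar
    byte[] digest = MessageDigest.getInstance("SHA-1").digest(Files.readAllBytes(jar));
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b & 0xff)); // hex-encode the digest
    }
    Files.write(Paths.get(jar.toString() + ".sha1"), hex.toString().getBytes("US-ASCII"));
  }
}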

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_07) - Build # 1005 - Failure!

2012-10-01 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1005/
Java: 64bit/jdk1.7.0_07 -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 13370 lines...]
check-licenses:
 [echo] License check under: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene
 [licenses] MISSING sha1 checksum file for: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\test-framework\lib\junit4-ant-2.0.2.jar
 [licenses] MISSING sha1 checksum file for: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\test-framework\lib\randomizedtesting-runner-2.0.2.jar

[...truncated 1 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:73: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:156: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\tools\custom-tasks.xml:44:
 License check failed. Check the logs.

Total time: 46 minutes 43 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Description set: Java: 64bit/jdk1.7.0_07 -XX:+UseParallelGC
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b51) - Build # 1488 - Failure!

2012-10-01 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Linux/1488/
Java: 32bit/jdk1.8.0-ea-b51 -client -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 13349 lines...]
check-licenses:
 [echo] License check under: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/test-framework/lib/junit4-ant-2.0.2.jar
 [licenses] MISSING sha1 checksum file for: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.0.2.jar

[...truncated 1 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:73: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:156: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/custom-tasks.xml:44:
 License check failed. Check the logs.

Total time: 33 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Description set: Java: 32bit/jdk1.8.0-ea-b51 -client -XX:+UseConcMarkSweepGC
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Build failed in Jenkins: Lucene-Solr-40-ReleaseSmoke #302

2012-10-01 Thread hudsonseviltwin
See http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/302/

--
[...truncated 41614 lines...]
check-analyzers-kuromoji-uptodate:

jar-analyzers-kuromoji:

check-suggest-uptodate:

jar-suggest:

check-highlighter-uptodate:

jar-highlighter:

check-memory-uptodate:

jar-memory:

check-misc-uptodate:

jar-misc:

check-spatial-uptodate:

jar-spatial:

check-grouping-uptodate:

jar-grouping:

check-queries-uptodate:

jar-queries:

check-queryparser-uptodate:

jar-queryparser:

prep-lucene-jars:

resolve-example:
 [echo] Building solr-example...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:
 [echo] Building solr-example-DIH...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

resolve:

common.init:

compile-lucene-core:

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

init:

-clover.disable:

-clover.setup:

clover:

compile-core:

init:

-clover.disable:

-clover.setup:

clover:

common.compile-core:

common-solr.compile-core:

compile-core:

jar-core:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0.jar

jar-src:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-src.jar

resolve-groovy:

define-lucene-javadoc-url:

check-lucene-core-javadocs-uptodate:

javadocs-lucene-core:

check-analyzers-common-javadocs-uptodate:

javadocs-analyzers-common:

check-analyzers-icu-javadocs-uptodate:

javadocs-analyzers-icu:

check-analyzers-kuromoji-javadocs-uptodate:

javadocs-analyzers-kuromoji:

check-analyzers-phonetic-javadocs-uptodate:

javadocs-analyzers-phonetic:

check-analyzers-smartcn-javadocs-uptodate:

javadocs-analyzers-smartcn:

check-analyzers-morfologik-javadocs-uptodate:

javadocs-analyzers-morfologik:

check-analyzers-stempel-javadocs-uptodate:

javadocs-analyzers-stempel:

check-analyzers-uima-javadocs-uptodate:

javadocs-analyzers-uima:

check-suggest-javadocs-uptodate:

javadocs-suggest:

check-grouping-javadocs-uptodate:

javadocs-grouping:

check-queries-javadocs-uptodate:

javadocs-queries:

check-queryparser-javadocs-uptodate:

javadocs-queryparser:

check-highlighter-javadocs-uptodate:

javadocs-highlighter:

check-memory-javadocs-uptodate:

javadocs-memory:

check-misc-javadocs-uptodate:

javadocs-misc:

check-spatial-javadocs-uptodate:

javadocs-spatial:

check-test-framework-javadocs-uptodate:

javadocs-test-framework:

lucene-javadocs:

check-solr-core-javadocs-uptodate:

javadocs-solr-core:

javadocs:
 [echo] Building solr-velocity...

download-java6-javadoc-packagelist:
   [delete] Deleting: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/stylesheet.css
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.solr.response...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.7.0_01
  [javadoc] Building tree for all the packages and classes...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/help-doc.html...
  [javadoc] Note: Custom tags that were not seen:  @lucene.internal, 
@lucene.experimental
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-javadoc.jar

dist-maven-common:
[artifact:install-provider] Installing provider: 
org.apache.maven.wagon:wagon-ssh:jar:1.0-beta-7:runtime
[artifact:deploy] Deploying to 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Uploading: 
org/apache/solr/solr-velocity/4.0.0/solr-velocity-4.0.0.jar to repository local 
at 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Transferring 22K from local
[artifact:deploy] Uploaded 22K
[artifact:deploy] [INFO] Retrieving previous metadata from local
[artifact:deploy] [INFO] repository metadata for: 'artifact 
org.apache.solr:solr-velocity' could not be found on repository: local, so will 
be created
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact 
org.apache.solr:solr-velocity'
[artifact:deploy] [INFO] Uploading project 

Build failed in Jenkins: Lucene-Solr-40-ReleaseSmoke #303

2012-10-01 Thread hudsonseviltwin
See http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/303/

--
[...truncated 41599 lines...]
check-analyzers-kuromoji-uptodate:

jar-analyzers-kuromoji:

check-suggest-uptodate:

jar-suggest:

check-highlighter-uptodate:

jar-highlighter:

check-memory-uptodate:

jar-memory:

check-misc-uptodate:

jar-misc:

check-spatial-uptodate:

jar-spatial:

check-grouping-uptodate:

jar-grouping:

check-queries-uptodate:

jar-queries:

check-queryparser-uptodate:

jar-queryparser:

prep-lucene-jars:

resolve-example:
 [echo] Building solr-example...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:
 [echo] Building solr-example-DIH...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

resolve:

common.init:

compile-lucene-core:

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

init:

-clover.disable:

-clover.setup:

clover:

compile-core:

init:

-clover.disable:

-clover.setup:

clover:

common.compile-core:

common-solr.compile-core:

compile-core:

jar-core:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0.jar

jar-src:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-src.jar

resolve-groovy:

define-lucene-javadoc-url:

check-lucene-core-javadocs-uptodate:

javadocs-lucene-core:

check-analyzers-common-javadocs-uptodate:

javadocs-analyzers-common:

check-analyzers-icu-javadocs-uptodate:

javadocs-analyzers-icu:

check-analyzers-kuromoji-javadocs-uptodate:

javadocs-analyzers-kuromoji:

check-analyzers-phonetic-javadocs-uptodate:

javadocs-analyzers-phonetic:

check-analyzers-smartcn-javadocs-uptodate:

javadocs-analyzers-smartcn:

check-analyzers-morfologik-javadocs-uptodate:

javadocs-analyzers-morfologik:

check-analyzers-stempel-javadocs-uptodate:

javadocs-analyzers-stempel:

check-analyzers-uima-javadocs-uptodate:

javadocs-analyzers-uima:

check-suggest-javadocs-uptodate:

javadocs-suggest:

check-grouping-javadocs-uptodate:

javadocs-grouping:

check-queries-javadocs-uptodate:

javadocs-queries:

check-queryparser-javadocs-uptodate:

javadocs-queryparser:

check-highlighter-javadocs-uptodate:

javadocs-highlighter:

check-memory-javadocs-uptodate:

javadocs-memory:

check-misc-javadocs-uptodate:

javadocs-misc:

check-spatial-javadocs-uptodate:

javadocs-spatial:

check-test-framework-javadocs-uptodate:

javadocs-test-framework:

lucene-javadocs:

check-solr-core-javadocs-uptodate:

javadocs-solr-core:

javadocs:
 [echo] Building solr-velocity...

download-java6-javadoc-packagelist:
   [delete] Deleting: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/stylesheet.css
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.solr.response...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.7.0_01
  [javadoc] Building tree for all the packages and classes...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/help-doc.html...
  [javadoc] Note: Custom tags that were not seen:  @lucene.internal, 
@lucene.experimental
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-javadoc.jar

dist-maven-common:
[artifact:install-provider] Installing provider: 
org.apache.maven.wagon:wagon-ssh:jar:1.0-beta-7:runtime
[artifact:deploy] Deploying to 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Uploading: 
org/apache/solr/solr-velocity/4.0.0/solr-velocity-4.0.0.jar to repository local 
at 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Transferring 22K from local
[artifact:deploy] Uploaded 22K
[artifact:deploy] [INFO] Retrieving previous metadata from local
[artifact:deploy] [INFO] repository metadata for: 'artifact 
org.apache.solr:solr-velocity' could not be found on repository: local, so will 
be created
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact 
org.apache.solr:solr-velocity'
[artifact:deploy] [INFO] Uploading project 

Build failed in Jenkins: Lucene-Solr-40-ReleaseSmoke #304

2012-10-01 Thread hudsonseviltwin
See http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/304/

--
[...truncated 41600 lines...]
check-analyzers-kuromoji-uptodate:

jar-analyzers-kuromoji:

check-suggest-uptodate:

jar-suggest:

check-highlighter-uptodate:

jar-highlighter:

check-memory-uptodate:

jar-memory:

check-misc-uptodate:

jar-misc:

check-spatial-uptodate:

jar-spatial:

check-grouping-uptodate:

jar-grouping:

check-queries-uptodate:

jar-queries:

check-queryparser-uptodate:

jar-queryparser:

prep-lucene-jars:

resolve-example:
 [echo] Building solr-example...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:
 [echo] Building solr-example-DIH...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

resolve:

common.init:

compile-lucene-core:

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

init:

-clover.disable:

-clover.setup:

clover:

compile-core:

init:

-clover.disable:

-clover.setup:

clover:

common.compile-core:

common-solr.compile-core:

compile-core:

jar-core:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0.jar

jar-src:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-src.jar

resolve-groovy:

define-lucene-javadoc-url:

check-lucene-core-javadocs-uptodate:

javadocs-lucene-core:

check-analyzers-common-javadocs-uptodate:

javadocs-analyzers-common:

check-analyzers-icu-javadocs-uptodate:

javadocs-analyzers-icu:

check-analyzers-kuromoji-javadocs-uptodate:

javadocs-analyzers-kuromoji:

check-analyzers-phonetic-javadocs-uptodate:

javadocs-analyzers-phonetic:

check-analyzers-smartcn-javadocs-uptodate:

javadocs-analyzers-smartcn:

check-analyzers-morfologik-javadocs-uptodate:

javadocs-analyzers-morfologik:

check-analyzers-stempel-javadocs-uptodate:

javadocs-analyzers-stempel:

check-analyzers-uima-javadocs-uptodate:

javadocs-analyzers-uima:

check-suggest-javadocs-uptodate:

javadocs-suggest:

check-grouping-javadocs-uptodate:

javadocs-grouping:

check-queries-javadocs-uptodate:

javadocs-queries:

check-queryparser-javadocs-uptodate:

javadocs-queryparser:

check-highlighter-javadocs-uptodate:

javadocs-highlighter:

check-memory-javadocs-uptodate:

javadocs-memory:

check-misc-javadocs-uptodate:

javadocs-misc:

check-spatial-javadocs-uptodate:

javadocs-spatial:

check-test-framework-javadocs-uptodate:

javadocs-test-framework:

lucene-javadocs:

check-solr-core-javadocs-uptodate:

javadocs-solr-core:

javadocs:
 [echo] Building solr-velocity...

download-java6-javadoc-packagelist:
   [delete] Deleting: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/stylesheet.css
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.solr.response...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.7.0_01
  [javadoc] Building tree for all the packages and classes...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/help-doc.html...
  [javadoc] Note: Custom tags that were not seen:  @lucene.internal, 
@lucene.experimental
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-javadoc.jar

dist-maven-common:
[artifact:install-provider] Installing provider: 
org.apache.maven.wagon:wagon-ssh:jar:1.0-beta-7:runtime
[artifact:deploy] Deploying to 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Uploading: 
org/apache/solr/solr-velocity/4.0.0/solr-velocity-4.0.0.jar to repository local 
at 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Transferring 22K from local
[artifact:deploy] Uploaded 22K
[artifact:deploy] [INFO] Retrieving previous metadata from local
[artifact:deploy] [INFO] repository metadata for: 'artifact 
org.apache.solr:solr-velocity' could not be found on repository: local, so will 
be created
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact 
org.apache.solr:solr-velocity'
[artifact:deploy] [INFO] Uploading project 

Build failed in Jenkins: Lucene-Solr-40-ReleaseSmoke #305

2012-10-01 Thread hudsonseviltwin
See http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/305/

--
[...truncated 41599 lines...]
check-analyzers-kuromoji-uptodate:

jar-analyzers-kuromoji:

check-suggest-uptodate:

jar-suggest:

check-highlighter-uptodate:

jar-highlighter:

check-memory-uptodate:

jar-memory:

check-misc-uptodate:

jar-misc:

check-spatial-uptodate:

jar-spatial:

check-grouping-uptodate:

jar-grouping:

check-queries-uptodate:

jar-queries:

check-queryparser-uptodate:

jar-queryparser:

prep-lucene-jars:

resolve-example:
 [echo] Building solr-example...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:
 [echo] Building solr-example-DIH...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

resolve:

common.init:

compile-lucene-core:

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

init:

-clover.disable:

-clover.setup:

clover:

compile-core:

init:

-clover.disable:

-clover.setup:

clover:

common.compile-core:

common-solr.compile-core:

compile-core:

jar-core:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0.jar

jar-src:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-src.jar

resolve-groovy:

define-lucene-javadoc-url:

check-lucene-core-javadocs-uptodate:

javadocs-lucene-core:

check-analyzers-common-javadocs-uptodate:

javadocs-analyzers-common:

check-analyzers-icu-javadocs-uptodate:

javadocs-analyzers-icu:

check-analyzers-kuromoji-javadocs-uptodate:

javadocs-analyzers-kuromoji:

check-analyzers-phonetic-javadocs-uptodate:

javadocs-analyzers-phonetic:

check-analyzers-smartcn-javadocs-uptodate:

javadocs-analyzers-smartcn:

check-analyzers-morfologik-javadocs-uptodate:

javadocs-analyzers-morfologik:

check-analyzers-stempel-javadocs-uptodate:

javadocs-analyzers-stempel:

check-analyzers-uima-javadocs-uptodate:

javadocs-analyzers-uima:

check-suggest-javadocs-uptodate:

javadocs-suggest:

check-grouping-javadocs-uptodate:

javadocs-grouping:

check-queries-javadocs-uptodate:

javadocs-queries:

check-queryparser-javadocs-uptodate:

javadocs-queryparser:

check-highlighter-javadocs-uptodate:

javadocs-highlighter:

check-memory-javadocs-uptodate:

javadocs-memory:

check-misc-javadocs-uptodate:

javadocs-misc:

check-spatial-javadocs-uptodate:

javadocs-spatial:

check-test-framework-javadocs-uptodate:

javadocs-test-framework:

lucene-javadocs:

check-solr-core-javadocs-uptodate:

javadocs-solr-core:

javadocs:
 [echo] Building solr-velocity...

download-java6-javadoc-packagelist:
   [delete] Deleting: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/stylesheet.css
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.solr.response...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.7.0_01
  [javadoc] Building tree for all the packages and classes...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/help-doc.html...
  [javadoc] Note: Custom tags that were not seen:  @lucene.internal, 
@lucene.experimental
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-javadoc.jar

dist-maven-common:
[artifact:install-provider] Installing provider: 
org.apache.maven.wagon:wagon-ssh:jar:1.0-beta-7:runtime
[artifact:deploy] Deploying to 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Uploading: 
org/apache/solr/solr-velocity/4.0.0/solr-velocity-4.0.0.jar to repository local 
at 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Transferring 22K from local
[artifact:deploy] Uploaded 22K
[artifact:deploy] [INFO] Retrieving previous metadata from local
[artifact:deploy] [INFO] repository metadata for: 'artifact 
org.apache.solr:solr-velocity' could not be found on repository: local, so will 
be created
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact 
org.apache.solr:solr-velocity'
[artifact:deploy] [INFO] Uploading project 

Build failed in Jenkins: Lucene-Solr-40-ReleaseSmoke #306

2012-10-01 Thread hudsonseviltwin
See http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/306/

--
[...truncated 41606 lines...]
jar-suggest:

check-highlighter-uptodate:

jar-highlighter:

check-memory-uptodate:

jar-memory:

check-misc-uptodate:

jar-misc:

check-spatial-uptodate:

jar-spatial:

check-grouping-uptodate:

jar-grouping:

check-queries-uptodate:

jar-queries:

check-queryparser-uptodate:

jar-queryparser:

prep-lucene-jars:

resolve-example:
 [echo] Building solr-example...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:
 [echo] Building solr-example-DIH...

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

resolve:

common.init:

compile-lucene-core:

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/lucene/ivy-settings.xml

resolve:

init:

-clover.disable:

-clover.setup:

clover:

compile-core:

init:

-clover.disable:

-clover.setup:

clover:

common.compile-core:

common-solr.compile-core:

compile-core:

jar-core:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0.jar

jar-src:
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-src.jar

resolve-groovy:

define-lucene-javadoc-url:

check-lucene-core-javadocs-uptodate:

javadocs-lucene-core:

check-analyzers-common-javadocs-uptodate:

javadocs-analyzers-common:

check-analyzers-icu-javadocs-uptodate:

javadocs-analyzers-icu:

check-analyzers-kuromoji-javadocs-uptodate:

javadocs-analyzers-kuromoji:

check-analyzers-phonetic-javadocs-uptodate:

javadocs-analyzers-phonetic:

check-analyzers-smartcn-javadocs-uptodate:

javadocs-analyzers-smartcn:

check-analyzers-morfologik-javadocs-uptodate:

javadocs-analyzers-morfologik:

check-analyzers-stempel-javadocs-uptodate:

javadocs-analyzers-stempel:

check-analyzers-uima-javadocs-uptodate:

javadocs-analyzers-uima:

check-suggest-javadocs-uptodate:

javadocs-suggest:

check-grouping-javadocs-uptodate:

javadocs-grouping:

check-queries-javadocs-uptodate:

javadocs-queries:

check-queryparser-javadocs-uptodate:

javadocs-queryparser:

check-highlighter-javadocs-uptodate:

javadocs-highlighter:

check-memory-javadocs-uptodate:

javadocs-memory:

check-misc-javadocs-uptodate:

javadocs-misc:

check-spatial-javadocs-uptodate:

javadocs-spatial:

check-test-framework-javadocs-uptodate:

javadocs-test-framework:

lucene-javadocs:

check-solr-core-javadocs-uptodate:

javadocs-solr-core:

javadocs:
 [echo] Building solr-velocity...

download-java6-javadoc-packagelist:
   [delete] Deleting: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/stylesheet.css
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.solr.response...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.7.0_01
  [javadoc] Building tree for all the packages and classes...
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/docs/solr-velocity/help-doc.html...
  [javadoc] Note: Custom tags that were not seen:  @lucene.internal, 
@lucene.experimental
  [jar] Building jar: 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/build/contrib/solr-velocity/apache-solr-velocity-4.0.0-javadoc.jar

dist-maven-common:
[artifact:install-provider] Installing provider: 
org.apache.maven.wagon:wagon-ssh:jar:1.0-beta-7:runtime
[artifact:deploy] Deploying to 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Uploading: 
org/apache/solr/solr-velocity/4.0.0/solr-velocity-4.0.0.jar to repository local 
at 
http://sierranevada.servebeer.com/job/Lucene-Solr-40-ReleaseSmoke/ws/solr/package/maven/
[artifact:deploy] Transferring 22K from local
[artifact:deploy] Uploaded 22K
[artifact:deploy] [INFO] Retrieving previous metadata from local
[artifact:deploy] [INFO] repository metadata for: 'artifact 
org.apache.solr:solr-velocity' could not be found on repository: local, so will 
be created
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact 
org.apache.solr:solr-velocity'
[artifact:deploy] [INFO] Uploading project information for solr-velocity 4.0.0
[artifact:deploy] Uploading: 

[jira] [Updated] (LUCENE-3846) Fuzzy suggester

2012-10-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-3846:


Attachment: LUCENE-3846_fuzzy_analyzing.patch

Here's my hacky Fuzzy+Analyzing prototype.

But we need to fix intersectPrefixPaths to be able to efficiently intersect 
transition ranges (e.g. findTargetArc + readNextArc through the range?).

Anyway, we should see how slow this is compared to Mike's: the advantage would 
be that you would still get all the stuff AnalyzingSuggester has...

 Fuzzy suggester
 ---

 Key: LUCENE-3846
 URL: https://issues.apache.org/jira/browse/LUCENE-3846
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.1

 Attachments: LUCENE-3846_fuzzy_analyzing.patch, LUCENE-3846.patch, 
 LUCENE-3846.patch


 Would be nice to have a suggester that can handle some fuzziness (like spell 
 correction) so that it's able to suggest completions that are near what you 
 typed.
 As a first go at this, I implemented 1T (ie up to 1 edit, including a 
 transposition), except the first letter must be correct.
 But there is a penalty, ie, the corrected suggestion needs to have a much 
 higher freq than the exact match suggestion before it can compete.
 Still tons of nocommits, and somehow we should merge this / make it work with 
 analyzing suggester too (LUCENE-3842).
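
To make that acceptance rule concrete, here is a small, hypothetical sketch 
using the Lucene 4.x automaton classes (LevenshteinAutomata, BasicAutomata, 
BasicOperations, CharacterRunAutomaton). It only shows a standalone "one 
edit, including a transposition, first letter exact" check; the patches on 
this issue integrate fuzzy matching into the suggester's FST traversal 
instead, and the query string and test inputs below are made up:

import org.apache.lucene.util.automaton.*;

public class FuzzyFirstLetterDemo {
  public static void main(String[] args) {
    // Accept strings within one edit (including a transposition) of "lucene",
    // but only when the first letter matches exactly.
    String query = "lucene";
    Automaton exactFirst = BasicAutomata.makeString(query.substring(0, 1));
    Automaton oneEditRest =
        new LevenshteinAutomata(query.substring(1), true).toAutomaton(1);
    Automaton fuzzy = BasicOperations.concatenate(exactFirst, oneEditRest);

    CharacterRunAutomaton matcher = new CharacterRunAutomaton(fuzzy);
    System.out.println(matcher.run("lucete"));  // true: one substitution
    System.out.println(matcher.run("lcuene"));  // true: one transposition
    System.out.println(matcher.run("bucene"));  // false: first letter differs
  }
}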

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



SolrIndexSearcher and PostFilter questions/suggestions

2012-10-01 Thread Amit Nithian
Hi all,

I was working on implementing a custom PostFilter based on Mikhail's
response to one of my questions, and this looks like a new and *very
very* awesome feature that, if no one else does, I plan to blog and
talk about at the next meetup if possible (at least with my limited
understanding of it).

However, in doing so I had to read the code and figure out how to get
everything to hook up properly, which led me to some (mostly
design/style) questions:

1) In the SolrIndexSearcher, why are the notCached and postFilter
lists of type Query and not ExtendedQuery (likewise for the
Comparator)? I see a lot of casting between Query and ExtendedQuery
that simply declaring these lists with the ExtendedQuery type would
avoid. Granted, there would be a forced cast from ExtendedQuery to
Query later in the method, but that's tolerable since the method
signatures require Query and ExtendedQuery doesn't inherit from
Query.

2) In the ExtendedQuery interface, change getCache() to isCached() to
keep consistent with the Java beans method naming convention

3) Make 100 a constant in the ExtendedQuery to avoid hardcoding this
number and allow for future changes to this notion of expensive

4) In the SolrIndexSearcher, it's a bit confusing to me that a cheap
(cost < 100) PostFilter implementation won't get added to the list of
postFilters, which makes it somewhat misleading as to the reason to
implement the PostFilter interface, since the delegating collector (the
core aspect of this feature) won't get called. What would be the
downside to saying if (cost >= 100 OR instanceof PostFilter) then add
to the postFilters list? (See the sketch below for the kind of
PostFilter I mean.)
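
For reference, here is a minimal, hypothetical sketch of the kind of custom
PostFilter being discussed, assuming the Solr 4.0 org.apache.solr.search API
(ExtendedQueryBase, PostFilter, DelegatingCollector); the class name and the
per-document check are invented for illustration, not taken from any patch:

import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.solr.search.DelegatingCollector;
import org.apache.solr.search.ExtendedQueryBase;
import org.apache.solr.search.PostFilter;

public class SamplePostFilter extends ExtendedQueryBase implements PostFilter {

  public SamplePostFilter() {
    setCache(false);  // post filters are not cached
    setCost(100);     // cost >= 100 routes the query into the postFilters list
  }

  @Override
  public DelegatingCollector getFilterCollector(IndexSearcher searcher) {
    return new DelegatingCollector() {
      @Override
      public void collect(int doc) throws IOException {
        // Placeholder for a real per-document test; only matching docs are
        // forwarded to the wrapped collector.
        if ((doc & 1) == 0) {
          super.collect(doc);
        }
      }
    };
  }
}

With getCache() returning false and a cost of at least 100, SolrIndexSearcher
routes such a query into the postFilters list and only calls its
DelegatingCollector for documents that already matched the main query and the
cheaper filters; a cost below 100 would land it in the notCached list instead,
which is the situation this point questions.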

As always, I am more than willing to make a patch for any and all of
these suggestions but before I go doing that (and thus creating a
dependency on a custom built version of Solr as opposed to a standard
build), I wanted to ask these first.

Thanks!
Amit

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1006 - Failure!

2012-10-01 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1006/
Java: 32bit/jdk1.7.0_07 -client -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestBagOfPostings.test

Error Message:
expected:<418> but was:<-1>

Stack Trace:
java.lang.AssertionError: expected:<418> but was:<-1>
at 
__randomizedtesting.SeedInfo.seed([3A7F77FA1A15AADB:B22B4820B4E9C723]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.index.TestBagOfPostings.test(TestBagOfPostings.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)




Build Log:
[...truncated 331 lines...]
[junit4:junit4] Suite: org.apache.lucene.index.TestBagOfPostings
[junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestBagOfPostings -Dtests.method=test -Dtests.seed=3A7F77FA1A15AADB 
-Dtests.slow=true -Dtests.locale=sr -Dtests.timezone=America/Campo_Grande 
-Dtests.file.encoding=Cp1252
[junit4:junit4] FAILURE 

Re: [JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1006 - Failure!

2012-10-01 Thread Robert Muir
I'll take care of this: 3.x indexes don't support this statistic.

On Mon, Oct 1, 2012 at 10:19 PM, Policeman Jenkins Server
jenk...@sd-datasolutions.de wrote:
 Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1006/
 Java: 32bit/jdk1.7.0_07 -client -XX:+UseParallelGC

 1 tests failed.
 REGRESSION:  org.apache.lucene.index.TestBagOfPostings.test

 Error Message:
 expected:<418> but was:<-1>

 Stack Trace:
 java.lang.AssertionError: expected:<418> but was:<-1>
 at 
 __randomizedtesting.SeedInfo.seed([3A7F77FA1A15AADB:B22B4820B4E9C723]:0)
 at org.junit.Assert.fail(Assert.java:93)
 at org.junit.Assert.failNotEquals(Assert.java:647)
 at org.junit.Assert.assertEquals(Assert.java:128)
 at org.junit.Assert.assertEquals(Assert.java:472)
 at org.junit.Assert.assertEquals(Assert.java:456)
 at 
 org.apache.lucene.index.TestBagOfPostings.test(TestBagOfPostings.java:114)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
 at java.lang.Thread.run(Thread.java:722)




 Build Log:
 [...truncated 331 lines...]
 [junit4:junit4] Suite: org.apache.lucene.index.TestBagOfPostings
 

[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_07) - Build # 1007 - Still Failing!

2012-10-01 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Windows/1007/
Java: 32bit/jdk1.7.0_07 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.index.TestBagOfPostings.test

Error Message:
expected:<437> but was:<-1>

Stack Trace:
java.lang.AssertionError: expected:<437> but was:<-1>
at 
__randomizedtesting.SeedInfo.seed([5420E26FA5494CAD:DC74DDB50BB52155]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.index.TestBagOfPostings.test(TestBagOfPostings.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)




Build Log:
[...truncated 1060 lines...]
[junit4:junit4] Suite: org.apache.lucene.index.TestBagOfPostings
[junit4:junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestBagOfPostings -Dtests.method=test -Dtests.seed=5420E26FA5494CAD 
-Dtests.slow=true -Dtests.locale=hr_HR -Dtests.timezone=America/Virgin 
-Dtests.file.encoding=US-ASCII
[junit4:junit4] 

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0-ea-b51) - Build # 1482 - Failure!

2012-10-01 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-4.x-Linux/1482/
Java: 32bit/jdk1.8.0-ea-b51 -client -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.lucene.codecs.memory.TestMemoryPostingsFormat.testRandom

Error Message:
Captured an uncaught exception in thread: Thread[id=740, name=Thread-723, 
state=RUNNABLE, group=TGRP-TestMemoryPostingsFormat]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=740, name=Thread-723, state=RUNNABLE, 
group=TGRP-TestMemoryPostingsFormat]
Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap 
space
at __randomizedtesting.SeedInfo.seed([323F98FED4A9601D]:0)
at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:826)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.BytesRef.<init>(BytesRef.java:75)
at 
org.apache.lucene.util.fst.ByteSequenceOutputs.read(ByteSequenceOutputs.java:124)
at 
org.apache.lucene.util.fst.ByteSequenceOutputs.read(ByteSequenceOutputs.java:33)
at org.apache.lucene.util.fst.FST.readNextRealArc(FST.java:962)
at org.apache.lucene.util.fst.FST.readFirstRealTargetArc(FST.java:873)
at org.apache.lucene.util.fst.FST.readFirstTargetArc(FST.java:842)
at org.apache.lucene.util.fst.FSTEnum.rewindPrefix(FSTEnum.java:67)
at org.apache.lucene.util.fst.FSTEnum.doSeekExact(FSTEnum.java:431)
at 
org.apache.lucene.util.fst.BytesRefFSTEnum.seekExact(BytesRefFSTEnum.java:84)
at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTTermsEnum.seekExact(MemoryPostingsFormat.java:656)
at 
org.apache.lucene.index.BasePostingsFormatTestCase.testTermsOneThread(BasePostingsFormatTestCase.java:893)
at 
org.apache.lucene.index.BasePostingsFormatTestCase.access$200(BasePostingsFormatTestCase.java:80)
at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:824)




Build Log:
[...truncated 6019 lines...]
[junit4:junit4] Suite: org.apache.lucene.codecs.memory.TestMemoryPostingsFormat
[junit4:junit4]   2 Oct 02, 2012 12:37:47 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
[junit4:junit4]   2 WARNING: Uncaught exception in thread: 
Thread[Thread-723,5,TGRP-TestMemoryPostingsFormat]
[junit4:junit4]   2 java.lang.RuntimeException: java.lang.OutOfMemoryError: 
Java heap space
[junit4:junit4]   2at 
__randomizedtesting.SeedInfo.seed([323F98FED4A9601D]:0)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:826)
[junit4:junit4]   2 Caused by: java.lang.OutOfMemoryError: Java heap space
[junit4:junit4]   2at 
org.apache.lucene.util.BytesRef.<init>(BytesRef.java:75)
[junit4:junit4]   2at 
org.apache.lucene.util.fst.ByteSequenceOutputs.read(ByteSequenceOutputs.java:124)
[junit4:junit4]   2at 
org.apache.lucene.util.fst.ByteSequenceOutputs.read(ByteSequenceOutputs.java:33)
[junit4:junit4]   2at 
org.apache.lucene.util.fst.FST.readNextRealArc(FST.java:962)
[junit4:junit4]   2at 
org.apache.lucene.util.fst.FST.readFirstRealTargetArc(FST.java:873)
[junit4:junit4]   2at 
org.apache.lucene.util.fst.FST.readFirstTargetArc(FST.java:842)
[junit4:junit4]   2at 
org.apache.lucene.util.fst.FSTEnum.rewindPrefix(FSTEnum.java:67)
[junit4:junit4]   2at 
org.apache.lucene.util.fst.FSTEnum.doSeekExact(FSTEnum.java:431)
[junit4:junit4]   2at 
org.apache.lucene.util.fst.BytesRefFSTEnum.seekExact(BytesRefFSTEnum.java:84)
[junit4:junit4]   2at 
org.apache.lucene.codecs.memory.MemoryPostingsFormat$FSTTermsEnum.seekExact(MemoryPostingsFormat.java:656)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase.testTermsOneThread(BasePostingsFormatTestCase.java:893)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase.access$200(BasePostingsFormatTestCase.java:80)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:824)
[junit4:junit4]   2 
[junit4:junit4]   2 Oct 02, 2012 12:37:56 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
[junit4:junit4]   2 WARNING: Uncaught exception in thread: 
Thread[Thread-745,5,TGRP-TestMemoryPostingsFormat]
[junit4:junit4]   2 java.lang.RuntimeException: java.lang.OutOfMemoryError: 
Java heap space
[junit4:junit4]   2at 
__randomizedtesting.SeedInfo.seed([323F98FED4A9601D]:0)
[junit4:junit4]   2at 
org.apache.lucene.index.BasePostingsFormatTestCase$TestThread.run(BasePostingsFormatTestCase.java:826)
[junit4:junit4]   2 Caused by: java.lang.OutOfMemoryError: Java heap space
[junit4:junit4]   2at 
org.apache.lucene.util.BytesRef.<init>(BytesRef.java:75)
[junit4:junit4]   2at 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_07) - Build # 1010 - Failure!

2012-10-01 Thread Policeman Jenkins Server
Build: http://jenkins.sd-datasolutions.de/job/Lucene-Solr-trunk-Windows/1010/
Java: 64bit/jdk1.7.0_07 -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 27318 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:245: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build.xml:552: 
Unable to delete file 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\lucene\build\analysis\common\lucene-analyzers-common-5.0-SNAPSHOT.jar

Total time: 57 minutes 5 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Description set: Java: 64bit/jdk1.7.0_07 -XX:+UseSerialGC
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org