[jira] [Commented] (SOLR-4208) Refactor edismax query parser

2012-12-18 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534780#comment-13534780
 ] 

Markus Jelsma commented on SOLR-4208:
-

This is a very welcome change. The unit test 
TestExtendedDismaxParser.testAliasingBoost fails, but it also fails without your 
patch.
+1 

 Refactor edismax query parser
 -

 Key: SOLR-4208
 URL: https://issues.apache.org/jira/browse/SOLR-4208
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4208.patch


 With successive changes, the edismax query parser has become more complex. It 
 would be nice to refactor it to reduce code complexity, also to allow better 
 extension and code reuse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: DocsEnum.freq()

2012-12-18 Thread Shai Erera
Are you sure that all Codecs return 1 if you indexed with DOCS_ONLY? Do we
have a test that can trip bad Codecs?
It may be more than just changing the documentation...

Why would e.g. TermQuery need to write specialized code for these cases? I
looked at TermScorer, and its freq() just returns docsEnum.freq().

I think that Similarity may be affected? Which brings the question - how do
Similarity impls know what flags the DE was opened with, and shouldn't they
be specialized?
E.g. TFIDFSimilarity.ExactTFIDFDocScorer uses the freq passed to score() as
an index into an array, so clearly it assumes it is >= 0 and also <
scoreCache.length.
So I wonder what will happen to it when someone's Codec will return a
negative value or MAX_INT in case frequencies aren't needed?

I do realize that you shouldn't call Similarity with missing information,
and TermWeight obtains a DocsEnum with frequencies, so in that regard it is
safe.
And if you do obtain a DocsEnum with FLAG_NONE, you'd better know what
you're doing and don't pass a random freq() to Similarity.

I lean towards documenting the spec from above, and ensuring that all
Codecs return 1 for DOCS_ONLY.
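The contract being leaned toward here (freq() unspecified for FLAG_NONE, a guaranteed 1 for DOCS_ONLY fields) can be sketched as a toy model. This is illustrative code only, not Lucene source; the class and field names are invented, and only the flag names mirror the API under discussion:

```java
// Toy model of the freq() contract under discussion -- NOT Lucene source.
// Only the flag names (FLAG_NONE, FLAG_FREQS) and the DOCS_ONLY notion
// mirror the real API; everything else here is invented for illustration.
class ToyDocsEnum {
    static final int FLAG_NONE = 0;
    static final int FLAG_FREQS = 1;

    private final boolean fieldHasFreqs;  // was the field indexed with freqs?
    private final int flags;              // flags the enum was opened with
    private final int storedFreq;         // what the postings actually hold

    ToyDocsEnum(boolean fieldHasFreqs, int flags, int storedFreq) {
        this.fieldHasFreqs = fieldHasFreqs;
        this.flags = flags;
        this.storedFreq = storedFreq;
    }

    int freq() {
        if (flags == FLAG_NONE) {
            // Caller asked for no freqs: the return value is unspecified.
            // A real codec may return 1, 0, or something else entirely.
            return -1;
        }
        if (!fieldHasFreqs) {
            // DOCS_ONLY field, but freqs were requested: "lie" and return 1.
            return 1;
        }
        return storedFreq;  // freqs indexed and requested: the real value
    }

    public static void main(String[] args) {
        System.out.println(new ToyDocsEnum(false, FLAG_FREQS, 0).freq());
        System.out.println(new ToyDocsEnum(true, FLAG_FREQS, 3).freq());
    }
}
```

The point of the model is that the "lie" lives in one place: a consumer such as TermScorer can call freq() unconditionally, as long as it requested FLAG_FREQS when opening the enum.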

If in the future we'll need to handle the case where someone receives a
DocsEnum which it needs to consume, and doesn't know which flags were used
to open it, we can always add a getFlags to DE.

Shai


On Mon, Dec 17, 2012 at 11:30 PM, Michael McCandless 
luc...@mikemccandless.com wrote:

 On Mon, Dec 17, 2012 at 4:02 PM, Shai Erera ser...@gmail.com wrote:
  How do these two go together?
 
  I think for DOCS_ONLY it makes sense that we lie (say freq=1 when we
  don't know): lots of places would otherwise have to be special cased
  for when they consume DOCS_ONLY vs DOCS_AND_POSITIONS.
 
 
  and
 
  I'm also not sure that
  all codecs return 1 today if the field was indexed with DOCS_ONLY ...
 
 
   That just makes it even worse, right? I.e., we have code today that
  relies on that behavior, but we're not sure it works w/ all Codecs?

 Sorry, for my last sentence above I think I meant "I'm also not sure
 that all codecs return 1 today if you ask for FLAG_NONE."

  Remember that DocIdSetIterator.nextDoc() was loosely specified? It was very
  hard to write a decent DISI consumer. Sometimes calling nextDoc() returned
  MAX_VAL, sometimes -1, sometimes who knows. When we hardened the spec, it
  actually made consumers' life easier, I think?

 Right, locking down the API makes total sense in general.

  It's ok if we say that for DOCS_ONLY you have to return 1. That's even 99.9%
  of the time the correct value to return (unless someone adds e.g. the same
  StringField twice to the document).

 Right.

  And it's also ok to say that if you passed FLAG_NONE, freq()'s value is
  unspecified. I think it would be wrong to lie here .. not sure if the
  consumer always knows how DocsEnum was requested. Not sure if this happens
  in real life though (consuming a DocsEnum that you didn't obtain yourself),
  so I'm willing to ignore that case.

 +1: I think FLAG_NONE should remain undefined.  I think we have codecs
 today that will return 1, 0, or the actual doc freq (when the field was
 indexed as >= DOCS_AND_FREQS).

  These two together sound like a reasonable spec to me?

 +1

 So I think just change your javadocs patch to say that FLAG_NONE means
 freq is not defined, and if the field was indexed as DOCS_ONLY and you
 asked for FLAG_FREQS, then we promise to lie and say freq=1?

 Mike McCandless

 http://blog.mikemccandless.com





[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534799#comment-13534799
 ] 

Commit Tag Bot commented on SOLR-4205:
--

[trunk commit] Uwe Schindler
http://svn.apache.org/viewvc?view=revision&revision=1423389

SOLR-4205: Add permgen space for Clover runs and raise memory for nightly 
jenkins builds, too.


 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss

 I had to kill the ASF Jenkins Clover builds twice after tens of hours of 
 inactivity in a random Solr test. I requested a stack trace before killing 
 the only running JVM (Clover runs with one JVM only, because Clover does not 
 like multiple processes writing to the same Clover metrics file).
 In both cases (4.x and trunk) the stack traces looked identical after 
 sending kill -3...
 https://builds.apache.org/job/Lucene-Solr-Clover-trunk/76/consoleFull 
 (yesterday):
 {noformat}
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:01:00, stalled for 28447s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:02:00, stalled for 28507s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:03:00, stalled for 28567s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] JVM J0: stdout was not empty, see: 
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/solr/build/solr-core/test/temp/junit4-J0-20121216_044733_583.sysout
 [junit4:junit4]  JVM J0: stdout (verbatim) 
 [junit4:junit4] 2012-12-16 13:03:49
 [junit4:junit4] Full thread dump OpenJDK 64-Bit Server VM (20.0-b12 mixed 
 mode):
 [junit4:junit4] 
 [junit4:junit4] searcherExecutor-2577-thread-1 prio=5 
 tid=0x00085eb67000 nid=0x61c105b waiting on condition [0x70b0d000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x0008178c9c40 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 [junit4:junit4]   at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 [junit4:junit4]   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 [junit4:junit4]   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI TCP Accept-0 daemon prio=5 tid=0x000840ce2800 
 nid=0x61c0aa2 runnable [0x79496000]
 [junit4:junit4]java.lang.Thread.State: RUNNABLE
 [junit4:junit4]   at java.net.PlainSocketImpl.socketAccept(Native Method)
 [junit4:junit4]   at 
 java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:375)
 [junit4:junit4]   at 
 java.net.ServerSocket.implAccept(ServerSocket.java:470)
 [junit4:junit4]   at java.net.ServerSocket.accept(ServerSocket.java:438)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:387)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:359)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI Scheduler(0) daemon prio=5 tid=0x000840ce1000 
 nid=0x61c0969 waiting on condition [0x70f11000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x000814f12f88 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 [junit4:junit4]   at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 [junit4:junit4]   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 [junit4:junit4]   at 
 java.util.concurrent.DelayQueue.take(DelayQueue.java:189)
 

[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534801#comment-13534801
 ] 

Commit Tag Bot commented on SOLR-4205:
--

[branch_4x commit] Uwe Schindler
http://svn.apache.org/viewvc?view=revision&revision=1423390

Merged revision(s) 1423389 from lucene/dev/trunk:
SOLR-4205: Add permgen space for Clover runs and raise memory for nightly 
jenkins builds, too.



[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534804#comment-13534804
 ] 

Uwe Schindler commented on SOLR-4205:
-

I committed a permgen memory increase for both the nightly and Clover tasks (in 
Jenkins).

The issue in Clover may not be resolvable without raising permgen, but the 
nightly builds hanging is stranger. It looks like testDistributedSearch 
starts way too many Jetty instances that don't clean up their classloaders 
enough. Please note: -Dtests.nightly and -Dtests.multiplier=3 were set; I 
changed the multiplier to 2, too (in the Jenkins config).

Maybe change the nightly/multiplier effect for BaseDistributedTestCase? Mark?


[jira] [Comment Edited] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534804#comment-13534804
 ] 

Uwe Schindler edited comment on SOLR-4205 at 12/18/12 10:14 AM:


I committed a permgen memory increase for both the nightly and Clover tasks (in 
Jenkins).

The issue in Clover may not be resolvable without raising permgen, but the 
nightly builds hanging is stranger. It looks like testDistributedSearch 
starts way too many Jetty instances that don't clean up their classloaders 
enough. Please note: -Dtests.nightly and -Dtests.multiplier=3 were set; I 
changed the multiplier to 2, too (in the Jenkins config).

Maybe change the nightly/multiplier effect for BaseDistributedTestCase? Mark? 
If we can handle this, I would like to remove the hack for the nightly runs 
again!

  was (Author: thetaphi):
I committed a permgen memory increase to both nightly and clover tasks (for 
jenkins).

The issue in Clover may not be resolveable without raising permgen, but the 
nightly builds hanging is more crazy. It looks like the testDistributedSearch 
start way too many jetty instances that don't clean up enough their 
classloaders. Please note: -Dtests.nightly and -Dtests.multiplier=3 was set, i 
changed the multiplier to 2, too (in the jenkins config).

Maybe change the nightly/multiplier effect for BaseDistributedTestCase? Mark?
  

Re: DocsEnum.freq()

2012-12-18 Thread Michael McCandless
On Tue, Dec 18, 2012 at 4:46 AM, Shai Erera ser...@gmail.com wrote:
 Are you sure that all Codecs return 1 if you indexed with DOCS_ONLY? Do we
 have a test that can trip bad Codecs?

I'm not sure!  We should make a test & fix any failing ones ...

 It may be more than just changing the documentation...

Right.

 Why would e.g. TermQuery need to write specialized code for these cases? I
 looked at TermScorer, and its freq() just returns docsEnum.freq().

I meant if we did not adopt this spec (freq() will lie and return 1
when the field was indexed as DOCS_ONLY), then e.g. TermQuery would
need specialized code.

 I think that Similarity may be affected? Which brings the question - how do
 Similarity impls know what flags the DE was opened with, and shouldn't they
 be specialized?
 E.g. TFIDFSimilarity.ExactTFIDFDocScorer uses the freq passed to score() as
 an index into an array, so clearly it assumes it is >= 0 and also <
 scoreCache.length.
 So I wonder what will happen to it when someone's Codec will return a
 negative value or MAX_INT in case frequencies aren't needed?

Well, if you passed FLAG_NONE when you opened the DE then it's your
responsibility to never call freq() ... ie, don't call freq() and pass
that to the sim.

 I do realize that you shouldn't call Similarity with missing information,
 and TermWeight obtains a DocsEnum with frequencies, so in that regard it is
 safe.
 And if you do obtain a DocsEnum with FLAG_NONE, you'd better know what
 you're doing and don't pass a random freq() to Similarity.

Right.

 I lean towards documenting the spec from above, and ensuring that all Codecs
 return 1 for DOCS_ONLY.

+1

So freq() is undefined if you had passed FLAG_NONE, and we will lie
and say freq=1 (need a test verifying this) if the field was indexed
as DOCS_ONLY.

 If in the future we'll need to handle the case where someone receives a
 DocsEnum which it needs to consume, and doesn't know which flags were used
 to open it, we can always add a getFlags to DE.

Yeah ...

Mike McCandless

http://blog.mikemccandless.com




[jira] [Created] (SOLR-4210) if couldn't find the collection locally when searching, we should look on other nodes

2012-12-18 Thread Po Rui (JIRA)
Po Rui created SOLR-4210:


 Summary:  if couldn't find the collection locally when searching, 
we should look on other nodes
 Key: SOLR-4210
 URL: https://issues.apache.org/jira/browse/SOLR-4210
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0, 4.0-BETA
Reporter: Po Rui
Priority: Critical
 Fix For: 4.0, 4.0-BETA


Searching only checks the local collection or core; it doesn't look on 
other nodes. E.g., a cluster has 4 nodes: nodes 1, 2, and 3 serve 
collection1, and nodes 2, 3, and 4 serve collection2. Sending a query for 
collection1 to node 4 will fail. 
This is an imperfect part of searching; it is a TODO in 
SolrDispatchFilter (line 220).
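The proposed fallback can be sketched schematically. This is a toy router, not actual SolrDispatchFilter code; the class, methods, and the cluster-state map are all invented for illustration:

```java
import java.util.Map;
import java.util.Set;

// Toy sketch of the proposed fallback: if the local node does not host the
// requested collection, pick another node that does instead of failing.
// NOT SolrDispatchFilter code -- all names here are made up.
class CollectionRouter {
    private final String localNode;
    private final Map<String, Set<String>> nodeToCollections;  // cluster state

    CollectionRouter(String localNode, Map<String, Set<String>> state) {
        this.localNode = localNode;
        this.nodeToCollections = state;
    }

    /** Node that should serve the query, or null if nobody hosts the collection. */
    String route(String collection) {
        if (nodeToCollections.getOrDefault(localNode, Set.of()).contains(collection)) {
            return localNode;  // serve locally, as today
        }
        // Proposed behavior: look on the other nodes instead of failing.
        for (Map.Entry<String, Set<String>> e : nodeToCollections.entrySet()) {
            if (e.getValue().contains(collection)) {
                return e.getKey();  // forward the request here
            }
        }
        return null;  // the collection exists nowhere: a genuine miss
    }

    public static void main(String[] args) {
        Map<String, Set<String>> state = Map.of(
            "node1", Set.of("collection1"),
            "node2", Set.of("collection1", "collection2"),
            "node3", Set.of("collection1", "collection2"),
            "node4", Set.of("collection2"));
        // node4 does not host collection1; the query gets forwarded.
        System.out.println(new CollectionRouter("node4", state).route("collection1"));
    }
}
```

In the example, a query for collection1 arriving at node4 would be forwarded to one of the nodes that host it instead of failing, which is the behavior the issue asks for.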




[jira] [Updated] (SOLR-4210) if couldn't find the collection locally when searching, we should look on other nodes. one of TODOs part in SolrDispatchFilter

2012-12-18 Thread Po Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Po Rui updated SOLR-4210:
-

Summary:  if couldn't find the collection locally when searching, we should 
look on other nodes. one of TODOs part in SolrDispatchFilter  (was:  if 
couldn't find the collection locally when searching, we should look on other 
nodes)

  if couldn't find the collection locally when searching, we should look on 
 other nodes. one of TODOs part in SolrDispatchFilter
 ---

 Key: SOLR-4210
 URL: https://issues.apache.org/jira/browse/SOLR-4210
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Priority: Critical
 Fix For: 4.0-BETA, 4.0






[jira] [Updated] (SOLR-4210) if couldn't find the collection locally when searching, we should look on other nodes. one of TODOs part in SolrDispatchFilter

2012-12-18 Thread Po Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Po Rui updated SOLR-4210:
-

Attachment: SOLR-4210.patch

  if couldn't find the collection locally when searching, we should look on 
 other nodes. one of TODOs part in SolrDispatchFilter
 ---

 Key: SOLR-4210
 URL: https://issues.apache.org/jira/browse/SOLR-4210
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA, 4.0
Reporter: Po Rui
Priority: Critical
 Fix For: 4.0-BETA, 4.0

 Attachments: SOLR-4210.patch






[jira] [Commented] (SOLR-4209) asm-3.1.jar is missing

2012-12-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534841#comment-13534841
 ] 

Robert Muir commented on SOLR-4209:
---

I don't think we should do this.

We are using asm 4.0 elsewhere; we cannot also depend on 3.1.

 asm-3.1.jar is missing
 --

 Key: SOLR-4209
 URL: https://issues.apache.org/jira/browse/SOLR-4209
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 4.0
Reporter: Shinichiro Abe
Priority: Minor
 Attachments: SOLR-4209.patch


 One of Tika's dependency files is missing in Solr 4.0. 
 When posting Java class files to Solr via SolrCell, those files can't be 
 indexed without asm-3.1.jar.




[jira] [Commented] (SOLR-4209) asm-3.1.jar is missing

2012-12-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534849#comment-13534849
 ] 

Robert Muir commented on SOLR-4209:
---

Reference discussion: LUCENE-4263

 asm-3.1.jar is missing
 --

 Key: SOLR-4209
 URL: https://issues.apache.org/jira/browse/SOLR-4209
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 4.0
Reporter: Shinichiro Abe
Priority: Minor
 Attachments: SOLR-4209.patch


 One of Tika's dependency files is missing in Solr 4.0. 
 When posting Java class files into Solr via SolrCell, those files can't be 
 indexed without asm-3.1.jar.




[jira] [Commented] (LUCENE-4258) Incremental Field Updates through Stacked Segments

2012-12-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534859#comment-13534859
 ] 

Michael McCandless commented on LUCENE-4258:


{quote}
bq. Are stored fields now sparse? Meaning if I have a segment w/ many docs, and 
I update stored fields on one doc, in that tiny stacked segments will the 
stored fields files also be tiny?

In such case you will get the equivalent of a segment with multiple docs with 
only one of them containing stored fields. I assume the impls of stored fields 
handle these cases well and you will indeed get tiny stored fields.
{quote}

You're right, this is up to the codec ... hmm, but the API isn't sparse (you have
to .addDocument 1M times to skip over 1M docs, right?), and I'm not sure how well
our current default (Lucene41StoredFieldsFormat) handles it.  Have you tested it?

bq. Regarding the API - I made some cleanup, and removed also 
Operation.ADD_DOCUMENT. Now there is only one way to perform each operation, 
and updateFields only allows you to add or replace fields given a term.

OK thanks!

{quote}
bq. This means you cannot reuse fields, you have to be careful with 
pre-tokenized fields (can't reuse the TokenStream), etc.

This is referred in the Javadoc of updateFields, let me know if there's a 
better way to address it.
{quote}

Maybe also state that one cannot reuse Field instances, since the
Field may not be actually consumed until some later time (we should
be vague since this really is an implementation detail).

bq. As for the heavier questions. NRT support should be considered separately, 
but the guideline I followed was to keep things as closely as possible to the 
way deletions are handled. Therefore, we need to add to SegmentReader a field 
named liveUpdates - an equivalent to liveDocs. I already put a TODO for this 
(SegmentReader line 131), implementing it won't be simple...

OK ... yeah it's not simple!

bq. The performance tradeoff you are rightfully concerned about should be 
handled through merging. Once you merge an updated segment all updates are 
cleaned, and the new segment has no performance issues. Apps that perform 
updates should make sure (through MergePolicy) to avoid reader-side updates as 
much as possible.

Merging is very important.  Hmm, are we able to just merge all updates
down to a single update?  Ie, without merging the base segment?  We
can't express that today from MergePolicy right?  In an NRT setting
this seems very important (ie it'd be best bang (= improved search
performance) for the buck (= merge cost)).

I suspect we need to do something with merging before committing
here.

Hmm I see that
StackedTerms.size()/getSumTotalTermFreq()/getSumDocFreq() pulls a
TermsEnum and goes and counts/aggregates all terms ... which in
general is horribly costly?  EG these methods are called per-query to
setup the Sim for scoring ... I think we need another solution here
(not sure what).  Also getDocCount() just returns -1 now ... maybe we
should only allow updates against DOCS_ONLY/omitsNorms fields for now?

Have you done any performance tests on biggish indices?

I think we need a test that indexes a known (randomly generated) set
of documents, randomly sometimes using add and sometimes using
update/replace field, mixing in deletes (just like TestField.addDocuments()),
for the first index, and for the second index only using addDocument
on the surviving documents, and then we assertIndexEquals(...) in the
end?  Maybe we can factor out code from TestDuelingCodecs or
TestStressIndexing2.

Where do we account for the RAM used by these buffered updates?  I see
BufferedUpdates.addTerm has some accounting the first time it sees a
given term, but where do we actually add in the RAM used by the
FieldsUpdate itself?


 Incremental Field Updates through Stacked Segments
 --

 Key: LUCENE-4258
 URL: https://issues.apache.org/jira/browse/LUCENE-4258
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Sivan Yogev
 Attachments: IncrementalFieldUpdates.odp, 
 LUCENE-4258-API-changes.patch, LUCENE-4258.r1410593.patch, 
 LUCENE-4258.r1412262.patch, LUCENE-4258.r1416438.patch, 
 LUCENE-4258.r1416617.patch, LUCENE-4258.r1422495.patch, 
 LUCENE-4258.r1423010.patch

   Original Estimate: 2,520h
  Remaining Estimate: 2,520h

 Shai and I would like to start working on the proposal to Incremental Field 
 Updates outlined here (http://markmail.org/message/zhrdxxpfk6qvdaex).


[jira] [Commented] (SOLR-1337) Spans and Payloads Query Support

2012-12-18 Thread Dmitry Kan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534860#comment-13534860
 ] 

Dmitry Kan commented on SOLR-1337:
--

[~janhoy] Jan: we implemented a new operator for Lucene/Solr 3.4 that does 
exactly what you describe; see: 
https://issues.apache.org/jira/browse/LUCENE-3758?focusedCommentId=13207710page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13207710

If you or anyone else needs the patch, just let me know.

 Spans and Payloads Query Support
 

 Key: SOLR-1337
 URL: https://issues.apache.org/jira/browse/SOLR-1337
 Project: Solr
  Issue Type: New Feature
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 4.1


 It would be really nice to have query side support for: Spans and Payloads.  
 The main ingredient missing at this point is QueryParser support and a output 
 format for the spans and the payload spans.




[jira] [Commented] (SOLR-4209) asm-3.1.jar is missing

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534861#comment-13534861
 ] 

Uwe Schindler commented on SOLR-4209:
-

We are on asm 4.1 already!

 asm-3.1.jar is missing
 --

 Key: SOLR-4209
 URL: https://issues.apache.org/jira/browse/SOLR-4209
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 4.0
Reporter: Shinichiro Abe
Priority: Minor
 Attachments: SOLR-4209.patch


 One of Tika's dependency files is missing in Solr 4.0. 
 When posting Java class files into Solr via SolrCell, those files can't be 
 indexed without asm-3.1.jar.




[jira] [Commented] (SOLR-4209) asm-3.1.jar is missing

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534863#comment-13534863
 ] 

Uwe Schindler commented on SOLR-4209:
-

In general, the Tika extraction contrib only ships with the JARs needed to extract 
text documents. Indexing Java class files with Solr is somewhat unusual, especially 
as the metadata extracted is very limited. If you want to do this, you can 
always place the missing JAR files in the lib folder of your Solr installation. 
We are also missing other JAR files, like NetCDF support (at least in 3.6, 
because NetCDF needs Java 6, but Lucene 3.x is Java 5 only).

 asm-3.1.jar is missing
 --

 Key: SOLR-4209
 URL: https://issues.apache.org/jira/browse/SOLR-4209
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 4.0
Reporter: Shinichiro Abe
Priority: Minor
 Attachments: SOLR-4209.patch


 One of Tika's dependency files is missing in Solr 4.0. 
 When posting Java class files into Solr via SolrCell, those files can't be 
 indexed without asm-3.1.jar.




[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534866#comment-13534866
 ] 

Uwe Schindler commented on SOLR-4205:
-

The latest jenkins-nightly build succeeded. jenkins-clover is currently 
running...

 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss

 I had to kill ASF Jenkins Clover builds two times after several 10 hours of 
 inactivity in a random Solr test. I requested a stack trace before killing 
 the only running JVM (clover runs with one JVM only, because clover does not 
 like multiple processes writing the same clover metrics file).
 In both cases (4.x and trunk) the stack trace was looking identical after 
 sending kill -3...
 https://builds.apache.org/job/Lucene-Solr-Clover-trunk/76/consoleFull 
 (yesterday):
 {noformat}
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:01:00, stalled for 28447s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:02:00, stalled for 28507s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:03:00, stalled for 28567s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] JVM J0: stdout was not empty, see: 
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/solr/build/solr-core/test/temp/junit4-J0-20121216_044733_583.sysout
 [junit4:junit4]  JVM J0: stdout (verbatim) 
 [junit4:junit4] 2012-12-16 13:03:49
 [junit4:junit4] Full thread dump OpenJDK 64-Bit Server VM (20.0-b12 mixed 
 mode):
 [junit4:junit4] 
 [junit4:junit4] searcherExecutor-2577-thread-1 prio=5 
 tid=0x00085eb67000 nid=0x61c105b waiting on condition [0x70b0d000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x0008178c9c40 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 [junit4:junit4]   at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 [junit4:junit4]   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 [junit4:junit4]   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI TCP Accept-0 daemon prio=5 tid=0x000840ce2800 
 nid=0x61c0aa2 runnable [0x79496000]
 [junit4:junit4]java.lang.Thread.State: RUNNABLE
 [junit4:junit4]   at java.net.PlainSocketImpl.socketAccept(Native Method)
 [junit4:junit4]   at 
 java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:375)
 [junit4:junit4]   at 
 java.net.ServerSocket.implAccept(ServerSocket.java:470)
 [junit4:junit4]   at java.net.ServerSocket.accept(ServerSocket.java:438)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:387)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:359)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI Scheduler(0) daemon prio=5 tid=0x000840ce1000 
 nid=0x61c0969 waiting on condition [0x70f11000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x000814f12f88 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 [junit4:junit4]   at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 [junit4:junit4]   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 [junit4:junit4]   at 
 java.util.concurrent.DelayQueue.take(DelayQueue.java:189)
 [junit4:junit4]   at 
 

[jira] [Created] (LUCENE-4634) PackedInts: streaming API that supports variable numbers of bits per value

2012-12-18 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-4634:


 Summary: PackedInts: streaming API that supports variable numbers 
of bits per value
 Key: LUCENE-4634
 URL: https://issues.apache.org/jira/browse/LUCENE-4634
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


It could be convenient to have a streaming API (writers and iterators, no 
random access) that supports variable numbers of bits per value. Although this 
would be much slower than the current fixed-size APIs, it could help save bytes 
in our codec formats.

The API could look like:
{code}
Iterator {
  long next(int bitsPerValue);
}

Writer {
  void write(long value, int bitsPerValue); // assert PackedInts.bitsRequired(value) <= bitsPerValue;
}
{code}
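As a thought experiment, such a writer/iterator pair could be sketched as below. This is a hypothetical, self-contained illustration of variable-width bit packing (big-endian within a byte stream), not Lucene's actual PackedInts implementation; the class and method names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of a streaming packed-ints API with variable bits per value. */
public class VarPackedInts {

    /** Packs each value into a caller-specified number of bits, MSB first. */
    static class Writer {
        private final List<Byte> bytes = new ArrayList<>();
        private int cur = 0, bitsUsed = 0;

        void write(long value, int bitsPerValue) {
            assert bitsPerValue > 0 && bitsPerValue <= 64;
            // equivalent of: PackedInts.bitsRequired(value) <= bitsPerValue
            assert bitsPerValue == 64 || value < (1L << bitsPerValue);
            for (int i = bitsPerValue - 1; i >= 0; i--) {
                cur = (cur << 1) | (int) ((value >>> i) & 1);
                if (++bitsUsed == 8) {           // flush a full byte
                    bytes.add((byte) cur);
                    cur = 0;
                    bitsUsed = 0;
                }
            }
        }

        /** Pads the last partial byte with zero bits and returns the packed stream. */
        byte[] finish() {
            if (bitsUsed > 0) {
                bytes.add((byte) (cur << (8 - bitsUsed)));
            }
            byte[] out = new byte[bytes.size()];
            for (int i = 0; i < out.length; i++) out[i] = bytes.get(i);
            return out;
        }
    }

    /** Reads values back, provided the caller asks for the same widths in the same order. */
    static class Iterator {
        private final byte[] data;
        private int bitPos = 0;

        Iterator(byte[] data) { this.data = data; }

        long next(int bitsPerValue) {
            long v = 0;
            for (int i = 0; i < bitsPerValue; i++) {
                int bit = (data[bitPos >>> 3] >>> (7 - (bitPos & 7))) & 1;
                v = (v << 1) | bit;
                bitPos++;
            }
            return v;
        }
    }

    public static void main(String[] args) {
        Writer w = new Writer();
        w.write(5, 3);    // 101
        w.write(1, 1);    // 1
        w.write(300, 9);  // 100101100
        byte[] packed = w.finish();   // 13 bits -> 2 bytes
        Iterator it = new Iterator(packed);
        System.out.println(it.next(3)); // 5
        System.out.println(it.next(1)); // 1
        System.out.println(it.next(9)); // 300
    }
}
```

Note how three values of widths 3, 1, and 9 bits fit in two bytes, which is the space saving the fixed-size API cannot offer when widths vary per value.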




[jira] [Commented] (SOLR-4208) Refactor edismax query parser

2012-12-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534883#comment-13534883
 ] 

Tomás Fernández Löbbe commented on SOLR-4208:
-

Does it fail with an assertion or an exception? It runs OK for me, with or 
without the patch. I'm running trunk on Mac OS X with Java 6.

 Refactor edismax query parser
 -

 Key: SOLR-4208
 URL: https://issues.apache.org/jira/browse/SOLR-4208
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4208.patch


 With successive changes, the edismax query parser has become more complex. It 
 would be nice to refactor it to reduce code complexity, also to allow better 
 extension and code reuse.




[jira] [Commented] (SOLR-4208) Refactor edismax query parser

2012-12-18 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534888#comment-13534888
 ] 

Markus Jelsma commented on SOLR-4208:
-

I am on trunk too. I get some exceptions like:

{code}
[junit4:junit4]   2 5475 T10 C0 oasc.SolrException.log SEVERE 
org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: Field 
aliases lead to a cycle
...
{code}

and,

{code}
[junit4:junit4]   2 6288 T10 oasc.SolrException.log SEVERE 
java.lang.NullPointerException
[junit4:junit4]   2at 
org.apache.solr.handler.component.HttpShardHandlerFactory.close(HttpShardHandlerFactory.java:170)
{code}

But those don't fail the unit test. testAliasingBoost, however, is marked as failed:

{code}
[junit4:junit4] Tests with failures:
[junit4:junit4]   - 
org.apache.solr.search.TestExtendedDismaxParser.testAliasingBoost
{code}

{code}
  <testcase classname="org.apache.solr.search.TestExtendedDismaxParser" 
name="testAliasingBoost" time="0.189">
    <error message="Exception during query" 
type="java.lang.RuntimeException">java.lang.RuntimeException: Exception during 
query
at 
__randomizedtesting.SeedInfo.seed([9B33524C2584B3F3:57A2A7EB2388F581]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:515)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:482)
at 
org.apache.solr.search.TestExtendedDismaxParser.testAliasingBoost(TestExtendedDismaxParser.java:507)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

Could we get SOLR-3926 submitted?

2012-12-18 Thread Eirik Lygre
A little over a month ago I submitted a patch under the issue
https://issues.apache.org/jira/browse/SOLR-3926. After comments from Yonik
and Hoss it was significantly rewritten, and the latest version was submitted
on 4 December.

Is there anything I can do to help complete the process of committing this
patch?

-- 
Eirik

There is no high like a tango high
There is no low like a tango low


[jira] [Commented] (SOLR-3972) Missing admin-extra files result in SEVERE log entries with giant stacktrace

2012-12-18 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534955#comment-13534955
 ] 

Shawn Heisey commented on SOLR-3972:


There is a workaround.  Just create zero-byte files in the conf directory with 
these names:

admin-extra.html
admin-extra.menu-bottom.html
admin-extra.menu-top.html

The Unix touch command works great for this.
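For example (the conf directory path varies by install, so treat it as an assumption and run this from your core's conf directory, e.g. solr/collection1/conf):

```shell
# Create the three empty admin-extra files the UI looks for
touch admin-extra.html admin-extra.menu-bottom.html admin-extra.menu-top.html
```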

 Missing admin-extra files result in SEVERE log entries with giant stacktrace
 

 Key: SOLR-3972
 URL: https://issues.apache.org/jira/browse/SOLR-3972
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 4.0, 4.1
 Environment: Linux bigindy5 2.6.32-279.9.1.el6.centos.plus.x86_64 #1 
 SMP Wed Sep 26 03:52:55 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.7.0_07
 Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
 Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)
Reporter: Shawn Heisey
 Fix For: 4.1


 Missing admin-extra files result in SEVERE log entries with giant stacktrace.
 If a log entry is warranted at all, it should just be a one-line warning.




[jira] [Commented] (SOLR-4209) asm-3.1.jar is missing

2012-12-18 Thread Shinichiro Abe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534960#comment-13534960
 ] 

Shinichiro Abe commented on SOLR-4209:
--

Thank you for the reply.

ManifoldCF can crawl a file server that stores various kinds of files, including 
.class files, and post them to Solr. As ManifoldCF users, we would like to search 
text extracted not only from general files but also from class files. That's why 
I requested this.

Currently, because part of Tika's dependencies is missing from Solr out of the 
box, Solr returns a 500, and that server error aborts ManifoldCF's crawl after 
some retries. Of course I can place the jar files manually, but I think asm is 
needed in the lib folder by default. Tika 1.2 depends on asm-3.1. Are conflicting 
versions of the same jar a problem?


 asm-3.1.jar is missing
 --

 Key: SOLR-4209
 URL: https://issues.apache.org/jira/browse/SOLR-4209
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 4.0
Reporter: Shinichiro Abe
Priority: Minor
 Attachments: SOLR-4209.patch


 One of Tika's dependency files is missing in Solr 4.0. 
 When posting Java class files into Solr via SolrCell, those files can't be 
 indexed without asm-3.1.jar.




[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #711: POMs out of sync

2012-12-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/711/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryZkTest.testDistribSearch

Error Message:
expected:<222> but was:<105>

Stack Trace:
java.lang.AssertionError: expected:<222> but was:<105>
at 
__randomizedtesting.SeedInfo.seed([7600AB46414D289E:F7E6255E361248A2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.RecoveryZkTest.doTest(RecoveryZkTest.java:99)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:794)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534974#comment-13534974
 ] 

Uwe Schindler commented on SOLR-4205:
-

With the raised PermGen the clover build also succeeded. Most interesting: the 
coverage got back to 80%. I assume the change to SocketConnector in Jetty is 
causing this, making the tests on FreeBSD succeed and so raising the coverage 
instead of timing out. I assume testDistributedSearch tests almost every single 
line of code :-)

Mark: We should at least make sure testDistributedSearch does not OOM in the 
nightly builds with a multiplier of 3. I would like to revert the changes in the 
jenkins nightly task because they are not cross-JVM portable.

 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss

 I had to kill ASF Jenkins Clover builds two times after several 10 hours of 
 inactivity in a random Solr test. I requested a stack trace before killing 
 the only running JVM (clover runs with one JVM only, because clover does not 
 like multiple processes writing the same clover metrics file).
 In both cases (4.x and trunk) the stack trace was looking identical after 
 sending kill -3...
 https://builds.apache.org/job/Lucene-Solr-Clover-trunk/76/consoleFull 
 (yesterday):
 {noformat}
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:01:00, stalled for 28447s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:02:00, stalled for 28507s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:03:00, stalled for 28567s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] JVM J0: stdout was not empty, see: 
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/solr/build/solr-core/test/temp/junit4-J0-20121216_044733_583.sysout
 [junit4:junit4]  JVM J0: stdout (verbatim) 
 [junit4:junit4] 2012-12-16 13:03:49
 [junit4:junit4] Full thread dump OpenJDK 64-Bit Server VM (20.0-b12 mixed 
 mode):
 [junit4:junit4] 
 [junit4:junit4] searcherExecutor-2577-thread-1 prio=5 
 tid=0x00085eb67000 nid=0x61c105b waiting on condition [0x70b0d000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x0008178c9c40 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 [junit4:junit4]   at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 [junit4:junit4]   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 [junit4:junit4]   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI TCP Accept-0 daemon prio=5 tid=0x000840ce2800 
 nid=0x61c0aa2 runnable [0x79496000]
 [junit4:junit4]java.lang.Thread.State: RUNNABLE
 [junit4:junit4]   at java.net.PlainSocketImpl.socketAccept(Native Method)
 [junit4:junit4]   at 
 java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:375)
 [junit4:junit4]   at 
 java.net.ServerSocket.implAccept(ServerSocket.java:470)
 [junit4:junit4]   at java.net.ServerSocket.accept(ServerSocket.java:438)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:387)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:359)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI Scheduler(0) daemon prio=5 tid=0x000840ce1000 
 nid=0x61c0969 waiting on condition [0x70f11000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x000814f12f88 (a 
 

[jira] [Commented] (SOLR-4208) Refactor edismax query parser

2012-12-18 Thread Tomás Fernández Löbbe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534979#comment-13534979
 ] 

Tomás Fernández Löbbe commented on SOLR-4208:
-

I can see the "Field aliases lead to a cycle" exceptions. Those are generated 
by the test testCyclicAliasing() and are expected exceptions (maybe the bad 
thing is that they are being logged).
I also see the NPE; that seems to be generated when finishing the whole test, 
when shutting down the core. 
I can't see the failure in testAliasingBoost yet, even when trying with your 
seed.

I'll continue looking into this.

 Refactor edismax query parser
 -

 Key: SOLR-4208
 URL: https://issues.apache.org/jira/browse/SOLR-4208
 Project: Solr
  Issue Type: Improvement
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4208.patch


 With successive changes, the edismax query parser has become more complex. It 
 would be nice to refactor it to reduce code complexity, also to allow better 
 extension and code reuse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534990#comment-13534990
 ] 

Mark Miller commented on SOLR-4205:
---

I think it succeeded for a few reasons:

1. We switched back to using the socket connector on freebsd - some runs 
started to pass, but many, especially nightly and maven ones, still failed.

2. I broke up the basic zk test that was using so much perm space into 2 tests 
and took out the atLeasts that may have been making it create more cores than 
could be handled.

3. I stopped trying to do workarounds for the black hole and added timeouts for 
pretty much every call we do (still no timeouts in prod, but tests override to 
add them). This is part of the "make tests work with black hole" issue I 
opened.

Between the three, we are getting closer. It has dislodged some new failures, so 
there is still some work to do, but we are getting there.

 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss


[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534994#comment-13534994
 ] 

Uwe Schindler commented on SOLR-4205:
-

When did you commit that? Because the new runs for clover and nightly were 
passing after my commit raising permgen, so should I revert that to try out?


 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss


[jira] [Updated] (SOLR-4080) SolrJ: CloudSolrServer atomic updates doesn´t work with Lists/Arrays (Objects, in general).

2012-12-18 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-4080:


Attachment: SOLR-4080.patch

Here's a patch (branch_4x) to reproduce the problem.

I'm working on the fix.

 SolrJ: CloudSolrServer atomic updates doesn´t work with Lists/Arrays 
 (Objects, in general).
 ---

 Key: SOLR-4080
 URL: https://issues.apache.org/jira/browse/SOLR-4080
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0
 Environment: Solr 4.0 with SolrCloud deployed with two SolrServers 
 with shards=1. solr-solrj artifact version 4.0.0 is used to execute atomic 
 update operations.
Reporter: Luis Cappa Banda
Assignee: Shalin Shekhar Mangar
 Fix For: 4.1

 Attachments: SOLR-4080.patch


 Atomic updates with a CloudSolrServer object instance don't work properly. 
 - Code snippet:
 // CloudSolrServer instance.
 LBHttpSolrServer lbSolrServer = new LBHttpSolrServer(solrEndpoints);
 CloudSolrServer cloudSolrServer = new CloudSolrServer(zookeeperEndpoints, 
 lbSolrServer);
 // SolrInputDocument to update: 
 SolrInputDocument doc = new SolrInputDocument();
 doc.addField("id", myId);
 Map<String, List<String>> operation = new HashMap<String, List<String>>();
 operation.put("set", [[a list of String elements]]);  // I want a "set" 
 operation to override field values.
 doc.addField("fieldName", operation);
 // Atomic update operation.
 cloudSolrServer.add(doc); 
 - Result:
 doc: {
 "id": "myId",
 "fieldName": [ "{set=values}" ],
 ...
 }
 - Changing the map in the snippet to Map operation = new HashMap() instead of 
 Map<String, List<String>> operation = new HashMap<String, List<String>>() 
 obtains the following result after the atomic update:
 doc: {
 "id": "myId",
 "fieldName": [ ["Value1", "Value2"] ],
 ...
 }
 - Also, the old value is never erased; instead of a "set" operation, an 
 "add" operation happens.
 CONCLUSION: during an atomic update with CloudSolrServer, the 
 List/Array/Object value passed is being processed with just a toString() 
 method.
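For reference, the structure SolrJ expects for an atomic update is a plain Map 
keyed by the operation name. A minimal sketch of just that Map shape (the class 
and names are illustrative; the actual SolrJ `addField`/`add` calls from the 
report are only shown in comments):

```java
import java.util.*;

// Sketch of the value handed to SolrInputDocument.addField for an
// atomic update: a Map whose key names the operation ("set", "add",
// "inc") and whose value is the new content. The typed declaration
// Map<String, Object> is what the mailing-list snippet lost in transit.
public class AtomicUpdateShape {
    // Builds a "set" operation that replaces a multi-valued field.
    static Map<String, Object> setOperation(List<String> newValues) {
        Map<String, Object> op = new HashMap<String, Object>();
        op.put("set", newValues);
        return op;
    }

    public static void main(String[] args) {
        Map<String, Object> op = setOperation(Arrays.asList("Value1", "Value2"));
        // In real SolrJ code this map is the field value:
        //   doc.addField("fieldName", op);
        //   cloudSolrServer.add(doc);
        System.out.println(op);
    }
}
```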




[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535011#comment-13535011
 ] 

Mark Miller commented on SOLR-4205:
---

#2 I committed like yesterday. #3 might have been the day before.

Probably worth trying at the lower perm gen to see if we should reduce the test 
any more.

 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss


[jira] [Created] (LUCENE-4635) ArrayIndexOutOfBoundsException when a segment has many, many terms

2012-12-18 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-4635:
--

 Summary: ArrayIndexOutOfBoundsException when a segment has many, 
many terms
 Key: LUCENE-4635
 URL: https://issues.apache.org/jira/browse/LUCENE-4635
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless


Spinoff from Tom Burton-West's java-user thread CheckIndex 
ArrayIndexOutOfBounds error for merged index ( 
http://markmail.org/message/fatijkotwucn7hvu ).

I modified Test2BTerms to instead generate a little over 10B terms, ran it 
(took 17 hours and created a 162 GB index) and hit a similar exception:

{noformat}
Time: 62,164.058
There was 1 failure:
1) test2BTerms(org.apache.lucene.index.Test2BTerms)
java.lang.ArrayIndexOutOfBoundsException: 1246
at 
org.apache.lucene.index.TermInfosReaderIndex.compareField(TermInfosReaderIndex.java:249)
at 
org.apache.lucene.index.TermInfosReaderIndex.compareTo(TermInfosReaderIndex.java:225)
at 
org.apache.lucene.index.TermInfosReaderIndex.getIndexOffset(TermInfosReaderIndex.java:156)
at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:232)
at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:172)
at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:539)
at 
org.apache.lucene.search.TermQuery$TermWeight$1.add(TermQuery.java:56)
at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:81)
at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:87)
at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:70)
at 
org.apache.lucene.search.TermQuery$TermWeight.init(TermQuery.java:53)
at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:199)
at 
org.apache.lucene.search.Searcher.createNormalizedWeight(Searcher.java:168)
at 
org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:664)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
at 
org.apache.lucene.index.Test2BTerms.testSavedTerms(Test2BTerms.java:205)
at org.apache.lucene.index.Test2BTerms.test2BTerms(Test2BTerms.java:154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{noformat}

The index actually succeeded building and optimizing, but it was only when we 
went to run searches of the random terms we collected along the way that the 
AIOOBE was hit.

I suspect this is a bug somewhere in the compact in-RAM terms index ... I'll 
dig.
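The failure mode is consistent with 32-bit offset arithmetic wrapping once the 
in-RAM terms data passes Integer.MAX_VALUE bytes. A hypothetical sketch (not 
the actual PagedBytes/TermInfosReaderIndex code; block size and names are 
assumptions) of why the block index must be derived from a long offset:

```java
// Hypothetical sketch of paged-bytes addressing. With 2^15-byte blocks,
// a global offset past 2 GB no longer fits in an int, so the block
// index and in-block offset must be computed from a long; casting the
// offset to int first silently wraps negative and indexes out of bounds.
public class PagedOffset {
    static final int BLOCK_BITS = 15;               // 32 KB blocks
    static final int BLOCK_MASK = (1 << BLOCK_BITS) - 1;

    // Correct: keep the offset as a long until after the shift.
    static int blockIndex(long offset) {
        return (int) (offset >>> BLOCK_BITS);
    }

    static int blockOffset(long offset) {
        return (int) (offset & BLOCK_MASK);
    }

    public static void main(String[] args) {
        long big = 3L * 1024 * 1024 * 1024;         // 3 GB, past int range
        System.out.println(blockIndex(big));        // 98304
        int wrapped = (int) big;                    // the buggy pattern
        System.out.println(wrapped < 0);            // true: wrapped negative
    }
}
```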




Solr Bug/Improvement: No-op and no error for Solr /update if no application/xml header

2012-12-18 Thread Jack Krupansky
I don't get any error or any effect from this curl command:

curl "http://localhost:8983/solr/update?commit=true" --data-binary 
'<delete><query>sku:td-01</query></delete>'

But if I add the XML header, it works fine:

curl "http://localhost:8983/solr/update?commit=true" -H "Content-Type: 
application/xml" --data-binary '<delete><query>sku:td-01</query></delete>'

It would be nice if Solr defaulted to application/xml, but a friendly error 
return would be better than a no-op in this case.

FWIW, curl -v shows this header being sent if I don't specify it explicitly:

Content-Type: application/x-www-form-urlencoded

-- Jack Krupansky
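A sketch of the moving parts here: the delete-by-query body the /update handler 
expects, and the header that routes it to the XML loader. The helper class is 
illustrative, not part of Solr; only the XML shape and header value come from 
the commands above.

```java
// Illustrative helper: builds the delete-by-query XML body that Solr's
// /update handler parses, and records the Content-Type header needed so
// the body is treated as XML rather than url-encoded form data.
public class DeleteByQuery {
    static final String CONTENT_TYPE = "Content-Type: application/xml";

    static String deleteXml(String query) {
        return "<delete><query>" + query + "</query></delete>";
    }

    public static void main(String[] args) {
        System.out.println(deleteXml("sku:td-01"));
        // Sent as, e.g.:
        // curl "http://localhost:8983/solr/update?commit=true" \
        //   -H "Content-Type: application/xml" \
        //   --data-binary "<delete><query>sku:td-01</query></delete>"
    }
}
```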

[jira] [Commented] (LUCENE-4599) Compressed term vectors

2012-12-18 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535030#comment-13535030
 ] 

Shawn Heisey commented on LUCENE-4599:
--

With the 4.1 release triage likely coming soon, I am wondering if this is ready 
to make the cut or if it needs more work.

 Compressed term vectors
 ---

 Key: LUCENE-4599
 URL: https://issues.apache.org/jira/browse/LUCENE-4599
 Project: Lucene - Core
  Issue Type: Task
  Components: core/codecs, core/termvectors
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.1

 Attachments: LUCENE-4599.patch


 We should have codec-compressed term vectors similarly to what we have with 
 stored fields.




[jira] [Updated] (LUCENE-4635) ArrayIndexOutOfBoundsException when a segment has many, many terms

2012-12-18 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4635:
---

Attachment: LUCENE-4635.patch

I suspect this fixes the issue ... at least CheckIndex on my 162 GB index is 
getting beyond where it failed previously.

I'll make a separate Test2BPagedBytes test!

 ArrayIndexOutOfBoundsException when a segment has many, many terms
 --

 Key: LUCENE-4635
 URL: https://issues.apache.org/jira/browse/LUCENE-4635
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: LUCENE-4635.patch


 Spinoff from Tom Burton-West's java-user thread CheckIndex 
 ArrayIndexOutOfBounds error for merged index ( 
 http://markmail.org/message/fatijkotwucn7hvu ).




[jira] [Updated] (SOLR-4061) CREATE action in Collections API should allow to upload a new configuration

2012-12-18 Thread Tomás Fernández Löbbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-4061:


Fix Version/s: 5.0
   4.1

 CREATE action in Collections API should allow to upload a new configuration
 ---

 Key: SOLR-4061
 URL: https://issues.apache.org/jira/browse/SOLR-4061
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Tomás Fernández Löbbe
Priority: Minor
 Fix For: 4.1, 5.0

 Attachments: SOLR-4061.patch


 When creating new collections with the Collection API, the only option is to 
 point to an existing configuration in ZK. It would be nice to be able to 
 upload a new configuration in the same command. 
 For more details see 
 http://lucene.472066.n3.nabble.com/Error-with-SolrCloud-td4019351.html




[jira] [Commented] (SOLR-4202) Relax rules around accepting updates when not connected to zookeeper.

2012-12-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535061#comment-13535061
 ] 

Yonik Seeley commented on SOLR-4202:


Tricky... it's hard to figure out if this will increase failure scenarios.

What if we lose connectivity and miss a few updates?
Then we get connectivity back, accept a bunch of updates (more than the window 
of recent updates we keep track of), then reconnect to ZK.
We do a recovery, compare recent updates, and conclude that we are up to date.

Aside: I thought the leader requested a replica to go into recovery if it 
returns a failure from an update?

 Relax rules around accepting updates when not connected to zookeeper.
 -

 Key: SOLR-4202
 URL: https://issues.apache.org/jira/browse/SOLR-4202
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0


 We are pretty tight about this currently - I think it might be a bit nicer if 
 we relax a little.
 Right now, as soon we realize we cannot talk to zookeeper, we stop accepting 
 updates in all cases.
 I think it might be better if we change that a bit for a non leader. It might 
 be nicer if it would still accept updates from the leader, but fail them. 
 This way, there is some chance that if the problem was simply a connection 
 loss with zookeeper, when the leader asks the replica to recover because it 
 failed the update, it's more likely to just take a peersync to catch up.
 Thoughts?




[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535076#comment-13535076
 ] 

Uwe Schindler commented on SOLR-4205:
-

The last failing run was last night, so I think we should maybe run the test 
suite locally with -Dtests.nightly and -Dtests.multiplicator=3 first before 
reverting parts of my commit. Thanks in any case!

 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss

 I had to kill ASF Jenkins Clover builds two times, after several tens of hours of 
 inactivity in a random Solr test. I requested a stack trace before killing 
 the only running JVM (clover runs with one JVM only, because clover does not 
 like multiple processes writing the same clover metrics file).
 In both cases (4.x and trunk) the stack trace was looking identical after 
 sending kill -3...
 https://builds.apache.org/job/Lucene-Solr-Clover-trunk/76/consoleFull 
 (yesterday):
 {noformat}
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:01:00, stalled for 28447s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:02:00, stalled for 28507s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:03:00, stalled for 28567s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] JVM J0: stdout was not empty, see: 
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/solr/build/solr-core/test/temp/junit4-J0-20121216_044733_583.sysout
 [junit4:junit4]  JVM J0: stdout (verbatim) 
 [junit4:junit4] 2012-12-16 13:03:49
 [junit4:junit4] Full thread dump OpenJDK 64-Bit Server VM (20.0-b12 mixed 
 mode):
 [junit4:junit4] 
 [junit4:junit4] searcherExecutor-2577-thread-1 prio=5 
 tid=0x00085eb67000 nid=0x61c105b waiting on condition [0x70b0d000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x0008178c9c40 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 [junit4:junit4]   at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 [junit4:junit4]   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 [junit4:junit4]   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI TCP Accept-0 daemon prio=5 tid=0x000840ce2800 
 nid=0x61c0aa2 runnable [0x79496000]
 [junit4:junit4]java.lang.Thread.State: RUNNABLE
 [junit4:junit4]   at java.net.PlainSocketImpl.socketAccept(Native Method)
 [junit4:junit4]   at 
 java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:375)
 [junit4:junit4]   at 
 java.net.ServerSocket.implAccept(ServerSocket.java:470)
 [junit4:junit4]   at java.net.ServerSocket.accept(ServerSocket.java:438)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:387)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:359)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI Scheduler(0) daemon prio=5 tid=0x000840ce1000 
 nid=0x61c0969 waiting on condition [0x70f11000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x000814f12f88 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 [junit4:junit4]   at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 [junit4:junit4]   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 [junit4:junit4]   at 
 

[jira] [Commented] (LUCENE-4599) Compressed term vectors

2012-12-18 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535079#comment-13535079
 ] 

Adrien Grand commented on LUCENE-4599:
--

Hey Shawn, I'm still working actively on this issue. I made good progress 
regarding compression ratio but term vectors are more complicated than stored 
fields (with lots of corner cases like negative start offsets, negative 
lengths, fields that don't always have the same options, etc.) so I will need 
time and lots of Jenkins builds to feel comfortable making it the default term 
vectors impl. It will depend on the 4.1 release schedule but given that it's 
likely to come rather soon and that I will have very little time to work on 
this issue until next month it will probably only make it to 4.2.

 Compressed term vectors
 ---

 Key: LUCENE-4599
 URL: https://issues.apache.org/jira/browse/LUCENE-4599
 Project: Lucene - Core
  Issue Type: Task
  Components: core/codecs, core/termvectors
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.1

 Attachments: LUCENE-4599.patch


 We should have codec-compressed term vectors similarly to what we have with 
 stored fields.




[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535082#comment-13535082
 ] 

Mark Miller commented on SOLR-4205:
---

That's weird - I don't think there is anything in the test that looks at the 
multiplier...so not sure how that would still matter.

bq. we should maybe run the test suite locally with -Dtests.nightly and 
-Dtests.multiplicator=3 first 

That actually passed for me a few days ago on my dev machine when I tried, so I 
don't think I can learn much on my machine.


[jira] [Comment Edited] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535082#comment-13535082
 ] 

Mark Miller edited comment on SOLR-4205 at 12/18/12 5:33 PM:
-

That's weird - I don't think there is anything in the test that looks at the 
multiplier (since I changed it)...so not sure how that would still matter.

bq. we should maybe run the test suite locally with -Dtests.nightly and 
-Dtests.multiplicator=3 first 

That actually passed for me a few days ago on my dev machine when I tried, so I 
don't think I can learn much on my machine.

  was (Author: markrmil...@gmail.com):
That's weird - I don't think there is anything in the test that looks at 
the multiplier...so not sure how that would still matter.

bq. we should maybe run the test suite locally with -Dtests.nightly and 
-Dtests.multiplicator=3 first 

That actually passed for me a few days ago on my dev machine when I tried, so I 
don't think I can learn much on my machine.
  

[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535086#comment-13535086
 ] 

Mark Miller commented on SOLR-4205:
---

Are you sure the failure yesterday was a permgen one?

 

[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535087#comment-13535087
 ] 

Uwe Schindler commented on SOLR-4205:
-

Yes! The Nightly one was a permgen failure: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/124/console
Maybe the size of permgen is different on different platforms.

Did you run *all* tests or only a selection? Permgen issues mostly happen when 
a test run does not allow GC to unload all classes, so it only happens when 
running all tests.


[jira] [Comment Edited] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535087#comment-13535087
 ] 

Uwe Schindler edited comment on SOLR-4205 at 12/18/12 5:40 PM:
---

Yes! The Nightly one was a permgen failure: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/124/console
Maybe the size of permgen is different on different platforms.

I have no idea if your patches were already committed; this run started yesterday 
at 16:00 UTC but hung after the permgen failure until this morning :( - I killed 
it then.

Did you run *all* tests or only a selection? Permgen issues mostly happen when 
a test run does not allow GC to unload all classes, so it only happens when 
running all tests.

  was (Author: thetaphi):
Yes! The Nightly one was a permgen failure: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/124/console
Maybe the size of permgen is different on different platforms.

Did you run *all* tests or only a selection. Permgen issues mostly happen when 
a test run does not allow GC to unload all classes, so it only happens when 
running all tests.
  

[jira] [Commented] (SOLR-4209) asm-3.1.jar is missing

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13535099#comment-13535099
 ] 

Uwe Schindler commented on SOLR-4209:
-

I think we should document which document types are supported by the Solr 
release out of the box (README file). Class files are definitely not a 
common use case for Solr indexing, so I disagree with including the parser 
dependencies. Solr is a text indexing server, so the primary list of parsers 
should be document parsers, not parsers for binary-only file formats without 
any useful text content for a full-text search engine.

I just repeat: You can add the missing parsers to the lib folder of your Solr 
installation.

-1 to add support for ASM and NETCDF or MP3 files out of the box. This bloats 
the release and is only useful for 0.01% of all users. It is so easy to download 
the remaining JAR files and place them in the lib folder.
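For example, grabbing the missing jar and dropping it into the lib folder might look like this (a sketch only: the Solr home path is an assumption for a 4.0-era example layout, and you should match the jar version to the Tika version your Solr ships with):

```shell
# Hypothetical example of adding an optional Tika parser dependency to a
# Solr core's lib directory. SOLR_HOME below is an assumption -- adjust it
# to your own installation.
SOLR_HOME=/opt/solr/example/solr
mkdir -p "$SOLR_HOME/lib"

# Fetch asm-3.1.jar from Maven Central and place it in lib/.
curl -fLo "$SOLR_HOME/lib/asm-3.1.jar" \
  "https://repo1.maven.org/maven2/asm/asm/3.1/asm-3.1.jar"

# Restart Solr afterwards so the new jar is picked up on the classpath.
```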

I would +1 to add a setting to SolrCell so it can ignore files that have no 
parser or where the parser is disabled because of missing dependencies (Tika 
itself already handles this by catching ClassNotFoundException and ignoring 
those parsers).

bq. Tika 1.2 depends on asm-3.1. Conflicting versions of same jar is not good?

You cannot upgrade this dependency, as ASM 4.x is incompatible with 3.x: it 
uses the same package names but a largely different API (e.g. some interfaces 
became classes, and so on).

 asm-3.1.jar is missing
 --

 Key: SOLR-4209
 URL: https://issues.apache.org/jira/browse/SOLR-4209
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 4.0
Reporter: Shinichiro Abe
Priority: Minor
 Attachments: SOLR-4209.patch


 One of Tika dependency file is missing on Solr 4.0. 
 When posting java class files into Solr via SolrCell, those files can't be 
 indexed without asm-3.1.jar.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535109#comment-13535109
 ] 

Mark Miller commented on SOLR-4205:
---

Okay - 16:00 UTC looks like 11am EST? I suck at timezones. If that is the case, 
I did not commit till 12:49 pm EST.

 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss

 I had to kill ASF Jenkins Clover builds two times after several 10 hours of 
 inactivity in a random Solr test. I requested a stack trace before killing 
 the only running JVM (clover runs with one JVM only, because clover does not 
 like multiple processes writing the same clover metrics file).
 In both cases (4.x and trunk) the stack trace was looking identical after 
 sending kill -3...
 https://builds.apache.org/job/Lucene-Solr-Clover-trunk/76/consoleFull 
 (yesterday):
 {noformat}
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:01:00, stalled for 28447s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:02:00, stalled for 28507s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] HEARTBEAT J0 PID(81...@lucene.zones.apache.org): 
 2012-12-16T13:03:00, stalled for 28567s at: 
 TestFunctionQuery.testBooleanFunctions
 [junit4:junit4] JVM J0: stdout was not empty, see: 
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Clover-trunk/solr/build/solr-core/test/temp/junit4-J0-20121216_044733_583.sysout
 [junit4:junit4]  JVM J0: stdout (verbatim) 
 [junit4:junit4] 2012-12-16 13:03:49
 [junit4:junit4] Full thread dump OpenJDK 64-Bit Server VM (20.0-b12 mixed 
 mode):
 [junit4:junit4] 
 [junit4:junit4] searcherExecutor-2577-thread-1 prio=5 
 tid=0x00085eb67000 nid=0x61c105b waiting on condition [0x70b0d000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x0008178c9c40 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 [junit4:junit4]   at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 [junit4:junit4]   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 [junit4:junit4]   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
 [junit4:junit4]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI TCP Accept-0 daemon prio=5 tid=0x000840ce2800 
 nid=0x61c0aa2 runnable [0x79496000]
 [junit4:junit4]java.lang.Thread.State: RUNNABLE
 [junit4:junit4]   at java.net.PlainSocketImpl.socketAccept(Native Method)
 [junit4:junit4]   at 
 java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:375)
 [junit4:junit4]   at 
 java.net.ServerSocket.implAccept(ServerSocket.java:470)
 [junit4:junit4]   at java.net.ServerSocket.accept(ServerSocket.java:438)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:387)
 [junit4:junit4]   at 
 sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:359)
 [junit4:junit4]   at java.lang.Thread.run(Thread.java:679)
 [junit4:junit4] 
 [junit4:junit4] RMI Scheduler(0) daemon prio=5 tid=0x000840ce1000 
 nid=0x61c0969 waiting on condition [0x70f11000]
 [junit4:junit4]java.lang.Thread.State: WAITING (parking)
 [junit4:junit4]   at sun.misc.Unsafe.park(Native Method)
 [junit4:junit4]   - parking to wait for  0x000814f12f88 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 [junit4:junit4]   at 
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
 [junit4:junit4]   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 [junit4:junit4]   at 
 java.util.concurrent.DelayQueue.take(DelayQueue.java:189)
 [junit4:junit4]   at 
 

[jira] [Commented] (LUCENE-4599) Compressed term vectors

2012-12-18 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535126#comment-13535126
 ] 

Shawn Heisey commented on LUCENE-4599:
--

bq. it will probably only make it to 4.2.

I'm not surprised.  I had hoped it would make it, but there will be enough to 
do for release without working on half-baked features.  I might need to 
continue to use Solr from branch_4x even after 4.1 gets released.

Thank you for everything you've done for me personally and the entire project.


 Compressed term vectors
 ---

 Key: LUCENE-4599
 URL: https://issues.apache.org/jira/browse/LUCENE-4599
 Project: Lucene - Core
  Issue Type: Task
  Components: core/codecs, core/termvectors
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.1

 Attachments: LUCENE-4599.patch


 We should have codec-compressed term vectors similarly to what we have with 
 stored fields.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1337) Spans and Payloads Query Support

2012-12-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535131#comment-13535131
 ] 

Jan Høydahl commented on SOLR-1337:
---

[~dmitry_key], where is your code implemented? At Lucene query parser level?

 Spans and Payloads Query Support
 

 Key: SOLR-1337
 URL: https://issues.apache.org/jira/browse/SOLR-1337
 Project: Solr
  Issue Type: New Feature
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 4.1


 It would be really nice to have query side support for: Spans and Payloads.  
 The main ingredient missing at this point is QueryParser support and a output 
 format for the spans and the payload spans.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4635) ArrayIndexOutOfBoundsException when a segment has many, many terms

2012-12-18 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4635:
---

Attachment: LUCENE-4635.patch

New patch, with test, and fixing another place where we could overflow int.

I think it's ready.
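The overflow class of bug here is the classic pattern where an offset is computed in 32-bit int arithmetic and only widened to long afterwards. A minimal illustration with hypothetical names (this is not the actual TermInfosReaderIndex code):

```java
public class OffsetOverflow {
    // Buggy variant: the multiply happens in int arithmetic and wraps once
    // termIndex * bytesPerEntry exceeds Integer.MAX_VALUE; the widening to
    // long happens only after the damage is done.
    static long offsetBuggy(int termIndex, int bytesPerEntry) {
        return termIndex * bytesPerEntry;
    }

    // Fixed variant: widen one operand first so the multiply is done in long.
    static long offsetFixed(int termIndex, int bytesPerEntry) {
        return (long) termIndex * bytesPerEntry;
    }

    public static void main(String[] args) {
        // 150M * 16 = 2.4B, just past Integer.MAX_VALUE (~2.147B)
        System.out.println(offsetBuggy(150_000_000, 16)); // negative (wrapped)
        System.out.println(offsetFixed(150_000_000, 16)); // 2400000000
    }
}
```

With billions of terms, the buggy variant silently produces a negative or garbage index, which is exactly the kind of value that later surfaces as an ArrayIndexOutOfBoundsException.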

 ArrayIndexOutOfBoundsException when a segment has many, many terms
 --

 Key: LUCENE-4635
 URL: https://issues.apache.org/jira/browse/LUCENE-4635
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: LUCENE-4635.patch, LUCENE-4635.patch


 Spinoff from Tom Burton-West's java-user thread CheckIndex 
 ArrayIndexOutOfBounds error for merged index ( 
 http://markmail.org/message/fatijkotwucn7hvu ).
 I modified Test2BTerms to instead generate a little over 10B terms, ran it 
 (took 17 hours and created a 162 GB index) and hit a similar exception:
 {noformat}
 Time: 62,164.058
 There was 1 failure:
 1) test2BTerms(org.apache.lucene.index.Test2BTerms)
 java.lang.ArrayIndexOutOfBoundsException: 1246
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareField(TermInfosReaderIndex.java:249)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareTo(TermInfosReaderIndex.java:225)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.getIndexOffset(TermInfosReaderIndex.java:156)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:232)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:172)
   at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:539)
   at 
 org.apache.lucene.search.TermQuery$TermWeight$1.add(TermQuery.java:56)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:81)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:87)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:70)
   at 
 org.apache.lucene.search.TermQuery$TermWeight.init(TermQuery.java:53)
   at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:199)
   at 
 org.apache.lucene.search.Searcher.createNormalizedWeight(Searcher.java:168)
   at 
 org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:664)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
   at 
 org.apache.lucene.index.Test2BTerms.testSavedTerms(Test2BTerms.java:205)
   at org.apache.lucene.index.Test2BTerms.test2BTerms(Test2BTerms.java:154)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {noformat}
 The index actually succeeded building and optimizing, but it was only when we 
 went to run searches of the random terms we collected along the way that the 
 AIOOBE was hit.
 I suspect this is a bug somewhere in the compact in-RAM terms index ... I'll 
 dig.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4211) LBHttpSolrServer

2012-12-18 Thread Kevin Ludwig (JIRA)
Kevin Ludwig created SOLR-4211:
--

 Summary: LBHttpSolrServer
 Key: SOLR-4211
 URL: https://issues.apache.org/jira/browse/SOLR-4211
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Affects Versions: 4.0
Reporter: Kevin Ludwig
Priority: Minor


I would like SolrJ's LBHttpSolrServer to support graceful shutdown of Solr 
machines. Solr's PingRequestHandler (e.g. /admin/ping) already has support 
for healthcheck files, and LBHttpSolrServer already has a ping() method that 
calls this endpoint. 

Recent changes in LBHttpSolrServer introduced the notion of an alive list and a 
zombie list, as well as a background thread to check for dead nodes that are 
back alive. My proposal is to have the background thread:

1. determine if nodes are alive via ping() rather than query(*:*). 
2. also check for alive servers that have gone out of service (again, via 
ping()). 

Also, the basic logic in the public request method is to try all alive nodes, 
and if none are reachable, then try each zombie. If a node has been brought 
offline deliberately (by removing the healthcheck file, causing ping() to 
fail), then this retry should not be done.

I'm willing to submit a patch for this if needed.
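As a rough sketch of the proposed background-thread pass (plain Java; PingCheck is a hypothetical stand-in for the real ping() call, and this is not the SolrJ implementation), the thread would probe both lists, not just the zombies:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the proposed health check: move servers between the alive and
// zombie lists based on a ping-style probe. Names are illustrative only.
public class AliveListChecker {
    public interface PingCheck { boolean ping(String serverUrl); }

    private final List<String> alive = new CopyOnWriteArrayList<>();
    private final List<String> zombies = new CopyOnWriteArrayList<>();
    private final PingCheck pingCheck;

    public AliveListChecker(List<String> servers, PingCheck pingCheck) {
        this.alive.addAll(servers);
        this.pingCheck = pingCheck;
    }

    // One pass of the background thread: check alive servers too, so a node
    // whose healthcheck file was removed drops out of rotation.
    public void checkAll() {
        for (String url : alive) {
            if (!pingCheck.ping(url)) {   // alive server went out of service
                alive.remove(url);
                zombies.add(url);
            }
        }
        for (String url : zombies) {
            if (pingCheck.ping(url)) {    // dead server came back
                zombies.remove(url);
                alive.add(url);
            }
        }
    }

    public List<String> alive() { return alive; }
    public List<String> zombies() { return zombies; }
}
```

The key difference from today's behavior is the first loop: checking the alive list via ping() lets an operator drain a node gracefully before shutdown.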

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4211) LBHttpSolrServer to support graceful shutdown

2012-12-18 Thread Kevin Ludwig (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Ludwig updated SOLR-4211:
---

Summary: LBHttpSolrServer to support graceful shutdown  (was: 
LBHttpSolrServer)

 LBHttpSolrServer to support graceful shutdown
 -

 Key: SOLR-4211
 URL: https://issues.apache.org/jira/browse/SOLR-4211
 Project: Solr
  Issue Type: New Feature
  Components: clients - java
Affects Versions: 4.0
Reporter: Kevin Ludwig
Priority: Minor

 I would like for SOLRJ's LBHttpSolrServer to support graceful shutdown of 
 SOLR machines. SOLR's PingRequestHandler (e.g. /admin/ping) already has 
 support for healthcheck files, and LBHttpSolrServer already has a ping() 
 method that calls this endpoint. 
 Recent changes in LBHttpSolrServer introduced the notion of an alive list and 
 a zombie list, as well as a background thread to check for dead nodes that 
 are back alive. My proposal is to have the background thread:
 1. determine if nodes are alive via ping() rather than query(*:*). 
 2. also check for alive servers that have gone out of service (again, via 
 ping()). 
 Also the basic logic in the public request method is to try all alive nodes, 
 and if none are reachable then try each zombie. If a node is brought offline 
 (via removing healthcheck file, causing ping() to fail) then this retry 
 should not be done.
 I'm willing to submit a patch for this if needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535192#comment-13535192
 ] 

Commit Tag Bot commented on SOLR-4205:
--

[trunk commit] Uwe Schindler
http://svn.apache.org/viewvc?view=revision&revision=1423587

SOLR-4205: Give jenkins-nightly another try with default mem settings.


 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss


[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535194#comment-13535194
 ] 

Uwe Schindler commented on SOLR-4205:
-

I reverted the nightly memory settings from build.xml and gave it another try. 
The multiplier on Jenkins is still set to 2 instead of 3. If this passes, I 
will reconfigure Jenkins, too.

Clover tests should run with more permgen, because permgen was always very 
critical there (Clover-annotated classes are much larger). But Clover is 
already very JVM specific, because it also needs larger bytecode/assembler 
cache settings and stuff like that.

 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss


[jira] [Commented] (SOLR-4205) Clover runs on ASF Jenkins idle dead without a test or any thread running in main() loop waiting for file descriptor

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535199#comment-13535199
 ] 

Commit Tag Bot commented on SOLR-4205:
--

[branch_4x commit] Uwe Schindler
http://svn.apache.org/viewvc?view=revision&revision=1423590

Merged revision(s) 1423587 from lucene/dev/trunk:
SOLR-4205: Give jenkins-nightly another try with default mem settings.


 Clover runs on ASF Jenkins idle dead without a test or any thread running in 
 main() loop waiting for file descriptor
 

 Key: SOLR-4205
 URL: https://issues.apache.org/jira/browse/SOLR-4205
 Project: Solr
  Issue Type: Bug
  Components: Tests
 Environment: FreeBSD Jenkins
Reporter: Uwe Schindler
Assignee: Dawid Weiss


[jira] [Commented] (SOLR-3180) ChaosMonkey test failures

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535200#comment-13535200
 ] 

Commit Tag Bot commented on SOLR-3180:
--

[trunk commit] Yonik Seeley
http://svn.apache.org/viewvc?view=revision&revision=1423591

SOLR-3180: improve logging


 ChaosMonkey test failures
 -

 Key: SOLR-3180
 URL: https://issues.apache.org/jira/browse/SOLR-3180
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Yonik Seeley
 Attachments: test_report_1.txt


 Handle intermittent failures in the ChaosMonkey tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3180) ChaosMonkey test failures

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535215#comment-13535215
 ] 

Commit Tag Bot commented on SOLR-3180:
--

[branch_4x commit] Yonik Seeley
http://svn.apache.org/viewvc?view=revision&revision=1423597

SOLR-3180: improve logging


 ChaosMonkey test failures
 -

 Key: SOLR-3180
 URL: https://issues.apache.org/jira/browse/SOLR-3180
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Yonik Seeley
 Attachments: test_report_1.txt


 Handle intermittent failures in the ChaosMonkey tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4203) An ephemeral directory implementation should cause the transaction log to be ignored on startup.

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535248#comment-13535248
 ] 

Commit Tag Bot commented on SOLR-4203:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1423608

SOLR-4203: dont check if ulog is set to delete tlogs files on startup - check 
if the tlog dir exists


 An ephemeral directory implementation should cause the transaction log to be 
 ignored on startup.
 

 Key: SOLR-4203
 URL: https://issues.apache.org/jira/browse/SOLR-4203
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.1, 5.0

 Attachments: SOLR-4203.patch


 If you are using something like ram dir, you can restart a node and, if no 
 updates have come in, it will think it's up to date but be empty - we should 
 clear the update log in these cases on startup.




[jira] [Commented] (SOLR-4203) An ephemeral directory implementation should cause the transaction log to be ignored on startup.

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535258#comment-13535258
 ] 

Commit Tag Bot commented on SOLR-4203:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1423611

SOLR-4203: dont check if ulog is set to delete tlogs files on startup - check 
if the tlog dir exists





[jira] [Commented] (SOLR-4203) An ephemeral directory implementation should cause the transaction log to be ignored on startup.

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535275#comment-13535275
 ] 

Commit Tag Bot commented on SOLR-4203:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1423625

SOLR-4203: whoops - fix npe





[jira] [Commented] (SOLR-4203) An ephemeral directory implementation should cause the transaction log to be ignored on startup.

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535286#comment-13535286
 ] 

Commit Tag Bot commented on SOLR-4203:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1423627

SOLR-4203: whoops - fix npe





[jira] [Commented] (LUCENE-4599) Compressed term vectors

2012-12-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535295#comment-13535295
 ] 

Robert Muir commented on LUCENE-4599:
-

{quote}
but term vectors are more complicated than stored fields (with lots of corner 
cases like negative start offsets, negative lengths, fields that don't always 
have the same options, etc.)
{quote}

And all of these corner cases are completely bogus, with no real use cases. We 
definitely need to make the long-term investment to fix this. It's sad that this 
kind of nonsense is slowing down Adrien here. It's hard to fix... I know I've 
wasted a lot of brain cycles trying to come up with perfect solutions. 
But we have to make some progress somehow.

 Compressed term vectors
 ---

 Key: LUCENE-4599
 URL: https://issues.apache.org/jira/browse/LUCENE-4599
 Project: Lucene - Core
  Issue Type: Task
  Components: core/codecs, core/termvectors
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 4.1

 Attachments: LUCENE-4599.patch


 We should have codec-compressed term vectors similarly to what we have with 
 stored fields.




[jira] [Commented] (LUCENE-4635) ArrayIndexOutOfBoundsException when a segment has many, many terms

2012-12-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535317#comment-13535317
 ] 

Robert Muir commented on LUCENE-4635:
-

In general we should do a review and better testing of this PagedBytes. 

Stuff like what's going on in copy() really scares me. 

But for now I think you should commit. Even if all of PagedBytes isn't totally 
safe, we should at least fix the terms index problems it causes in 3.6.2.

I also think we should go for a 3.6.2 release when this is fixed. We already have a 
nice number of bugfixes sitting out there in the branch.
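For context on the failure mode: once a segment's terms index grows past 2^31 bytes, an offset computed in a Java int wraps negative, and the next array access throws the AIOOBE. A throwaway Python sketch of that wraparound (illustrative only -- the real code is Java, and `to_int32` is just a helper for this note, not anything in Lucene):

```python
def to_int32(n):
    # Emulate Java's signed 32-bit int arithmetic (Python ints never overflow).
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

# A byte offset that a multi-billion-term index can easily produce:
offset = 3_000_000_000
wrapped = to_int32(offset)  # what the value becomes when stored in a Java int
```

Any array index derived from `wrapped` is negative, hence the ArrayIndexOutOfBoundsException.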

 ArrayIndexOutOfBoundsException when a segment has many, many terms
 --

 Key: LUCENE-4635
 URL: https://issues.apache.org/jira/browse/LUCENE-4635
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: LUCENE-4635.patch, LUCENE-4635.patch


 Spinoff from Tom Burton-West's java-user thread "CheckIndex 
 ArrayIndexOutOfBounds error for merged index" ( 
 http://markmail.org/message/fatijkotwucn7hvu ).
 I modified Test2BTerms to instead generate a little over 10B terms, ran it 
 (took 17 hours and created a 162 GB index) and hit a similar exception:
 {noformat}
 Time: 62,164.058
 There was 1 failure:
 1) test2BTerms(org.apache.lucene.index.Test2BTerms)
 java.lang.ArrayIndexOutOfBoundsException: 1246
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareField(TermInfosReaderIndex.java:249)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareTo(TermInfosReaderIndex.java:225)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.getIndexOffset(TermInfosReaderIndex.java:156)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:232)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:172)
   at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:539)
   at 
 org.apache.lucene.search.TermQuery$TermWeight$1.add(TermQuery.java:56)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:81)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:87)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:70)
   at 
 org.apache.lucene.search.TermQuery$TermWeight.<init>(TermQuery.java:53)
   at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:199)
   at 
 org.apache.lucene.search.Searcher.createNormalizedWeight(Searcher.java:168)
   at 
 org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:664)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
   at 
 org.apache.lucene.index.Test2BTerms.testSavedTerms(Test2BTerms.java:205)
   at org.apache.lucene.index.Test2BTerms.test2BTerms(Test2BTerms.java:154)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {noformat}
 The index actually succeeded building and optimizing, but it was only when we 
 went to run searches of the random terms we collected along the way that the 
 AIOOBE was hit.
 I suspect this is a bug somewhere in the compact in-RAM terms index ... I'll 
 dig.




Re: Solr Bug/Improvement: No-op and no error for Solr /update if no application/xml header

2012-12-18 Thread David Smiley (@MITRE.org)
I agree; it should return an error instead of mislead/confuse the user.
~ David


Jack Krupansky-2 wrote
 I don’t get any error or any effect from this curl command:
 
 curl "http://localhost:8983/solr/update?commit=true" --data-binary '
 <delete>
 <query>
 sku:td-01
 </query>
 </delete>
 '
 
 But, if I add the xml header, it works fine:
 
 curl "http://localhost:8983/solr/update?commit=true" -H "Content-Type:
 application/xml" --data-binary '
 <delete>
 <query>
 sku:td-01
 </query>
 </delete>
 '
 
 It would be nice if Solr would default to application/xml, but a friendly
 error return would be better than a no-op in this case.
 
 FWIW, curl -v shows this header being sent if I don’t specify it
 explicitly:
 
 Content-Type: application/x-www-form-urlencoded
 
 -- Jack Krupansky





-
 Author: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book



[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2012-12-18 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535379#comment-13535379
 ] 

Steve Davids commented on SOLR-3854:


+1 - we are running into this problem right now. Glad to see Sami's patch is 
adding the option to solr.xml.

 SolrCloud does not work with https
 --

 Key: SOLR-3854
 URL: https://issues.apache.org/jira/browse/SOLR-3854
 Project: Solr
  Issue Type: Bug
Reporter: Sami Siren
Assignee: Sami Siren
 Fix For: 4.1, 5.0

 Attachments: SOLR-3854.patch


 There are a few places in the current codebase that assume http is used. This 
 prevents using https when running Solr in cloud mode.




Re: Solr Bug/Improvement: No-op and no error for Solr /update if no application/xml header

2012-12-18 Thread Yonik Seeley
On Tue, Dec 18, 2012 at 11:34 AM, Jack Krupansky
j...@basetechnology.com wrote:
 I don’t get any error or any effect from this curl command:

 curl "http://localhost:8983/solr/update?commit=true" --data-binary '
 <delete><query>sku:td-01</query></delete>'

 But, if I add the xml header, it works fine:

 curl "http://localhost:8983/solr/update?commit=true" -H "Content-Type:
 application/xml" --data-binary '
 <delete><query>sku:td-01</query></delete>'

 It would be nice if Solr would default to application/xml, but a friendly
 error return would be better than a no-op in this case.

 FWIW, curl -v shows this header being sent if I don’t specify it explicitly:

 Content-Type: application/x-www-form-urlencoded


That does suck.  That's the one thing I hate about curl (defaulting to that
content type for everything).
I think auto-detection of the serialization format is generally the answer here:
https://issues.apache.org/jira/browse/SOLR-3389

-Yonik
http://lucidworks.com
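A sketch of what such auto-detection could look like, in Python for brevity (the sniffing heuristics below are my own illustration, not what SOLR-3389 actually implements):

```python
def guess_update_format(body: bytes) -> str:
    """Guess the serialization format of an update payload from its first
    meaningful byte, ignoring whatever Content-Type the client sent."""
    stripped = body.lstrip()
    if not stripped:
        return "empty"
    first = stripped[:1]
    if first == b"<":
        return "xml"            # <add>, <delete>, <commit/>, ...
    if first in (b"{", b"["):
        return "json"
    return "unknown"            # better to reject loudly than silently no-op

# The delete command from this thread would be detected as XML even when
# curl labels it application/x-www-form-urlencoded:
payload = b"\n<delete><query>sku:td-01</query></delete>\n"
```

With something like this in place, the misleading no-op becomes either a successful parse or an explicit "unknown format" error.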




[jira] [Updated] (SOLR-4106) Javac/ ivy path warnings with morfologik

2012-12-18 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-4106:
--

Attachment: solr4106.zip

Works for me without a glitch. Can you try to reproduce the message you're 
seeing?

 Javac/ ivy path warnings with morfologik
 

 Key: SOLR-4106
 URL: https://issues.apache.org/jira/browse/SOLR-4106
 Project: Solr
  Issue Type: Task
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Attachments: solr4106.zip


 Does not break the build but brings javac warnings, as pointed out by rmuir:
 {code}
 [javac] warning: [path] bad path element 
 ~/.ivy2/cache/org.carrot2/morfologik-polish/jars/morfologik-stemming-1.5.3.jar:
  no such file or directory
[javac] warning: [path] bad path element 
 ~/.ivy2/cache/org.carrot2/morfologik-polish/jars/morfologik-fsa-1.5.3.jar: 
 no such file or directory
[javac] warning: [path] bad path element 
 ~/.ivy2/cache/org.carrot2/morfologik-stemming/jars/morfologik-fsa-1.5.3.jar:
  no such file or directory
[javac] warning: [path] bad path element 
 ~/.ivy2/cache/org.carrot2/morfologik-fsa/jars/hppc-0.4.1.jar: no such file 
 or directory
 I'm just doing:
 <ivy:cachepath pathid="solr.path" log="download-only" type="bundle,jar" />
 {code}




[jira] [Commented] (SOLR-4106) Javac/ ivy path warnings with morfologik

2012-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535405#comment-13535405
 ] 

Dawid Weiss commented on SOLR-4106:
---

Wait... this is odd -- if you really have ~ in your paths then javac won't be 
able to locate them because they're shell expansions, aren't they?




[jira] [Commented] (SOLR-4106) Javac/ ivy path warnings with morfologik

2012-12-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535412#comment-13535412
 ] 

Robert Muir commented on SOLR-4106:
---

I think this is a simpler way to reproduce (or maybe a different bug 
altogether):

rmuir@beast:~/workspace/lucene-trunk/lucene$ ant test -Dtestcase=foo > test.log

{noformat}
common.compile-core:
   [mkdir] Created dir: 
/home/rmuir/workspace/lucene-trunk/lucene/build/analysis/morfologik/classes/java
   [javac] Compiling 5 source files to 
/home/rmuir/workspace/lucene-trunk/lucene/build/analysis/morfologik/classes/java
   [javac] warning: [path] bad path element 
/home/rmuir/workspace/lucene-trunk/lucene/analysis/morfologik/lib/hppc-0.4.1.jar:
 no such file or directory
   [javac] 1 warning
[copy] Copying 1 file to 
/home/rmuir/workspace/lucene-trunk/lucene/build/analysis/morfologik/classes/java
{noformat}

I looked into this, but I have no idea yet what is causing it.




[jira] [Updated] (LUCENE-4634) PackedInts: streaming API that supports variable numbers of bits per value

2012-12-18 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-4634:
-

Attachment: LUCENE-4634.patch

Here is a patch. (I would like to use it for LUCENE-4599.)

 PackedInts: streaming API that supports variable numbers of bits per value
 --

 Key: LUCENE-4634
 URL: https://issues.apache.org/jira/browse/LUCENE-4634
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-4634.patch


 It could be convenient to have a streaming API (writers and iterators, no 
 random access) that supports variable numbers of bits per value. Although 
 this would be much slower than the current fixed-size APIs, it could help 
 save bytes in our codec formats.
 The API could look like:
 {code}
 Iterator {
   long next(int bitsPerValue);
 }
 Writer {
   void write(long value, int bitsPerValue); // assert 
 PackedInts.bitsRequired(value) <= bitsPerValue;
 }
 {code}
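To make the contract concrete, here is a small Python model of such a variable-bits-per-value stream, packing values LSB-first into bytes. This is only an illustration of the proposed API shape, not the actual Lucene implementation (which would be Java and performance-oriented):

```python
class VarBitsWriter:
    """Packs non-negative values into a byte stream, LSB-first,
    with a caller-chosen number of bits per value."""

    def __init__(self):
        self.buf = bytearray()
        self.acc = 0       # pending bits not yet flushed to buf
        self.filled = 0    # how many pending bits

    def write(self, value, bits_per_value):
        # Mirrors the proposed assert: bitsRequired(value) <= bitsPerValue.
        assert 0 <= value < (1 << bits_per_value)
        self.acc |= value << self.filled
        self.filled += bits_per_value
        while self.filled >= 8:
            self.buf.append(self.acc & 0xFF)
            self.acc >>= 8
            self.filled -= 8

    def finish(self):
        if self.filled:
            self.buf.append(self.acc & 0xFF)  # pad the last byte with zeros
        return bytes(self.buf)


class VarBitsIterator:
    """Sequential reader: the caller must request the same bit widths
    in the same order they were written (no random access)."""

    def __init__(self, data):
        self.data = data
        self.pos = 0
        self.acc = 0
        self.filled = 0

    def next(self, bits_per_value):
        while self.filled < bits_per_value:
            self.acc |= self.data[self.pos] << self.filled
            self.pos += 1
            self.filled += 8
        value = self.acc & ((1 << bits_per_value) - 1)
        self.acc >>= bits_per_value
        self.filled -= bits_per_value
        return value
```

The key property is exactly the one the issue description notes: slower than fixed-width packing, but values with small bit requirements cost proportionally fewer bits in the stream.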




[jira] [Commented] (SOLR-4106) Javac/ ivy path warnings with morfologik

2012-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535426#comment-13535426
 ] 

Dawid Weiss commented on SOLR-4106:
---

This is caused by a manifest classpath entry in morfologik-fsa-1.5.3.jar 
referencing HPPC (which is a dependency required for constructing automata, not 
for traversals etc.). Javac issues the warning even though HPPC isn't 
explicitly on the classpath.

I don't know what to do about it yet.




[jira] [Commented] (SOLR-4106) Javac/ ivy path warnings with morfologik

2012-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535428#comment-13535428
 ] 

Dawid Weiss commented on SOLR-4106:
---

http://stackoverflow.com/questions/3800462/can-i-prevent-javac-accessing-the-class-path-from-the-manifests-of-our-third-par





Distributed result set merging in Solr

2012-12-18 Thread Steve McKay
Currently, distributed requests are entirely initiated by whichever node 
receives a query, correct? That is, as far as I know, shards don't talk to each 
other or send requests back to the controller.

I'm looking at sending stats facets between shards to speed up merging. Rather 
than have one node responsible for merging the facet sets from every shard, 
each facet set is partitioned by term, and each shard then merges one partition 
of each facet set: A-D, E-G, etc. However, that kind of communication doesn't 
really fit into Solr's current model of distributed processing. I think my use 
case isn't the only instance where shard-to-shard communication could help 
performance, so I'm curious why nothing in Solr does it. Is it deliberate? Did 
no one bother? Or am I wrong, and nothing else has a non-trivial reduce step?


Steve McKay | Software Developer | GCE
steve.mc...@gcecloud.com | (703) 390-3044 desk 
| (703) 659-0608 Skype | (443) 710-2762 mobile





IVY-1388 - probable fix for builds hanging at resolve

2012-12-18 Thread Shawn Heisey
This may not be news for you guys, but I know that a lot of people get 
bitten by it.

When the lucene/solr build hangs at the "resolve" target, it is because 
of old ivy lockfiles.  I verified this with strace.  They already knew 
about the problem:

https://issues.apache.org/jira/browse/IVY-1388

Currently the fix is only in their trunk.  I compiled that and replaced 
my ivy jar in ~/.ant/lib with the trunk one.  Then I wiped ~/.ivy2 and 
~/.m2 and ran "ant clean test dist-excl-slf4j" in branch_4x/solr.  
Everything passed and built OK.


Thanks,
Shawn





[jira] [Commented] (LUCENE-4635) ArrayIndexOutOfBoundsException when a segment has many, many terms

2012-12-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535449#comment-13535449
 ] 

Michael McCandless commented on LUCENE-4635:


OK turns out this same issue was fixed in LUCENE-4568 for 4.x/5.x ... we just 
never backported to 3.6.x.




Re: IVY-1388 - probable fix for builds hanging at resolve

2012-12-18 Thread Robert Muir
On Tue, Dec 18, 2012 at 6:31 PM, Shawn Heisey s...@elyograg.org wrote:
 This may not be news for you guys, but I know that a lot of people get
 bitten by it.

 When the lucene/solr build hangs at the resolve target, it is because of
 old ivy lockfiles.  I verified this with strace.  They already knew about
 the problem:

 https://issues.apache.org/jira/browse/IVY-1388

 Currently the fix is only in their trunk.  I compiled that and replaced my
 ivy jar in ~/.ant/lib with the trunk one.  Then I wiped ~/.ivy2 and ~/.m2
 and ran ant clean test dist-excl-slf4j in branch_4x/solr.  Everything
 passed and built OK.


Thanks Shawn!

I know I caused these hangs by turning on ivy locking.

But this is better, in my opinion, than the default (cache corruption).

It's also worked around by just removing the offending .lck file (look
under ~/.ivy2 for it and nuke it).
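A quick way to spot the leftovers is to look for old `*.lck` files under the cache. The helper below is a hypothetical convenience script, not part of Ivy; the one-hour age threshold is an arbitrary assumption:

```python
import pathlib
import time

def stale_lockfiles(ivy_cache="~/.ivy2", max_age_seconds=3600):
    # Ivy drops "*.lck" marker files while it resolves; a crash or Ctrl-C can
    # leave them behind, and later builds then block forever in "resolve".
    root = pathlib.Path(ivy_cache).expanduser()
    now = time.time()
    return [p for p in root.rglob("*.lck")
            if now - p.stat().st_mtime > max_age_seconds]
```

Review the list before deleting anything: a lock younger than the threshold may belong to a resolve that is actually running.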




Re: IVY-1388 - probable fix for builds hanging at resolve

2012-12-18 Thread Mark Miller
Thanks, I get this a lot. I usually just blast the .ivy2 dir and it goes away. 
Oddly, that didn't help when it happened to me on my Air today.

- Mark

On Dec 18, 2012, at 6:31 PM, Shawn Heisey s...@elyograg.org wrote:

 This may not be news for you guys, but I know that a lot of people get bitten 
 by it.
 
 When the lucene/solr build hangs at the resolve target, it is because of 
 old ivy lockfiles.  I verified this with strace.  They already knew about the 
 problem:
 
 https://issues.apache.org/jira/browse/IVY-1388
 
 Currently the fix is only in their trunk.  I compiled that and replaced my 
 ivy jar in ~/.ant/lib with the trunk one.  Then I wiped ~/.ivy2 and ~/.m2 and 
 ran ant clean test dist-excl-slf4j in branch_4x/solr.  Everything passed 
 and built OK.
 
 Thanks,
 Shawn
 
 
 





Re: IVY-1388 - probable fix for builds hanging at resolve

2012-12-18 Thread Robert Muir
On Tue, Dec 18, 2012 at 6:41 PM, Mark Miller markrmil...@gmail.com wrote:
 Thanks, I get this a lot. I usually just blast the ivy2 dir and it goes away. 
 Oddly that didn't help when it happened to me on my air today.

 - Mark


Do you ^C a lot? That's what causes this.




[jira] [Commented] (LUCENE-4635) ArrayIndexOutOfBoundsException when a segment has many, many terms

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535467#comment-13535467
 ] 

Commit Tag Bot commented on LUCENE-4635:


[branch_4x commit] Michael McCandless
http://svn.apache.org/viewvc?view=revision&revision=1423718

LUCENE-4635: add test


 ArrayIndexOutOfBoundsException when a segment has many, many terms
 --

 Key: LUCENE-4635
 URL: https://issues.apache.org/jira/browse/LUCENE-4635
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: LUCENE-4635.patch, LUCENE-4635.patch


 Spinoff from Tom Burton-West's java-user thread CheckIndex 
 ArrayIndexOutOfBounds error for merged index ( 
 http://markmail.org/message/fatijkotwucn7hvu ).
 I modified Test2BTerms to instead generate a little over 10B terms, ran it 
 (took 17 hours and created a 162 GB index) and hit a similar exception:
 {noformat}
 Time: 62,164.058
 There was 1 failure:
 1) test2BTerms(org.apache.lucene.index.Test2BTerms)
 java.lang.ArrayIndexOutOfBoundsException: 1246
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareField(TermInfosReaderIndex.java:249)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareTo(TermInfosReaderIndex.java:225)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.getIndexOffset(TermInfosReaderIndex.java:156)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:232)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:172)
   at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:539)
   at 
 org.apache.lucene.search.TermQuery$TermWeight$1.add(TermQuery.java:56)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:81)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:87)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:70)
   at 
 org.apache.lucene.search.TermQuery$TermWeight.init(TermQuery.java:53)
   at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:199)
   at 
 org.apache.lucene.search.Searcher.createNormalizedWeight(Searcher.java:168)
   at 
 org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:664)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
   at 
 org.apache.lucene.index.Test2BTerms.testSavedTerms(Test2BTerms.java:205)
   at org.apache.lucene.index.Test2BTerms.test2BTerms(Test2BTerms.java:154)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {noformat}
 The index actually succeeded building and optimizing, but it was only when we 
 went to run searches of the random terms we collected along the way that the 
 AIOOBE was hit.
 I suspect this is a bug somewhere in the compact in-RAM terms index ... I'll 
 dig.




Re: Distributed result set merging in Solr

2012-12-18 Thread Yonik Seeley
On Tue, Dec 18, 2012 at 6:28 PM, Steve McKay steve.mc...@gcecloud.com wrote:
 I'm looking at sending stats facets between shards to speed up merging.
 Rather than have one node responsible for merging the facet sets from every
 shard, each facet set is partitioned by term and then each shard merges one
 partition of each facet set. A-D, E-G, etc.

Could you give a concrete example of what you're thinking (say 3
shards and just a few terms?)

-Yonik
http://lucidworks.com
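
For what it's worth, the per-range merging Steve describes can be pictured with a small sketch (hypothetical illustration with plain maps standing in for per-shard facet counts; not necessarily the exact scheme being proposed):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PartitionedMerge {
    /**
     * Each shard holds per-term facet counts. Instead of one node merging
     * everything, the term space is split into ranges (A-D, E-G, ...) and
     * each shard merges one range of every shard's facet set.
     */
    static Map<String, Long> mergeRange(List<Map<String, Long>> shardCounts,
                                        char from, char to) {
        Map<String, Long> merged = new TreeMap<>();
        for (Map<String, Long> counts : shardCounts) {
            for (Map.Entry<String, Long> e : counts.entrySet()) {
                char c = Character.toUpperCase(e.getKey().charAt(0));
                if (c >= from && c <= to) { // only this node's slice
                    merged.merge(e.getKey(), e.getValue(), Long::sum);
                }
            }
        }
        return merged;
    }
}
```

With 3 shards, one node would call mergeRange(shards, 'A', 'D'), another mergeRange(shards, 'E', 'G'), and so on, so no single node touches the whole term space.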




Re: IVY-1388 - probable fix for builds hanging at resolve

2012-12-18 Thread Shawn Heisey

On 12/18/2012 4:41 PM, Robert Muir wrote:

Thanks Shawn!

I know i caused these hangs by turning on ivy locking.

But this is better in my opinion than the default (cache corruption).

it's also worked around by just removing the .lck file (look
under ~/.ivy2 for the offending one and nuke it)


I was thinking of filing an issue to upgrade ivy when they release 
something with the fix.


Would anyone complain if I linked that new LUCENE issue with IVY-1388?  
I figure that Jira probably allows that, but I don't want to cause any 
problems.


Responding to your later message, I do press Ctrl-C a lot.  I start a 
build and suddenly realize it's not going to work because I forgot 
something.


Thanks,
Shawn





[jira] [Resolved] (LUCENE-4635) ArrayIndexOutOfBoundsException when a segment has many, many terms

2012-12-18 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-4635.


   Resolution: Fixed
Fix Version/s: 3.6

4.x/5.x were already fixed ...

Thanks Tom!

 ArrayIndexOutOfBoundsException when a segment has many, many terms
 --

 Key: LUCENE-4635
 URL: https://issues.apache.org/jira/browse/LUCENE-4635
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.6

 Attachments: LUCENE-4635.patch, LUCENE-4635.patch


 Spinoff from Tom Burton-West's java-user thread CheckIndex 
 ArrayIndexOutOfBounds error for merged index ( 
 http://markmail.org/message/fatijkotwucn7hvu ).
 I modified Test2BTerms to instead generate a little over 10B terms, ran it 
 (took 17 hours and created a 162 GB index) and hit a similar exception:
 {noformat}
 Time: 62,164.058
 There was 1 failure:
 1) test2BTerms(org.apache.lucene.index.Test2BTerms)
 java.lang.ArrayIndexOutOfBoundsException: 1246
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareField(TermInfosReaderIndex.java:249)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareTo(TermInfosReaderIndex.java:225)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.getIndexOffset(TermInfosReaderIndex.java:156)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:232)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:172)
   at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:539)
   at 
 org.apache.lucene.search.TermQuery$TermWeight$1.add(TermQuery.java:56)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:81)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:87)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:70)
   at 
 org.apache.lucene.search.TermQuery$TermWeight.init(TermQuery.java:53)
   at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:199)
   at 
 org.apache.lucene.search.Searcher.createNormalizedWeight(Searcher.java:168)
   at 
 org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:664)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
   at 
 org.apache.lucene.index.Test2BTerms.testSavedTerms(Test2BTerms.java:205)
   at org.apache.lucene.index.Test2BTerms.test2BTerms(Test2BTerms.java:154)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {noformat}
 The index actually succeeded building and optimizing, but it was only when we 
 went to run searches of the random terms we collected along the way that the 
 AIOOBE was hit.
 I suspect this is a bug somewhere in the compact in-RAM terms index ... I'll 
 dig.




Re: IVY-1388 - probable fix for builds hanging at resolve

2012-12-18 Thread Robert Muir
On Tue, Dec 18, 2012 at 6:51 PM, Shawn Heisey s...@elyograg.org wrote:
 I was thinking of filing an issue to upgrade ivy when they release something
 with the fix.

we should stay up to date. It was my understanding that we are using
the actual latest released version. I realize there are RC versions in
maven, but that's not interesting to me (and confusing).


 Would anyone complain if I linked that new LUCENE issue with IVY-1388?  I
 figure that Jira probably allows that, but I don't want to cause any
 problems.

Please do this, thanks.


 Responding to your later message, I do press Ctrl-C a lot.  I start a build
 and suddenly realize it's not going to work because I forgot something.


Yeah if you do this, it might leave a stray .lck file around. So you
can just run find and so on to take care of it until there is a
release we can use.




[jira] [Created] (LUCENE-4636) Upgrade ivy for IVY-1388

2012-12-18 Thread Shawn Heisey (JIRA)
Shawn Heisey created LUCENE-4636:


 Summary: Upgrade ivy for IVY-1388
 Key: LUCENE-4636
 URL: https://issues.apache.org/jira/browse/LUCENE-4636
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.0, 3.6
Reporter: Shawn Heisey
 Fix For: 4.1, 5.0, 3.6.2


For certain failures during a lucene/solr build, or if you press ctrl-c at the 
wrong moment during the build, ivy may leave a lockfile behind.  The next time 
you run a build, ivy will hang with resolve: on the screen.

The ivy project has a fix, currently not yet released.  When it does get 
released, the version installed by the ivy-bootstrap target needs to be updated.





[jira] [Commented] (LUCENE-4635) ArrayIndexOutOfBoundsException when a segment has many, many terms

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535476#comment-13535476
 ] 

Commit Tag Bot commented on LUCENE-4635:


[trunk commit] Michael McCandless
http://svn.apache.org/viewvc?view=revision&revision=1423720

LUCENE-4635: add test


 ArrayIndexOutOfBoundsException when a segment has many, many terms
 --

 Key: LUCENE-4635
 URL: https://issues.apache.org/jira/browse/LUCENE-4635
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.6

 Attachments: LUCENE-4635.patch, LUCENE-4635.patch


 Spinoff from Tom Burton-West's java-user thread CheckIndex 
 ArrayIndexOutOfBounds error for merged index ( 
 http://markmail.org/message/fatijkotwucn7hvu ).
 I modified Test2BTerms to instead generate a little over 10B terms, ran it 
 (took 17 hours and created a 162 GB index) and hit a similar exception:
 {noformat}
 Time: 62,164.058
 There was 1 failure:
 1) test2BTerms(org.apache.lucene.index.Test2BTerms)
 java.lang.ArrayIndexOutOfBoundsException: 1246
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareField(TermInfosReaderIndex.java:249)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.compareTo(TermInfosReaderIndex.java:225)
   at 
 org.apache.lucene.index.TermInfosReaderIndex.getIndexOffset(TermInfosReaderIndex.java:156)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:232)
   at org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:172)
   at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:539)
   at 
 org.apache.lucene.search.TermQuery$TermWeight$1.add(TermQuery.java:56)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:81)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:87)
   at org.apache.lucene.util.ReaderUtil$Gather.run(ReaderUtil.java:70)
   at 
 org.apache.lucene.search.TermQuery$TermWeight.init(TermQuery.java:53)
   at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:199)
   at 
 org.apache.lucene.search.Searcher.createNormalizedWeight(Searcher.java:168)
   at 
 org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:664)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:342)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:330)
   at 
 org.apache.lucene.index.Test2BTerms.testSavedTerms(Test2BTerms.java:205)
   at org.apache.lucene.index.Test2BTerms.test2BTerms(Test2BTerms.java:154)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {noformat}
 The index actually succeeded building and optimizing, but it was only when we 
 went to run searches of the random terms we collected along the way that the 
 AIOOBE was hit.
 I suspect this is a bug somewhere in the compact in-RAM terms index ... I'll 
 dig.




[jira] [Updated] (LUCENE-4636) Upgrade ivy for IVY-1388 - build hangs at resolve:

2012-12-18 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated LUCENE-4636:
-

Summary: Upgrade ivy for IVY-1388 - build hangs at resolve:  (was: 
Upgrade ivy for IVY-1388)

 Upgrade ivy for IVY-1388 - build hangs at resolve:
 

 Key: LUCENE-4636
 URL: https://issues.apache.org/jira/browse/LUCENE-4636
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 3.6, 4.0
Reporter: Shawn Heisey
 Fix For: 4.1, 5.0, 3.6.2


 For certain failures during a lucene/solr build, or if you press ctrl-c at 
 the wrong moment during the build, ivy may leave a lockfile behind.  The next 
 time you run a build, ivy will hang with resolve: on the screen.
 The ivy project has a fix, currently not yet released.  When it does get 
 released, the version installed by the ivy-bootstrap target needs to be 
 updated.




[jira] [Commented] (SOLR-4196) Untangle XML-specific nature of Config and Container classes

2012-12-18 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535509#comment-13535509
 ] 

Erick Erickson commented on SOLR-4196:
--

I'm starting by trying to pull all XML references out of CoreContainer. Frankly 
it's turning nasty, assumptions about parsing XML are scattered all over the 
place.

Does anyone have some grand scheme in mind for handling this? I've got an 
approach, but it's tedious, mostly making a new ConfigSolr class that provides 
a thunking layer. Removing all the references to XML, DOM, w3c sure makes a lot 
of red stuff in IntelliJ.

All I'm looking for here is if I'm overlooking the obvious. Don't want to get 
all through with it and discover there was a simpler way someone had already 
scoped out. I'm not about to make a complete copy of CoreContainer and have to 
maintain both.

 Untangle XML-specific nature of Config and Container classes
 

 Key: SOLR-4196
 URL: https://issues.apache.org/jira/browse/SOLR-4196
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.1, 5.0


 sub-task for SOLR-4083. If we're going to try to obsolete solr.xml, we need 
 to pull all of the specific XML processing out of Config and Container. 
 Currently, we refer to xpaths all over the place. This JIRA is about 
 providing a thunking layer to isolate the XML-esque nature of solr.xml and 
 allow a simple properties file to be used instead which will lead, 
 eventually, to solr.xml going away.




[jira] [Commented] (SOLR-4196) Untangle XML-specific nature of Config and Container classes

2012-12-18 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535524#comment-13535524
 ] 

Ryan McKinley commented on SOLR-4196:
-

bq. Does anyone have some grand scheme in mind for handling this?

I tried a few years back; as you can see, it is pretty hairy!

I *think* the right approach is to have java objects that represent the 
configs.  Then have a different class that can read (write?) the configs to XML 
(or json, etc)
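
A toy version of that split might look like this (the names, like CoreConfig, are invented for illustration and are not the actual Solr classes): the config object knows nothing about serialization, and only the reader class knows about XML.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class SolrXmlReader {
    /** Plain config object, independent of any on-disk format. */
    static class CoreConfig {
        final String name;
        final String instanceDir;
        CoreConfig(String name, String instanceDir) {
            this.name = name;
            this.instanceDir = instanceDir;
        }
    }

    /** Format-specific reader: the only place that touches XML/DOM. */
    static CoreConfig readFirstCore(String solrXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(solrXml)));
        Element core = (Element) doc.getElementsByTagName("core").item(0);
        return new CoreConfig(core.getAttribute("name"),
                              core.getAttribute("instanceDir"));
    }
}
```

A JSON or properties reader would then be a sibling class producing the same CoreConfig, leaving CoreContainer format-agnostic.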

 Untangle XML-specific nature of Config and Container classes
 

 Key: SOLR-4196
 URL: https://issues.apache.org/jira/browse/SOLR-4196
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.1, 5.0


 sub-task for SOLR-4083. If we're going to try to obsolete solr.xml, we need 
 to pull all of the specific XML processing out of Config and Container. 
 Currently, we refer to xpaths all over the place. This JIRA is about 
 providing a thunking layer to isolate the XML-esque nature of solr.xml and 
 allow a simple properties file to be used instead which will lead, 
 eventually, to solr.xml going away.




Re: Optimize facets when actually single valued?

2012-12-18 Thread Ryan McKinley
is there a JIRA ticket for this?

+1 to Robert's observation that this independent from any format discussion



On Wed, Nov 14, 2012 at 5:46 AM, Robert Muir rcm...@gmail.com wrote:

 On Tue, Nov 13, 2012 at 11:41 PM, Toke Eskildsen t...@statsbiblioteket.dk
 wrote:
  On Tue, 2012-11-13 at 19:50 +0100, Yonik Seeley wrote:
  The original version of Solr (SOLAR when it was still inside CNET) did
  this - a multiValued field with a single value was output as a singe
  value, not an array containing a single value.  Some people wanted
  more predictability (always an array or never an array).
 
  So there are two very different issues with this optimization:
 
  Under the hood, it looks like a win. The single value field cache is
  better performing (speed as well as memory) than the uninverted field.
  There's some trickery with index updates as re-use of structures gets
  interesting when all segments have been delivering single values and a
  multi-value segment is introduced.

 this isn't tricky. in solr these structures are top-level (on top of
 SlowMultiReaderWrapper).

 
  Dynamically changing response formats sounds horrible.

 I don't understand how this is related with my proposal to
 automatically use a different data structure behind the scenes.

 The optimization I am talking about is safe and simple and no user
 would have any idea.





Re: Hierarchical stats for Solr

2012-12-18 Thread Ryan McKinley
Hi Steve-

The work you discuss sounds interesting, can you make a JIRA issue for this?

See:
http://wiki.apache.org/solr/HowToContribute#JIRA_tips_.28our_issue.2BAC8-bug_tracker.29

thanks
ryan


On Tue, Dec 18, 2012 at 3:09 PM, Steve McKay steve.mc...@gcecloud.comwrote:

 e.g. facet by vendor and then facet each vendor by year. I've also added
 stats.sort, stats.limit, and stats.offset field params. stats.sort syntax
 is sum|min|max|stdDev|average|sumOfSquares|count|missing|value:asc|desc
 and limit and offset work as in SQL. Faceting will generally use more RAM
 and be faster than the 4.0 baseline. I've changed more than some might
 consider to be strictly necessary; this is because a large part of my
 effort has been to make faceting performant under adverse conditions, with
 large result sets and faceting on fields with large (1m+) cardinalities. If
 there's interest I can post some rough response time numbers for faceting
 on fields with various cardinalities.
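
The stats.sort / stats.limit / stats.offset semantics described above can be sketched as follows (the parameter plumbing is omitted and the method names are invented; only the sort-then-slice behavior on a single stat is shown):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StatsSort {
    /**
     * Sort facet terms by one statistic (here: sum), then apply
     * SQL-style offset and limit, matching e.g. stats.sort=sum:desc
     * with stats.offset / stats.limit.
     */
    static List<String> topTerms(Map<String, Double> sumByTerm,
                                 boolean desc, int offset, int limit) {
        Comparator<Map.Entry<String, Double>> bySum =
                Map.Entry.comparingByValue();
        if (desc) bySum = bySum.reversed();
        return sumByTerm.entrySet().stream()
                .sorted(bySum)
                .skip(offset)   // offset works as in SQL
                .limit(limit)   // limit works as in SQL
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```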



Re: Optimize facets when actually single valued?

2012-12-18 Thread Robert Muir
On Tue, Dec 18, 2012 at 8:06 PM, Ryan McKinley ryan...@gmail.com wrote:
 is there a JIRA ticket for this?

 +1 to Robert's observation that this independent from any format discussion


I dont know of one: but feel free!

I thought of the stats situation at some point:
terms.size == terms.sumDocFreq should be enough I think, for faceting purposes?
doesn't really mean the field is truly single valued, because a term
could exist twice for the same doc, but for faceting etc, we don't care
about that I think?
if we really want to check that no term has tf > 1 within a doc, we'd
have to involve sumTotalTermFreq too: which is irrelevant here and
unavailable if frequencies are omitted
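
To make the "at most one term per doc" condition concrete, here is a toy version over a plain map, with no Lucene types. Note one assumption: it compares sumDocFreq against the number of docs carrying the field, which is one reading of the statistics check above, not a confirmed recipe, and the names are invented.

```java
import java.util.List;
import java.util.Map;

public class SingleValuedStats {
    /**
     * Toy field statistics: docCount = docs that have the field,
     * sumDocFreq = total (term, doc) pairings. The field is effectively
     * single-valued for faceting when each doc contributes at most one
     * distinct term, i.e. sumDocFreq == docCount. A term occurring twice
     * in one doc (tf > 1) is still one pairing, matching the observation
     * above that faceting doesn't care about that case.
     */
    static boolean effectivelySingleValued(Map<Integer, List<String>> docToTerms) {
        long docCount = 0;
        long sumDocFreq = 0;
        for (List<String> terms : docToTerms.values()) {
            if (terms.isEmpty()) continue;
            docCount++;
            sumDocFreq += terms.stream().distinct().count(); // ignore tf > 1
        }
        return sumDocFreq == docCount;
    }
}
```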




[jira] [Commented] (SOLR-4196) Untangle XML-specific nature of Config and Container classes

2012-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535545#comment-13535545
 ] 

Mark Miller commented on SOLR-4196:
---

Yeah, eventually we want to get rid of all that. Doing it all at once seems 
difficult though.

Perhaps you can build an in-memory XML DOM from the directory layout and pass 
it around. This makes back-compat support with the current solr.xml fairly easy.

On the other hand, solr.xml does not have that much to it...perhaps I can try 
and lend a hand sometime soon - I can at least do a little more investigation 
to see.

 Untangle XML-specific nature of Config and Container classes
 

 Key: SOLR-4196
 URL: https://issues.apache.org/jira/browse/SOLR-4196
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
 Fix For: 4.1, 5.0


 sub-task for SOLR-4083. If we're going to try to obsolete solr.xml, we need 
 to pull all of the specific XML processing out of Config and Container. 
 Currently, we refer to xpaths all over the place. This JIRA is about 
 providing a thunking layer to isolate the XML-esque nature of solr.xml and 
 allow a simple properties file to be used instead which will lead, 
 eventually, to solr.xml going away.




[jira] [Created] (SOLR-4212) Support for facet pivot query for filtered count

2012-12-18 Thread Steve Molloy (JIRA)
Steve Molloy created SOLR-4212:
--

 Summary: Support for facet pivot query for filtered count
 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy


Facet pivot provides hierarchical support for computing data used to populate a 
treemap or similar visualization. TreeMaps usually offer users extra 
information by applying an overlay color on top of the existing square sizes 
based on hierarchical counts. This second count is based on user choices, 
representing, usually with a gradient, the proportion of the square that fits 
the user's choices.

The proposition is to add a facet.pivot.q parameter that would allow specifying 
a query (per field) to be intersected with the DocSet used to calculate the 
pivot count, stored in a separate q-count.
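
The proposed intersection can be sketched with BitSets standing in for DocSets (the names are illustrative, not Solr's actual API):

```java
import java.util.BitSet;

public class PivotQCount {
    /**
     * The q-count for a pivot bucket is the size of the intersection
     * between the bucket's DocSet and the DocSet matching the extra
     * facet.pivot.q query. DocSets are modeled here as BitSets over
     * internal doc ids.
     */
    static int qCount(BitSet pivotDocs, BitSet queryDocs) {
        BitSet intersection = (BitSet) pivotDocs.clone(); // keep input intact
        intersection.and(queryDocs); // docs in the bucket that also match q
        return intersection.cardinality();
    }
}
```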




[jira] [Updated] (SOLR-4212) Support for facet pivot query for filtered count

2012-12-18 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-4212:
---

Attachment: patch-4212.txt

Initial patch proposal.

 Support for facet pivot query for filtered count
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
 Attachments: patch-4212.txt


 Facet pivot provides hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with a gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to add a facet.pivot.q parameter that would allow 
 specifying a query (per field) to be intersected with the DocSet used to 
 calculate the pivot count, stored in a separate q-count.




[jira] [Created] (SOLR-4213) Directories that are not shutdown until DirectoryFactory#close do not have close listeners called on them.

2012-12-18 Thread Mark Miller (JIRA)
Mark Miller created SOLR-4213:
-

 Summary: Directories that are not shutdown until 
DirectoryFactory#close do not have close listeners called on them.
 Key: SOLR-4213
 URL: https://issues.apache.org/jira/browse/SOLR-4213
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0







[jira] [Commented] (SOLR-4213) Directories that are not shutdown until DirectoryFactory#close do not have close listeners called on them.

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535599#comment-13535599
 ] 

Commit Tag Bot commented on SOLR-4213:
--

[trunk commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1423738

SOLR-4213: Directories that are not shutdown until DirectoryFactory#close do 
not have close listeners called on them.



 Directories that are not shutdown until DirectoryFactory#close do not have 
 close listeners called on them.
 --

 Key: SOLR-4213
 URL: https://issues.apache.org/jira/browse/SOLR-4213
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0







[jira] [Commented] (SOLR-4213) Directories that are not shutdown until DirectoryFactory#close do not have close listeners called on them.

2012-12-18 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535617#comment-13535617
 ] 

Commit Tag Bot commented on SOLR-4213:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revision&revision=1423747

SOLR-4213: Directories that are not shutdown until DirectoryFactory#close do 
not have close listeners called on them.



 Directories that are not shutdown until DirectoryFactory#close do not have 
 close listeners called on them.
 --

 Key: SOLR-4213
 URL: https://issues.apache.org/jira/browse/SOLR-4213
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0







[jira] [Resolved] (SOLR-4213) Directories that are not shutdown until DirectoryFactory#close do not have close listeners called on them.

2012-12-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4213.
---

Resolution: Fixed

 Directories that are not shutdown until DirectoryFactory#close do not have 
 close listeners called on them.
 --

 Key: SOLR-4213
 URL: https://issues.apache.org/jira/browse/SOLR-4213
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.1, 5.0







[jira] [Commented] (SOLR-3972) Missing admin-extra files result in SEVERE log entries with giant stacktrace

2012-12-18 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13535627#comment-13535627
 ] 

Shawn Heisey commented on SOLR-3972:


As someone on the mailing list pointed out, my workaround is a poor one.  It 
does work, and gets rid of the glaring error message, but it complicates HOWTO-style 
documentation and is difficult for a novice to grasp.


 Missing admin-extra files result in SEVERE log entries with giant stacktrace
 

 Key: SOLR-3972
 URL: https://issues.apache.org/jira/browse/SOLR-3972
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Affects Versions: 4.0, 4.1
 Environment: Linux bigindy5 2.6.32-279.9.1.el6.centos.plus.x86_64 #1 
 SMP Wed Sep 26 03:52:55 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.7.0_07
 Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
 Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)
Reporter: Shawn Heisey
 Fix For: 4.1


 Missing admin-extra files result in SEVERE log entries with giant stacktrace.
 If a log entry is warranted at all, it should just be a one-line warning.




Re: Distributed result set merging in Solr

2012-12-18 Thread Steve McKay
On Dec 18, 2012, at 6:50 PM, Yonik Seeley yo...@lucidworks.com wrote:

 On Tue, Dec 18, 2012 at 6:28 PM, Steve McKay steve.mc...@gcecloud.com wrote:
 I'm looking at sending stats facets between shards to speed up merging.
 Rather than have one node responsible for merging the facet sets from every
 shard, each facet set is partitioned by term and then each shard merges one
 partition of each facet set. A-D, E-G, etc.
 
 Could you give a concrete example of what you're thinking (say 3
 shards and just a few terms?)

Take three shards and the field spending_category, which has 6 terms: C, D, G, 
I, L, O. Currently when a stats request is faceted on spending_category the 
controller will receive results for each shard with all 6 facets present, and 
merge the results together. What I'm talking about is having each shard 
partition its result into {C, D}, {G, I}, {L, O}. Then shard 2 and 3 send 
facets C and D to shard 1 for merging and likewise for the other shards. Then 
the result each shard sends back to the controller is independent of the other 
shard results and merging is trivial.

In that example, merging doesn't take significant time either way. What 
motivates this is doing top-k operations on facet sets of large cardinality, 
e.g. 1 million unique elements, 200,000 elements being returned by each of 6 
shards. Currently, doing all the merging on the controller, a top-10 query 
spends most of its time merging shard results. Distributing the merge step 
should significantly improve that.
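The partition-and-merge scheme described above can be sketched as follows. This is an illustrative Python sketch, not Solr code: `distributed_merge` and `partition_key` are invented names, and hash-based partitioning stands in for the term-range partitioning (A-D, E-G, ...) described above. The point is that once each shard owns one partition of terms, the controller's final merge degenerates to a concatenation.

```python
from collections import defaultdict

def partition_key(term, num_shards):
    # Assign each facet term to a partition; a real implementation
    # might use alphabetical term ranges (A-D, E-G, ...) instead of a hash.
    return hash(term) % num_shards

def distributed_merge(shard_facets, num_shards):
    """shard_facets: one {term: count} dict per shard.

    Step 1 models each shard forwarding its counts to the shard that
    owns the term's partition, which sums them. Step 2 models the
    controller, whose merge is now a trivial concatenation because the
    partitions are disjoint.
    """
    partitions = [defaultdict(int) for _ in range(num_shards)]
    for facets in shard_facets:
        for term, count in facets.items():
            partitions[partition_key(term, num_shards)][term] += count
    merged = {}
    for p in partitions:
        merged.update(p)  # disjoint key sets, so no conflicts to resolve
    return merged
```

With a large-cardinality field, a top-k selection would also run per partition on each owning shard, so the controller receives k candidates per shard rather than the full term set.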





Re: Hierarchical stats for Solr

2012-12-18 Thread Steve McKay
Sure can, thanks!

On Dec 18, 2012, at 8:12 PM, Ryan McKinley ryan...@gmail.com wrote:

Hi Steve-

The work you discuss sounds interesting, can you make a JIRA issue for this?

See:
http://wiki.apache.org/solr/HowToContribute#JIRA_tips_.28our_issue.2BAC8-bug_tracker.29

thanks
ryan


On Tue, Dec 18, 2012 at 3:09 PM, Steve McKay steve.mc...@gcecloud.com wrote:
e.g. facet by vendor and then facet each vendor by year. I've also added 
stats.sort, stats.limit, and stats.offset field params. stats.sort syntax is 
sum|min|max|stdDev|average|sumOfSquares|count|missing|value:asc|desc and 
limit and offset work as in SQL. Faceting will generally use more RAM and be 
faster than the 4.0 baseline. I've changed more than some might consider to be 
strictly necessary; this is because a large part of my effort has been to make 
faceting performant under adverse conditions, with large result sets and 
faceting on fields with large (1m+) cardinalities. If there's interest I can 
post some rough response time numbers for faceting on fields with various 
cardinalities.
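A hypothetical request using the parameters described above. The parameter names (stats.sort, stats.limit, stats.offset, and per-field stats.facet) follow the description in this message; the host, collection, and field names are invented for illustration, and the exact syntax would depend on the patch:

```
http://localhost:8983/solr/collection1/select?q=*:*
  &stats=true
  &stats.field=price
  &stats.facet=vendor
  &f.price.stats.facet=year
  &stats.sort=sum:desc
  &stats.limit=10
  &stats.offset=0
```

This would facet the price stats by vendor, facet each vendor by year, and return the top 10 vendors ordered by descending sum, as with SQL's LIMIT/OFFSET.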




[jira] [Created] (SOLR-4214) Hierarchical stats

2012-12-18 Thread Steve McKay (JIRA)
Steve McKay created SOLR-4214:
-

 Summary: Hierarchical stats
 Key: SOLR-4214
 URL: https://issues.apache.org/jira/browse/SOLR-4214
 Project: Solr
  Issue Type: New Feature
  Components: SearchComponents - other
Reporter: Steve McKay


Hierarchical stats faceting, e.g. facet by vendor and then facet each vendor by 
year.



