[jira] [Commented] (SOLR-1982) Leading wildcard queries work for all fields if ReversedWildcardFilterFactory is used for any field

2013-02-09 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575112#comment-13575112
 ] 

David Smiley commented on SOLR-1982:


I should note that this syntax may seem more intuitive, but it is a different 
code path that sidesteps the smarts a field type might have.  For example, 
timestamp:* is much slower than timestamp:[* TO *], assuming a precision step 
was used.  Arguably this is a bug from the perspective of a user, who doesn't 
know/care about an implementation detail like that.
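
For concreteness, a minimal SolrJ sketch of the two spellings (the field name, and the fact that both are issued through SolrQuery, are just illustrative):

{code}
import org.apache.solr.client.solrj.SolrQuery;

// Both match every document with a value in "timestamp", but they take different code paths:
SolrQuery rangeForm    = new SolrQuery("timestamp:[* TO *]"); // range query; benefits from precisionStep on a trie field
SolrQuery wildcardForm = new SolrQuery("timestamp:*");        // wildcard path; sidesteps the field type's numeric smarts
{code}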

 Leading wildcard queries work for all fields if 
 ReversedWildcardFilterFactory is used for any field
 ---

 Key: SOLR-1982
 URL: https://issues.apache.org/jira/browse/SOLR-1982
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4, 1.4.1
Reporter: Hoss Man

 As noted on the mailing list...
 http://search.lucidimagination.com/search/document/8064e6877f49e4c4/leading_wildcard_query_strangeness
 ...SolrQueryParser supports leading wildcard queries for *any* field as long 
 as at least one field type exists in the schema.xml which uses 
 ReversedWildcardFilterFactory -- even if that field type is never used.
 This is extremely confusing, and most likely indicates a bug in how 
 SolrQueryParser deals with ReversedWildcardFilterFactory

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b76) - Build # 4218 - Failure!

2013-02-09 Thread Uwe Schindler
Hi,

I found out what is causing this bug, it is a JDK 8 bug, a fix already exists 
but it is not yet in version b76 of JDK 8. A nice explanation can be found in 
this eclipse issue:
https://bugs.eclipse.org/bugs/show_bug.cgi?format=multiple&id=399801

In short: JDK 8 has a crazy new feature (one that may help us in the future to 
make use of interfaces in a backwards-compatible way when new methods are 
added). It allows new methods to be added to interfaces; in the above case it is 
a new method in the Principal interface of the JDK: public default boolean 
implies(Subject subject). The keyword default in interfaces means that 
implementers do not need to implement this method, because it has a default 
method body. In our case, commons-httpclient implements this interface and does 
(of course!) *not* override this method.

The problem is the backwards-compatibility code in JDK 8 for bytecode from 
earlier Java versions: the verifier does not understand the connection between 
the default method and the missing implementation in older, pre-8 bytecode.
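
For illustration, a minimal sketch of the construct (the interface mirrors 
java.security.Principal; LegacyPrincipal stands in for a pre-JDK-8 compiled 
implementor such as commons-httpclient's BasicUserPrincipal):

import javax.security.auth.Subject;

interface Principal {
    String getName();

    // New in JDK 8: a default body, so existing implementors need not override it.
    default boolean implies(Subject subject) {
        return subject != null && subject.getPrincipals().contains(this);
    }
}

// Compiled against the old interface; it inherits implies() from the default body,
// which is exactly the link the verifier gets wrong for pre-8 bytecode.
class LegacyPrincipal implements Principal {
    public String getName() { return "legacy"; }
}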

The fix should be this one (not sure):
http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2013-January/005125.html
 
http://cr.openjdk.java.net/~bharadwaj/8005689/webrev/

The error is likely to be fixed in the next JDK snapshot builds, so I will 
revert to b65 again!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Saturday, February 09, 2013 7:33 AM
 To: dev@lucene.apache.org
 Subject: RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b76) -
 Build # 4218 - Failure!
 
 Hi,
 
 this seems to be a bug in either Commons-Http (using a method/signature
 that’s not public) or suddenly Java 8 changed a signature in a non-backwards
 compatible way. I'll dig and open bug report at Oracle or Commons-Http. The
 reason for this appearing today for the first time was the upgrade of Java 8 
 to
 build b76.
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
  Sent: Saturday, February 09, 2013 2:46 AM
  To: dev@lucene.apache.org
  Subject: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b76) -
  Build #
  4218 - Failure!
 
  Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/4218/
  Java: 32bit/jdk1.8.0-ea-b76 -client -XX:+UseSerialGC
 
  1 tests failed.
  REGRESSION:
  org.apache.solr.client.solrj.impl.HttpClientUtilTest.testSetParams
 
  Error Message:
  (class: org/apache/http/auth/BasicUserPrincipal, method: implies signature: (Ljavax/security/auth/Subject;)Z) Illegal use of nonvirtual function call
 
  Stack Trace:
  java.lang.VerifyError: (class: org/apache/http/auth/BasicUserPrincipal, method: implies signature: (Ljavax/security/auth/Subject;)Z) Illegal use of nonvirtual function call
    at org.apache.http.auth.UsernamePasswordCredentials.init(UsernamePasswordCredentials.java:83)
    at org.apache.solr.client.solrj.impl.HttpClientUtil.setBasicAuth(HttpClientUtil.java:147)
    at org.apache.solr.client.solrj.impl.HttpClientConfigurer.configure(HttpClientConfigurer.java:66)
    at org.apache.solr.client.solrj.impl.HttpClientUtil.configureClient(HttpClientUtil.java:115)
    at org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:105)
    at org.apache.solr.client.solrj.impl.HttpClientUtilTest.testSetParams(HttpClientUtilTest.java:53)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
    at
  

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 203 - Still Failing!

2013-02-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/203/
Java: 64bit/jdk1.7.0 -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 9311 lines...]
[junit4:junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk/Contents/Home/jre/bin/java 
-XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/heapdumps
 -Dtests.prefix=tests -Dtests.seed=A6216D47F680D1D9 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/testlogging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Dfile.encoding=ISO-8859-1 -classpath 

[jira] [Created] (LUCENE-4764) Faster but more RAM/Disk consuming DocValuesFormat for facets

2013-02-09 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-4764:
--

 Summary: Faster but more RAM/Disk consuming DocValuesFormat for 
facets
 Key: LUCENE-4764
 URL: https://issues.apache.org/jira/browse/LUCENE-4764
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 4.2, 5.0


The new default DV format for binary fields has much more
RAM-efficient encoding of the address for each document ... but it's
also a bit slower at decode time, which affects facets because we
decode for every collected docID.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-4764) Faster but more RAM/Disk consuming DocValuesFormat for facets

2013-02-09 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-4764:
--

Assignee: Michael McCandless

 Faster but more RAM/Disk consuming DocValuesFormat for facets
 -

 Key: LUCENE-4764
 URL: https://issues.apache.org/jira/browse/LUCENE-4764
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.2, 5.0


 The new default DV format for binary fields has much more
 RAM-efficient encoding of the address for each document ... but it's
 also a bit slower at decode time, which affects facets because we
 decode for every collected docID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4764) Faster but more RAM/Disk consuming DocValuesFormat for facets

2013-02-09 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-4764:
---

Attachment: LUCENE-4764.patch

Initial dirty patch (lots of nocommits still):

I added a FacetDocValuesFormat, which goes back to the
more-RAM-consuming-but-faster-for-facets 4.0 format, and also hacked
FastCountingFacetsAggregator to decode directly from the full
byte[], saving the overhead of a method call and of filling a BytesRef.  It
gets faster results than the default (Lucene42) DVFormat:

This is wikibig all 6.6M, 7 facet dims:

{noformat}
Task                   QPS base  StdDev  QPS comp  StdDev    Pct diff
LowTerm                  110.44  (2.0%)    104.86  (1.0%)   -5.1% (  -7% -   -2%)
Fuzzy1                    46.50  (2.6%)     44.83  (1.3%)   -3.6% (  -7% -    0%)
MedSpanNear               28.61  (2.9%)     27.91  (1.8%)   -2.5% (  -6% -    2%)
Respell                   45.56  (4.0%)     44.71  (3.1%)   -1.9% (  -8% -    5%)
Fuzzy2                    52.44  (3.6%)     51.69  (2.2%)   -1.4% (  -6% -    4%)
LowPhrase                 21.30  (6.3%)     21.01  (6.0%)   -1.4% ( -12% -   11%)
LowSpanNear                8.37  (2.4%)      8.26  (3.3%)   -1.3% (  -6% -    4%)
MedSloppyPhrase           25.88  (2.4%)     25.73  (2.3%)   -0.6% (  -5% -    4%)
AndHighMed               105.02  (1.4%)    105.78  (1.0%)    0.7% (  -1% -    3%)
LowSloppyPhrase           20.32  (3.2%)     20.55  (3.5%)    1.1% (  -5% -    8%)
HighSpanNear               3.51  (2.4%)      3.56  (1.7%)    1.2% (  -2% -    5%)
HighPhrase                17.32 (10.1%)     17.56 (10.2%)    1.4% ( -17% -   24%)
AndHighLow               575.37  (3.9%)    583.69  (3.7%)    1.4% (  -5% -    9%)
HighSloppyPhrase           0.92  (6.2%)      0.95  (6.8%)    2.4% (  -9% -   16%)
AndHighHigh               23.25  (1.4%)     24.54  (0.9%)    5.5% (   3% -    7%)
MedPhrase                110.00  (5.3%)    117.78  (6.1%)    7.1% (  -4% -   19%)
Wildcard                  27.31  (2.1%)     32.28  (1.6%)   18.2% (  14% -   22%)
MedTerm                   46.99  (2.7%)     57.33  (1.8%)   22.0% (  17% -   27%)
OrHighMed                 16.38  (3.6%)     21.44  (3.2%)   30.9% (  23% -   39%)
OrHighHigh                 8.63  (3.7%)     11.33  (3.6%)   31.3% (  23% -   39%)
OrHighLow                 16.88  (3.5%)     22.21  (3.3%)   31.6% (  23% -   39%)
Prefix3                   12.91  (2.9%)     17.29  (2.0%)   33.9% (  28% -   39%)
HighTerm                  18.99  (2.8%)     25.99  (2.5%)   36.9% (  30% -   43%)
IntNRQ                     3.54  (3.2%)      4.96  (2.2%)   40.0% (  33% -   46%)
{noformat}

But it's also more Disk/RAM-consuming: trunk facet DVs take 61.2 MB
while the patch takes 80.3 MB (31% more).
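
For reference, the tight loop this saves is essentially the following dgap+VInt 
decode straight off the byte[] (a hedged sketch of the idea, not the patch itself; 
the buffer layout and names are illustrative):

{code}
// Count the ordinals of one document from its dgap+VInt-encoded slice of the byte[].
static void countOrdinals(byte[] buf, int offset, int end, int[] counts) {
  int ord = 0;
  while (offset < end) {
    int value = 0, shift = 0;
    byte b;
    do {                         // standard VInt decode: 7 payload bits per byte
      b = buf[offset++];
      value |= (b & 0x7F) << shift;
      shift += 7;
    } while ((b & 0x80) != 0);
    ord += value;                // dgap: each value is the gap from the previous ordinal
    counts[ord]++;
  }
}
{code}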


 Faster but more RAM/Disk consuming DocValuesFormat for facets
 -

 Key: LUCENE-4764
 URL: https://issues.apache.org/jira/browse/LUCENE-4764
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4764.patch


 The new default DV format for binary fields has much more
 RAM-efficient encoding of the address for each document ... but it's
 also a bit slower at decode time, which affects facets because we
 decode for every collected docID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4764) Faster but more RAM/Disk consuming DocValuesFormat for facets

2013-02-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575162#comment-13575162
 ] 

Robert Muir commented on LUCENE-4764:
-

Should it really write a byte[]? I wonder how it would perform if it wrote, and 
kept in RAM, packed ints, since it knows what's in the byte[].

It would just make a byte[] on the fly to satisfy merging etc. but otherwise 
provide an int-based interface for facets?


 Faster but more RAM/Disk consuming DocValuesFormat for facets
 -

 Key: LUCENE-4764
 URL: https://issues.apache.org/jira/browse/LUCENE-4764
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4764.patch


 The new default DV format for binary fields has much more
 RAM-efficient encoding of the address for each document ... but it's
 also a bit slower at decode time, which affects facets because we
 decode for every collected docID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b76) - Build # 4218 - Failure!

2013-02-09 Thread Robert Muir
They put all that work into their sophisticated backwards-compatibility
layer, and it still didn't work :)

On Sat, Feb 9, 2013 at 4:18 AM, Uwe Schindler u...@thetaphi.de wrote:
 Hi,

 I found out what is causing this bug, it is a JDK 8 bug, a fix already exists 
 but it is not yet in version b76 of JDK 8. A nice explanation can be found in 
 this eclipse issue:
 https://bugs.eclipse.org/bugs/show_bug.cgi?format=multiple&id=399801

 In short - JDK 8 has a new crazy feature (that may help us in the future to 
 make use of interfaces in a backwards compatible way when new methods are 
 added): This feature allows to add new methods to interfaces (in the above 
 case it is a new method in Principal interface of the JDK): public default 
 boolean implies(Subject subject) The keyword default in interfaces means, 
 that implementers do not need to implement this method, because it has a 
 default method body. In our case, commons-httpclient implements this 
 interface and does (of course!) *not* override this method.

 The problem is now the backwards compatibility code in JDK 8 for bytecode 
 from earlier Java versions. It happens that the verifier does not understand 
 the connection between the default method and the missing implementation in 
 older pre-8 byte code.

 The fix should be this one (not sure):
 http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2013-January/005125.html
 http://cr.openjdk.java.net/~bharadwaj/8005689/webrev/

 The error is likely to be fixed in the next JDK snapshot builds, so I will 
 revert to b65 again!

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Uwe Schindler [mailto:u...@thetaphi.de]
 Sent: Saturday, February 09, 2013 7:33 AM
 To: dev@lucene.apache.org
 Subject: RE: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b76) -
 Build # 4218 - Failure!

 Hi,

 this seems to be a bug in either Commons-Http (using a method/signature
 that’s not public) or suddenly Java 8 changed a signature in a non-backwards
 compatible way. I'll dig and open bug report at Oracle or Commons-Http. The
 reason for this appearing today for the first time was the upgrade of Java 8 
 to
 build b76.

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


  -Original Message-
  From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
  Sent: Saturday, February 09, 2013 2:46 AM
  To: dev@lucene.apache.org
  Subject: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0-ea-b76) -
  Build #
  4218 - Failure!
 
  Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/4218/
  Java: 32bit/jdk1.8.0-ea-b76 -client -XX:+UseSerialGC
 
  1 tests failed.
  REGRESSION:
  org.apache.solr.client.solrj.impl.HttpClientUtilTest.testSetParams
 
  Error Message:
  (class: org/apache/http/auth/BasicUserPrincipal, method: implies signature: (Ljavax/security/auth/Subject;)Z) Illegal use of nonvirtual function call
 
  Stack Trace:
  java.lang.VerifyError: (class: org/apache/http/auth/BasicUserPrincipal, method: implies signature: (Ljavax/security/auth/Subject;)Z) Illegal use of nonvirtual function call
    at org.apache.http.auth.UsernamePasswordCredentials.init(UsernamePasswordCredentials.java:83)
    at org.apache.solr.client.solrj.impl.HttpClientUtil.setBasicAuth(HttpClientUtil.java:147)
    at org.apache.solr.client.solrj.impl.HttpClientConfigurer.configure(HttpClientConfigurer.java:66)
    at org.apache.solr.client.solrj.impl.HttpClientUtil.configureClient(HttpClientUtil.java:115)
    at org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:105)
    at org.apache.solr.client.solrj.impl.HttpClientUtilTest.testSetParams(HttpClientUtilTest.java:53)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
    at
  

Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 203 - Still Failing!

2013-02-09 Thread Mark Miller
Dunno what this is - output looks like it suspects jvm crash - seems to keep 
happening.

- Mark

On Feb 9, 2013, at 6:33 AM, Policeman Jenkins Server jenk...@thetaphi.de 
wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/203/
 Java: 64bit/jdk1.7.0 -XX:+UseConcMarkSweepGC
 
 All tests passed
 
 Build Log:
 [...truncated 9311 lines...]
 [junit4:junit4] ERROR: JVM J0 ended with an exception, command line: 
 /Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk/Contents/Home/jre/bin/java 
 -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
 -XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/heapdumps
  -Dtests.prefix=tests -Dtests.seed=A6216D47F680D1D9 -Xmx512M -Dtests.iters= 
 -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
 -Dtests.postingsformat=random -Dtests.docvaluesformat=random 
 -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
 -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
 -Dtests.cleanthreads=perClass 
 -Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/testlogging.properties
  -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
 -Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
 -Djava.io.tmpdir=. 
 -Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
  
 -Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
  -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
 -Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
  -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
 -Djava.awt.headless=true -Dfile.encoding=ISO-8859-1 -classpath 
 

RE: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 203 - Still Failing!

2013-02-09 Thread Uwe Schindler
This is a known problem and appears on all Mac OS X systems quite often (it is 
definitely *not* a VirtualBox issue; Robert got it quite often on his box, too):

[junit4:junit4]  JVM J0: stderr (verbatim) 
[junit4:junit4] java(392,0x147471000) malloc: *** error for object 0x1474dff00: 
pointer being freed was not allocated
[junit4:junit4] *** set a breakpoint in malloc_error_break to debug
[junit4:junit4] java(392,0x1425e6000) malloc: *** error for object 0x1425d4fd0: 
pointer being freed was not allocated
[junit4:junit4] *** set a breakpoint in malloc_error_break to debug
[junit4:junit4]  JVM J0: EOF 

There is an issue already open in Lucene's issue tracker. The bug exists on all 
platforms; Linux by default doesn't check free() calls, but Darwin's libc does.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Mark Miller [mailto:markrmil...@gmail.com]
 Sent: Saturday, February 09, 2013 2:42 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build #
 203 - Still Failing!
 
 Dunno what this is - output looks like it suspects jvm crash - seems to keep
 happening.
 
 - Mark
 
 On Feb 9, 2013, at 6:33 AM, Policeman Jenkins Server jenk...@thetaphi.de
 wrote:
 
  Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/203/
  Java: 64bit/jdk1.7.0 -XX:+UseConcMarkSweepGC
 
  All tests passed
 
  Build Log:
  [...truncated 9311 lines...]
  [junit4:junit4] ERROR: JVM J0 ended with an exception, command line:
  /Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk/Contents/Home/jre/bi
  n/java -XX:+UseConcMarkSweepGC -
 XX:+HeapDumpOnOutOfMemoryError
  -XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-
 Solr-tr
  unk-MacOSX/heapdumps -Dtests.prefix=tests
  -Dtests.seed=A6216D47F680D1D9 -Xmx512M -Dtests.iters=
  -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random
  -Dtests.postingsformat=random -Dtests.docvaluesformat=random
  -Dtests.locale=random -Dtests.timezone=random -
 Dtests.directory=random
  -Dtests.linedocsfile=europarl.lines.txt.gz
  -Dtests.luceneMatchVersion=5.0 -Dtests.cleanthreads=perClass
  -Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace
  /Lucene-Solr-trunk-MacOSX/solr/testlogging.properties
  -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true
  -Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=.
  -Djava.io.tmpdir=.
  -Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-tr
  unk-MacOSX/solr/build/solr-core/test/temp
  -Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-tru
  nk-MacOSX/lucene/build/clover/db
  -Djava.security.manager=org.apache.lucene.util.TestSecurityManager
  -Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-S
  olr-trunk-MacOSX/lucene/tools/junit4/tests.policy
  -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1
  -Djetty.insecurerandom=1
  -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory
  -Djava.awt.headless=true -Dfile.encoding=ISO-8859-1 -classpath
  /Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-
 MacOSX/solr/b
  uild/solr-core/classes/test:/Users/jenkins/jenkins-slave/workspace/Luc
  ene-Solr-trunk-MacOSX/solr/build/solr-test-framework/classes/java:/Use
  rs/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/build
  /solr-core/test-files:/Users/jenkins/jenkins-slave/workspace/Lucene-So
  lr-trunk-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkin
  s/jenkins-slave/workspace/Lucene-Solr-trunk-
 MacOSX/lucene/build/codecs
  /classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk
  -MacOSX/solr/build/solr-solrj/classes/java:/Users/jenkins/jenkins-slav
  e/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/java
  :/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-
 MacOSX/lucen
  e/build/analysis/common/lucene-analyzers-common-5.0-
 SNAPSHOT.jar:/User
  s/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-
 MacOSX/lucene/buil
  d/analysis/kuromoji/lucene-analyzers-kuromoji-5.0-SNAPSHOT.jar:/Users/
  jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-
 MacOSX/lucene/build/
  analysis/phonetic/lucene-analyzers-phonetic-5.0-SNAPSHOT.jar:/Users/je
  nkins/jenkins-slave/workspace/Lucene-Solr-trunk-
 MacOSX/lucene/build/hi
  ghlighter/lucene-highlighter-5.0-SNAPSHOT.jar:/Users/jenkins/jenkins-s
  lave/workspace/Lucene-Solr-trunk-
 MacOSX/lucene/build/memory/lucene-mem
  ory-5.0-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-
 Sol
  r-trunk-MacOSX/lucene/build/misc/lucene-misc-5.0-
 SNAPSHOT.jar:/Users/j
  enkins/jenkins-slave/workspace/Lucene-Solr-trunk-
 MacOSX/lucene/build/s
  patial/lucene-spatial-5.0-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/wo
  rkspace/Lucene-Solr-trunk-MacOSX/lucene/build/suggest/lucene-suggest-
 5
  .0-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-tru
  

Re: [JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 203 - Still Failing!

2013-02-09 Thread Dawid Weiss
 Dunno what this is - output looks like it suspects jvm crash - seems to keep 
 happening.

There is no way to tell what happened from the runner's level so the
status is pretty generic. The JVM process was terminated, that's all
the runner knows. But if you click on the full test output here:

http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/203/consoleText

and do what the runner told you to do :)

 Very likely a JVM crash.  Process output piped in logs above.

then you'll see that the output is what Uwe indeed pointed out.

[junit4:junit4] HEARTBEAT J0 PID(392@localhost): 2013-02-09T11:30:05,
stalled for 69.1s at: ChaosMonkeySafeLeaderTest.testDistribSearch
[junit4:junit4] HEARTBEAT J0 PID(392@localhost): 2013-02-09T11:31:05,
stalled for  129s at: ChaosMonkeySafeLeaderTest.testDistribSearch
[junit4:junit4] HEARTBEAT J0 PID(392@localhost): 2013-02-09T11:32:05,
stalled for  189s at: ChaosMonkeySafeLeaderTest.testDistribSearch
[junit4:junit4] JVM J0: stderr was not empty, see:
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20130209_110428_880.syserr
[junit4:junit4]  JVM J0: stderr (verbatim) 
[junit4:junit4] java(392,0x147471000) malloc: *** error for object
0x1474dff00: pointer being freed was not allocated
[junit4:junit4] *** set a breakpoint in malloc_error_break to debug
[junit4:junit4] java(392,0x1425e6000) malloc: *** error for object
0x1425d4fd0: pointer being freed was not allocated
[junit4:junit4] *** set a breakpoint in malloc_error_break to debug
[junit4:junit4]  JVM J0: EOF 

D.


 - Mark

 On Feb 9, 2013, at 6:33 AM, Policeman Jenkins Server jenk...@thetaphi.de 
 wrote:

 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/203/
 Java: 64bit/jdk1.7.0 -XX:+UseConcMarkSweepGC

 All tests passed

 Build Log:
 [...truncated 9311 lines...]
 [junit4:junit4] ERROR: JVM J0 ended with an exception, command line: 
 /Library/Java/JavaVirtualMachines/jdk1.7.0_10.jdk/Contents/Home/jre/bin/java 
 -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
 -XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/heapdumps
  -Dtests.prefix=tests -Dtests.seed=A6216D47F680D1D9 -Xmx512M -Dtests.iters= 
 -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
 -Dtests.postingsformat=random -Dtests.docvaluesformat=random 
 -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
 -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
 -Dtests.cleanthreads=perClass 
 -Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/testlogging.properties
  -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
 -Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
 -Djava.io.tmpdir=. 
 -Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
  
 -Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
  -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
 -Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
  -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
 -Djava.awt.headless=true -Dfile.encoding=ISO-8859-1 -classpath 
 

[jira] [Commented] (SOLR-4421) On CoreContainer shutdown, all SolrCores should publish their state as DOWN

2013-02-09 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575173#comment-13575173
 ] 

Commit Tag Bot commented on SOLR-4421:
--

[branch_4x commit] Mark Robert Miller
http://svn.apache.org/viewvc?view=revisionrevision=1444364

SOLR-4421: On CoreContainer shutdown, all SolrCores should publish their state 
as DOWN.


 On CoreContainer shutdown, all SolrCores should publish their state as DOWN
 ---

 Key: SOLR-4421
 URL: https://issues.apache.org/jira/browse/SOLR-4421
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.2, 5.0

 Attachments: SOLR-4421.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4422) SolrJ DocumentObjectBinder class loses Map.Entry order when repopulating dynamic field values, such as @Field(dynamic_field_values*).

2013-02-09 Thread Mark S (JIRA)
Mark S created SOLR-4422:


 Summary: SolrJ DocumentObjectBinder class loses Map.Entry order 
when repopulating dynamic field values, such as @Field(dynamic_field_values*).
 Key: SOLR-4422
 URL: https://issues.apache.org/jira/browse/SOLR-4422
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.0
Reporter: Mark S


The SolrJ DocumentObjectBinder class does not retain order when reading 
dynamic field values into a Map.  More specifically, the order in which the Map 
is populated by an application is different from the order in which the Map is 
repopulated by SolrJ.


@Field("dynamic_field_values*")
private Map<String, Object> dynamicFieldValuesMap = new LinkedHashMap<String, 
Object>(16);



I believe the following would address this issue.
-  allValuesMap = new HashMap<String, Object>();
+  allValuesMap = new LinkedHashMap<String, Object>();


Or, maybe the DocumentObjectBinder should directly populate the Map field 
if that field is not null.



I am pretty sure the issue does NOT exist with dynamic field values in a List, as 
the SolrJ DocumentObjectBinder uses a List implementation that retains ordering 
(new ArrayList<String>()).  So the following will retain ordering.

@Field("dynamic_field_values_ss")
private List<String> dynamicFieldValues = new ArrayList<String>();
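
A hedged usage sketch of how the reordering shows up through the binder (MyBean 
is a class holding the annotated dynamicFieldValuesMap above; the URL and query 
are illustrative):

{code}
HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
QueryResponse rsp = server.query(new SolrQuery("*:*"));
List<MyBean> beans = rsp.getBeans(MyBean.class);  // DocumentObjectBinder rebuilds the Map here
// Iterating beans.get(0).dynamicFieldValuesMap no longer follows the order the
// application used when it populated its LinkedHashMap before indexing.
{code}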


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4422) SolrJ DocumentObjectBinder class loses Map.Entry order when repopulating dynamic field values, such as @Field(dynamic_field_values*).

2013-02-09 Thread Mark S (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark S updated SOLR-4422:
-

Description: 
The SolrJ DocumentObjectBinder class does not retain order when reading 
dynamic field values into a Map.  More specifically, the order in which the Map 
is populated by an application is different from the order in which the Map is 
repopulated by SolrJ.


@Field("dynamic_field_values*")
private Map<String, Object> dynamicFieldValuesMap = new LinkedHashMap<String, 
Object>(16);



I believe the following would address this issue.
  -  allValuesMap = new HashMap<String, Object>();
  +  allValuesMap = new LinkedHashMap<String, Object>();


Or, maybe the DocumentObjectBinder should directly populate the Map field 
if that field is not null.



I am pretty sure the issue does NOT exist with dynamic field values in a List, as 
the SolrJ DocumentObjectBinder uses a List implementation that retains ordering 
(new ArrayList<String>()).  So the following will retain ordering.

@Field("dynamic_field_values_ss")
private List<String> dynamicFieldValues = new ArrayList<String>();


  was:
The SolrJ DocumentObjectBinder class does not retain order when reading 
dynamic field values into a Map.  More specifically, the order in which the Map 
is populated by an application is different from the order in which the Map is 
repopulated by SolrJ.


@Field("dynamic_field_values*")
private Map<String, Object> dynamicFieldValuesMap = new LinkedHashMap<String, 
Object>(16);



I believe the following would address this issue.
-  allValuesMap = new HashMap<String, Object>();
+  allValuesMap = new LinkedHashMap<String, Object>();


Or, maybe the DocumentObjectBinder should directly populate the Map field 
if that field is not null.



I am pretty sure the issue does NOT exist with dynamic field values in a List, as 
the SolrJ DocumentObjectBinder uses a List implementation that retains ordering 
(new ArrayList<String>()).  So the following will retain ordering.

@Field("dynamic_field_values_ss")
private List<String> dynamicFieldValues = new ArrayList<String>();



 SolrJ DocumentObjectBinder class loses Map.Entry order when repopulating 
 dynamic field values, such as @Field(dynamic_field_values*).
 ---

 Key: SOLR-4422
 URL: https://issues.apache.org/jira/browse/SOLR-4422
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.0
Reporter: Mark S

 The SolrJ DocumentObjectBinder class does not retain order when reading 
 dynamic field values into a Map.  More specifically, the order in which the 
 Map is populated by an application is different from the order in which the Map 
 is repopulated by SolrJ.
 
 @Field("dynamic_field_values*")
 private Map<String, Object> dynamicFieldValuesMap = new LinkedHashMap<String, 
 Object>(16);
 
 I believe the following would address this issue.
   -  allValuesMap = new HashMap<String, Object>();
   +  allValuesMap = new LinkedHashMap<String, Object>();
 Or, maybe the DocumentObjectBinder should directly populate the Map field 
 if that field is not null.
 
 I am pretty sure the issue does NOT exist with dynamic field values in a List, as 
 the SolrJ DocumentObjectBinder uses a List implementation that retains 
 ordering (new ArrayList<String>()).  So the following will retain ordering.
 @Field("dynamic_field_values_ss")
 private List<String> dynamicFieldValues = new ArrayList<String>();

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4422) SolrJ DocumentObjectBinder class loses Map.Entry order when repopulating dynamic field values, such as @Field(dynamic_field_values*).

2013-02-09 Thread Mark S (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark S updated SOLR-4422:
-

Description: 
The SolrJ DocumentObjectBinder class does not retain order when reading 
dynamic field values into a Map.  More specifically, the order in which the Map 
is populated by an application is different from the order in which the Map is 
repopulated by SolrJ.


@Field("dynamic_field_values*")
private Map<String, Object> dynamicFieldValuesMap = new LinkedHashMap<String, 
Object>(16);



I believe the following would address this issue.
  --  allValuesMap = new HashMap<String, Object>();
  +  allValuesMap = new LinkedHashMap<String, Object>();


Or, maybe the DocumentObjectBinder should directly populate the Map field 
if that field is not null.



I am pretty sure the issue does NOT exist with dynamic field values in a List, as 
the SolrJ DocumentObjectBinder uses a List implementation that retains ordering 
(new ArrayList<String>()).  So the following will retain ordering.

@Field("dynamic_field_values_ss")
private List<String> dynamicFieldValues = new ArrayList<String>();


  was:
The SolrJ DocumentObjectBinder class does not retain order when reading 
dynamic field values into a Map.  More specifically, the order in which the Map 
is populated by an application is different from the order in which the Map is 
repopulated by SolrJ.


@Field("dynamic_field_values*")
private Map<String, Object> dynamicFieldValuesMap = new LinkedHashMap<String, 
Object>(16);



I believe the following would address this issue.
  -  allValuesMap = new HashMap<String, Object>();
  +  allValuesMap = new LinkedHashMap<String, Object>();


Or, maybe the DocumentObjectBinder should directly populate the Map field 
if that field is not null.



I am pretty sure the issue does NOT exist with dynamic field values in a List, as 
the SolrJ DocumentObjectBinder uses a List implementation that retains ordering 
(new ArrayList<String>()).  So the following will retain ordering.

@Field("dynamic_field_values_ss")
private List<String> dynamicFieldValues = new ArrayList<String>();



 SolrJ DocumentObjectBinder class loses Map.Entry order when repopulating 
 dynamic field values, such as @Field(dynamic_field_values*).
 ---

 Key: SOLR-4422
 URL: https://issues.apache.org/jira/browse/SOLR-4422
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.0
Reporter: Mark S

 The SolrJ DocumentObjectBinder class does not retain order when reading 
 dynamic field values into a Map.  More specifically, the order in which the 
 Map is populated by an application is different from the order in which the Map 
 is repopulated by SolrJ.
 
 @Field("dynamic_field_values*")
 private Map<String, Object> dynamicFieldValuesMap = new LinkedHashMap<String, 
 Object>(16);
 
 I believe the following would address this issue.
   --  allValuesMap = new HashMap<String, Object>();
   +  allValuesMap = new LinkedHashMap<String, Object>();
 Or, maybe the DocumentObjectBinder should directly populate the Map field 
 if that field is not null.
 
 I am pretty sure the issue does NOT exist with dynamic field values in a List, as 
 the SolrJ DocumentObjectBinder uses a List implementation that retains 
 ordering (new ArrayList<String>()).  So the following will retain ordering.
 @Field("dynamic_field_values_ss")
 private List<String> dynamicFieldValues = new ArrayList<String>();

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4764) Faster but more RAM/Disk consuming DocValuesFormat for facets

2013-02-09 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575189#comment-13575189
 ] 

Shai Erera commented on LUCENE-4764:


bq. i wonder how it would perform if it wrote and kept in ram packed ints, 
since it knows whats in the byte[]

We've tried that in the past. I don't remember on which issue we posted the 
results, but they were not compelling. I.e. what we tried is to keep the ints 
as int[] vs. packed-ints. int[] performed (IIRC) 50% faster, while packed-ints 
were only ~6-10% faster. Also, their RAM footprints were very close. The problem 
is that packed-ints are only good if you know something about the numbers, i.e. 
their size, distribution etc. But with category ordinals, on this Wikipedia 
index, there's nothing special about them. Really, every document keeps close 
to arbitrary integers between 1 and 2.2M ...

If the following math holds -- 25 ords per document (at 4 bytes per int, that's 
100 bytes/doc) x 6.6M documents -- that's going to be ~660MB (offsets not 
included). I suspect that packed-ints will consume approximately the same size 
(at least, per past results) but won't yield significantly better performance. 
Therefore if we want to cache anything at the int level, we should do an int[] 
caching aggregator.

Mike, correct me if I'm wrong.

 Faster but more RAM/Disk consuming DocValuesFormat for facets
 -

 Key: LUCENE-4764
 URL: https://issues.apache.org/jira/browse/LUCENE-4764
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4764.patch


 The new default DV format for binary fields has much more
 RAM-efficient encoding of the address for each document ... but it's
 also a bit slower at decode time, which affects facets because we
 decode for every collected docID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4764) Faster but more RAM/Disk consuming DocValuesFormat for facets

2013-02-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575192#comment-13575192
 ] 

Robert Muir commented on LUCENE-4764:
-

{quote}
 The problem is that packed-ints is only good if you know something about the 
numbers, i.e. their size, distribution etc. But with category ordinals, on this 
Wikipedia index, there's nothing special about them. Really every document 
keeps close to arbitrary integers between 1 - 2.2M
{quote}

{quote}
If the following math holds – 25 ords per document
{quote}

Right, but I don't look at what it's doing this way. Today the ords for the 
document are vint-deltas (or similar) within a byte[], right?

So instead perhaps the codec could encode the first ord (minimum) for the doc 
in a simple int[] or whatever, but the additional deltas are all within a big 
packed stream or something like that.

In all cases I like the idea of a specialized DocValuesFormat for facets. It 
doesn't have to be one-size-fits-all: it could have a number of strategies 
depending on whether someone has 5 ords/doc or 500 ords/doc, for example, by 
examining the iterator once at index-time to decide.
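
A hedged sketch of that layout (not an actual DocValuesFormat; the names and the 
parallel offset array are illustrative), just to make the read path concrete:

{code}
import org.apache.lucene.util.packed.PackedInts;

class FacetOrdsSketch {
  final int[] firstOrd;            // minimum ordinal per document, plain int[]
  final int[] deltaStart;          // where each document's deltas start in the packed stream
  final PackedInts.Reader deltas;  // all remaining ordinals, as deltas, in one packed stream

  FacetOrdsSketch(int[] firstOrd, int[] deltaStart, PackedInts.Reader deltas) {
    this.firstOrd = firstOrd;
    this.deltaStart = deltaStart;
    this.deltas = deltas;
  }

  /** Count the ordinals of one document into counts[]. */
  void count(int doc, int[] counts) {
    int ord = firstOrd[doc];
    counts[ord]++;
    for (int i = deltaStart[doc]; i < deltaStart[doc + 1]; i++) {
      ord += (int) deltas.get(i);  // each delta is relative to the previous ordinal
      counts[ord]++;
    }
  }
}
{code}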

 Faster but more RAM/Disk consuming DocValuesFormat for facets
 -

 Key: LUCENE-4764
 URL: https://issues.apache.org/jira/browse/LUCENE-4764
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4764.patch


 The new default DV format for binary fields has much more
 RAM-efficient encoding of the address for each document ... but it's
 also a bit slower at decode time, which affects facets because we
 decode for every collected docID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4764) Faster but more RAM/Disk consuming DocValuesFormat for facets

2013-02-09 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575199#comment-13575199
 ] 

Shai Erera commented on LUCENE-4764:


I think that it would actually be interesting to test *only* VInt, without 
dgap. Because the ords seem to be arbitrary, I'm not even sure what they buy 
us. Mike, can you try that? Index with a Sorting(Unique(VInt8)) and modify 
FastCountingFacetsAggregator to not do dgap? Would be interesting to see the 
effects on compression as well as speed. Dgap is something you want to do if 
you suspect that a document will have e.g. higher ordinals, that are close to 
each other in such a way that dgap would make them compress better ...

Robert, if I understand your proposal correctly, what you suggest is to encode:

int[] -- pairs of highest/lowest ordinal in a document + length (#additional ords)
byte[] -- a packed-int of deltas for all documents (but deltas are computed off 
the absolute ord in the int[])

Why would that be better than a single byte[] (packed-ints) + offsets?

 Faster but more RAM/Disk consuming DocValuesFormat for facets
 -

 Key: LUCENE-4764
 URL: https://issues.apache.org/jira/browse/LUCENE-4764
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4764.patch


 The new default DV format for binary fields has much more
 RAM-efficient encoding of the address for each document ... but it's
 also a bit slower at decode time, which affects facets because we
 decode for every collected docID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3755) shard splitting

2013-02-09 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575205#comment-13575205
 ] 

Commit Tag Bot commented on SOLR-3755:
--

[trunk commit] Shalin Shekhar Mangar
http://svn.apache.org/viewvc?view=revisionrevision=1444397

SOLR-3755: Test for SolrIndexSplitter


 shard splitting
 ---

 Key: SOLR-3755
 URL: https://issues.apache.org/jira/browse/SOLR-3755
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Yonik Seeley
 Attachments: SOLR-3755.patch, SOLR-3755.patch, 
 SOLR-3755-testSplitter.patch, SOLR-3755-testSplitter.patch


 We can currently easily add replicas to handle increases in query volume, but 
 we should also add a way to add additional shards dynamically by splitting 
 existing shards.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3755) shard splitting

2013-02-09 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575207#comment-13575207
 ] 

Commit Tag Bot commented on SOLR-3755:
--

[branch_4x commit] Shalin Shekhar Mangar
http://svn.apache.org/viewvc?view=revisionrevision=1444398

SOLR-3755: Test for SolrIndexSplitter


 shard splitting
 ---

 Key: SOLR-3755
 URL: https://issues.apache.org/jira/browse/SOLR-3755
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Yonik Seeley
 Attachments: SOLR-3755.patch, SOLR-3755.patch, 
 SOLR-3755-testSplitter.patch, SOLR-3755-testSplitter.patch


 We can currently easily add replicas to handle increases in query volume, but 
 we should also add a way to add additional shards dynamically by splitting 
 existing shards.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4423) NPE in SolrParams.toSolrParams when an invalid datatype is used in an XML named list in solrconfig.xml

2013-02-09 Thread Jack Krupansky (JIRA)
Jack Krupansky created SOLR-4423:


 Summary: NPE in SolrParams.toSolrParams when an invalid datatype 
is used in an XML named list in solrconfig.xml
 Key: SOLR-4423
 URL: https://issues.apache.org/jira/browse/SOLR-4423
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.0
Reporter: Jack Krupansky


An NPE occurs in SolrParams.toSolrParams if an invalid datatype is used for an 
entry in an XML named list, such as for defaults in a request handler in 
solrconfig.xml.

Repro:

Add this snippet to the Solr 4.0 example solrconfig.xml:

{code}
  <requestHandler name="/testBug" class="solr.SearchHandler">
    <lst name="defaults">
      <string name="df">name</string>
    </lst>
  </requestHandler>
{code}

The user error there is using "string" instead of "str".

Now try to start Solr. This NPE occurs:

{code}
Feb 09, 2013 12:02:27 PM org.apache.solr.core.CoreContainer create
SEVERE: Unable to create core: collection1
java.lang.NullPointerException
    at org.apache.solr.common.params.SolrParams.toSolrParams(SolrParams.java:295)
    at org.apache.solr.handler.RequestHandlerBase.init(RequestHandlerBase.java:100)
    at org.apache.solr.handler.component.SearchHandler.init(SearchHandler.java:76)
    at org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:178)
    at org.apache.solr.core.SolrCore.init(SolrCore.java:657)
    at org.apache.solr.core.SolrCore.init(SolrCore.java:566)
    at org.apache.solr.core.CoreContainer.create(CoreContainer.java:850)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:534)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:356)
    at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:308)
    at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:107)
{code}

The NPE is due to the fact that the element/parameter value is null at this 
line:

{code}
  String prev = map.put(params.getName(i), params.getVal(i).toString());
{code}

It is null because DOMUtil.addToNamedList leaves it null if the datatype is 
invalid:

{code}
Object val=null;

if ("lst".equals(type)) {
  ...
} else {
  final String textValue = getText(nd);
  try {
    if ("str".equals(type)) {
      val = textValue;
      ...
    } else if ("bool".equals(type)) {
      val = StrUtils.parseBool(textValue);
    }
    // :NOTE: Unexpected Node names are ignored
    // :TODO: should we generate an error here?
  } catch (NumberFormatException nfe) {
    ...
  }
}

if (nlst != null) nlst.add(name,val);
if (arr != null) arr.add(val);
{code}

The NOTE is incorrect in that the node is not ignored - it is in fact added 
to the list as shown, but with a null value, which causes problems later.

I suggest replacing the TODO (and NOTE) with an else that throws a readable 
exception such as "Named list type '" + type + "' is invalid for name '" + 
name + "'".
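
For example, the else branch could look something like this (a sketch of the 
suggestion, not committed code; whether SolrException is the right exception 
type here is a detail):

{code}
} else {
  // Unknown element name in a named list: fail fast instead of adding a null value.
  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
      "Named list type '" + type + "' is invalid for name '" + name + "'");
}
{code}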


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4424) Solr should complain if a parameter has no name in solrconfig.xml

2013-02-09 Thread Jack Krupansky (JIRA)
Jack Krupansky created SOLR-4424:


 Summary: Solr should complain if a parameter has no name in 
solrconfig.xml
 Key: SOLR-4424
 URL: https://issues.apache.org/jira/browse/SOLR-4424
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 4.0
Reporter: Jack Krupansky


Solr should complain with an appropriate message if the 'name' attribute is 
missing for a parameter in solrconfig.xml, such as for the defaults 
parameters for a request handler.

Repro:

Add this snippet to solrconfig.xml:

{code}
  <requestHandler name="/testBug" class="solr.SearchHandler">
    <lst name="defaults">
      <str Name="df">name</str>
    </lst>
  </requestHandler>
{code}

Here the user error is "Name", which should be lower-case "name".

Start Solr.

No complaint from Solr that the "name" attribute is missing. In this case, the 
spelling of the attribute name is correct, but the case is wrong - "Name" vs. 
"name".

The DOMUtil.addToNamedList method fetches and uses the name attribute without 
checking to see if it might be null or missing:

{code}
final String name = getAttr(nd, "name");

...

if (nlst != null) nlst.add(name,val);
{code}

I suggest that if the "name" attribute is null or missing, an exception be 
thrown that says "Named list element is missing a 'name' attribute", along with 
the full text of the element, whatever attributes it does have, and its value 
text. Is there a way to get the line number?
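
For example, a guard along these lines (a sketch of the suggestion, not committed 
code; the exception type and wording are illustrative):

{code}
final String name = getAttr(nd, "name");
if (name == null) {
  // No (lower-case) 'name' attribute present: report the offending element instead of continuing.
  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
      "Named list element <" + nd.getNodeName() + "> is missing a 'name' attribute");
}
{code}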


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575216#comment-13575216
 ] 

Shai Erera commented on LUCENE-4750:


Mike, I'm migrating the patch to after the packages reorg, and I want to handle 
the nocommits. In the process I thought about two things:

# Shouldn't rewrite() call res.rewrite()? We don't know what baseQuery will 
rewrite to...
# Perhaps instead of keeping a {{List<Query> drillDownQueries}}, we can build 
the BQ on-the-fly in {{.add()}}? We can then use it in toString and rewrite 
directly, vs. now, where rewrite always creates a new BQ.
#* BTW, is it an error to *not* create a new BQ on every rewrite? I don't 
think so, but want to verify...

If we do the 2nd, then we can check in the ctor already if baseQuery is a BQ 
and just use it to add the drill-down clauses to? If we need to know what 
baseQuery was, we can clone() it? Is it perhaps too much?
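
To make the second point concrete, here is a very rough sketch of what building 
the BQ on-the-fly in {{add()}} could look like (the class shape, field names and 
the use of {{DrillDown.term()}} are only my illustration, not the actual patch):

{code}
public class DrillDownQuery extends Query {

  private final FacetIndexingParams fip;
  private final BooleanQuery query = new BooleanQuery(true); // coord disabled

  public DrillDownQuery(FacetIndexingParams fip, Query baseQuery) {
    this.fip = fip;
    if (baseQuery != null) {
      query.add(baseQuery, Occur.MUST);
    }
  }

  /** ANDs a group of drill-down categories; the categories are OR'ed internally. */
  public void add(CategoryPath... paths) {
    BooleanQuery group = new BooleanQuery(true);
    for (CategoryPath path : paths) {
      group.add(new TermQuery(DrillDown.term(fip, path)), Occur.SHOULD);
    }
    query.add(group, Occur.MUST); // built here, not deferred to rewrite()
  }

  @Override
  public Query rewrite(IndexReader reader) throws IOException {
    return query; // no new BQ created on every rewrite
  }

  @Override
  public String toString(String field) {
    return query.toString(field);
  }
}
{code}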

 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Michael McCandless
 Attachments: LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13575219#comment-13575219
 ] 

Shai Erera commented on LUCENE-4750:


Oh, and if we do build the result BQ on the fly, it will make implementing 
equals() and hashCode() very simple ...

 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Michael McCandless
 Attachments: LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13575228#comment-13575228
 ] 

Michael McCandless commented on LUCENE-4750:


bq. Mike, I'm migrating the patch to after the packages reorg, and I want to 
handle the nocommits.

Thanks Shai!

bq. Shouldn't rewrite() call res.rewrite()? We don't know what baseQuery will 
rewrite to...

This isn't actually necessary: IndexSearcher will call rewrite on whatever we 
return.
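
(For reference, IndexSearcher rewrites to a fixed point before creating the 
Weight, roughly like the loop below, so DrillDownQuery.rewrite() need not 
recurse itself:)

{code}
Query query = original;
for (Query rewritten = query.rewrite(reader); rewritten != query;
     rewritten = query.rewrite(reader)) {
  query = rewritten;
}
// 'query' is now fully rewritten
{code}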

bq. Perhaps instead of keeping a List<Query> drillDownQueries, we can build the 
BQ on-the-fly in .add()? We can then use it in toString and rewrite directly, 
versus now, where rewrite always creates a new BQ.

I think that's good?

bq. BTW, is it an error to not create a new BQ on every rewrite? I don't 
think so, but want to verify...

I don't think it's an error to not make a new Query on every rewrite.  I 
suppose there is some risk that an app might run the query while continuing to 
add drill downs in another thread ... but apps just shouldn't do that ...

That said ... I don't really like how toString shows impl details (eg, the 
$facet field name), vs eg just the CP/s of each drill-down, but I think that's 
pretty minor ...

bq. Oh, and if we do build the result BQ on the fly, it will make implementing 
equals() and hashCode() very simple ...

Good!

 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Michael McCandless
 Attachments: LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4425) Provide a way to change state of multiple shards atomically

2013-02-09 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-4425:
--

 Summary: Provide a way to change state of multiple shards 
atomically
 Key: SOLR-4425
 URL: https://issues.apache.org/jira/browse/SOLR-4425
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Anshum Gupta


Provide a way to change the state of multiple shards atomically, perhaps through 
the Collection API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #241: POMs out of sync

2013-02-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/241/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch

Error Message:
Test Setup Failure: shard1 should have just been set up to be inconsistent - 
but it's still consistent

Stack Trace:
java.lang.AssertionError: Test Setup Failure: shard1 should have just been set 
up to be inconsistent - but it's still consistent
at 
__randomizedtesting.SeedInfo.seed([70E5FE6E872D42C:86E8D1FE9F2DB410]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:216)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:794)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:679)





[jira] [Commented] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13575259#comment-13575259
 ] 

Michael McCandless commented on LUCENE-4750:


bq. If we do the 2nd, then we can check in the ctor already if baseQuery is a 
BQ and just use it to add the drill-down clauses to? If we need to know what 
baseQuery was, we can clone() it? Is it perhaps too much?

I thought more about this ... while this is a tempting optimization ... I think 
it will mess up scoring in general, because if the original query was using 
coord, then adding those clauses will mess this up?  Maybe we could do this if 
the original query isn't using coord?  But maybe leave this as future TODO 
optimization ... it seems dangerous :)

 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Michael McCandless
 Attachments: LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13575260#comment-13575260
 ] 

Shai Erera commented on LUCENE-4750:


bq. I don't really like how toString shows impl details (eg, the $facet field 
name), vs eg just the CP/s of each drill-down, but I think that's pretty minor 
...

That's the current state of the patch anyway. Creating a BQ on-the-fly won't 
change it. And I'm not sure it's bad ... that way, toString shows you the 
query that is executed, so e.g. if you indexed into multiple category lists, 
you see that right away. Although, I agree it may be redundant information for 
those who index into one category list...

bq. Maybe we could do this if the original query isn't using coord? But maybe 
leave this as future TODO optimization

Ok.

 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Michael McCandless
 Attachments: LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-4750:
---

Attachment: LUCENE-4750.patch

Patch updated to trunk + handles all nocommits. I moved to creating the BQ 
on-the-fly, and implemented hashCode() / equals(). Also implemented clone() 
(there's a test showing what would happen if we don't).

.rewrite() throws an exception if no baseQuery is given and no drill-down 
categories were added either. However, if the BQ has at least one clause (either 
a baseQuery was given, or at least one drill-down category was added), it 
permits it. If only a baseQuery was given, it will rewrite to it, so I see no 
reason to throw an exception.
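
A condensed sketch of that behavior (not the literal patch code):

{code}
@Override
public Query rewrite(IndexReader reader) throws IOException {
  if (query.clauses().isEmpty()) {
    // neither a baseQuery nor any drill-down category was given
    throw new IllegalStateException("no base query or drill-down categories");
  }
  // at least one clause (baseQuery and/or drill-downs): use the wrapped BQ,
  // which for a lone baseQuery effectively rewrites to it
  return query;
}
{code}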

 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Michael McCandless
 Attachments: LUCENE-4750.patch, LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13575289#comment-13575289
 ] 

Michael McCandless commented on LUCENE-4750:


+1, patch looks great, thanks Shai!

Can we strengthen the javadocs to call out that baseQuery can be null,
and this means pure browse?  Or maybe a ctor that doesn't take a
baseQuery?  I feel like it's non-obvious / not advertised now ...


 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Michael McCandless
 Attachments: LUCENE-4750.patch, LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-trunk-Linux-Java6-64-test-only - Build # 23668 - Failure!

2013-02-09 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java6-64-test-only/23668/

All tests passed

Build Log:
[...truncated 1118 lines...]
[junit4:junit4] ERROR: JVM J4 ended with an exception, command line: 
/opt/java/64/jdk1.6.0_33/jre/bin/java -Dtests.prefix=tests 
-Dtests.seed=FB9D0F2ECF69EFDC -Xmx512M -Dtests.iters= -Dtests.verbose=false 
-Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=5.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/core/test/temp
 
-Dclover.db.dir=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/tools/junit4/tests.policy
 -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -classpath 
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/codecs/classes/java:/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/test-framework/classes/java:/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/test-framework/lib/junit-4.10.jar:/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/test-framework/lib/randomizedtesting-runner-2.0.8.jar:/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/core/classes/java:/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/core/classes/test:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/apache-rat-tasks-0.8.jar:/var/lib/jenkins/.ant/lib/ivy-2.2.0.jar:/var/lib/jenkins/.ant/lib/apache-rat-0.8.jar:/var/lib/jenkins/.ant/lib/apache-rat-plugin-0.8.jar:/var/lib/jenkins/.ant/lib/apache-rat-core-0.8.jar:/var/lib/jenkins/.ant/lib/clover-2.6.3.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-apache-regexp.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ivy-2.2.0.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-jmf.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-jdepend.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-swing.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-antlr.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-testutil.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-apache-bcel.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-jsch.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-junit.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-commons-net.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-junit4.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-jai.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-javamail.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/Ant/Ant_1.8.3/lib/ant-netrexx.jar:/opt/java/64/jdk1.6.0_33/lib/tools.jar:/var/lib/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.0.8.jar
 -ea:org.apache.lucene... -ea:org.apache.solr... 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe -eventsfile 
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/core/test/temp/junit4-J4-20130210_023052_299.events
 
@/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/core/test/temp/junit4-J4-20130210_023052_299.suites
[junit4:junit4] ERROR: JVM J4 ended with an exception: Forked process returned 
with error code: 134. Very likely a JVM crash.  Process output piped in logs 
above.
[junit4:junit4] at 
com.carrotsearch.ant.tasks.junit4.JUnit4.executeSlave(JUnit4.java:1253)
[junit4:junit4] at 
com.carrotsearch.ant.tasks.junit4.JUnit4.access$000(JUnit4.java:66)
[junit4:junit4] at 
com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:821)
[junit4:junit4] at 
com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:818)
[junit4:junit4] at 

Re: [JENKINS] Lucene-trunk-Linux-Java6-64-test-only - Build # 23668 - Failure!

2013-02-09 Thread Mark Miller
It's like in Mario when you find the hidden way to all the warp tubes.

ShouldNotReachHere() has been breached.

- Mark

[junit4:junit4]  JVM J4: stdout (verbatim) 
[junit4:junit4] #
[junit4:junit4] # A fatal error has been detected by the Java Runtime 
Environment:
[junit4:junit4] #
[junit4:junit4] #  Internal Error (ciMethodData.cpp:142), pid=21908, 
tid=140234453112576
[junit4:junit4] #  Error: ShouldNotReachHere()
[junit4:junit4] #
[junit4:junit4] # JRE version: 6.0_33-b03
[junit4:junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.8-b03 mixed 
mode linux-amd64 compressed oops)
[junit4:junit4] # An error report file with more information is saved as:
[junit4:junit4] # 
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/core/test/J4/hs_err_pid21908.log
[junit4:junit4] #
[junit4:junit4] # If you would like to submit a bug report, please visit:
[junit4:junit4] #   
http://java.sun.com/webapps/bugreport/crash.jsp

[junit4:junit4] #

On Feb 9, 2013, at 8:34 PM, buil...@flonkings.com wrote:

 Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java6-64-test-only/23668/
 
 All tests passed
 
 Build Log:
 [...truncated 1118 lines...]
 [junit4:junit4] ERROR: JVM J4 ended with an exception, command line: 
 /opt/java/64/jdk1.6.0_33/jre/bin/java -Dtests.prefix=tests 
 -Dtests.seed=FB9D0F2ECF69EFDC -Xmx512M -Dtests.iters= -Dtests.verbose=false 
 -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random 
 -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
 -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
 -Dtests.luceneMatchVersion=5.0 -Dtests.cleanthreads=perMethod 
 -Djava.util.logging.config.file=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/tools/junit4/logging.properties
  -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
 -Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
 -Djava.io.tmpdir=. 
 -Djunit4.tempDir=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/core/test/temp
  
 -Dclover.db.dir=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/build/clover/db
  -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
 -Djava.security.policy=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java6-64-test-only/checkout/lucene/tools/junit4/tests.policy
  -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
 -Djava.awt.headless=true -classpath 
 

[jira] [Updated] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-4750:
---

Attachment: LUCENE-4750.patch

Thanks Mike, I added another ctor which doesn't take a base query and noted the 
pure browsing behavior if you pass null to the 2nd ctor.
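
Roughly (signatures are illustrative, not copied from the patch):

{code}
/** Pure browsing: no base query, only drill-down categories. */
public DrillDownQuery(FacetIndexingParams fip) {
  this(fip, null);
}

/** Drills down over baseQuery; passing null here also means pure browsing. */
public DrillDownQuery(FacetIndexingParams fip, Query baseQuery) {
  ...
}
{code}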

I think that's ready, I'll commit shortly.

 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Michael McCandless
 Attachments: LUCENE-4750.patch, LUCENE-4750.patch, LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13575362#comment-13575362
 ] 

Commit Tag Bot commented on LUCENE-4750:


[trunk commit] Shai Erera
http://svn.apache.org/viewvc?view=revisionrevision=182

LUCENE-4750: convert DrillDown to DrillDownQuery


 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Michael McCandless
 Attachments: LUCENE-4750.patch, LUCENE-4750.patch, LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera resolved LUCENE-4750.


   Resolution: Fixed
Fix Version/s: 5.0
   4.2
 Assignee: Shai Erera  (was: Michael McCandless)
Lucene Fields: New,Patch Available  (was: New)

Committed to trunk and 4x. Thanks Mike!

 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4750.patch, LUCENE-4750.patch, LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4750) Convert DrillDown to DrillDownQuery

2013-02-09 Thread Commit Tag Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13575368#comment-13575368
 ] 

Commit Tag Bot commented on LUCENE-4750:


[branch_4x commit] Shai Erera
http://svn.apache.org/viewvc?view=revisionrevision=184

LUCENE-4750: convert DrillDown to DrillDownQuery


 Convert DrillDown to DrillDownQuery
 ---

 Key: LUCENE-4750
 URL: https://issues.apache.org/jira/browse/LUCENE-4750
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/facet
Reporter: Shai Erera
Assignee: Shai Erera
 Fix For: 4.2, 5.0

 Attachments: LUCENE-4750.patch, LUCENE-4750.patch, LUCENE-4750.patch


 DrillDown is a utility class for creating drill-down queries over a base 
 query and a bunch of categories. We've been asked to support AND, OR and AND 
 of ORs. The latter is not so simple as a static utility method though, so 
 instead we have some sample code ...
 Rather, I think that we can just create a DrillDownQuery (extends Query) 
 which takes a baseQuery in its ctor and exposes add(CategoryPath...), such 
 that every such group of categories is AND'ed with other groups, and 
 internally they are OR'ed. It's very similar to how you would construct a 
 BooleanQuery, only simpler and specific to facets.
 Internally, it would build a BooleanQuery and delegate rewrite, createWeight 
 etc to it.
 That will remove the need for the static utility methods .. or we can keep 
 static term() for convenience.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-4765) Multi-valued docvalues field

2013-02-09 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-4765:
---

 Summary: Multi-valued docvalues field
 Key: LUCENE-4765
 URL: https://issues.apache.org/jira/browse/LUCENE-4765
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir


The general idea is basically the docvalues parallel to 
FieldCache.getDocTermOrds/UninvertedField

Currently this stuff is used in e.g. grouping and join for multivalued fields, 
and in solr for faceting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4765) Multi-valued docvalues field

2013-02-09 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-4765:


Attachment: LUCENE-4765.patch

Here's a prototype patch just to explore the idea.

I only implemented this for Lucene42Codec (just uses FST + vint-encoded byte[] 
as a quick hack).
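
(For readers unfamiliar with the trick, delta + variable-length encoding of the 
sorted per-document ordinals looks roughly like this -- my illustration, not the 
patch itself:)

{code}
// Sketch: encode a sorted list of ordinals as delta-encoded vlongs.
ByteArrayDataOutput out = new ByteArrayDataOutput(buffer);
long prev = 0;
for (long ord : sortedOrds) {
  out.writeVLong(ord - prev); // small deltas take few bytes
  prev = ord;
}
{code}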

We should try to think about what would be the best API and so on if we want to 
add this.

 Multi-valued docvalues field
 

 Key: LUCENE-4765
 URL: https://issues.apache.org/jira/browse/LUCENE-4765
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-4765.patch


 The general idea is basically the docvalues parallel to 
 FieldCache.getDocTermOrds/UninvertedField
 Currently this stuff is used in e.g. grouping and join for multivalued 
 fields, and in solr for faceting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Tidy error on constant-values.html

2013-02-09 Thread Shai Erera
Hi

When I run 'ant documentation-lint' on 4x, I get this error:

Tidy was unable to process file
D:\dev\lucene\lucene_4x\lucene\build\docs\core\constant-values.html, 48
returned.

I run it w/ Oracle 1.7:

java version 1.7.0_13
Java(TM) SE Runtime Environment (build 1.7.0_13-b20)
Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)

It happens only on 4x, not trunk. I found an email from Robert about a
different error number (76), but I think that was on 1.6.0.

Any ideas?

Shai


[jira] [Commented] (LUCENE-4765) Multi-valued docvalues field

2013-02-09 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13575379#comment-13575379
 ] 

Shai Erera commented on LUCENE-4765:


I briefly looked at the patch, looks good. Perhaps instead of returning an 
iterator, we should return an IntsRef? While optimizing the facets code, Mike 
and I observed that the iterator-based API is not so good for hot code, and 
e.g. if we want to use this for faceting, a bulk API would be better, I think.
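
To illustrate the kind of bulk API I mean (the names are purely illustrative):

{code}
// Iterator-style: one call per ordinal, per document.
// Bulk-style sketch: fill a reusable IntsRef with all ordinals of a document.
public interface DocOrds {
  /** Returns the ordinals of the given document, reusing 'spare' if possible. */
  IntsRef getOrds(int docID, IntsRef spare);
}
{code}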

Same question for indexing -- in facets, we get all the ordinals at once, so is 
it possible to have a field which takes a list of ordinals instead of adding 
many instances of SortedSetDVField?

 Multi-valued docvalues field
 

 Key: LUCENE-4765
 URL: https://issues.apache.org/jira/browse/LUCENE-4765
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-4765.patch


 The general idea is basically the docvalues parallel to 
 FieldCache.getDocTermOrds/UninvertedField
 Currently this stuff is used in e.g. grouping and join for multivalued 
 fields, and in solr for faceting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org