[jira] [Commented] (SOLR-2907) java.lang.IllegalArgumentException: deltaQuery has no column to resolve to declared primary key pk='ITEM_ID, CATEGORY_ID'

2013-07-09 Thread Aaron Greenspan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702973#comment-13702973
 ] 

Aaron Greenspan commented on SOLR-2907:
---

I just ran into this in Solr 4.3.0. The error message is extremely confusing. 
The situation I encountered involved an SQL query where there WAS an id field 
defined in the main query, as well as a field with column="id" and name="id", 
and yet I kept getting the error...

deltaQuery has no column to resolve to declared primary key pk='id'

It turns out that what this really means is that a seemingly useless field 
called id (or whatever the primary key is set to) also has to be selected in 
the deltaQuery itself, even if you never reference a field called id from the 
deltaQuery (which I don't) - I only reference a field with a different name, 
e.g. blahid, from the deltaQuery. Why deltaQuery needs this redundant column 
when it is apparently never used is beyond me. And if there is a good reason, 
this error message should definitely be changed, given that this has been an 
open ticket for two years.
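
For reference, a minimal sketch of the workaround described above (the entity, 
table, and column names - thing, blahid, title, last_modified - are 
hypothetical, not taken from any real config); the point is only that 
deltaQuery has to return a column matching pk="id":

 <!-- deltaQuery must return a column that resolves to pk="id"; without the
      extra "id" column below, DocBuilder fails with "deltaQuery has no column
      to resolve to declared primary key". -->
 <entity name="thing" pk="id"
         query="SELECT id, blahid, title FROM thing"
         deltaImportQuery="SELECT id, blahid, title FROM thing
                           WHERE blahid = '${dataimporter.delta.blahid}'"
         deltaQuery="SELECT id, blahid FROM thing
                     WHERE last_modified &gt; '${dataimporter.last_index_time}'"/>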

 java.lang.IllegalArgumentException: deltaQuery has no column to resolve to 
 declared primary key pk='ITEM_ID, CATEGORY_ID'
 -

 Key: SOLR-2907
 URL: https://issues.apache.org/jira/browse/SOLR-2907
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler, Schema and Analysis
Affects Versions: 3.4
Reporter: Alan Baker

 We are using Solr for our site and ran into this error in our own schema, and 
 I was able to reproduce it using the dataimport example code in the Solr 
 project. We do not get this error in Solr 1.4; we only started seeing it as we 
 are working to upgrade to 3.4.0. It fails when delta-importing linked tables.
 Complete trace:
 Nov 18, 2011 5:21:02 PM org.apache.solr.handler.dataimport.DataImporter 
 doDeltaImport
 SEVERE: Delta Import Failed
 java.lang.IllegalArgumentException: deltaQuery has no column to resolve to 
 declared primary key pk='ITEM_ID, CATEGORY_ID'
   at 
 org.apache.solr.handler.dataimport.DocBuilder.findMatchingPkColumn(DocBuilder.java:849)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.collectDelta(DocBuilder.java:900)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.collectDelta(DocBuilder.java:879)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:285)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:179)
   at 
 org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:390)
   at 
 org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:429)
   at 
 org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:408)
 I used this dataConfig from the wiki on the data import:
 <dataConfig>
   <dataSource driver="org.hsqldb.jdbcDriver"
               url="jdbc:hsqldb:./example-DIH/hsqldb/ex" user="sa" />
   <document>
     <entity name="item" pk="ID"
             query="select * from item"
             deltaImportQuery="select * from item where ID=='${dataimporter.delta.id}'"
             deltaQuery="select id from item where last_modified &gt; '${dataimporter.last_index_time}'">
       <entity name="item_category" pk="ITEM_ID, CATEGORY_ID"
               query="select CATEGORY_ID from item_category where ITEM_ID='${item.ID}'"
               deltaQuery="select ITEM_ID, CATEGORY_ID from item_category where last_modified &gt; '${dataimporter.last_index_time}'"
               parentDeltaQuery="select ID from item where ID=${item_category.ITEM_ID}">
         <entity name="category" pk="ID"
                 query="select DESCRIPTION as cat from category where ID = '${item_category.CATEGORY_ID}'"
                 deltaQuery="select ID from category where last_modified &gt; '${dataimporter.last_index_time}'"
                 parentDeltaQuery="select ITEM_ID, CATEGORY_ID from item_category where CATEGORY_ID=${category.ID}"/>
       </entity>
     </entity>
   </document>
 </dataConfig>
 To reproduce, use the data config above and set the dataimport.properties 
 last update times to before the last_modified date in the example data. In my 
 case I had to set the year to 1969. Then run a delta-import and the 
 exception occurs. Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: 

RE: Several builds hanging because of permgen

2013-07-09 Thread Uwe Schindler
Hi Mark,

The problem with raising permgen is:
- It's HotSpot specific, so it does not work with other JVMs
- It's no longer available in Java 8

I would really prefer to tune the tests and maybe not create so many 
nodes in the cloud tests. It looks like the bug happens more often with a 
higher test multiplier (-Dtests.multiplier=3), so maybe we can really tune that.
If we want to raise permgen, we have to do it in a similar way to how we 
enable the heap dumps - with lots of <condition/> tasks in ANT... :(
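
Purely to illustrate that conditional approach, a made-up, standalone sketch 
(this is not the real common-build.xml; the property and target names are 
invented):

<?xml version="1.0"?>
<!-- Sketch: compute a HotSpot-only MaxPermSize argument, in the same spirit
     as the conditional heap-dump flags, and hand it to the forked test JVM
     as an extra jvmarg. -->
<project name="permgen-sketch" default="show">
  <condition property="tests.permgen.jvmarg"
             value="-XX:MaxPermSize=128m"
             else="-Dtests.permgen.noop=true">
    <contains string="${java.vm.name}" substring="HotSpot"/>
  </condition>
  <target name="show">
    <echo message="extra test jvmarg: ${tests.permgen.jvmarg}"/>
    <!-- In the real build this value would be passed to the test runner
         as a nested jvmarg element. -->
  </target>
</project>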

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Mark Miller [mailto:markrmil...@gmail.com]
 Sent: Tuesday, July 09, 2013 5:20 AM
 To: dev@lucene.apache.org
 Subject: Re: Several builds hanging pecause of permgen
 
 Looks like we currently don't set the max perm gen for tests, so you get the
 default - I think we want to change that regardless - we don't want it to vary
 IMO - it should work like Xmx.
 
 I think we should just set it to 128 mb, and these tests should have plenty of
 room to run.
 
 - Mark
 
 On Jul 8, 2013, at 11:06 PM, Mark Miller markrmil...@gmail.com wrote:
 
 
  On Jul 8, 2013, at 2:26 PM, Uwe Schindler u...@thetaphi.de wrote:
 
  Next one:
  http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6395/console
 
  I will vote all releases of 4.4 with -1 until this is fixed!
 
  You can't veto a release, so it's kind of a hollow threat ;)
 
  It hangs on my local computer, too! Tests pass only ½ of the time, the
 remaining time it hangs with permgen errors.
 
  Depending on the host OS and java version, I have had to raise the max
 perm gen a bit higher for heavy SolrCloud tests that also start up hdfs. I 
 think
 I remember raising it from 96 MB to 128 MB. It's simply a test resource issue 
 -
 those tests are very heavy, which is why most are set to run nightly - I've
 seen the code run heavily without any perm gen issues though - and on
 some of my machines, I don't have to raise the perm gen at all.
 
  - Mark
 
 
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
  From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On
 Behalf
  Of Dawid Weiss
  Sent: Monday, July 08, 2013 2:16 PM
  To: dev@lucene.apache.org
  Subject: Re: Several builds hanging pecause of permgen
 
 
  Not much I can do from my side about permgen errors. There is really no
 way to deal with these from within Java (the same process) -- you cannot
 effectively handle anything because your own classes may not load at all.
 
  Dawid
 
  On Mon, Jul 8, 2013 at 1:35 PM, Uwe Schindler u...@thetaphi.de
 wrote:
  Another one, this time on OSX:
  http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/617/
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
  From: Robert Muir [mailto:rcm...@gmail.com]
  Sent: Sunday, July 07, 2013 11:15 PM
  To: dev@lucene.apache.org
  Subject: Re: Several builds hanging pecause of permgen
 
  When there were leaks from static classes, we added a checker to
 LuceneTestCase that looks for RAM > N and fails with debugging information.
 
  I wonder if some similar check is possible for this case (to make it easier
 than going thru heapdumps, and to find issues before crash-time)...
 
  On Sun, Jul 7, 2013 at 4:10 PM, Uwe Schindler u...@thetaphi.de wrote:
  Another one:
  http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6385/testReport/
 
 junit/junit.framework/TestSuite/org_apache_solr_request_SimpleFacetsT
  est/
 
  Had to be killed with kill -9
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Saturday, July 06, 2013 10:16 PM
  To: dev@lucene.apache.org
  Subject: RE: Several builds hanging pecause of permgen
 
  Another one:
  http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6375/console
 
  I was only able to kill the JVM with kill -9. I am sure it's horrible
 slowdoop!
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Friday, July 05, 2013 3:59 PM
  To: dev@lucene.apache.org
  Subject: Several builds hanging pecause of permgen
 
  Several Jenkins builds now hang because of permgen. The runner JVM
  is dead (can only be killed by -9), last example:
 
  http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6360/console
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
 
 
  ---
  -- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
  additional commands, e-mail: dev-h...@lucene.apache.org
 
 
  

[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b96) - Build # 6403 - Failure!

2013-07-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6403/
Java: 64bit/jdk1.8.0-ea-b96 -XX:-UseCompressedOops -XX:+UseG1GC

No tests ran.

Build Log:
[...truncated 188 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:395: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:255: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build.xml:23: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:283: 
Minimum supported ANT version is 1.8.2. Yours: Apache Ant version 1.8.0 
compiled on February 1 2010

Total time: 3 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.8.0-ea-b96 -XX:-UseCompressedOops -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0-ea-b96) - Build # 6479 - Failure!

2013-07-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/6479/
Java: 64bit/jdk1.8.0-ea-b96 -XX:-UseCompressedOops -XX:+UseG1GC

No tests ran.

Build Log:
[...truncated 170 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:389: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:255: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:23: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:283: 
Minimum supported ANT version is 1.8.2. Yours: Apache Ant version 1.8.0 
compiled on February 1 2010

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.8.0-ea-b96 -XX:-UseCompressedOops -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0-ea-b96) - Build # 6404 - Still Failing!

2013-07-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6404/
Java: 64bit/jdk1.8.0-ea-b96 -XX:+UseCompressedOops -XX:+UseSerialGC

No tests ran.

Build Log:
[...truncated 28 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:395: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:255: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build.xml:23: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/common-build.xml:283: 
Minimum supported ANT version is 1.8.2. Yours: Apache Ant version 1.8.0 
compiled on February 1 2010

Total time: 0 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 64bit/jdk1.8.0-ea-b96 -XX:+UseCompressedOops 
-XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_25) - Build # 3015 - Failure!

2013-07-09 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3015/
Java: 64bit/jdk1.7.0_25 -XX:+UseCompressedOops -XX:+UseSerialGC

No tests ran.

Build Log:
[...truncated 15 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Several builds hanging because of permgen

2013-07-09 Thread Dawid Weiss
What's the core of those permgen leaks, guys? Is it the number of
classes (via different classloaders)? Number of interned strings?
Maybe something is going on that we should reflect on before we raise
the default permgen limit? Our typical permgen headaches were due to
repeated classloader leaks -- is Hadoop using something like that?

Dawid

On Tue, Jul 9, 2013 at 8:25 AM, Uwe Schindler u...@thetaphi.de wrote:
 Hi Mark,

 The problem with raising permgen is:
 - It's Hotspot specific only, so does not work with other JVMs
 - Its no longer available in Java 8

 I would really prefer to maybe tune the tests and maybe not create so many 
 nodes in the cloud tests. It looks like the bug happens more often with 
 higher test multiplier (-Dtests.multiplier=3), so maybe we can really tune 
 that.
 If we want to raise permgen, we have to do it in a similar way like we do 
 enable the heap dumps - with lots of condition/ tasks in ANT... :(

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Mark Miller [mailto:markrmil...@gmail.com]
 Sent: Tuesday, July 09, 2013 5:20 AM
 To: dev@lucene.apache.org
 Subject: Re: Several builds hanging pecause of permgen

 Looks like we currently don't set the max perm gen for tests, so you get the
 default - I think we want to change that regardless - we don't want it to 
 vary
 IMO - it should work like Xmx.

 I think we should just set it to 128 mb, and these tests should have plenty 
 of
 room to run.

 - Mark

 On Jul 8, 2013, at 11:06 PM, Mark Miller markrmil...@gmail.com wrote:

 
  On Jul 8, 2013, at 2:26 PM, Uwe Schindler u...@thetaphi.de wrote:
 
  Next one:
  http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6395/console
 
  I will vote all releases of 4.4 with -1 until this is fixed!
 
  You can't veto a release, so it's kind of a hollow threat ;)
 
  It hangs on my local computer, too! Tests pass only ½ of the time, the
 remaining time it hangs with permgen errors.
 
  Depending on the host OS and java version, I have had to raise the max
 perm gem a bit higher for heavy SolrCloud tests that also start up hdfs. I 
 think
 I remember raising it from 96 MB to 128 MB. It's simply a test resource 
 issue -
 those tests are very heavy, which is why most are set to run nightly - I've
 seen the code run heavily without any perm gen issues though - and on
 some of my machines, I don't have to raise the perm gen at all.
 
  - Mark
 
 
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
  From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On
 Behalf
  Of Dawid Weiss
  Sent: Monday, July 08, 2013 2:16 PM
  To: dev@lucene.apache.org
  Subject: Re: Several builds hanging pecause of permgen
 
 
  Not much I can do from my side about permgen errors. There is really no
 way to deal with these from within Java (the same process) -- you cannot
 effectively handle anything because your own classes may not load at all.
 
  Dawid
 
  On Mon, Jul 8, 2013 at 1:35 PM, Uwe Schindler u...@thetaphi.de
 wrote:
  Another one, this time on OSX:
  http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/617/
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
  From: Robert Muir [mailto:rcm...@gmail.com]
  Sent: Sunday, July 07, 2013 11:15 PM
  To: dev@lucene.apache.org
  Subject: Re: Several builds hanging pecause of permgen
 
  When there were leaks from static classes, we added a checker to
 LuceneTestCase that looks for RAM  N and fails with debugging information.
 
  I wonder if some similar check is possible for this case (to make it 
  easier
 than going thru heapdumps, and to find issues before crash-time)...
 
  On Sun, Jul 7, 2013 at 4:10 PM, Uwe Schindler u...@thetaphi.de wrote:
  Another one:
  http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6385/testReport/
 
 junit/junit.framework/TestSuite/org_apache_solr_request_SimpleFacetsT
  est/
 
  Had to be killed with kill -9
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Saturday, July 06, 2013 10:16 PM
  To: dev@lucene.apache.org
  Subject: RE: Several builds hanging pecause of permgen
 
  Another one:
  http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6375/console
 
  I was only able to kill the JVM with kill -9 I am sure, it's horrible
 slowdoop!
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Uwe Schindler [mailto:u...@thetaphi.de]
  Sent: Friday, July 05, 2013 3:59 PM
  To: dev@lucene.apache.org
  Subject: Several builds hanging pecause of permgen
 
  Several Jenkins builds now hang because of permgen. The runner JVM
  is dead 

[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 1396 - Still Failing

2013-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/1396/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2486, 
name=recoveryCmdExecutor-1201-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=2486, name=recoveryCmdExecutor-1201-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([B8C4E590C0E88213]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=2486, name=recoveryCmdExecutor-1201-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at 

RE: Several builds hanging because of permgen

2013-07-09 Thread Uwe Schindler
Hi Mark,

  I am sure, it's horrible slowdoop!
 
 It doesn't bother me that you say slowdoop or say that you said that hadoop
 was made by first year students on twitter - but I will point out it kind of
 makes you look like a dick - a community of people work on that project that
 is very similar to the one that works on Lucene/Solr. And I know you, so it
 doesn't really affect my view of you, but it's kind of unbecoming for a PMC
 chair to besmirch another Apache project by referring to it with a derogatory
 nickname FWIW - no real skin off my nose - besmirch away if that's what
 you enjoy.

My problem is not with Hadoop; my problem is that this is part of the main Solr 
module and not in a contrib module (where it should live). It was committed 
without discussing all the integration problems, like the test-only JAR files 
in different versions that we currently have. I have nothing against Hadoop, 
but I feel we should not use code that has a bad design and is not 
platform independent (I am talking about the test dependencies only!). Isn't it 
possible to test our code without using this MiniDFSCluster? This component 
also seems to be the cause of the permgen errors: we need to tune it or replace 
it with something that works correctly - also on Windows!

I agree the "slow" nickname is bad - the parts of the Hadoop code I looked at 
should be updated, including the Cygwin requirement (MiniDFSCluster, UNIX shell 
code). But the response time to new issues seems to be slow (see the Locale 
problems where you submitted a 2-line patch), so I don't want to submit patches 
or issues at the moment.

Uwe


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Several builds hanging because of permgen

2013-07-09 Thread Uwe Schindler
I inspected the heap dumps already; a few things are interesting:

- The heap dump file is quite small (only approx. 50 MB), so the permgen issues 
start when not even -Xmx is used up completely.
- There are lots of (250) reflection class loaders, but no leaks. All other 
classloaders exist only once; just the reflection-based 
sun.reflect.DelegatingClassLoader with the stubs for speeding up reflection 
shows up repeatedly, but with only one class per loader - which is perfectly 
fine.
- ~2500 classes are loaded, which is not much.

How do I get the number of interned Strings from jvisualvm?

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On Behalf
 Of Dawid Weiss
 Sent: Tuesday, July 09, 2013 9:13 AM
 To: dev@lucene.apache.org
 Subject: Re: Several builds hanging pecause of permgen
 
 What's the core of those permgen leaks, guys? Is it the number of classes
 (via different classloaders)? Number of interned strings?
 Maybe something is going on that we should reflect on before we raise the
 default permgen limit? Our typical permgen headaches were due to
 repeated classloader leaks -- is Hadoop using something like that?
 
 Dawid
 
 On Tue, Jul 9, 2013 at 8:25 AM, Uwe Schindler u...@thetaphi.de wrote:
  Hi Mark,
 
  The problem with raising permgen is:
  - It's Hotspot specific only, so does not work with other JVMs
  - Its no longer available in Java 8
 
  I would really prefer to maybe tune the tests and maybe not create so
 many nodes in the cloud tests. It looks like the bug happens more often with
 higher test multiplier (-Dtests.multiplier=3), so maybe we can really tune
 that.
  If we want to raise permgen, we have to do it in a similar way like we
  do enable the heap dumps - with lots of condition/ tasks in ANT...
  :(
 
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
 
  -Original Message-
  From: Mark Miller [mailto:markrmil...@gmail.com]
  Sent: Tuesday, July 09, 2013 5:20 AM
  To: dev@lucene.apache.org
  Subject: Re: Several builds hanging pecause of permgen
 
  Looks like we currently don't set the max perm gen for tests, so you
  get the default - I think we want to change that regardless - we
  don't want it to vary IMO - it should work like Xmx.
 
  I think we should just set it to 128 mb, and these tests should have
  plenty of room to run.
 
  - Mark
 
  On Jul 8, 2013, at 11:06 PM, Mark Miller markrmil...@gmail.com wrote:
 
  
   On Jul 8, 2013, at 2:26 PM, Uwe Schindler u...@thetaphi.de wrote:
  
   Next one:
   http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6395/console
  
   I will vote all releases of 4.4 with -1 until this is fixed!
  
   You can't veto a release, so it's kind of a hollow threat ;)
  
   It hangs on my local computer, too! Tests pass only ½ of the time,
   the
  remaining time it hangs with permgen errors.
  
   Depending on the host OS and java version, I have had to raise the
   max
  perm gem a bit higher for heavy SolrCloud tests that also start up
  hdfs. I think I remember raising it from 96 MB to 128 MB. It's simply
  a test resource issue - those tests are very heavy, which is why most
  are set to run nightly - I've seen the code run heavily without any
  perm gen issues though - and on some of my machines, I don't have to
 raise the perm gen at all.
  
   - Mark
  
  
   Uwe
  
   -
   Uwe Schindler
   H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
   eMail: u...@thetaphi.de
  
   From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On
  Behalf
   Of Dawid Weiss
   Sent: Monday, July 08, 2013 2:16 PM
   To: dev@lucene.apache.org
   Subject: Re: Several builds hanging pecause of permgen
  
  
   Not much I can do from my side about permgen errors. There is
   really no
  way to deal with these from within Java (the same process) -- you
  cannot effectively handle anything because your own classes may not
 load at all.
  
   Dawid
  
   On Mon, Jul 8, 2013 at 1:35 PM, Uwe Schindler u...@thetaphi.de
  wrote:
   Another one, this time on OSX:
   http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/617/
  
   -
   Uwe Schindler
   H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de
   eMail: u...@thetaphi.de
  
   From: Robert Muir [mailto:rcm...@gmail.com]
   Sent: Sunday, July 07, 2013 11:15 PM
   To: dev@lucene.apache.org
   Subject: Re: Several builds hanging pecause of permgen
  
   When there were leaks from static classes, we added a checker to
  LuceneTestCase that looks for RAM  N and fails with debugging
 information.
  
   I wonder if some similar check is possible for this case (to make
   it easier
  than going thru heapdumps, and to find issues before crash-time)...
  
   On Sun, Jul 7, 2013 at 4:10 PM, Uwe Schindler u...@thetaphi.de
 wrote:
   Another one:
   

Re: Several builds hanging because of permgen

2013-07-09 Thread Dawid Weiss
 How do I get the number of interned Strings from jvisualvm?

Don't know about jvisualvm but:

"The jmap -permgen command prints statistics for the objects in the
permanent generation, including information about internalized String
instances." See 2.7.4, Getting Information on the Permanent
Generation:

http://www.oracle.com/technetwork/java/javase/tooldescr-136044.html#gblmm

There's also a magic switch to hotspot that dumps those strings:

  product(bool, PrintStringTableStatistics, false,                          \
          "print statistics about the StringTable and SymbolTable")         \
                                                                             \
  notproduct(bool, PrintSymbolTableSizeHistogram, false,                    \
          "print histogram of the symbol table")                            \


D.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Several builds hanging because of permgen

2013-07-09 Thread Uwe Schindler
I dug some more:

In most heap dumps, the failing thread looks like the one below, so it looks 
like Log4J is the bad guy. I looked at the code; it does tricky stuff to get 
stack traces, but I found no String.intern() anywhere.

My question: when did we change the tests to use Log4J? Was this before or 
after the Hadoop change? Maybe it's not related to Hadoop and is more a bug in 
Log4J. I find similar stack traces all around the internet, always linked to 
Log4J.
  
qtp1522342989-5342 prio=5 tid=5342 RUNNABLE
at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:25)
at java.lang.Throwable.getStackTraceElement(Native Method)
at java.lang.Throwable.getOurStackTrace(Throwable.java:591)
at java.lang.Throwable.getStackTrace(Throwable.java:582)
at sun.reflect.GeneratedMethodAccessor2.invoke(unknown string)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.log4j.spi.LocationInfo.<init>(LocationInfo.java:139)
   Local Variable: org.apache.log4j.spi.LocationInfo#52
   Local Variable: java.lang.String#28067
   Local Variable: java.lang.Throwable#1
at 
org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:253)
at org.apache.solr.util.SolrLogLayout._format(SolrLogLayout.java:122)
   Local Variable: java.lang.StringBuilder#32
   Local Variable: java.lang.String#26042
   Local Variable: org.apache.solr.util.SolrLogLayout#1
at org.apache.solr.util.SolrLogLayout.format(SolrLogLayout.java:110)
at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
   Local Variable: org.apache.log4j.ConsoleAppender#1
at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at 
org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
   Local Variable: org.apache.log4j.helpers.AppenderAttachableImpl#1
at org.apache.log4j.Category.callAppenders(Category.java:206)
   Local Variable: org.apache.log4j.spi.LoggingEvent#52
   Local Variable: org.apache.log4j.Logger#52
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:305)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1909)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
   Local Variable: org.apache.solr.handler.StandardRequestHandler#4
   Local Variable: org.apache.solr.core.SolrCore#3
   Local Variable: java.lang.String#25968
   Local Variable: org.apache.solr.response.SolrQueryResponse#3
   Local Variable: 
org.eclipse.jetty.servlet.ServletHandler$CachedChain#15
   Local Variable: org.apache.solr.servlet.cache.Method#3
   Local Variable: org.apache.solr.servlet.SolrRequestParsers$1#3
   Local Variable: org.apache.solr.servlet.SolrDispatchFilter#2
   Local Variable: org.apache.solr.core.CoreContainer#3
   Local Variable: org.apache.solr.servlet.SolrRequestParsers#3
   Local Variable: org.apache.solr.core.SolrConfig#3
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:136)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   Local Variable: 
org.eclipse.jetty.servlet.ServletHandler$CachedChain#13
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
   Local Variable: org.eclipse.jetty.server.handler.GzipHandler#2
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   Local Variable: org.eclipse.jetty.servlet.ServletHolder#2
   Local Variable: org.eclipse.jetty.servlet.ServletHandler#2
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   Local Variable: org.eclipse.jetty.server.session.SessionHandler#2
at 

RE: Several builds hanging because of permgen

2013-07-09 Thread Uwe Schindler
Hi,

Yeah, and permgen is not included in the dump :( 
http://stackoverflow.com/questions/4080010/how-to-dump-permgen
So you can only look at a live JVM, which is hard because you don't know when 
it will die. Once it has died, it's too late, as jmap in most cases cannot 
connect anymore (and only kill -9 helps to kill the process).

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On Behalf
 Of Dawid Weiss
 Sent: Tuesday, July 09, 2013 9:52 AM
 To: dev@lucene.apache.org
 Subject: Re: Several builds hanging pecause of permgen
 
  How do I get the number of interned Strings from jvisualvm?
 
 Don't know about jvisualvm but:
 
 The jmap -permgen command prints statistics for the objects in the
 permanent generation, including information about internalized String
 instances. See   2.7.4 Getting Information on the Permanent
 Generation.
 
 http://www.oracle.com/technetwork/java/javase/tooldescr-
 136044.html#gblmm
 
 There's also a magic switch to hotspot that dumps those strings:
 
   product(bool, PrintStringTableStatistics, false,  \
   print statistics about the StringTable and SymbolTable) \
 \
   notproduct(bool, PrintSymbolTableSizeHistogram, false,\
   print histogram of the symbol table)\
 
 
 D.
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5017) Allow sharding based on the value of a field

2013-07-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703039#comment-13703039
 ] 

Noble Paul commented on SOLR-5017:
--

bq.I could see by default, the compositeId router also paying attention to the 
_shard_ parameter

The _shard_ parameter is the actual name of the shard. In the case of the 
compositeId router, the client is agnostic of the shard name; all it cares 
about is shard.keys. What I mean to say is that the name _shard_ can be a bit 
confusing.

As of now we don't have a plan for how to do shard splitting for the 'implicit' 
router. Let's keep it as TBD.

In the case of the compositeId router, I would like the part before the (!) to 
be read from the 'shardField'. The semantics will be exactly the same as now. 
Reading the value from a request parameter would mean we would need to persist 
it along with the document in some field.

 Allow sharding based on the value of a field
 

 Key: SOLR-5017
 URL: https://issues.apache.org/jira/browse/SOLR-5017
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 We should be able to create a collection where sharding is done based on the 
 value of a given field
 collections can be created with shardField=fieldName, which will be persisted 
 in DocCollection in ZK
 implicit DocRouter would look at this field instead of _shard_ field
 CompositeIdDocRouter can also use this field instead of looking at the id 
 field. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5017) Allow sharding based on the value of a field

2013-07-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703039#comment-13703039
 ] 

Noble Paul edited comment on SOLR-5017 at 7/9/13 8:11 AM:
--

bq.I could see by default, the compositeId router also paying attention to the 
\_shard_ parameter

the \_shard_ parameter is the actual name of the shard. In case of compositeId 
router , the client is agnostic of the shard name and all that it cares about 
is shard.keys. What I mean to say is, the name \_shard_ can be a bit confusing

As of now we don't have a plan on how to do shard splitting for 'implicit' 
router. Let's keep it as  TBD

In case of compositeId router , I would like to read the part before the (!) to 
be read from the 'shardField'. The semantics will be exactly same as it is now. 
Reading the value from a request parameter would mean we will need to persist 
it along with the document in some field . 

  was (Author: noble.paul):
bq.I could see by default, the compositeId router also paying attention to 
the _shard_ parameter

the _shard_ parameter is the actual name of the shard. In case of compositeId 
router , the client is agnostic of the shard name and all that it cares about 
is shard.keys. What I mean to say is, the name _shard_ can be a bit confusing

As of now we don't have a plan on how to do shard splitting for 'implicit' 
router. Let's keep it as  TBD

In case of compositeId router , I would like to read the part before the (!) to 
be read from the 'shardField'. The semantics will be exactly same as it is now. 
Reading the value from a request parameter would mean we will need to persist 
it along with the document in some field . 
  
 Allow sharding based on the value of a field
 

 Key: SOLR-5017
 URL: https://issues.apache.org/jira/browse/SOLR-5017
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 We should be able to create a collection where sharding is done based on the 
 value of a given field
 collections can be created with shardField=fieldName, which will be persisted 
 in DocCollection in ZK
 implicit DocRouter would look at this field instead of _shard_ field
 CompositeIdDocRouter can also use this field instead of looking at the id 
 field. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1778 - Failure

2013-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1778/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2386, 
name=recoveryCmdExecutor-1254-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at 
java.net.Socket.connect(Socket.java:546) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:679)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=2386, name=recoveryCmdExecutor-1254-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
at __randomizedtesting.SeedInfo.seed([39BFF554310B9253]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=2386, name=recoveryCmdExecutor-1254-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)  
   at 

Re: Several builds hanging because of permgen

2013-07-09 Thread Shai Erera
FWIW, we're testing indexing on Hadoop too, using LocalJobRunner and no
MiniDFSCluster (since we're not after testing HDFS, only our MapReduce
logic), and it works on Windows and Linux.
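
For context, a client configuration along these lines (Hadoop 1.x property
names; this is illustrative, not the actual setup used here) is enough to run
MR jobs in-process with the LocalJobRunner against the local filesystem, with
no MiniDFSCluster or native daemons involved:

<?xml version="1.0"?>
<configuration>
  <!-- Run jobs in-process with LocalJobRunner instead of a JobTracker. -->
  <property>
    <name>mapred.job.tracker</name>
    <value>local</value>
  </property>
  <!-- Use the local filesystem instead of HDFS. -->
  <property>
    <name>fs.default.name</name>
    <value>file:///</value>
  </property>
</configuration>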

Not everything is testable that way, though; for instance, there were bugs
that were only exposed on a true HDFS (but not on GPFS). But since we develop
on both Windows and Linux, we chose not to try to force MiniDFSCluster to run
on Windows. I raised it with the community, but it seemed they didn't care much
about it.

I don't know what Solr tests, or why it needs MiniDFSCluster; I'm just
pointing out that you can test MR jobs on Windows too.

Shai


On Tue, Jul 9, 2013 at 10:27 AM, Uwe Schindler u...@thetaphi.de wrote:

 Hi Mark,

   I am sure, it's horrible slowdoop!
 
  It doesn't bother me that you say slowdoop or say that you said that
 hadoop
  was made by first year students on twitter - but I will point out it
 kind of
  makes you look like a dick - a community of people work on that project
 that
  is very similar to the one that works on Lucene/Solr. And I know you, so
 it
  doesn't really affect my view of you, but it's kind of unbecoming for a
 PMC
  chair to besmirch another Apache project by referring to it with a
 derogatory
  nickname FWIW - no real skin off my nose - besmirch away if that's
 what
  you enjoy.

 My problem is not with Hadoop, my problem is that this is part of main
 Solr module and is not in a contrib module (where it should live). It was
 committed without discussing about all the integration problems, like
 test-only JAR files in different versions like we currently have. I have
 nothing against Hadoop, but I feel like we should not use code that has a
 bad design and is not platform independent (I am talking about the test
 dependencies only!). Isn't it possible to test our code without using this
 MiniDFSCluster? This component also seems to be the cause of the permgen
 errors: We need to tune it or replace it by something that works correctly
 - also on Windows!

 I agree the slow nickname is bad - the parts of hadoop code I looked at
 should be updated, including the Cygwin requirement (MiniDFSCluster, UNIX
 Shell code). But the response time to new issues seems to be low (see the
 Locale problems where you submitted a 2-line patch), so I don't want to
 submit patches or issues at the moment.

 Uwe


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-3076) Solr(Cloud) should support block joins

2013-07-09 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703080#comment-13703080
 ] 

Mikhail Khludnev commented on SOLR-3076:


bq. 2. How would Solr block join compare with Elasticsearch nested types? ...

ElasticSearch also provides some sort of nested facets. However, I found them 
useless for real-life eCommerce faceting, though internally ES has enough 
infrastructure to provide correct item-level faceting, and I feel it could be 
delivered there easily. It's worth clarifying what I mean by real-life 
eCommerce faceting - just check any site: if we have colored items nested into 
top-level products, then when we count item-level facets, e.g. color, we need 
to count the number of *products* that have any items of the particular color 
and that passed the item-level filter, e.g. size.
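
To make the product/item example concrete, the kind of parent/child block being 
discussed looks roughly like this (field names are made up, and the exact 
update syntax was still being settled in the attached patches):

<add>
  <!-- One product with its colored/sized items indexed as a contiguous block,
       so an item-level filter (size) can be applied while the color facet
       counts matching *products* rather than items. -->
  <doc>
    <field name="id">product-1</field>
    <field name="type_s">product</field>
    <field name="name_s">T-Shirt</field>
    <doc>
      <field name="id">product-1-item-1</field>
      <field name="type_s">item</field>
      <field name="color_s">red</field>
      <field name="size_s">S</field>
    </doc>
    <doc>
      <field name="id">product-1-item-2</field>
      <field name="type_s">item</field>
      <field name="color_s">blue</field>
      <field name="size_s">M</field>
    </doc>
  </doc>
</add>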

Another desired direction is a relations schema (see the discussion at the 
beginning); I'm not sure how ElasticSearch addresses it.

bq. 3. Can individual child documents be updated,...

You can only delete. I mean, you should be able to do so, but I don't remember 
whether it's covered by a test or not. Update/Replace/Append might be possible 
only after stacked updates are there (frankly speaking, never).

 Solr(Cloud) should support block joins
 --

 Key: SOLR-3076
 URL: https://issues.apache.org/jira/browse/SOLR-3076
 Project: Solr
  Issue Type: New Feature
Reporter: Grant Ingersoll
Assignee: Yonik Seeley
 Fix For: 5.0, 4.4

 Attachments: 27M-singlesegment-histogram.png, 27M-singlesegment.png, 
 bjq-vs-filters-backward-disi.patch, bjq-vs-filters-illegal-state.patch, 
 child-bjqparser.patch, dih-3076.patch, dih-config.xml, 
 parent-bjq-qparser.patch, parent-bjq-qparser.patch, Screen Shot 2012-07-17 at 
 1.12.11 AM.png, SOLR-3076-childDocs.patch, SOLR-3076.patch, SOLR-3076.patch, 
 SOLR-3076.patch, SOLR-3076.patch, SOLR-3076.patch, SOLR-3076.patch, 
 SOLR-3076.patch, SOLR-3076.patch, SOLR-3076.patch, SOLR-3076.patch, 
 SOLR-7036-childDocs-solr-fork-trunk-patched, 
 solrconf-bjq-erschema-snippet.xml, solrconfig.xml.patch, 
 tochild-bjq-filtered-search-fix.patch


 Lucene has the ability to do block joins, we should add it to Solr.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703084#comment-13703084
 ] 

Robert Muir commented on SOLR-5022:
---

-1 to increasing permgen.

Solr ran fine without it before. I want to know why something wants more 
permgen, and for what: classes, interned strings, what exactly?

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703087#comment-13703087
 ] 

Robert Muir commented on SOLR-5022:
---

This isn't the heap, where you give things plenty of room.

This is a memory leak that should be fixed.

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703100#comment-13703100
 ] 

Uwe Schindler commented on SOLR-5022:
-

The problem with raising permgen is:
- It's HotSpot specific, so it does not work with other JVMs
- It's no longer available in Java 8

I would really prefer to tune the tests and maybe not create so many 
nodes in the cloud tests. It looks like the bug happens more often with a 
higher test multiplier (-Dtests.multiplier=3), so maybe we can really tune that.
If we want to raise permgen, we have to do it in a similar way to how we 
enable the heap dumps - with lots of <condition/> tasks in ANT... :(


 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703102#comment-13703102
 ] 

Uwe Schindler commented on SOLR-5022:
-

One thing in addition:
We currently have an assumeFalse() in the Hadoop tests that checks for Windows 
and FreeBSD. But the latter, FreeBSD, is bogus, as only the configuration of 
the Jenkins FreeBSD machine is wrong, not FreeBSD in general (the blackhole 
must be enabled).

I would prefer to add an ANT property "tests.disable.hadoop" that defaults to 
"true" on Windows and "false" elsewhere. In the tests we can make an assume on 
the existence of this property. Or alternatively put all Hadoop tests in a test 
group that can be disabled (I would prefer the latter, maybe [~dawidweiss] 
can help).

On the FreeBSD Jenkins we would set this property to "true"; on other Jenkins 
machines we can autodetect it (Windows, or other). And if someone does not want 
to run the Hadoop tests at all, they can disable them.
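
A rough, standalone sketch of that property (names and defaults as proposed 
above; this is not actual build code):

<?xml version="1.0"?>
<!-- Default tests.disable.hadoop from the OS; a Jenkins node (e.g. the
     FreeBSD one) can still force it with -Dtests.disable.hadoop=true,
     since properties set on the command line win over the condition. -->
<project name="hadoop-toggle-sketch" default="show">
  <condition property="tests.disable.hadoop" value="true" else="false">
    <os family="windows"/>
  </condition>
  <target name="show">
    <echo message="tests.disable.hadoop=${tests.disable.hadoop}"/>
    <!-- The value would be forwarded to the test JVM as a system property,
         where an assume() or a disabled test group could act on it. -->
  </target>
</project>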



 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703104#comment-13703104
 ] 

Dawid Weiss commented on SOLR-5022:
---

bq. Or alternatively put all hadoop tests in a test group that can be disabled

This shouldn't be a problem -- create a new test group (an annotation marked 
with a meta-annotation; see the existing code of BadApple for an example), 
enable or disable the test group by default, and override via ANT.

The group would be enabled/disabled via ant's condition and a value passed via 
system property, much as is the case with badapple and nightly. There is no 
way to evaluate a test group's execution status at runtime; an alternative 
here is to use a before-suite rule with an assumption in it.
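
A minimal sketch of such a test group, modeled on the existing BadApple/Nightly 
annotations (the sysProperty name tests.hadoop and the exact meta-annotation 
attributes are assumptions for illustration):

{code}
import java.lang.annotation.Documented;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

import com.carrotsearch.randomizedtesting.annotations.TestGroup;

/**
 * Marks tests that spin up Hadoop/HDFS. Enabled by default; can be turned off
 * by passing -Dtests.hadoop=false (property name is illustrative).
 */
@Documented
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@TestGroup(enabled = true, sysProperty = "tests.hadoop")
public @interface HadoopTests {
}
{code}

ANT would then only have to pass the chosen system property through to the test 
runner to enable or disable the whole group.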

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller






[jira] [Comment Edited] (SOLR-5017) Allow sharding based on the value of a field

2013-07-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703039#comment-13703039
 ] 

Noble Paul edited comment on SOLR-5017 at 7/9/13 10:49 AM:
---

bq.I could see by default, the compositeId router also paying attention to the 
\_shard_ parameter

The \_shard_ parameter is the actual name of the shard. In the case of the 
compositeId router, the client is agnostic of the shard name and all that it 
cares about is shard.keys. What I mean to say is, the name \_shard_ can be a 
bit confusing.

As of now we don't have a plan for how to do shard splitting for the 'implicit' 
router. Let's keep it as TBD.

In the case of the compositeId router, I would like the part before the \(!) 
to be read from the 'shardField'. The semantics will be exactly the same as 
they are now. Reading the value from a request parameter would mean we would 
need to persist it along with the document in some field.

  was (Author: noble.paul):
bq.I could see by default, the compositeId router also paying attention to 
the \_shard_ parameter

the \_shard_ parameter is the actual name of the shard. In case of compositeId 
router , the client is agnostic of the shard name and all that it cares about 
is shard.keys. What I mean to say is, the name \_shard_ can be a bit confusing

As of now we don't have a plan on how to do shard splitting for 'implicit' 
router. Let's keep it as  TBD

In case of compositeId router , I would like to read the part before the (!) to 
be read from the 'shardField'. The semantics will be exactly same as it is now. 
Reading the value from a request parameter would mean we will need to persist 
it along with the document in some field . 
  
 Allow sharding based on the value of a field
 

 Key: SOLR-5017
 URL: https://issues.apache.org/jira/browse/SOLR-5017
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 We should be able to create a collection where sharding is done based on the 
 value of a given field
 collections can be created with shardField=fieldName, which will be persisted 
 in DocCollection in ZK
 implicit DocRouter would look at this field instead of _shard_ field
 CompositeIdDocRouter can also use this field instead of looking at the id 
 field. 




Re: 4.4 release planning

2013-07-09 Thread Erick Erickson
Steve:

So is the plan to address the Permgen issues before releasing and fix
it both on the branch and 4x and trunk?



On Tue, Jul 9, 2013 at 1:51 AM, Steve Rowe sar...@gmail.com wrote:
 4.4 branch created.

 On Jul 8, 2013, at 12:37 PM, Steve Rowe sar...@gmail.com wrote:
 As I mentioned a week ago, I plan on branching for 4.4 today, likely late in 
 the day (UTC+4).

 If all goes well, I'll cut an RC in one week, on July 15th.





[jira] [Commented] (SOLR-5020) Add final() method to DelegatingCollector

2013-07-09 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703189#comment-13703189
 ] 

Mikhail Khludnev commented on SOLR-5020:


I second this feature; however, I suppose it should be done at the Lucene 
level, i.e. the base Collector should have such a method for notification and 
IndexSearcher should call it.
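
A minimal sketch of what such a hook could look like against the Lucene 4.x 
Collector API (illustrative only, not the actual Solr patch; since "final" is a 
reserved word in Java, the hook is named finish() here):

{code}
import java.io.IOException;

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Sketch of a delegating collector with a completion hook that the searcher
// would call once after the last collect() call.
public abstract class NotifyingDelegatingCollector extends Collector {
  protected final Collector delegate;

  protected NotifyingDelegatingCollector(Collector delegate) {
    this.delegate = delegate;
  }

  @Override
  public void setScorer(Scorer scorer) throws IOException {
    delegate.setScorer(scorer);
  }

  @Override
  public void collect(int doc) throws IOException {
    // a field-collapsing subclass could buffer/collapse here instead of forwarding
    delegate.collect(doc);
  }

  @Override
  public void setNextReader(AtomicReaderContext context) throws IOException {
    delegate.setNextReader(context);
  }

  @Override
  public boolean acceptsDocsOutOfOrder() {
    return delegate.acceptsDocsOutOfOrder();
  }

  /** Completion hook: subclasses flush any buffered documents to the delegate here. */
  public void finish() throws IOException {
  }
}
{code}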

 Add final() method to DelegatingCollector
 -

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-5020.patch


 This issue adds a final() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The final() method will 
 allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the final() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.




[jira] [Commented] (SOLR-5020) Add final() method to DelegatingCollector

2013-07-09 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703200#comment-13703200
 ] 

Robert Muir commented on SOLR-5020:
---

We discussed this on another issue (I think I may have opened it, not sure). 
The thing is that if someone calls search(Query, Filter, Collector), they know 
when it's done: when this very method returns!

I also tried to look at what it would take (even though it seems stupid for 
Lucene), thinking it might make things easier somehow for people, and tried to 
verify that all collectors were well-behaved, and it's really complicated.

So after review I think it doesn't make a lot of sense there.

 Add final() method to DelegatingCollector
 -

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-5020.patch


 This issue adds a final() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The final() method will 
 allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the final() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.




Re: 4.4 release planning

2013-07-09 Thread Yonik Seeley
On Tue, Jul 9, 2013 at 7:07 AM, Erick Erickson erickerick...@gmail.com wrote:
 So is the plan to address the Permgen issues before releasing and fix
 it both on the branch and 4x and trunk?

IMO, it depends if people think it's a test-only issue or not, a
hadoop-only issue or not, etc.
All bug fixes should also go to the branch unless the risk is too high.

-Yonik
http://lucidworks.com




RE: 4.4 release planning

2013-07-09 Thread Uwe Schindler
Hi,

Mark created https://issues.apache.org/jira/browse/SOLR-5022

I think at this time it is hard to find out if it's a serious production issue 
or just a test issue. The problem currently is that it's hard to reproduce 
consistently, and when it does reproduce, the JVM is completely dead, not 
responding to anything (no kill, no debugger attachment, no jmap, ...). The 
issue appears more often with tests.multiplier=3, but that does not really tell 
you it's a test relic.

There are no bad classloaders involved (I checked the dump, all looks fine, 
no duplicate classes, ...), so it looks like something is creating interned 
strings all the time, which does not play well with our test infrastructure 
that runs everything in the same JVM, so interned strings never get cleaned up. 
A fix would be to create a module for running those tests in the good old 
one-JVM-per-test-suite mode.

Until we know *what* and *where* the issue is, I see this as a blocker.

Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik
 Seeley
 Sent: Tuesday, July 09, 2013 2:43 PM
 To: Lucene/Solr Dev
 Subject: Re: 4.4 release planning
 
 On Tue, Jul 9, 2013 at 7:07 AM, Erick Erickson erickerick...@gmail.com
 wrote:
  So is the plan to address the Permgen issues before releasing and fix
  it both on the branch and 4x and trunk?
 
 IMO, it depends if people think it's a test-only issue or not, a hadoop-only
 issue or not, etc.
 All bug fixes should also go to the branch unless the risk is too high.
 
 -Yonik
 http://lucidworks.com
 



[jira] [Updated] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-5022:


  Component/s: Tests
 Priority: Blocker  (was: Major)
Fix Version/s: 4.4
   5.0

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Priority: Blocker
 Fix For: 5.0, 4.4







[jira] [Created] (SOLR-5023) deleteInstanceDir is added to CoreAdminHandler but can't be passed with solrj

2013-07-09 Thread Lyubov Romanchuk (JIRA)
Lyubov Romanchuk created SOLR-5023:
--

 Summary: deleteInstanceDir is added to CoreAdminHandler but can't 
be passed with solrj
 Key: SOLR-5023
 URL: https://issues.apache.org/jira/browse/SOLR-5023
 Project: Solr
  Issue Type: Improvement
  Components: multicore
Affects Versions: 4.2.1
Reporter: Lyubov Romanchuk


deleteInstanceDir is added to CoreAdminHandler but is not supported in Unload 
CoreAdminRequest




[jira] [Commented] (SOLR-5017) Allow sharding based on the value of a field

2013-07-09 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703236#comment-13703236
 ] 

Jack Krupansky commented on SOLR-5017:
--

bq. Not sure what you mean by explicit routing

I mean where the user has placed a prefix and ! in front of a key value. 
Granted, it isn't explicitly stating the shard, and is really just a surrogate 
key value to use for sharding. Is there a better term for the fact that they 
used the ! notation?

Question for Noble: If a shard field is specified and there is a ! on a 
document key, which takes precedence?


 Allow sharding based on the value of a field
 

 Key: SOLR-5017
 URL: https://issues.apache.org/jira/browse/SOLR-5017
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 We should be able to create a collection where sharding is done based on the 
 value of a given field
 collections can be created with shardField=fieldName, which will be persisted 
 in DocCollection in ZK
 implicit DocRouter would look at this field instead of _shard_ field
 CompositeIdDocRouter can also use this field instead of looking at the id 
 field. 




[jira] [Commented] (SOLR-5017) Allow sharding based on the value of a field

2013-07-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703246#comment-13703246
 ] 

Noble Paul commented on SOLR-5017:
--

If a collection is created with the shardField value, it is a required param 
for all docs. If the field is null, the document addition fails. There is no 
lookup for ! anymore.
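
A minimal sketch of that rule (illustrative Java only, not the actual DocRouter 
code; the class and method names are made up):

{code}
import org.apache.solr.common.SolrInputDocument;

// With shardField configured on the collection, the route value comes only from
// that field; the add fails if the field is missing and the '!' prefix
// convention is never consulted.
public final class ShardFieldRouting {

  public static String routeValue(SolrInputDocument doc, String shardField) {
    Object value = doc.getFieldValue(shardField);
    if (value == null) {
      throw new IllegalArgumentException(
          "Document is missing the required routing field: " + shardField);
    }
    return value.toString(); // the router hashes this value to pick a shard
  }
}
{code}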

 Allow sharding based on the value of a field
 

 Key: SOLR-5017
 URL: https://issues.apache.org/jira/browse/SOLR-5017
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 We should be able to create a collection where sharding is done based on the 
 value of a given field
 collections can be created with shardField=fieldName, which will be persisted 
 in DocCollection in ZK
 implicit DocRouter would look at this field instead of _shard_ field
 CompositeIdDocRouter can also use this field instead of looking at the id 
 field. 




[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703245#comment-13703245
 ] 

Uwe Schindler commented on SOLR-5022:
-

Maybe I know why the permgen issues do not happen for all of us! The reason is:

- Something seems to eat permgen by interning strings! Those interned strings 
are never freed until the JVM dies.
- If you run with many CPUs, the test runner runs tests in multiple parallel 
JVMs, so every JVM runs fewer tests.

...the Jenkins server on MacOSX runs with one JVM only (because the virtual box 
has only 2 virtual CPUs), so all tests have to share the permgen. Windows 
always passes because no Hadoop tests are run there. And Linux fails less often 
(2 parallel JVMs). On FreeBSD we also don't run the Hadoop tests.

We have to find out what is going on: something seems to eat all the permgen, 
not by loading classes, but by interning strings. And that's the issue here. My 
idea would be: I will run forbidden-apis on all JAR files that were added to 
Solr and forbid the {{String#intern()}} signature. This should show us very 
quickly who interns strings, and we can open bug reports or hot-patch those jar 
files.
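
A standalone illustration of that mechanism (not from the Solr code base): on 
Java 6 HotSpot the interned String objects live in PermGen, so code that keeps 
interning - and holding on to - distinct strings eventually dies with 
"java.lang.OutOfMemoryError: PermGen space". Java 7 moved the pool to the heap, 
and Java 8 drops PermGen entirely.

{code}
import java.util.ArrayList;
import java.util.List;

public final class InternExhaustionDemo {
  public static void main(String[] args) {
    // Run on Java 6 with a small permgen, e.g. -XX:MaxPermSize=32m, to see the
    // OutOfMemoryError quickly.
    List<String> retained = new ArrayList<String>();
    for (long i = 0; ; i++) {
      retained.add(("interned-string-" + i).intern());
      if (i % 100000 == 0) {
        System.out.println("interned and retained " + i + " distinct strings");
      }
    }
  }
}
{code}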


 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Priority: Blocker
 Fix For: 5.0, 4.4







[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703263#comment-13703263
 ] 

Mark Miller commented on SOLR-5022:
---

bq. solr ran fine without it before,

It runs fine now as well - requiring more perm gen in tests is not a Solr bug - 
sorry. Simply saying words doesn't make things true ;)

For running the clover target, we set the perm size to 192m - quick, fix that 
bug! Oh wait, that's a stupid thing to say...

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Priority: Blocker
 Fix For: 5.0, 4.4







[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703264#comment-13703264
 ] 

Mark Miller commented on SOLR-5022:
---

bq. It looks like the bug happens more often with a higher test multiplier 
(-Dtests.multiplier=3), so maybe we can really tune that.

Yes, we could make our tests shittier rather than give them the required 
resources to run, but that's a pretty silly trade.

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Priority: Blocker
 Fix For: 5.0, 4.4







[JENKINS] Lucene-Solr-Tests-4.x-Java6 - Build # 1779 - Still Failing

2013-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java6/1779/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=1485, 
name=recoveryCmdExecutor-689-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) at 
java.net.Socket.connect(Socket.java:546) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:679)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=1485, name=recoveryCmdExecutor-689-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
at __randomizedtesting.SeedInfo.seed([16D68E7A2D88F791]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=1485, name=recoveryCmdExecutor-689-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)   

[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703265#comment-13703265
 ] 

Mark Miller commented on SOLR-5022:
---

bq. Its no longer available in Java 8

And do you see the problem on Java 8 runs?

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
Priority: Blocker
 Fix For: 5.0, 4.4







[jira] [Updated] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5022:
--

Priority: Major  (was: Blocker)

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4







Re: 4.4 release planning

2013-07-09 Thread Mark Miller

On Jul 9, 2013, at 8:52 AM, Uwe Schindler u...@thetaphi.de wrote:

 Until we know *what and *where* the issue is, I see this as blocker.

That code has been running in the real world for months, on huge amounts of 
data, with no perm gen issues. It's an issue with the resources required by the 
tests: running the DFS mini cluster and creating so many collections. It's not 
a blocker.

- Mark



[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703275#comment-13703275
 ] 

Uwe Schindler commented on SOLR-5022:
-

bq. And do you see the problem on Java 8 runs?

No, and also not on JRockit or IBM J9. But the MacOSX box only has Java 6 and 
Java 7 at the moment, so it's not 100% certain.

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703274#comment-13703274
 ] 

Mark Miller commented on SOLR-5022:
---

bq. the Jenkins server on MacOSX runs with one JVM only (because the virtual 
box has only 2 virtual CPUs), so all tests have to share the permgen. Windows 
always passes because no Hadoop tests are run there. And Linux fails less often 
(2 parallel JVMs).

Yes, this would match what I have seen in the wild - on machines that have 
fewer cores, I was more likely to see perm gen issues with certain aggressive 
tests. With my 6-core machines, where I run with 8 JVMs, I have never even 
remotely seen an issue.

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Updated] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-5022:


Attachment: SOLR-5022.patch

Here is a patch, not for the permgen issue, but to make Jenkins more flexible. 
A sysprop {{-Dtests.disableHdfs=true}} is now supported; it defaults to true on 
Windows.

The good thing: if you have Cygwin, you can enable the HDFS tests there now :-)

I will commit this as a first step to make the HDFS stuff more flexible. The 
ASF Jenkins server gets this sysprop hardcoded into its Jenkins config (like 
tests.jettyConnector).

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703278#comment-13703278
 ] 

Uwe Schindler commented on SOLR-5022:
-

bq. For running the clover target, we set the perm size to 192m - quick, fix 
that bug! Oh wait, that's a stupid thing to say...

Clover only works on Oracle JDKs...

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Updated] (SOLR-5005) JavaScriptRequestHandler

2013-07-09 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5005:
-

Attachment: SOLR-5005.patch

new methods and changed variable names

 JavaScriptRequestHandler
 

 Key: SOLR-5005
 URL: https://issues.apache.org/jira/browse/SOLR-5005
 Project: Solr
  Issue Type: New Feature
Reporter: David Smiley
Assignee: Noble Paul
 Attachments: patch, SOLR-5005.patch, SOLR-5005.patch


 A user customizable script based request handler would be very useful.  It's 
 inspired from the ScriptUpdateRequestProcessor, but on the search end. A user 
 could write a script that submits searches to Solr (in-VM) and can react to 
 the results of one search before making another that is formulated 
 dynamically.  And it can assemble the response data, potentially reducing 
 both the latency and data that would move over the wire if this feature 
 didn't exist.  It could also be used to easily add a user-specifiable search 
 API at the Solr server with request parameters governed by what the user 
 wants to advertise -- especially useful within enterprises.  And, it could be 
 used to enforce security requirements on allowable parameter valuables to 
 Solr, so a javascript based Solr client could be allowed to talk to only a 
 script based request handler which enforces the rules.




[jira] [Comment Edited] (SOLR-5005) JavaScriptRequestHandler

2013-07-09 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701996#comment-13701996
 ] 

Noble Paul edited comment on SOLR-5005 at 7/9/13 1:55 PM:
--

sample script

{code}
var requestParameterQuery = param('query'); // or p('query') as a short form
// or inline this as: q({'qt': '/select', 'q': p('query')})
var results = q({'qt': '/select', 'q': requestParameterQuery});
rsp.add('myfirstscriptresults', results.get('results')); // rsp is the SolrQueryResponse object
// you may run more queries ...
{code}

  was (Author: noble.paul):
sample script

{code}
var requestParameterQuery = param('query'); //or p('query') as a short form
var results = q({'qt': '/select','q':requestParameterQuery}); // or inline this 
as q({'qt': '/select','q':p('query')})
r.add('myfirstscriptresults', results.get('results'));// r is the 
SolrQueryResponse object
// you may run more queries .  
{code}
  
 JavaScriptRequestHandler
 

 Key: SOLR-5005
 URL: https://issues.apache.org/jira/browse/SOLR-5005
 Project: Solr
  Issue Type: New Feature
Reporter: David Smiley
Assignee: Noble Paul
 Attachments: patch, SOLR-5005.patch, SOLR-5005.patch


 A user customizable script based request handler would be very useful.  It's 
 inspired from the ScriptUpdateRequestProcessor, but on the search end. A user 
 could write a script that submits searches to Solr (in-VM) and can react to 
 the results of one search before making another that is formulated 
 dynamically.  And it can assemble the response data, potentially reducing 
 both the latency and data that would move over the wire if this feature 
 didn't exist.  It could also be used to easily add a user-specifiable search 
 API at the Solr server with request parameters governed by what the user 
 wants to advertise -- especially useful within enterprises.  And, it could be 
 used to enforce security requirements on allowable parameter valuables to 
 Solr, so a javascript based Solr client could be allowed to talk to only a 
 script based request handler which enforces the rules.




[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703282#comment-13703282
 ] 

Dawid Weiss commented on SOLR-5022:
---

I wouldn't want to argue whether increasing permgen is a good fix or not, but 
it's an interesting debugging problem on its own. I've just run the Solr tests 
with an aspect that intercepts intern() calls. I'll post the results here once 
the tests complete. Let's see what we can get. :)

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703294#comment-13703294
 ] 

Uwe Schindler commented on SOLR-5022:
-

Thanks Dawid, so I don't need to set up forbidden-apis for that! That was my 
first idea for how to find the places that call intern().

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703295#comment-13703295
 ] 

Mark Miller commented on SOLR-5022:
---

Patch looks good, Uwe - +1 on that approach.

bq. increasing permgen is a good fix or not,

I would call it a workaround more than a fix - longer term it would be nice to 
see the root cause addressed - but considering it would seem to involve code in 
another project, you have to work from a short-term and a 'possible' long-term 
perspective.

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703297#comment-13703297
 ] 

ASF subversion and git services commented on SOLR-5022:
---

Commit 1501278 from [~thetaphi]
[ https://svn.apache.org/r1501278 ]

SOLR-5022: Make it possible to disable HDFS tests on ANT command line (so ASF 
Jenkins can use it). Windows is disabled by default, too.

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703305#comment-13703305
 ] 

ASF subversion and git services commented on SOLR-5022:
---

Commit 1501279 from [~thetaphi]
[ https://svn.apache.org/r1501279 ]

Merged revision(s) 1501278 from lucene/dev/trunk:
SOLR-5022: Make it possible to disable HDFS tests on ANT command line (so ASF 
Jenkins can use it). Windows is disabled by default, too.

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703306#comment-13703306
 ] 

ASF subversion and git services commented on SOLR-5022:
---

Commit 1501281 from [~thetaphi]
[ https://svn.apache.org/r1501281 ]

Merged revision(s) 1501278 from lucene/dev/trunk:
SOLR-5022: Make it possible to disable HDFS tests on ANT command line (so ASF 
Jenkins can use it). Windows is disabled by default, too.

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-5022.patch







[jira] [Commented] (SOLR-4981) BasicDistributedZkTest fails on FreeBSD jenkins due to thread leak.

2013-07-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703318#comment-13703318
 ] 

Mark Miller commented on SOLR-4981:
---

None of these changes have helped this.

 BasicDistributedZkTest fails on FreeBSD jenkins due to thread leak.
 ---

 Key: SOLR-4981
 URL: https://issues.apache.org/jira/browse/SOLR-4981
 Project: Solr
  Issue Type: Test
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor






[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2013-07-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703321#comment-13703321
 ] 

Mark Miller commented on SOLR-5007:
---

HADOOP-9703 seems to be making some progress.

bq. Also: we could add those IPC threads to the thread leak filters if they're 
harmless instead of doing Scope.NONE.

That's probably preferable for now, then - they should be harmless and they 
affect all the HDFS tests - having Scope.NONE makes it easy to introduce new 
leaks without noticing.
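
A minimal sketch of such a filter using the randomizedtesting ThreadFilter hook 
(the IPC thread-name prefixes below are assumptions for illustration, not the 
exact Hadoop thread names):

{code}
import com.carrotsearch.randomizedtesting.ThreadFilter;

// Ignores known-harmless Hadoop IPC client threads by name instead of turning
// leak detection off entirely for the suite.
public class HdfsClientThreadsFilter implements ThreadFilter {
  @Override
  public boolean reject(Thread t) {
    String name = t.getName();
    return name != null
        && (name.startsWith("IPC Client") || name.startsWith("IPC Parameter Sending Thread"));
  }
}
{code}

It would then be registered on the HDFS test classes with something like 
{{@ThreadLeakFilters(defaultFilters = true, filters = { HdfsClientThreadsFilter.class })}}.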

 TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
 failing a completely different test.
 

 Key: SOLR-5007
 URL: https://issues.apache.org/jira/browse/SOLR-5007
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller






[jira] [Assigned] (SOLR-5023) deleteInstanceDir is added to CoreAdminHandler but can't be passed with solrj

2013-07-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-5023:
-

Assignee: Mark Miller

 deleteInstanceDir is added to CoreAdminHandler but can't be passed with solrj
 -

 Key: SOLR-5023
 URL: https://issues.apache.org/jira/browse/SOLR-5023
 Project: Solr
  Issue Type: Improvement
  Components: multicore
Affects Versions: 4.2.1
Reporter: Lyubov Romanchuk
Assignee: Mark Miller

 deleteInstanceDir is added to CoreAdminHandler but is not supported in Unload 
 CoreAdminRequest




[jira] [Commented] (SOLR-5023) deleteInstanceDir is added to CoreAdminHandler but can't be passed with solrj

2013-07-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703323#comment-13703323
 ] 

Mark Miller commented on SOLR-5023:
---

We should add the deleteDataDir option as well. I guess the best workaround for 
now is to simply subclass the Unload CoreAdminRequest and add the param in 
getParams.
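
A minimal sketch of that workaround in SolrJ (the deleteInstanceDir parameter 
name comes from CoreAdminHandler; the class name and everything else here is 
illustrative):

{code}
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.SolrParams;

// Subclass Unload and append the extra parameter in getParams().
public class UnloadWithInstanceDir extends CoreAdminRequest.Unload {

  public UnloadWithInstanceDir(String coreName) {
    super(false);          // keep the old deleteIndex flag off
    setCoreName(coreName);
  }

  @Override
  public SolrParams getParams() {
    ModifiableSolrParams params = new ModifiableSolrParams(super.getParams());
    params.set("deleteInstanceDir", "true"); // a deleteDataDir param could be added the same way
    return params;
  }
}
{code}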

 deleteInstanceDir is added to CoreAdminHandler but can't be passed with solrj
 -

 Key: SOLR-5023
 URL: https://issues.apache.org/jira/browse/SOLR-5023
 Project: Solr
  Issue Type: Improvement
  Components: multicore
Affects Versions: 4.2.1
Reporter: Lyubov Romanchuk

 deleteInstanceDir is added to CoreAdminHandler but is not supported in Unload 
 CoreAdminRequest




[jira] [Updated] (SOLR-5023) deleteInstanceDir is added to CoreAdminHandler but can't be passed with solrj

2013-07-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5023:
--

Fix Version/s: 4.5
   5.0

 deleteInstanceDir is added to CoreAdminHandler but can't be passed with solrj
 -

 Key: SOLR-5023
 URL: https://issues.apache.org/jira/browse/SOLR-5023
 Project: Solr
  Issue Type: Improvement
  Components: multicore
Affects Versions: 4.2.1
Reporter: Lyubov Romanchuk
Assignee: Mark Miller
 Fix For: 5.0, 4.5


 deleteInstanceDir is added to CoreAdminHandler but is not supported in Unload 
 CoreAdminRequest




[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 1397 - Still Failing

2013-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/1397/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2096, 
name=recoveryCmdExecutor-742-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=2096, name=recoveryCmdExecutor-742-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([EBBDEBFC7A91EB03]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=2096, name=recoveryCmdExecutor-742-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)   

[jira] [Commented] (SOLR-4943) Add a new info admin handler.

2013-07-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703342#comment-13703342
 ] 

Mark Miller commented on SOLR-4943:
---

Okay, I'm about ready to put this in so that the UI half can be finished...

 Add a new info admin handler.
 -

 Key: SOLR-4943
 URL: https://issues.apache.org/jira/browse/SOLR-4943
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: SOLR-4943-2.patch, SOLR-4943-3__hoss_variant.patch, 
 SOLR-4943-3.patch, SOLR-4943.patch, SOLR-4943.patch, SOLR-4943.patch


 Currently, you have to specify a core to get system information for a variety 
 of request handlers - properties, logging, thread dump, system, etc.
 These should be available at a system location and not core specific location.




[jira] [Commented] (SOLR-5010) Add REST support for Copy Fields

2013-07-09 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703343#comment-13703343
 ] 

Yonik Seeley commented on SOLR-5010:


For a more JSON-ish API, perhaps we should optionally accept an array in place 
of a comma-separated list?
{code}
"dest" : ["myfield1", "myfield2"]
{code}

 Add REST support for Copy Fields
 

 Key: SOLR-5010
 URL: https://issues.apache.org/jira/browse/SOLR-5010
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 5.0, 4.4

 Attachments: SOLR-5010-copyFields.patch, SOLR-5010-copyFields.patch, 
 SOLR-5010-copyFields.patch, SOLR-5010-copyFields.patch, SOLR-5010.patch, 
 SOLR-5010.patch, SOLR-5010.patch


 Per SOLR-4898, adding copy field support.  Should be simply a new parameter 
 to the PUT/POST with the name of the target to copy to.




[jira] [Updated] (SOLR-5022) PermGen exhausted test failures on Jenkins.

2013-07-09 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-5022:
--

Attachment: intern-count-win.txt

This is a count/uniq of a full run from a Windows box. I forgot it won't run 
Hadoop tests in this mode -- will retry on a Mac this evening (preemptive 
interrupt from kids).

 PermGen exhausted test failures on Jenkins.
 ---

 Key: SOLR-5022
 URL: https://issues.apache.org/jira/browse/SOLR-5022
 Project: Solr
  Issue Type: Test
  Components: Tests
Reporter: Mark Miller
 Fix For: 5.0, 4.4

 Attachments: intern-count-win.txt, SOLR-5022.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet

2013-07-09 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703369#comment-13703369
 ] 

Adrien Grand commented on LUCENE-5084:
--

bq. for now I'll concentrate on adding an index.

Have you started working on it? Otherwise I would like to commit your patch; I 
think it is already a very good improvement, and we can work on the index in 
another issue. What do you think?

 EliasFanoDocIdSet
 -

 Key: LUCENE-5084
 URL: https://issues.apache.org/jira/browse/LUCENE-5084
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5084.patch, LUCENE-5084.patch


 DocIdSet in Elias-Fano encoding

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5007) TestRecoveryHdfs seems to be leaking a thread occasionally that ends up failing a completely different test.

2013-07-09 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703371#comment-13703371
 ] 

Dawid Weiss commented on SOLR-5007:
---

Yep, it's better to ignore those known threads than ignore everything.

 TestRecoveryHdfs seems to be leaking a thread occasionally that ends up 
 failing a completely different test.
 

 Key: SOLR-5007
 URL: https://issues.apache.org/jira/browse/SOLR-5007
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet

2013-07-09 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703373#comment-13703373
 ] 

David Smiley commented on LUCENE-5084:
--

bq. Otherwise I would like to commit your patch, I think it is already a very 
good improvement and we can work on the index on another issue

+1 definitely.  One step at a time.  Thanks Paul & Adrien.

 EliasFanoDocIdSet
 -

 Key: LUCENE-5084
 URL: https://issues.apache.org/jira/browse/LUCENE-5084
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5084.patch, LUCENE-5084.patch


 DocIdSet in Elias-Fano encoding

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5092) join: don't expect all filters to be FixedBitSet instances

2013-07-09 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703381#comment-13703381
 ] 

Adrien Grand commented on LUCENE-5092:
--

Since block join scorers all seem to score in order (is that true?), I guess this 
could work. However, I'm getting more and more convinced by Robert's point that 
maybe for this issue we shouldn't make the API more complicated, and should 
instead copy the filter's DocIdSet to a FixedBitSet when it returns something 
else (easy to do; maybe we should do that now and open another issue to explore 
other ways to store the parent-child mapping).
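
A minimal sketch of that fallback, assuming Lucene 4.x package locations (this is only an illustration, not the eventual patch):

{code}
import java.io.IOException;

import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

// If the parents filter did not hand back a FixedBitSet, dump its iterator
// into one so that prevSetBit() is available to the join logic.
public class ParentBitsUtil {
  static FixedBitSet toFixedBitSet(DocIdSet docIdSet, int maxDoc) throws IOException {
    if (docIdSet instanceof FixedBitSet) {
      return (FixedBitSet) docIdSet;                      // already supports prevSetBit()
    }
    FixedBitSet bits = new FixedBitSet(maxDoc);
    DocIdSetIterator it = docIdSet == null ? null : docIdSet.iterator();
    if (it != null) {
      for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
        bits.set(doc);
      }
    }
    return bits;
  }
}
{code}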

 join: don't expect all filters to be FixedBitSet instances
 --

 Key: LUCENE-5092
 URL: https://issues.apache.org/jira/browse/LUCENE-5092
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/join
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor

 The join module throws exceptions when the parents filter isn't a 
 FixedBitSet. The reason is that the join module relies on prevSetBit to find 
 the first child document given a parent ID.
 As suggested by Uwe and Paul Elschot on LUCENE-5081, we could fix it by 
 exposing methods in the iterators to iterate backwards. When the join module 
 gets an iterator which isn't able to iterate backwards, it would just need to 
 dump its content into another DocIdSet that supports backward iteration, 
 FixedBitSet for example.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2750) add Kamikaze 3.0.1 into Lucene

2013-07-09 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703385#comment-13703385
 ] 

Adrien Grand commented on LUCENE-2750:
--

For the record, Daniel Lemire wrote a post about PFOR delta decompression speed 
at http://lemire.me/blog/archives/2013/07/08/fast-integer-compression-in-java/. 
I have not dug into the reasons why his implementation is faster than Kamikaze, 
or whether the benchmark itself is relevant to Lucene, but I think it's 
something worth looking into.
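
For readers who want to poke at the idea locally, here is a toy frame-of-reference delta packer -- just the core trick these codecs share, without the patched exception handling that real PForDelta adds, and not the Kamikaze, Lucene, or Lemire code:

{code}
// Toy frame-of-reference (FOR) delta packing: delta-encode increasing doc ids,
// then store every delta in the same fixed number of bits inside a long[].
public class ForDeltaSketch {

  // Pack each value into 'bits' bits, little-endian within the long[].
  static long[] pack(int[] values, int bits) {
    long[] packed = new long[(values.length * bits + 63) >>> 6];
    for (int i = 0; i < values.length; i++) {
      long v = values[i] & 0xFFFFFFFFL;
      int bitPos = i * bits, word = bitPos >>> 6, off = bitPos & 63;
      packed[word] |= v << off;
      if (off + bits > 64) {
        packed[word + 1] |= v >>> (64 - off);   // spill into the next word
      }
    }
    return packed;
  }

  static int unpack(long[] packed, int bits, int i) {
    int bitPos = i * bits, word = bitPos >>> 6, off = bitPos & 63;
    long v = packed[word] >>> off;
    if (off + bits > 64) {
      v |= packed[word + 1] << (64 - off);
    }
    return (int) (v & ((1L << bits) - 1));
  }

  public static void main(String[] args) {
    int[] docs = {5, 8, 20, 21, 40};            // strictly increasing doc ids
    int[] deltas = new int[docs.length];
    int maxDelta = 1, prev = 0;
    for (int i = 0; i < docs.length; i++) {
      deltas[i] = docs[i] - prev;
      prev = docs[i];
      maxDelta = Math.max(maxDelta, deltas[i]);
    }
    int bits = 32 - Integer.numberOfLeadingZeros(maxDelta);   // bits per delta
    long[] packed = pack(deltas, bits);
    int doc = 0;
    for (int i = 0; i < docs.length; i++) {
      doc += unpack(packed, bits, i);
      System.out.println(doc);                  // prints 5, 8, 20, 21, 40
    }
  }
}
{code}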

 add Kamikaze 3.0.1 into Lucene
 --

 Key: LUCENE-2750
 URL: https://issues.apache.org/jira/browse/LUCENE-2750
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: modules/other
Reporter: hao yan
Assignee: Adrien Grand
   Original Estimate: 336h
  Remaining Estimate: 336h

 Kamikaze 3.0.1 is the updated version of Kamikaze 2.0.0. It can achieve 
 significantly better performance than Kamikaze 2.0.0 in terms of both 
 compressed size and decompression speed. The main difference between the two 
 versions is Kamikaze 3.0.x uses the much more efficient implementation of the 
 PForDelta compression algorithm. My goal is to integrate the highly efficient 
 PForDelta implementation into Lucene Codec.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5010) Add REST support for Copy Fields

2013-07-09 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703415#comment-13703415
 ] 

Grant Ingersoll commented on SOLR-5010:
---

bq. For a more JSONish API, perhaps

D'oh, that is a good idea.  I'll correct it, but can't get to it until Thursday.

 Add REST support for Copy Fields
 

 Key: SOLR-5010
 URL: https://issues.apache.org/jira/browse/SOLR-5010
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 5.0, 4.4

 Attachments: SOLR-5010-copyFields.patch, SOLR-5010-copyFields.patch, 
 SOLR-5010-copyFields.patch, SOLR-5010-copyFields.patch, SOLR-5010.patch, 
 SOLR-5010.patch, SOLR-5010.patch


 Per SOLR-4898, adding copy field support.  Should be simply a new parameter 
 to the PUT/POST with the name of the target to copy to.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4130 - Failure

2013-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4130/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.BasicDistributedZkTest: 1) Thread[id=2792, 
name=recoveryCmdExecutor-944-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at 
java.net.Socket.connect(Socket.java:579) at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
 at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
 at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
 at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:722)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.BasicDistributedZkTest: 
   1) Thread[id=2792, name=recoveryCmdExecutor-944-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at 
org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:291)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at __randomizedtesting.SeedInfo.seed([2C1A136EFFB29B03]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=2792, name=recoveryCmdExecutor-944-thread-1, state=RUNNABLE, 
group=TGRP-BasicDistributedZkTest] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) 

[jira] [Updated] (SOLR-4914) Factor out core discovery and persistence logic

2013-07-09 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-4914:


Attachment: SOLR-4914.patch

Nearly there.  ShardSplitTest is failing consistently, and I still need to work 
out what's going on there.  There's also a nocommit in there concerning what to 
do when unloading a core discovered via a core.properties file - should we 
delete the file, or rename it?  I think renaming it is probably the most 
user-friendly thing to do, as then you just have to rename it back again to 
make the core available next time you start up.
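
Purely as an illustration of the rename option (the .unloaded suffix and the helper below are hypothetical, not what the attached patch does):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Sketch of "rename rather than delete": park core.properties under a
// different name so discovery skips the core, and renaming it back
// re-enables the core. The ".unloaded" suffix is just an illustration.
public class UnloadCoreSketch {
  static void hideCoreProperties(Path coreDir) throws IOException {
    Path props = coreDir.resolve("core.properties");
    if (Files.exists(props)) {
      Files.move(props, coreDir.resolve("core.properties.unloaded"),
          StandardCopyOption.REPLACE_EXISTING);
    }
  }

  public static void main(String[] args) throws IOException {
    hideCoreProperties(Paths.get(args[0]));
  }
}
{code}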

 Factor out core discovery and persistence logic
 ---

 Key: SOLR-4914
 URL: https://issues.apache.org/jira/browse/SOLR-4914
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.0
Reporter: Erick Erickson
Assignee: Alan Woodward
 Attachments: SOLR-4914.patch, SOLR-4914.patch, SOLR-4914.patch, 
 SOLR-4914.patch, SOLR-4914.patch


 Alan Woodward has done some work to refactor how core persistence works. We 
 should carry that work forward, and I want to separate it from a shorter-term 
 tactical problem (see SOLR-4910).
 I'm attaching Alan's patch to this JIRA and we'll carry it forward separately 
 from 4910.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5020) Add final() method to DelegatingCollector

2013-07-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703492#comment-13703492
 ] 

ASF subversion and git services commented on SOLR-5020:
---

Commit 1501376 from [~yo...@apache.org]
[ https://svn.apache.org/r1501376 ]

SOLR-5020: add DelegatingCollector.final()

 Add final() method to DelegatingCollector
 -

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-5020.patch


 This issue adds a final() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The final() method will 
 allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the final() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.
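
To make the use case above concrete, here is a rough sketch using a simplified stand-in for the DelegatingCollector idea -- not Solr's actual class or the attached patch; the types and signatures are invented for illustration:

{code}
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the DelegatingCollector idea (not Solr's class):
// collect() buffers the best doc per collapse group, and the end-of-collection
// hook lets the buffered (collapsed) docs flow to the delegate.
abstract class SimpleDelegatingCollector {
  protected final SimpleDelegatingCollector delegate;

  SimpleDelegatingCollector(SimpleDelegatingCollector delegate) {
    this.delegate = delegate;
  }

  abstract void collect(int doc, float score);

  void finish() {
    if (delegate != null) {
      delegate.finish();
    }
  }
}

class CollapsingCollector extends SimpleDelegatingCollector {
  private final int[] groupOfDoc;                      // hypothetical doc -> collapse key lookup
  private final Map<Integer, Integer> bestDocPerGroup = new HashMap<Integer, Integer>();
  private final Map<Integer, Float> bestScorePerGroup = new HashMap<Integer, Float>();

  CollapsingCollector(SimpleDelegatingCollector delegate, int[] groupOfDoc) {
    super(delegate);
    this.groupOfDoc = groupOfDoc;
  }

  @Override
  void collect(int doc, float score) {
    int group = groupOfDoc[doc];
    Float best = bestScorePerGroup.get(group);
    if (best == null || score > best) {                // keep only the top doc per group
      bestScorePerGroup.put(group, score);
      bestDocPerGroup.put(group, doc);
    }
    // nothing is forwarded here -- that is exactly what the final()/finish() hook is for
  }

  @Override
  void finish() {
    for (Map.Entry<Integer, Integer> e : bestDocPerGroup.entrySet()) {
      delegate.collect(e.getValue(), bestScorePerGroup.get(e.getKey()));
    }
    super.finish();
  }

  public static void main(String[] args) {
    SimpleDelegatingCollector printer = new SimpleDelegatingCollector(null) {
      @Override
      void collect(int doc, float score) {
        System.out.println("doc " + doc + " score " + score);
      }
    };
    int[] groupOfDoc = {0, 0, 1, 1, 1};
    CollapsingCollector c = new CollapsingCollector(printer, groupOfDoc);
    c.collect(0, 1.0f);
    c.collect(1, 2.0f);
    c.collect(2, 0.5f);
    c.collect(3, 3.0f);
    c.collect(4, 1.5f);
    c.finish();                                        // only now do docs 1 and 3 reach the delegate
  }
}
{code}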

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4914) Factor out core discovery and persistence logic

2013-07-09 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703501#comment-13703501
 ] 

Erick Erickson commented on SOLR-4914:
--

bq: I think renaming it is probably the most user-friendly thing to do

+1 unless deleteIndex=true in which case nuke it all?

 Factor out core discovery and persistence logic
 ---

 Key: SOLR-4914
 URL: https://issues.apache.org/jira/browse/SOLR-4914
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.0
Reporter: Erick Erickson
Assignee: Alan Woodward
 Attachments: SOLR-4914.patch, SOLR-4914.patch, SOLR-4914.patch, 
 SOLR-4914.patch, SOLR-4914.patch


 Alan Woodward has done some work to refactor how core persistence works. We 
 should carry that work forward, and I want to separate it from a shorter-term 
 tactical problem (see SOLR-4910).
 I'm attaching Alan's patch to this JIRA and we'll carry it forward separately 
 from 4910.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5020) Add final() method to DelegatingCollector

2013-07-09 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703509#comment-13703509
 ] 

ASF subversion and git services commented on SOLR-5020:
---

Commit 1501383 from [~yo...@apache.org]
[ https://svn.apache.org/r1501383 ]

SOLR-5020: add DelegatingCollector.final()

 Add final() method to DelegatingCollector
 -

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-5020.patch


 This issue adds a final() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The final() method will 
 allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the final() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5020) Add final() method to DelegatingCollector

2013-07-09 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-5020.


   Resolution: Fixed
Fix Version/s: 4.4

Although I'm still questioning configurable collectors, this issue certainly 
makes sense on its own.

Committed 4x & trunk.

 Add final() method to DelegatingCollector
 -

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: SOLR-5020.patch


 This issue adds a final() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The final() method will 
 allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the final() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5020) Add final() method to DelegatingCollector

2013-07-09 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-5020:
---

Fix Version/s: (was: 4.4)
   4.5

 Add final() method to DelegatingCollector
 -

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.5

 Attachments: SOLR-5020.patch


 This issue adds a final() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The final() method will 
 allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the final() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4914) Factor out core discovery and persistence logic

2013-07-09 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703547#comment-13703547
 ] 

Alan Woodward commented on SOLR-4914:
-

I think if deleteIndex=true then it will already have been nuked.

 Factor out core discovery and persistence logic
 ---

 Key: SOLR-4914
 URL: https://issues.apache.org/jira/browse/SOLR-4914
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.0
Reporter: Erick Erickson
Assignee: Alan Woodward
 Attachments: SOLR-4914.patch, SOLR-4914.patch, SOLR-4914.patch, 
 SOLR-4914.patch, SOLR-4914.patch


 Alan Woodward has done some work to refactor how core persistence works. We 
 should carry that work forward, and I want to separate it from a shorter-term 
 tactical problem (see SOLR-4910).
 I'm attaching Alan's patch to this JIRA and we'll carry it forward separately 
 from 4910.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4345) Solr Admin UI dosent work in IE 10

2013-07-09 Thread Ali Kianzadeh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703550#comment-13703550
 ] 

Ali Kianzadeh commented on SOLR-4345:
-

I just wanted to report that the issue still exists in Solr 4.3.1.
I have tested on Windows 7 with IE 10.

 Solr Admin UI dosent work in IE 10
 --

 Key: SOLR-4345
 URL: https://issues.apache.org/jira/browse/SOLR-4345
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.1
 Environment: Windows 8, IE 10
Reporter: Kurt Pedersen
Assignee: Stefan Matheis (steffkes)
  Labels: IE, IE10, InternetExplorer
 Fix For: 4.2, 5.0

 Attachments: SOLR-4345.patch, solr-ie10.png


 The main window only shows Loading on IE 10. It works fine in Chrome.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4886) configure and test PDF export of SOLR CWIKI after existing ref guide content is loaded

2013-07-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4886:
---

Description: 
* add the ASL as a page in the wiki such that it appears early in the exported 
PDF
* test that exporting the PDF works ok with the large space
* review the exported PDF for problems
* confirm/tweak-configs so that PDF exporting can be done really easily, ideally 
automatically
* document/script steps to create PDF on each solr release.

Lots of tips here: 
https://confluence.atlassian.com/display/DOC/Providing+PDF+Versions+of+your+Technical+Documentation

  was:
* add the ASL as a page in the wiki such that it appears early in the exported 
PDF
* test that exporting the PDF works ok with the large space
* review the exported PDF for problems
* confirm/tweak-configs so that PDF exporting can be done really easily, ideally 
automatically
* document/script steps to create PDF on each solr release.


 configure and test PDF export of SOLR CWIKI after existing ref guide content 
 is loaded
 --

 Key: SOLR-4886
 URL: https://issues.apache.org/jira/browse/SOLR-4886
 Project: Solr
  Issue Type: Sub-task
  Components: documentation
Reporter: Hoss Man
Assignee: Hoss Man

 * add the ASL as a page in the wiki such that it appears early in the 
 exported PDF
 * test that exporting the PDF works ok with the large space
 * review the exported PDF for problems
 * confirm/tweak-configs so that PDF exporting can be done really easily, ideally 
 automatically
 * document/script steps to create PDF on each solr release.
 Lots of tips here: 
 https://confluence.atlassian.com/display/DOC/Providing+PDF+Versions+of+your+Technical+Documentation

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5020) Add finnish() method to DelegatingCollector

2013-07-09 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-5020:
-

Summary: Add finnish() method to DelegatingCollector  (was: Add final() 
method to DelegatingCollector)

 Add finnish() method to DelegatingCollector
 ---

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.5

 Attachments: SOLR-5020.patch


 This issue adds a final() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The final() method will 
 allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the final() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4345) Solr Admin UI dosent work in IE 10

2013-07-09 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703567#comment-13703567
 ] 

Yonik Seeley commented on SOLR-4345:


Confirmed - current trunk does not work with IE10

 Solr Admin UI dosent work in IE 10
 --

 Key: SOLR-4345
 URL: https://issues.apache.org/jira/browse/SOLR-4345
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.1
 Environment: Windows 8, IE 10
Reporter: Kurt Pedersen
Assignee: Stefan Matheis (steffkes)
  Labels: IE, IE10, InternetExplorer
 Fix For: 4.2, 5.0

 Attachments: SOLR-4345.patch, solr-ie10.png


 The main window only shows Loading on IE 10. It works fine in Chrome.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5020) Add finish() method to DelegatingCollector

2013-07-09 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-5020:
-

Summary: Add finish() method to DelegatingCollector  (was: Add finnish() 
method to DelegatingCollector)

 Add finish() method to DelegatingCollector
 --

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.5

 Attachments: SOLR-5020.patch


 This issue adds a final() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The final() method will 
 allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the final() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5020) Add finish() method to DelegatingCollector

2013-07-09 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-5020:
-

Description: 
This issue adds a finish() method to the DelegatingCollector class so that it 
can be notified when collection is complete. 

The current collect() method assumes that the delegating collector will either 
forward on the document or not with each call. The final() method will allow 
DelegatingCollectors to have more sophisticated behavior.

For example a Field Collapsing delegating collector could collapse the 
documents as the collect() method is being called. Then when the finish() 
method is called it could pass the collapsed documents to the delegate 
collectors.

This would allow grouping to be implemented within the PostFilter framework.

  was:
This issue adds a final() method to the DelegatingCollector class so that it 
can be notified when collection is complete. 

The current collect() method assumes that the delegating collector will either 
forward on the document or not with each call. The final() method will allow 
DelegatingCollectors to have more sophisticated behavior.

For example a Field Collapsing delegating collector could collapse the 
documents as the collect() method is being called. Then when the final() method 
is called it could pass the collapsed documents to the delegate collectors.

This would allow grouping to be implemented within the PostFilter framework.


 Add finish() method to DelegatingCollector
 --

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.5

 Attachments: SOLR-5020.patch


 This issue adds a finish() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The final() method will 
 allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the finish() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5020) Add finish() method to DelegatingCollector

2013-07-09 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-5020:
-

Description: 
This issue adds a finish() method to the DelegatingCollector class so that it 
can be notified when collection is complete. 

The current collect() method assumes that the delegating collector will either 
forward on the document or not with each call. The finish() method will allow 
DelegatingCollectors to have more sophisticated behavior.

For example a Field Collapsing delegating collector could collapse the 
documents as the collect() method is being called. Then when the finish() 
method is called it could pass the collapsed documents to the delegate 
collectors.

This would allow grouping to be implemented within the PostFilter framework.

  was:
This issue adds a finish() method to the DelegatingCollector class so that it 
can be notified when collection is complete. 

The current collect() method assumes that the delegating collector will either 
forward on the document or not with each call. The final() method will allow 
DelegatingCollectors to have more sophisticated behavior.

For example a Field Collapsing delegating collector could collapse the 
documents as the collect() method is being called. Then when the finish() 
method is called it could pass the collapsed documents to the delegate 
collectors.

This would allow grouping to be implemented within the PostFilter framework.


 Add finish() method to DelegatingCollector
 --

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.5

 Attachments: SOLR-5020.patch


 This issue adds a finish() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The finish() method 
 will allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the finish() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet

2013-07-09 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703581#comment-13703581
 ] 

Paul Elschot commented on LUCENE-5084:
--

For now the indexed version is in subclasses of the encoder and decoder.
In these I only expect some changes in method signatures from private to 
protected, so I don't mind either way.
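
For readers who haven't opened the patch, here is a toy sketch of the encoding under discussion -- not the patch's EliasFanoEncoder/EliasFanoDecoder; the layout below trades compactness for readability:

{code}
// Toy Elias-Fano layout for a strictly increasing sequence of non-negative
// ints: each value is split into an "upper" part (value >>> L), stored in
// unary in a shared bit array, and an L-bit "lower" part stored verbatim,
// with L chosen as floor(log2(upperBound / n)). Not the patch's encoder.
public class ToyEliasFano {
  final int numLowBits;
  final long[] upperBits;   // bit ((values[i] >>> L) + i) is set for the i-th value
  final long[] lowerBits;   // one slot per value for clarity (a real encoder packs these)

  ToyEliasFano(int[] sorted, int upperBound) {
    int n = sorted.length;
    long ratio = Math.max(1L, (long) upperBound / n);
    numLowBits = 63 - Long.numberOfLeadingZeros(ratio);            // floor(log2(u/n))
    upperBits = new long[(n + (upperBound >>> numLowBits)) / 64 + 1];
    lowerBits = new long[n];
    for (int i = 0; i < n; i++) {
      int high = (sorted[i] >>> numLowBits) + i;                   // position of the i-th unary bit
      upperBits[high >>> 6] |= 1L << (high & 63);
      lowerBits[i] = sorted[i] & ((1L << numLowBits) - 1);
    }
  }

  public static void main(String[] args) {
    ToyEliasFano ef = new ToyEliasFano(new int[] {5, 8, 9, 15, 32}, 36);
    System.out.println("low bits per value: " + ef.numLowBits);    // 2 for this example
  }
}
{code}

Reading value i back means selecting the i-th set bit of upperBits (the upper part is then selectResult - i) and appending the stored low bits; that select step is where the broadword work of LUCENE-5098 comes in.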

 EliasFanoDocIdSet
 -

 Key: LUCENE-5084
 URL: https://issues.apache.org/jira/browse/LUCENE-5084
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5084.patch, LUCENE-5084.patch


 DocIdSet in Elias-Fano encoding

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5020) Add finish() method to DelegatingCollector

2013-07-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703602#comment-13703602
 ] 

Tomás Fernández Löbbe commented on SOLR-5020:
-

I think the issue Robert refers to is LUCENE-4370. I had a similar requirement 
once when I implemented a Collector.

 Add finish() method to DelegatingCollector
 --

 Key: SOLR-5020
 URL: https://issues.apache.org/jira/browse/SOLR-5020
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.5

 Attachments: SOLR-5020.patch


 This issue adds a finish() method to the DelegatingCollector class so that it 
 can be notified when collection is complete. 
 The current collect() method assumes that the delegating collector will 
 either forward on the document or not with each call. The finish() method 
 will allow DelegatingCollectors to have more sophisticated behavior.
 For example a Field Collapsing delegating collector could collapse the 
 documents as the collect() method is being called. Then when the finish() 
 method is called it could pass the collapsed documents to the delegate 
 collectors.
 This would allow grouping to be implemented within the PostFilter framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4998) Make the use of Slice and Shard consistent across the code and document base

2013-07-09 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703604#comment-13703604
 ] 

Yonik Seeley commented on SOLR-4998:


The maxShardsPerNode parameter uses shard in the incorrect sense I believe?
It's kind of a questionable parameter anyway since it doesn't make sense in the 
context of nodes that may have different capacities.

 Make the use of Slice and Shard consistent across the code and document base
 

 Key: SOLR-4998
 URL: https://issues.apache.org/jira/browse/SOLR-4998
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3, 4.3.1
Reporter: Anshum Gupta

 The interchangeable use of Slice and Shard is pretty confusing at times. We 
 should define each separately and use the apt term whenever we do so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4808) Persist and use replication factor and maxShardsPerNode at Collection and Shard level

2013-07-09 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703627#comment-13703627
 ] 

Yonik Seeley commented on SOLR-4808:


bq. New mandatory collectionApiMode parameter during create collection 
command (we can think of a better name)

Eh, internally mandatory I hope (as in, the user should not have to specify it?)

bq. replicationFactor is persisted at slice level

It still feels like this should be a collection level property that we have the 
ability to store/override on a per-shard level.
The reasons off the top of my head:
 - would be nice to be able to create a new shard w/o having to know/specify 
what the replication factor currently is
 - possible to completely lose the replication factor if we delete a shard and 
re-add a new one
 - there may be one shard that has a lot of demand and you set its replication 
level high... so you override the replicationFactor for that shard only.  It 
would still be nice to be able to adjust the replication factor for everyone 
else (by adjusting the collection level replicationFactor)

maxShardsPerNode - should that be maxReplicasPerNode, or are we really 
talking logical shards?
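
A tiny sketch of the resolution rule being argued for (collection-level default with an optional per-shard override); the property names below are hypothetical, not what gets persisted in ZK:

{code}
import java.util.HashMap;
import java.util.Map;

// Collection-level default, optional per-shard override. Property names are
// hypothetical; this only illustrates the lookup order.
public class ReplicationFactorResolver {
  static int effective(Map<String, String> collectionProps, Map<String, String> shardProps) {
    String override = shardProps == null ? null : shardProps.get("replicationFactor");
    if (override != null) {
      return Integer.parseInt(override);               // shard-specific override wins
    }
    String def = collectionProps.get("replicationFactor");
    return def != null ? Integer.parseInt(def) : 1;    // fall back to the collection default
  }

  public static void main(String[] args) {
    Map<String, String> coll = new HashMap<String, String>();
    coll.put("replicationFactor", "2");
    Map<String, String> hotShard = new HashMap<String, String>();
    hotShard.put("replicationFactor", "5");
    System.out.println(effective(coll, null));         // 2 -- collection default
    System.out.println(effective(coll, hotShard));     // 5 -- per-shard override
  }
}
{code}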


 Persist and use replication factor and maxShardsPerNode at Collection and 
 Shard level
 -

 Key: SOLR-4808
 URL: https://issues.apache.org/jira/browse/SOLR-4808
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Shalin Shekhar Mangar
  Labels: solrcloud
 Attachments: SOLR-4808.patch, SOLR-4808.patch


 The replication factor for a collection as of now is not persisted and used 
 while adding replicas.
 We should save the replication factor at the collection level as well as the 
 shard level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5098) Broadword bit selection

2013-07-09 Thread Paul Elschot (JIRA)
Paul Elschot created LUCENE-5098:


 Summary: Broadword bit selection
 Key: LUCENE-5098
 URL: https://issues.apache.org/jira/browse/LUCENE-5098
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Reporter: Paul Elschot
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4958) Document policies/process arround maintaining hte solr ref guide

2013-07-09 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-4958.


Resolution: Fixed

Initial work is done...

https://cwiki.apache.org/confluence/display/solr/Internal+-+Maintaining+Documentation

...obviously this will evolve, but the key starting bits are there.

 Document policies/process arround maintaining hte solr ref guide
 

 Key: SOLR-4958
 URL: https://issues.apache.org/jira/browse/SOLR-4958
 Project: Solr
  Issue Type: Sub-task
  Components: documentation
Reporter: Hoss Man
Assignee: Hoss Man

 Need to write up some docs of the various plans discussed in jira for how to 
 maintain the solr ref guide moving forward...
 * single live version tracking stable branch
 * who has edit perms (committers) and why
 * how/where people should submit doc feedback (comments, and cleaning up 
 comments when changes are rolled into docs)
 * PDF snapshots per minor release (based on SOLR-4886 outcome)
 * errata page(s) for mistakes in older docs (should be dynamic and linked to 
 from docs in PDF)
 * trunk-pending page/section of wiki for docs about things that are trunk 
 only that won't be included in PDF but can easily be moved/copied into the 
 appropriate section once trunk gets renamed 5x

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5098) Broadword bit selection

2013-07-09 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-5098:
-

Attachment: LUCENE-5098.patch

 Broadword bit selection
 ---

 Key: LUCENE-5098
 URL: https://issues.apache.org/jira/browse/LUCENE-5098
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-5098.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4997) The splitshard api doesn't call commit on new sub shards

2013-07-09 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703645#comment-13703645
 ] 

Shalin Shekhar Mangar commented on SOLR-4997:
-

I think I have found the cause of the failures. It has to do with sub shard 
replication. I'm working on a test + fix.

 The splitshard api doesn't call commit on new sub shards
 

 Key: SOLR-4997
 URL: https://issues.apache.org/jira/browse/SOLR-4997
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3, 4.3.1
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.4

 Attachments: SOLR-4997.patch


 The splitshard api doesn't call commit on new sub shards but it happily sets 
 them to active state which means on a successful split, the documents are not 
 visible to searchers unless an explicit commit is called on the cluster.
 The coreadmin split api will still not call commit on targetCores. That is by 
 design and we're not going to change that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5098) Broadword bit selection

2013-07-09 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703649#comment-13703649
 ] 

Paul Elschot commented on LUCENE-5098:
--

This has some methods and constants inspired by the article
Broadword Implementation of Rank/Select Queries by Sebastiano Vigna, January 
30, 2012,
currently retrievable from http://vigna.di.unimi.it/ftp/papers/Broadword.pdf .

I'd expect this to become really useful in the Elias-Fano sequence of 
LUCENE-5084 after an index is added to that.
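
As a point of reference for what the broadword code replaces, here is the naive in-word select (also handy as the oracle when testing a Vigna-style implementation):

{code}
// Naive select-in-word: index of the (rank+1)-th set bit of 'word' (rank is
// 0-based), or -1 if there are not enough set bits. A broadword implementation
// replaces this loop with constant-time arithmetic on the whole 64-bit word.
public class NaiveSelect {
  static int select(long word, int rank) {
    for (int i = 0; i < rank; i++) {
      word &= word - 1;                  // clear the lowest set bit
    }
    return word == 0 ? -1 : Long.numberOfTrailingZeros(word);
  }

  public static void main(String[] args) {
    System.out.println(select(0b101100L, 0));  // 2 (lowest set bit)
    System.out.println(select(0b101100L, 2));  // 5
    System.out.println(select(0b101100L, 3));  // -1
  }
}
{code}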

 Broadword bit selection
 ---

 Key: LUCENE-5098
 URL: https://issues.apache.org/jira/browse/LUCENE-5098
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-5098.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5017) Allow sharding based on the value of a field

2013-07-09 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703653#comment-13703653
 ] 

Yonik Seeley commented on SOLR-5017:


bq. the _shard_ parameter is the actual name of the shard.

For the implicit router.  For a hash-based router, it should be the value that 
is hashed, which is then used to look up the shard based on its hash range.

bq. In case of compositeId router , I would like to read the part before the 
(!) to be read from the 'shardField'.

I think it should work more simply... _shard_ is used as the whole value to hash 
on for any hash-based router.
It's simple - if you want doc B to have the exact same hash as doc A, then you 
give _shard_=A when adding doc B.

bq. I would like to read the part before the (!) to be read from the 
'shardField'.

Perhaps that should be a different router... compositeField rather than 
compositeId.
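
A rough sketch of that hash-then-range-lookup flow, with a generic hash function and made-up ranges (not Solr's actual router or hash):

{code}
// Hash-based routing sketch: hash the routing value (the _shard_ value, or the
// chosen shardField value), then find the shard whose hash range contains it.
// Generic hash and made-up ranges -- not Solr's actual router code.
public class HashRouterSketch {
  static final int[] RANGE_STARTS = {Integer.MIN_VALUE, -1073741824, 0, 1073741824};
  static final String[] SHARDS = {"shard1", "shard2", "shard3", "shard4"};

  static String route(String routingValue) {
    int hash = routingValue.hashCode();                // stand-in for the real hash function
    for (int i = RANGE_STARTS.length - 1; i >= 0; i--) {
      if (hash >= RANGE_STARTS[i]) {
        return SHARDS[i];
      }
    }
    return SHARDS[0];                                  // unreachable: MIN_VALUE starts range 0
  }

  public static void main(String[] args) {
    // Doc B routed with the same value as doc A always lands on doc A's shard.
    System.out.println(route("A"));
    System.out.println(route("A"));
  }
}
{code}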


 Allow sharding based on the value of a field
 

 Key: SOLR-5017
 URL: https://issues.apache.org/jira/browse/SOLR-5017
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 We should be able to create a collection where sharding is done based on the 
 value of a given field.
 Collections can be created with shardField=fieldName, which will be persisted 
 in DocCollection in ZK.
 The implicit DocRouter would look at this field instead of the _shard_ field.
 CompositeIdDocRouter can also use this field instead of looking at the id 
 field.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4886) configure and test PDF export of SOLR CWIKI after existing ref guide content is loaded

2013-07-09 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703660#comment-13703660
 ] 

Cassandra Targett commented on SOLR-4886:
-

A couple things to note about customizing the PDF export:

# The default CSS is attached to this page of Atlassian documentation: 
https://confluence.atlassian.com/display/CONF35/Editing+the+PDF+Stylesheet.
# Confluence first exports the requested pages to HTML, then converts the HTML 
to PDF. The divs you need to modify are found in the HTML, which can only be 
grabbed off the server. That's probably not an option here. However, the PDF 
does have good coverage of the various divs that are used, it's just you won't 
really know if you're changing what you hope you're changing without a lot of 
trial & error. (Since I have the same version of Confluence, I could probably 
get you a copy of pre-PDF HTML output if you wanted.)
# Good news, though, is that this doesn't look much changed between 3.x and 
5.x, so whatever changes you do should survive the rest of the Confluence 
upgrade process.

At LucidWorks we stopped using the default PDF export tool in 2011 because of a 
few limitations I decided were too much of a pain in my ass (we license a 
commercial tool instead):

* Changing the font was difficult. Unless you want Times New Roman, Helvetica 
or Courier, you have to upload the font file to the server and reference its 
location in an @font-face call in the CSS. (It does seem that Confluence 5 has 
made this uploading fonts business easier: 
https://confluence.atlassian.com/display/DOC/Creating+PDF+in+Another+Language 
discusses a way to upload fonts.)
* I could never figure out how to get the Table of Contents page to have a 
header that said Table of Contents, or any other text (others who are better 
with CSS might know this, but I didn't).
* Bookmarks are created during PDF creation, but they link to the end of the 
previous section, not the start of the new section that should be bookmarked. 
This might not be a bad thing, depending on if you want to make page breaks at 
the start of specific header levels (I did).

There were other reasons for using a commercial tool, but they don't really 
impact this issue.

 configure and test PDF export of SOLR CWIKI after existing ref guide content 
 is loaded
 --

 Key: SOLR-4886
 URL: https://issues.apache.org/jira/browse/SOLR-4886
 Project: Solr
  Issue Type: Sub-task
  Components: documentation
Reporter: Hoss Man
Assignee: Hoss Man

 * add the ASL as a page in the wiki such that it appears early in the 
 exported PDF
 * test that exporting the PDF works ok with the large space
 * review the exported PDF for problems
 * confirm/tweak-configs so that PDF exporting can be done really easily, ideally 
 automatically
 * document/script steps to create PDF on each solr release.
 Lots of tips here: 
 https://confluence.atlassian.com/display/DOC/Providing+PDF+Versions+of+your+Technical+Documentation

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5084) EliasFanoDocIdSet

2013-07-09 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703662#comment-13703662
 ] 

Paul Elschot commented on LUCENE-5084:
--

Broadword bit selection is at LUCENE-5098.

 EliasFanoDocIdSet
 -

 Key: LUCENE-5084
 URL: https://issues.apache.org/jira/browse/LUCENE-5084
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Paul Elschot
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5084.patch, LUCENE-5084.patch


 DocIdSet in Elias-Fano encoding




Re: Solr Wikis and Reference guide

2013-07-09 Thread Chris Hostetter

: I was wondering the same thing.  Yesterday I was going to update a
: page on the Solr Wiki and stopped myself after remembering this email
: and feeling unsure if my effort will be thrown away in favour of the
: new Confluence Wiki and new Solr Ref Guide.

Bottom line: 

It's better to write documentation *somewhere* than *nowhere* just 
because you aren't sure where is best.

There was extensive discussion in SOLR-4618 about how to use this guide 
moving forward, and what it should mean for the existing MoinMoin guide.  
I've attempted to capture the key elements of that discussion here...

https://cwiki.apache.org/confluence/display/solr/Internal+-+Maintaining+Documentation

In particular, on the subject of the ref guide vs the wiki...

https://cwiki.apache.org/confluence/display/solr/Internal+-+Maintaining+Documentation#Internal-MaintainingDocumentation-WhatShouldandShouldNotbeIncludedinThisDocumentation
https://cwiki.apache.org/confluence/display/solr/Internal+-+Maintaining+Documentation#Internal-MaintainingDocumentation-Migrating%22Official%22DocumentationfromMoinMoin

In a nutshell:

 * If you see something wrong in MoinMoin and only have a few 
   minutes, just fix it (better than leaving it broken).
 * If you see something in MoinMoin that is also in the ref 
   guide, delete those sentences from MoinMoin and link to 
   the ref guide instead.
 * If you see something in MoinMoin that isn't in the ref 
   guide, help migrate that content into the ref guide and 
   link to it from the old MoinMoin pages.
 * If you need to document something totally new, add it to the 
   ref guide.

...but those are just my opinions ... I've been trying to take the lead on 
making progress with getting the infra/processes/guidelines in place for 
being able to manage this doc as a group (so we can all help maintain it 
in a sustainable way moving forward w/o going insane), but I'm just one 
dude with terrible grammar and no real expertise in documentation -- 
everyone needs to help chip in on managing the *content* of the docs, or 
we'll just wind up with some useless/stale content that's really easy to 
snapshot every time we do a release.


-Hoss




[jira] [Created] (LUCENE-5099) QueryNodeProcessorImpl should set parent to null before returning on processing

2013-07-09 Thread Adriano Crestani (JIRA)
Adriano Crestani created LUCENE-5099:


 Summary: QueryNodeProcessorImpl should set parent to null before 
returning on processing
 Key: LUCENE-5099
 URL: https://issues.apache.org/jira/browse/LUCENE-5099
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 4.3.1, 3.6.2
Reporter: Adriano Crestani
Assignee: Adriano Crestani


QueryNodeProcessorImpl should always return the root of the tree after 
processing, so it needs to make sure the parent is set to null before returning. 
Otherwise, the previous parent is leaked and never removed.




[jira] [Commented] (SOLR-4957) Audit format/plugin/markup problems in solr ref guide related to Confluence 5.x upgrade

2013-07-09 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703684#comment-13703684
 ] 

Hoss Man commented on SOLR-4957:


Since it looks like the Solr 4.4 release is going to happen before the 
Confluence upgrade is finished, I'm going to press forward with trying to fix 
these markup problems as things stand.

 Audit format/plugin/markup problems in solr ref guide related to Confluence 
 5.x upgrade
 ---

 Key: SOLR-4957
 URL: https://issues.apache.org/jira/browse/SOLR-4957
 Project: Solr
  Issue Type: Sub-task
  Components: documentation
Reporter: Hoss Man
Assignee: Hoss Man

 The Solr Ref Guide donated by LucidWorks is now live on the ASF's CWIKI 
 instance of Confluence -- but the CWIKI is in the process of being upgraded 
 to Confluence 5.x (INFRA-6406).
 We need to audit the ref guide for markup/plugin/formatting problems that 
 need to be fixed, but we should avoid making any major changes to try and 
 address any problems like this until the Confluence 5.x upgrade is completed, 
 since that process will involve the pages being converted to the newer wiki 
 syntax at least twice, and may change the way some plugins work.
 We'll use this issue as a place for people to track any formatting/plugin 
 problems they see when browsing the wiki -- please include the URL of the 
 specific page(s) where problems are noticed, using relative anchors into 
 individual page sections if possible, and a description of the problem seen.




[jira] [Commented] (SOLR-4886) configure and test PDF export of SOLR CWIKI after existing ref guide content is loaded

2013-07-09 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703688#comment-13703688
 ] 

Cassandra Targett commented on SOLR-4886:
-

Header levels are also problematic, and the issue warrants a separate comment 
to explain what's going to happen.

The PDF export uses the page hierarchy to compress all the pages into one 
(pretty massive) HTML file. Each page is then given a heading level in the HTML 
that corresponds to its location in the hierarchy. The rest of the heading 
levels on each page are then nested under those page-level headings.

So, to use the Schema API page as an example 
(https://cwiki.apache.org/confluence/display/solr/Schema+API), here is where 
it's located in the hierarchy: 

Apache Solr Reference Guide - Documents, Fields, and Schema Design - Schema 
API

In the HTML file, the header for the page "Apache Solr Reference Guide" will be 
h1, "Documents, Fields, and Schema Design" will be h2, and "Schema API" will be 
h3. 

Taking another example of a page deeper in the hierarchy, "DataDir and 
DirectoryFactory in SolrConfig" 
(https://cwiki.apache.org/confluence/display/solr/DataDir+and+DirectoryFactory+in+SolrConfig)
 will be h4.

So, after all the pages have been assigned heading levels appropriate for their 
place in the page hierarchy, the headers defined within the pages get the 
levels below those. I believe the highest heading level for content (not a 
page title) will be h5. The levels then go down from that point, depending on 
how they were defined on the page. An h4 defined on a page will likely end up 
as an h8.

This is important if you want page titles to appear a certain way - changing h1 
will pretty much only change one title. You'll have to modify h2, h3, and h4 as 
well, depending on how you want the entire document to flow.

I also wanted page breaks on certain pages, so I had to define those to happen 
at the h2 and h3 levels. See also my previous comment about problems with 
bookmarks within the PDF - the page breaks were not without their own problems.
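
To make the page-break setup concrete, a rule along these lines in the PDF 
stylesheet is roughly what is needed; this is only a sketch, and it assumes the 
exported HTML uses h2/h3 for page titles as described above.

{code:css}
/* Hypothetical sketch: start a new PDF page before the heading levels that
   correspond to page titles in the exported HTML (h2/h3 in the hierarchy above). */
h2, h3 {
    page-break-before: always;
}
{code}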

 configure and test PDF export of SOLR CWIKI after existing ref guide content 
 is loaded
 --

 Key: SOLR-4886
 URL: https://issues.apache.org/jira/browse/SOLR-4886
 Project: Solr
  Issue Type: Sub-task
  Components: documentation
Reporter: Hoss Man
Assignee: Hoss Man

 * add the ASL as a page in the wiki such that it appears early in the 
 exported PDF
 * test that exporting the PDF works ok with the large space
 * review the exported PDF for problems
 * confirm/tweak-configs so that PDF exporting can be done really easily, ideally 
 automatically
 * document/script steps to create PDF on each Solr release.
 Lots of tips here: 
 https://confluence.atlassian.com/display/DOC/Providing+PDF+Versions+of+your+Technical+Documentation




[jira] [Commented] (SOLR-4886) configure and test PDF export of SOLR CWIKI after existing ref guide content is loaded

2013-07-09 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703729#comment-13703729
 ] 

Cassandra Targett commented on SOLR-4886:
-

See also this comment in SOLR-4957 
(https://issues.apache.org/jira/browse/SOLR-4957#comment-13693221):

{quote}
Missing \{unicode-text} macro

This is another User Macro that is only used on the Language Analysis page. It 
sets the font in the PDF export to a unicode font to properly render some of 
the more complex language examples on that page. At LucidWorks, our PDF font is 
Verdana, which does not properly render Arabic, Chinese, Japanese, Hindi, 
Persian and Thai. This macro sets a CSS class for those snippets of text that 
didn't render right, and the CSS for the PDF uses that class to set the font as 
Arial Unicode. If you use a Unicode font as the body for the whole PDF, you can 
remove the references to this macro. If you use an even less-unicode-compatible 
font than Verdana, you may need to expand the use of the macro.

This macro had problems in my test conversion to Confluence 4, but I have not 
yet had a chance to look at what the problems were or if they are easy to fix.
{quote}

We discovered this problem while using the commercial PDF tool; however, it is 
a font problem, so I am assuming it would also appear in Confluence's PDF tool 
if the font used does not render those languages properly.
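
For reference, the effect of the macro on the PDF stylesheet is roughly the 
following; the class name and fallback font are assumptions based on the 
description quoted above, not the actual macro output.

{code:css}
/* Hypothetical sketch: map the class emitted by the unicode-text macro to a
   font with broad Unicode coverage so Arabic/CJK/Thai snippets render in the PDF. */
.unicode-text {
    font-family: "Arial Unicode MS", sans-serif;
}
{code}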

 configure and test PDF export of SOLR CWIKI after existing ref guide content 
 is loaded
 --

 Key: SOLR-4886
 URL: https://issues.apache.org/jira/browse/SOLR-4886
 Project: Solr
  Issue Type: Sub-task
  Components: documentation
Reporter: Hoss Man
Assignee: Hoss Man

 * add the ASL as a page in the wiki such that it appears early in the 
 exported PDF
 * test that exporting the PDF works ok with the large space
 * review the exported PDF for problems
 * confirm/tweak-configs so that PDF exporting can be done really easily, ideally 
 automatically
 * document/script steps to create PDF on each Solr release.
 Lots of tips here: 
 https://confluence.atlassian.com/display/DOC/Providing+PDF+Versions+of+your+Technical+Documentation




[jira] [Commented] (SOLR-5010) Add REST support for Copy Fields

2013-07-09 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13703746#comment-13703746
 ] 

Yonik Seeley commented on SOLR-5010:


Folks, remember when writing new tests that generate expected exceptions - you 
can instruct the test framework to ignore (not log) the exception explicitly 
with ignoreException(regex).  This makes tests much more debuggable when they 
do fail. 

By default there is ignoreException("ignore_exception"), so sometimes the 
easiest way is to just make sure the exception message contains that string 
(this is what I did here: 
http://svn.apache.org/viewvc?view=revision&revision=r1501498).


 Add REST support for Copy Fields
 

 Key: SOLR-5010
 URL: https://issues.apache.org/jira/browse/SOLR-5010
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 5.0, 4.4

 Attachments: SOLR-5010-copyFields.patch, SOLR-5010-copyFields.patch, 
 SOLR-5010-copyFields.patch, SOLR-5010-copyFields.patch, SOLR-5010.patch, 
 SOLR-5010.patch, SOLR-5010.patch


 Per SOLR-4898, adding copy field support.  Should be simply a new parameter 
 to the PUT/POST with the name of the target to copy to.



