[jira] [Commented] (SOLR-6342) Solr should mention CHANGES from Lucene which have side effects in Solr

2014-08-11 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092497#comment-14092497
 ] 

Varun Thacker commented on SOLR-6342:
-

Slightly unrelated but maybe we could also fix the problem pointed out by 
[~andyetitmoves] here - 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201401.mbox/%3ccadmuoewdzjg3bke3m_2fxtkvha3ua_6kjy_isjt9wxv3ixy...@mail.gmail.com%3E
 

 Solr should mention CHANGES from Lucene which have side effects in Solr
 ---

 Key: SOLR-6342
 URL: https://issues.apache.org/jira/browse/SOLR-6342
 Project: Solr
  Issue Type: Bug
Reporter: Varun Thacker

 This is the problem that I faced: a user upgraded Solr from 4.0 to 4.9 and 
 the queries being formed behaved differently.
 I tracked it down to LUCENE-5180 (which I believe is the correct thing to 
 do). The problem is that since Solr doesn't ship with the Lucene CHANGES.txt 
 file, and we didn't record this under the Solr CHANGES.txt file either, a 
 user is not aware of such changes when upgrading.
 I can think of two options:
 1. Duplicate the changes which have side effects in Solr under Solr's 
 CHANGES.txt file as well (not sure if we already do this and simply missed 
 this one).
 2. Be conservative and ship Solr binaries with the Lucene CHANGES.txt file 
 as well.
 We should address this problem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6357) Using query time Join in deleteByQuery throws ClassCastException

2014-08-11 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092504#comment-14092504
 ] 

Mikhail Khludnev commented on SOLR-6357:


I'm not sure, but it might be challenging to use a purely Solr query \{!join ..\} 
deep inside the Lucene core, where deleteByQuery is handled. Try the native 
Lucene query-time join instead (SOLR-6234).
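
For illustration, the delete might then be expressed as follows (a sketch, 
assuming the SOLR-6234 join parser is in place and that its score local param 
routes the join through Lucene's JoinUtil rather than the 
SolrIndexSearcher-bound code path):
{code}
http://localhost:8983/solr/update?commit=true&stream.body=<delete><query>cat:manufacturer -{!join score=none from=manu_s to=id}cat:product</query></delete>
{code}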

 Using query time Join in deleteByQuery throws ClassCastException
 

 Key: SOLR-6357
 URL: https://issues.apache.org/jira/browse/SOLR-6357
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.9
Reporter: Arcadius Ahouansou

 Consider the following input documents, where we have:
 - 1 Samsung mobile phone and
 - 2 manufacturers: Apple and Samsung.
 {code}
 [
   {
     "id":"galaxy note ii",
     "cat":"product",
     "manu_s":"samsung"
   },
   {
     "id":"samsung",
     "cat":"manufacturer",
     "name":"Samsung Electronics"
   },
   {
     "id":"apple",
     "cat":"manufacturer",
     "name":"Apple Inc"
   }
 ]
 {code}
 My objective is to delete from the default index all manufacturers not having 
 any product in the index.
 After indexing ( curl 'http://localhost:8983/solr/update?commit=true' -H 
 'Content-Type: text/json' --data-binary @delete-by-join-query.json )
 I went to
 {code}http://localhost:8983/solr/select?q=cat:manufacturer -{!join 
 from=manu_s to=id}cat:product
 {code}
 and I could see only Apple, the only manufacturer not having any product in 
 the index.
 However, when I use that same query for deletion: 
 {code}
 http://localhost:8983/solr/update?commit=true&stream.body=<delete><query>cat:manufacturer
  -{!join from=manu_s to=id}cat:product</query></delete>
 {code}
 I get
 {code}
 java.lang.ClassCastException: org.apache.lucene.search.IndexSearcher cannot 
 be cast to org.apache.solr.search.SolrIndexSearcher
   at 
 org.apache.solr.search.JoinQuery.createWeight(JoinQParserPlugin.java:143)
   at 
 org.apache.lucene.search.BooleanQuery$BooleanWeight.<init>(BooleanQuery.java:185)
   at 
 org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:526)
   at 
 org.apache.lucene.search.BooleanQuery$BooleanWeight.<init>(BooleanQuery.java:185)
   at 
 org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:526)
   at 
 org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:684)
   at 
 org.apache.lucene.search.QueryWrapperFilter.getDocIdSet(QueryWrapperFilter.java:55)
   at 
 org.apache.lucene.index.BufferedUpdatesStream.applyQueryDeletes(BufferedUpdatesStream.java:552)
   at 
 org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:287)
   at 
 {code}
 This seems to be a bug.
 Looking at the source code, the exception is happening in {code}
 @Override
 public Weight createWeight(IndexSearcher searcher) throws IOException {
   return new JoinQueryWeight((SolrIndexSearcher) searcher);
 }
 {code}
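 A minimal sketch of a defensive variant (hypothetical, not a committed fix) 
 that would at least report the unsupported path clearly instead of throwing a 
 raw ClassCastException:
 {code}
 @Override
 public Weight createWeight(IndexSearcher searcher) throws IOException {
   if (!(searcher instanceof SolrIndexSearcher)) {
     // deleteByQuery hands the query a plain Lucene IndexSearcher,
     // which this Solr-side join cannot work with.
     throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
         "{!join} is not supported in deleteByQuery");
   }
   return new JoinQueryWeight((SolrIndexSearcher) searcher);
 }
 {code}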



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0_20-ea-b23) - Build # 10878 - Failure!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10878/
Java: 32bit/jdk1.8.0_20-ea-b23 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
ERROR: SolrIndexSearcher opens=16 closes=15

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=16 closes=15
at __randomizedtesting.SeedInfo.seed([86DB4D1F5744FDF4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:423)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:182)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=922, 
name=qtp10522329-922, state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest]   
  at java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:502) at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.blockUntilFinished(ConcurrentUpdateSolrServer.java:374)
 at 
org.apache.solr.update.StreamingSolrServers.blockUntilFinished(StreamingSolrServers.java:103)
 at 
org.apache.solr.update.SolrCmdDistributor.blockAndDoRetries(SolrCmdDistributor.java:228)
 at 
org.apache.solr.update.SolrCmdDistributor.finish(SolrCmdDistributor.java:89)
 at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doFinish(DistributedUpdateProcessor.java:766)
 at 
org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1662)
 at 
org.apache.solr.update.processor.LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:179)
 at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:83)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1966) 
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) 
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137)
 at 

Re: Range field for integer

2014-08-11 Thread Poornima Jay
Thanks for your reply, Uwe.
I have changed the field type to tint and the precisionStep to 8; after the 
changes I reloaded the core and re-indexed the content. Still no luck.

<fieldType name="tint" class="solr.TrieIntField" precisionStep="8" 
positionIncrementGap="0"/>


Not sure what is wrong.

Regards,
Poornima


On Friday, 8 August 2014 2:53 PM, Uwe Schindler u...@thetaphi.de wrote:
 


Hi,
 
in most cases the reason for those problems is that you changed the field 
definition but did not reindex all documents or drop the whole index.
In addition, to do range queries you should use the “tint” fields, which have 
“precisionStep” > 0; otherwise range queries can get very slow if you have many 
documents with many distinct integer values.
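
For example, a sketch of that combination (reusing the interestlevel field from 
your mail; remember the whole index must be rebuilt after the type change 
before interestlevel:[-1 TO 0] returns complete results):

<!-- trie int type with precisionStep > 0 for fast range queries -->
<fieldType name="tint" class="solr.TrieIntField" precisionStep="8" 
positionIncrementGap="0"/>
<field name="interestlevel" type="tint" indexed="true" stored="false" 
multiValued="true" />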
 
Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
 
From:Poornima Jay [mailto:poornima...@rocketmail.com] 
Sent: Friday, August 08, 2014 11:19 AM
To: solr-dev; solr-user
Subject: Range field for integer
 
Hi,
 
I am using Solr 3.6.1 and trying to do a range query on a field which was 
defined as integer, but I'm not getting accurate results. Below is my schema.
 
The input will be as [-1 TO 0] or [2 TO 5]
 
<fieldType name="int" class="solr.TrieIntField" precisionStep="0" 
positionIncrementGap="0"/>
<field name="interestlevel" type="int" indexed="true" stored="false" 
multiValued="true" />
 
My query string will be interestlevel:[-1 TO 0] -- this is returning only 2 
records from Solr, whereas there are 21 records in the DB.
 
Please advise.
 
Thanks,
Poornima

[jira] [Updated] (SOLR-6360) Unnecessary Content-Charset header in HttpSolrServer

2014-08-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-6360:


Fix Version/s: 4.10
   5.0

 Unnecessary Content-Charset header in HttpSolrServer
 

 Key: SOLR-6360
 URL: https://issues.apache.org/jira/browse/SOLR-6360
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 3.6, 4.9
Reporter: Michael Ryan
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 5.0, 4.10


 The httpclient code in HttpSolrServer currently sets a "Content-Charset" 
 header when making a POST request:
 {code}post.setHeader("Content-Charset", "UTF-8");{code}
 As far as I know this is not a real header and is not necessary. It seems 
 this was a mistake in the original implementation of this class, when 
 converting from httpclient v3 to httpclient v4. CommonsHttpSolrServer did 
 this, which is what the line of code above seems to have been based on:
 {code}post.getParams().setContentCharset("UTF-8");{code}
 The actual way to set the charset in httpclient v4 is already being done 
 correctly, with these lines:
 {code}parts.add(new FormBodyPart(p, new StringBody(v, 
 StandardCharsets.UTF_8)));
 post.setEntity(new UrlEncodedFormEntity(postParams, 
 StandardCharsets.UTF_8));{code}
 So basically, the Content-Charset line can just be removed.
 (I think the explicit setting of the Content-Type header also might be 
 unnecessary, but I haven't taken the time to investigate that.)
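 As a side note, httpclient v4 carries the charset inside the entity's 
 Content-Type, which is why no separate header is needed. A minimal standalone 
 sketch (my own illustration, not part of the patch; endpoint and parameter 
 are hypothetical):
 {code}
 import java.nio.charset.StandardCharsets;
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.http.NameValuePair;
 import org.apache.http.client.entity.UrlEncodedFormEntity;
 import org.apache.http.client.methods.HttpPost;
 import org.apache.http.message.BasicNameValuePair;

 public class CharsetSketch {
   public static void main(String[] args) {
     List<NameValuePair> postParams = new ArrayList<NameValuePair>();
     postParams.add(new BasicNameValuePair("q", "*:*"));
     HttpPost post = new HttpPost("http://localhost:8983/solr/select");
     // The entity itself declares the charset via its Content-Type.
     post.setEntity(new UrlEncodedFormEntity(postParams, StandardCharsets.UTF_8));
     // Prints: Content-Type: application/x-www-form-urlencoded; charset=UTF-8
     System.out.println(post.getEntity().getContentType());
   }
 }
 {code}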



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6360) Unnecessary Content-Charset header in HttpSolrServer

2014-08-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reassigned SOLR-6360:
---

Assignee: Uwe Schindler

Thanks for reporting this; "Content-Charset" is indeed totally bogus. I will 
remove the header.

 Unnecessary Content-Charset header in HttpSolrServer
 

 Key: SOLR-6360
 URL: https://issues.apache.org/jira/browse/SOLR-6360
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 3.6, 4.9
Reporter: Michael Ryan
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 5.0, 4.10


 The httpclient code in HttpSolrServer currently sets a "Content-Charset" 
 header when making a POST request:
 {code}post.setHeader("Content-Charset", "UTF-8");{code}
 As far as I know this is not a real header and is not necessary. It seems 
 this was a mistake in the original implementation of this class, when 
 converting from httpclient v3 to httpclient v4. CommonsHttpSolrServer did 
 this, which is what the line of code above seems to have been based on:
 {code}post.getParams().setContentCharset("UTF-8");{code}
 The actual way to set the charset in httpclient v4 is already being done 
 correctly, with these lines:
 {code}parts.add(new FormBodyPart(p, new StringBody(v, 
 StandardCharsets.UTF_8)));
 post.setEntity(new UrlEncodedFormEntity(postParams, 
 StandardCharsets.UTF_8));{code}
 So basically, the Content-Charset line can just be removed.
 (I think the explicit setting of the Content-Type header also might be 
 unnecessary, but I haven't taken the time to investigate that.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6360) Unnecessary Content-Charset header in HttpSolrServer

2014-08-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-6360:


Attachment: SOLR-6360.patch

Patch.

 Unnecessary Content-Charset header in HttpSolrServer
 

 Key: SOLR-6360
 URL: https://issues.apache.org/jira/browse/SOLR-6360
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 3.6, 4.9
Reporter: Michael Ryan
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6360.patch


 The httpclient code in HttpSolrServer currently sets a "Content-Charset" 
 header when making a POST request:
 {code}post.setHeader("Content-Charset", "UTF-8");{code}
 As far as I know this is not a real header and is not necessary. It seems 
 this was a mistake in the original implementation of this class, when 
 converting from httpclient v3 to httpclient v4. CommonsHttpSolrServer did 
 this, which is what the line of code above seems to have been based on:
 {code}post.getParams().setContentCharset("UTF-8");{code}
 The actual way to set the charset in httpclient v4 is already being done 
 correctly, with these lines:
 {code}parts.add(new FormBodyPart(p, new StringBody(v, 
 StandardCharsets.UTF_8)));
 post.setEntity(new UrlEncodedFormEntity(postParams, 
 StandardCharsets.UTF_8));{code}
 So basically, the Content-Charset line can just be removed.
 (I think the explicit setting of the Content-Type header also might be 
 unnecessary, but I haven't taken the time to investigate that.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_20-ea-b23) - Build # 4242 - Still Failing!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4242/
Java: 32bit/jdk1.8.0_20-ea-b23 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestNestedChildren

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestNestedChildren: 1) Thread[id=77, 
name=Timer-0, state=WAITING, group=TGRP-TestNestedChildren] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:502) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestNestedChildren: 
   1) Thread[id=77, name=Timer-0, state=WAITING, group=TGRP-TestNestedChildren]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([73B1D5EA76876E2F]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestNestedChildren

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=77, 
name=Timer-0, state=WAITING, group=TGRP-TestNestedChildren] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:502) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=77, name=Timer-0, state=WAITING, group=TGRP-TestNestedChildren]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([73B1D5EA76876E2F]:0)




Build Log:
[...truncated 15020 lines...]
   [junit4] Suite: org.apache.solr.handler.dataimport.TestNestedChildren
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\.\temp\solr.handler.dataimport.TestNestedChildren-73B1D5EA76876E2F-001\init-core-data-001
   [junit4]   2> 38430 T76 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2> 40581 T76 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2> 40581 T76 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\.\temp\solr.handler.dataimport.TestNestedChildren-73B1D5EA76876E2F-001\core-home-001\collection1\'
   [junit4]   2> 40671 T76 oasc.SolrConfig.<init> Using Lucene MatchVersion: 
LUCENE_5_0
   [junit4]   2> 40700 T76 oasc.SolrConfig.<init> Loaded SolrConfig: 
dataimport-solrconfig.xml
   [junit4]   2> 40702 T76 oass.IndexSchema.readSchema Reading Solr Schema from 
dataimport-schema.xml
   [junit4]   2> 40711 T76 oass.IndexSchema.readSchema [null] Schema 
name=dih_test
   [junit4]   2> 40751 T76 oass.IndexSchema.readSchema default search field in 
schema is desc
   [junit4]   2> 40753 T76 oass.IndexSchema.readSchema query parser default 
operator is OR
   [junit4]   2> 40757 T76 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 40759 T76 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2> 40759 T76 oasc.SolrResourceLoader.locateSolrHome using system 
property solr.solr.home: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\.\temp\solr.handler.dataimport.TestNestedChildren-73B1D5EA76876E2F-001\core-home-001
   [junit4]   2> 40759 T76 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\.\temp\solr.handler.dataimport.TestNestedChildren-73B1D5EA76876E2F-001\core-home-001\'
   [junit4]   2> 40843 T3 oashd.JdbcDataSource.finalize ERROR JdbcDataSource 
was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
   [junit4]   2> 40843 T3 oashd.JdbcDataSource.finalize ERROR JdbcDataSource 
was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
   [junit4]   2> 40861 T76 oasc.CoreContainer.<init> New CoreContainer 23558811
   [junit4]   2> 40861 T76 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\.\temp\solr.handler.dataimport.TestNestedChildren-73B1D5EA76876E2F-001\core-home-001\]
   [junit4]   2> 40863 T76 

[jira] [Resolved] (SOLR-6360) Unnecessary Content-Charset header in HttpSolrServer

2014-08-11 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved SOLR-6360.
-

Resolution: Fixed

Thanks. I just committed this. The header was totally unused and has no meaning 
in the HTTP spec. So it just wastes transfer bandwidth.

 Unnecessary Content-Charset header in HttpSolrServer
 

 Key: SOLR-6360
 URL: https://issues.apache.org/jira/browse/SOLR-6360
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 3.6, 4.9
Reporter: Michael Ryan
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6360.patch


 The httpclient code in HttpSolrServer currently sets a "Content-Charset" 
 header when making a POST request:
 {code}post.setHeader("Content-Charset", "UTF-8");{code}
 As far as I know this is not a real header and is not necessary. It seems 
 this was a mistake in the original implementation of this class, when 
 converting from httpclient v3 to httpclient v4. CommonsHttpSolrServer did 
 this, which is what the line of code above seems to have been based on:
 {code}post.getParams().setContentCharset("UTF-8");{code}
 The actual way to set the charset in httpclient v4 is already being done 
 correctly, with these lines:
 {code}parts.add(new FormBodyPart(p, new StringBody(v, 
 StandardCharsets.UTF_8)));
 post.setEntity(new UrlEncodedFormEntity(postParams, 
 StandardCharsets.UTF_8));{code}
 So basically, the Content-Charset line can just be removed.
 (I think the explicit setting of the Content-Type header also might be 
 unnecessary, but I haven't taken the time to investigate that.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6360) Unnecessary Content-Charset header in HttpSolrServer

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092551#comment-14092551
 ] 

ASF subversion and git services commented on SOLR-6360:
---

Commit 1617218 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1617218 ]

SOLR-6360: Remove bogus Content-Charset header in HttpSolrServer

 Unnecessary Content-Charset header in HttpSolrServer
 

 Key: SOLR-6360
 URL: https://issues.apache.org/jira/browse/SOLR-6360
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 3.6, 4.9
Reporter: Michael Ryan
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6360.patch


 The httpclient code in HttpSolrServer currently sets a "Content-Charset" 
 header when making a POST request:
 {code}post.setHeader("Content-Charset", "UTF-8");{code}
 As far as I know this is not a real header and is not necessary. It seems 
 this was a mistake in the original implementation of this class, when 
 converting from httpclient v3 to httpclient v4. CommonsHttpSolrServer did 
 this, which is what the line of code above seems to have been based on:
 {code}post.getParams().setContentCharset("UTF-8");{code}
 The actual way to set the charset in httpclient v4 is already being done 
 correctly, with these lines:
 {code}parts.add(new FormBodyPart(p, new StringBody(v, 
 StandardCharsets.UTF_8)));
 post.setEntity(new UrlEncodedFormEntity(postParams, 
 StandardCharsets.UTF_8));{code}
 So basically, the Content-Charset line can just be removed.
 (I think the explicit setting of the Content-Type header also might be 
 unnecessary, but I haven't taken the time to investigate that.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6360) Unnecessary Content-Charset header in HttpSolrServer

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092552#comment-14092552
 ] 

ASF subversion and git services commented on SOLR-6360:
---

Commit 1617219 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1617219 ]

Merged revision(s) 1617218 from lucene/dev/trunk:
SOLR-6360: Remove bogus Content-Charset header in HttpSolrServer

 Unnecessary Content-Charset header in HttpSolrServer
 

 Key: SOLR-6360
 URL: https://issues.apache.org/jira/browse/SOLR-6360
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 3.6, 4.9
Reporter: Michael Ryan
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6360.patch


 The httpclient code in HttpSolrServer currently sets a "Content-Charset" 
 header when making a POST request:
 {code}post.setHeader("Content-Charset", "UTF-8");{code}
 As far as I know this is not a real header and is not necessary. It seems 
 this was a mistake in the original implementation of this class, when 
 converting from httpclient v3 to httpclient v4. CommonsHttpSolrServer did 
 this, which is what the line of code above seems to have been based on:
 {code}post.getParams().setContentCharset("UTF-8");{code}
 The actual way to set the charset in httpclient v4 is already being done 
 correctly, with these lines:
 {code}parts.add(new FormBodyPart(p, new StringBody(v, 
 StandardCharsets.UTF_8)));
 post.setEntity(new UrlEncodedFormEntity(postParams, 
 StandardCharsets.UTF_8));{code}
 So basically, the Content-Charset line can just be removed.
 (I think the explicit setting of the Content-Type header also might be 
 unnecessary, but I haven't taken the time to investigate that.)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5244) Exporting Full Sorted Result Sets

2014-08-11 Thread Lewis G (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092558#comment-14092558
 ] 

Lewis G commented on SOLR-5244:
---

Dear Colleague,
I am currently out of the office and will read your email when I return. If 
this is a matter involving the NCBI BioSystems database, please email 
biosystems.h...@ncbi.nlm.nih.gov.
Regards,
Lewis



 Exporting Full Sorted Result Sets
 -

 Key: SOLR-5244
 URL: https://issues.apache.org/jira/browse/SOLR-5244
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: 0001-SOLR_5244.patch, SOLR-5244.patch, SOLR-5244.patch, 
 SOLR-5244.patch, SOLR-5244.patch, SOLR-5244.patch


 This ticket allows Solr to export full sorted result sets. The proposed 
 syntax is:
 {code}
 q=*:*&rows=-1&wt=xsort&fl=a,b,c&sort=a desc,b desc
 {code}
 Under the covers, the rows=-1 parameter will signal Solr to use the 
 ExportQParserPlugin as a RankQuery, which will simply collect a BitSet of the 
 results. The SortingResponseWriter will sort the results based on the sort 
 criteria and stream the results out.
 This capability will open up Solr for a whole range of uses that were 
 typically done using aggregation engines like Hadoop. For example:
 *Large Distributed Joins*
 A client outside of Solr calls two different Solr collections and returns the 
 results sorted by a join key. The client iterates through both streams and 
 performs a merge join.
 *Fully Distributed Field Collapsing/Grouping*
 A client outside of Solr makes individual calls to all the servers in a 
 single collection and returns results sorted by the collapse key. The client 
 merge joins the sorted lists on the collapse key to perform the field 
 collapse.
 *High Cardinality Distributed Aggregation*
 A client outside of Solr makes individual calls to all the servers in a 
 single collection and sorts on a high cardinality field. The client then 
 merge joins the sorted lists to perform the high cardinality aggregation.
 *Large Scale Time Series Rollups*
 A client outside Solr makes individual calls to all servers in a collection 
 and sorts on time dimensions. The client merge joins the sorted result sets 
 and rolls up the time dimensions as it iterates through the data.
 In these scenarios Solr is being used as a distributed sorting engine. 
 Developers can write clients that take advantage of this sorting capability 
 in any way they wish.
 *Session Analysis and Aggregation*
 A client outside Solr makes individual calls to all servers in a collection 
 and sorts on the sessionID. The client merge joins the sorted results and 
 aggregates sessions as it iterates through the results.
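 To make the merge-join pattern above concrete, here is a minimal sketch 
 (illustrative only; the Doc type is a hypothetical stand-in for rows coming 
 off two sorted Solr export streams, and the stream wiring is omitted):
 {code}
 import java.util.Arrays;
 import java.util.Iterator;

 public class MergeJoinSketch {
   static class Doc {
     final String joinKey, payload;
     Doc(String joinKey, String payload) { this.joinKey = joinKey; this.payload = payload; }
   }

   // Join two streams that are each sorted ascending by joinKey.
   static void mergeJoin(Iterator<Doc> left, Iterator<Doc> right) {
     Doc l = left.hasNext() ? left.next() : null;
     Doc r = right.hasNext() ? right.next() : null;
     while (l != null && r != null) {
       int cmp = l.joinKey.compareTo(r.joinKey);
       if (cmp < 0) {
         l = left.hasNext() ? left.next() : null;   // left key is smaller: advance left
       } else if (cmp > 0) {
         r = right.hasNext() ? right.next() : null; // right key is smaller: advance right
       } else {
         System.out.println(l.payload + " <-> " + r.payload); // keys match: emit joined row
         r = right.hasNext() ? right.next() : null; // simplistic: assumes unique keys per side
       }
     }
   }

   public static void main(String[] args) {
     mergeJoin(
         Arrays.asList(new Doc("a", "left-a"), new Doc("b", "left-b")).iterator(),
         Arrays.asList(new Doc("a", "right-a"), new Doc("c", "right-c")).iterator());
   }
 }
 {code}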



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_65) - Build # 10999 - Still Failing!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10999/
Java: 32bit/jdk1.7.0_65 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
After invoking OVERSEERSTATUS, SplitShard task [2000] was still supposed to be 
in [running] but isn't.It is [completed]

Stack Trace:
java.lang.AssertionError: After invoking OVERSEERSTATUS, SplitShard task [2000] 
was still supposed to be in [running] but isn't.It is [completed]
at 
__randomizedtesting.SeedInfo.seed([3CBD2BCCF48B8BCE:BD5BA5D483D4EBF2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testLongAndShortRunningParallelApiCalls(MultiThreadedOCPTest.java:209)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:73)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)

Re: [JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 28522 - Failure!

2014-08-11 Thread Michael McCandless
I committed a fix.

Mike McCandless

http://blog.mikemccandless.com


On Sun, Aug 10, 2014 at 5:05 PM,  buil...@flonkings.com wrote:
 Build: builds.flonkings.com/job/Lucene-4x-Linux-Java7-64-test-only/28522/

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.index.TestIndexWriterThreadsToSegments.testManyThreadsClose

 Error Message:
 Captured an uncaught exception in thread: Thread[id=42, name=Thread-19, 
 state=RUNNABLE, group=TGRP-TestIndexWriterThreadsToSegments]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=42, name=Thread-19, state=RUNNABLE, 
 group=TGRP-TestIndexWriterThreadsToSegments]
 at 
 __randomizedtesting.SeedInfo.seed([2C6E7E6D776A11E:307F2004AACBE9F8]:0)
 Caused by: java.lang.RuntimeException: java.lang.NullPointerException
 at __randomizedtesting.SeedInfo.seed([2C6E7E6D776A11E]:0)
 at 
 org.apache.lucene.index.TestIndexWriterThreadsToSegments$3.run(TestIndexWriterThreadsToSegments.java:258)
 Caused by: java.lang.NullPointerException
 at 
 org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:300)
 at 
 org.apache.lucene.index.DocumentsWriterFlushControl.obtainAndLock(DocumentsWriterFlushControl.java:473)
 at 
 org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:438)
 at 
 org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1539)
 at 
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1254)
 at 
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:149)
 at 
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:110)
 at 
 org.apache.lucene.index.TestIndexWriterThreadsToSegments$3.run(TestIndexWriterThreadsToSegments.java:253)




 Build Log:
 [...truncated 607 lines...]
[junit4] Suite: org.apache.lucene.index.TestIndexWriterThreadsToSegments
[junit4]   2> ago 10, 2014 5:01:35 PM 
 com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
  uncaughtException
[junit4]   2> ADVERTENCIA: Uncaught exception in thread: 
 Thread[Thread-19,5,TGRP-TestIndexWriterThreadsToSegments]
[junit4]   2> java.lang.RuntimeException: java.lang.NullPointerException
[junit4]   2> at 
 __randomizedtesting.SeedInfo.seed([2C6E7E6D776A11E]:0)
[junit4]   2> at 
 org.apache.lucene.index.TestIndexWriterThreadsToSegments$3.run(TestIndexWriterThreadsToSegments.java:258)
[junit4]   2> Caused by: java.lang.NullPointerException
[junit4]   2> at 
 org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:300)
[junit4]   2> at 
 org.apache.lucene.index.DocumentsWriterFlushControl.obtainAndLock(DocumentsWriterFlushControl.java:473)
[junit4]   2> at 
 org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:438)
[junit4]   2> at 
 org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1539)
[junit4]   2> at 
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1254)
[junit4]   2> at 
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:149)
[junit4]   2> at 
 org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:110)
[junit4]   2> at 
 org.apache.lucene.index.TestIndexWriterThreadsToSegments$3.run(TestIndexWriterThreadsToSegments.java:253)
[junit4]   2> 
[junit4]   2> NOTE: reproduce with: ant test 
 -Dtestcase=TestIndexWriterThreadsToSegments 
 -Dtests.method=testManyThreadsClose -Dtests.seed=2C6E7E6D776A11E 
 -Dtests.slow=true -Dtests.locale=es_CL 
 -Dtests.timezone=America/Indiana/Winamac -Dtests.file.encoding=ISO-8859-1
[junit4] ERROR   0.58s J1 | 
 TestIndexWriterThreadsToSegments.testManyThreadsClose 
[junit4]    > Throwable #1: 
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=42, name=Thread-19, state=RUNNABLE, 
 group=TGRP-TestIndexWriterThreadsToSegments]
[junit4]    > at 
 __randomizedtesting.SeedInfo.seed([2C6E7E6D776A11E:307F2004AACBE9F8]:0)
[junit4]    > Caused by: java.lang.RuntimeException: 
 java.lang.NullPointerException
[junit4]    > at 
 __randomizedtesting.SeedInfo.seed([2C6E7E6D776A11E]:0)
[junit4]    > at 
 org.apache.lucene.index.TestIndexWriterThreadsToSegments$3.run(TestIndexWriterThreadsToSegments.java:258)
[junit4]    > Caused by: java.lang.NullPointerException
[junit4]    > at 
 org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:300)
[junit4]    > at 
 org.apache.lucene.index.DocumentsWriterFlushControl.obtainAndLock(DocumentsWriterFlushControl.java:473)
[junit4]    > at 
 

[jira] [Commented] (LUCENE-5881) Add test beaster

2014-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092594#comment-14092594
 ] 

Michael McCandless commented on LUCENE-5881:


This is awesome, thanks Uwe!

 Add test beaster
 

 Key: LUCENE-5881
 URL: https://issues.apache.org/jira/browse/LUCENE-5881
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build, general/test
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5881.patch, LUCENE-5881.patch, LUCENE-5881.patch


 On dev@lao we discussed integrating a test beaster directly into the build.
 This extra target in common-build.xml does the same as Mike's Python script, 
 using Groovy.
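 For anyone trying it out, invocation should look roughly like this (a sketch; 
 the beast target and beast.iters property names are my reading of the patch, 
 so check common-build.xml for the exact spelling):
 {code}
 ant beast -Dbeast.iters=10 -Dtestcase=TestIndexWriterThreadsToSegments
 {code}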



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6361) /admin/collection API reload action does not work. --Got timeout excpetion

2014-08-11 Thread Lewis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092608#comment-14092608
 ] 

Lewis Liu commented on SOLR-6361:
-

I just went over the source code. The backend should process the reload request 
and populate the processing result into this node, 
/overseer/collection-queue-work/qnr-68, whereas the final result is that 
this node gets deleted.


 /admin/collection API reload action does not work. --Got timeout excpetion
 

 Key: SOLR-6361
 URL: https://issues.apache.org/jira/browse/SOLR-6361
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.9
 Environment: Ubuntu 10.04+Jdk 1.7_55 
 One Zookeeper + Two shards
Reporter: Lewis Liu
Priority: Critical
  Labels: /admin/collections?action=reload
   Original Estimate: 120h
  Remaining Estimate: 120h

 I just updated the schema.xml and uploaded it into ZooKeeper. I want to make 
 all shards effective immediately, so I called the API 
 /solr/admin/collections?action=reload&name=collection4. After 3 minutes I got 
 an exception like this:
 org.apache.solr.common.SolrException: reloadcollection the collection time 
 out:180s
   at 
 org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:368)
   at 
 org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)
   at 
 I checked the log and found these entries:
 [2014-08-11 17:19:55,227] [main-SendThread(localhost:2181)] DEBUG 
 org.apache.zookeeper.ClientCnxn  - Got WatchedEvent state:SyncConnected 
 type:NodeDeleted path:/overseer/collection-queue-work/qnr-68 for 
 sessionid 0x147c387a9b3000b
 [2014-08-11 17:19:55,227] [main-EventThread] INFO  
 org.apache.solr.cloud.DistributedQueue  - LatchChildWatcher fired on path: 
 /overseer/collection-queue-work/qnr-68 state: SyncConnected type 
 NodeDeleted



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6361) /admin/collection API reload action does not work. --Got timeout excpetion

2014-08-11 Thread Lewis Liu (JIRA)
Lewis Liu created SOLR-6361:
---

 Summary: /admin/collection API reload action does not work. --Got 
timeout excpetion
 Key: SOLR-6361
 URL: https://issues.apache.org/jira/browse/SOLR-6361
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.9
 Environment: Ubuntu 10.04+Jdk 1.7_55 

One Zookeeper + Two shards

Reporter: Lewis Liu
Priority: Critical


I just updated the schema.xml and uploaded it into ZooKeeper. I want to make all 
shards effective immediately, so I called the API 
/solr/admin/collections?action=reload&name=collection4. After 3 minutes I got an 
exception like this:
org.apache.solr.common.SolrException: reloadcollection the collection time 
out:180s
at 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:368)
at 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)
at 



I checked the log and found these entries:

[2014-08-11 17:19:55,227] [main-SendThread(localhost:2181)] DEBUG 
org.apache.zookeeper.ClientCnxn  - Got WatchedEvent state:SyncConnected 
type:NodeDeleted path:/overseer/collection-queue-work/qnr-68 for 
sessionid 0x147c387a9b3000b
[2014-08-11 17:19:55,227] [main-EventThread] INFO  
org.apache.solr.cloud.DistributedQueue  - LatchChildWatcher fired on path: 
/overseer/collection-queue-work/qnr-68 state: SyncConnected type 
NodeDeleted




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6361) /admin/collection API reload action does not work. --Got timeout exception

2014-08-11 Thread Lewis Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lewis Liu updated SOLR-6361:


Summary: /admin/collection API reload action does not work. --Got timeout 
exception  (was: /admin/collection API reload action does not work. --Got timeout 
excpetion)

 /admin/collection API reload action does not work. --Got timeout exception
 

 Key: SOLR-6361
 URL: https://issues.apache.org/jira/browse/SOLR-6361
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.9
 Environment: Ubuntu 10.04+Jdk 1.7_55 
 One Zookeeper + Two shards
Reporter: Lewis Liu
Priority: Critical
  Labels: /admin/collections?action=reload
   Original Estimate: 120h
  Remaining Estimate: 120h

 I just updated the schema.xml and uploaded it into ZooKeeper. I want to make 
 all shards effective immediately, so I called the API 
 /solr/admin/collections?action=reload&name=collection4. After 3 minutes I got 
 an exception like this:
 org.apache.solr.common.SolrException: reloadcollection the collection time 
 out:180s
   at 
 org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:368)
   at 
 org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)
   at 
 I checked the log and found these entries:
 [2014-08-11 17:19:55,227] [main-SendThread(localhost:2181)] DEBUG 
 org.apache.zookeeper.ClientCnxn  - Got WatchedEvent state:SyncConnected 
 type:NodeDeleted path:/overseer/collection-queue-work/qnr-68 for 
 sessionid 0x147c387a9b3000b
 [2014-08-11 17:19:55,227] [main-EventThread] INFO  
 org.apache.solr.cloud.DistributedQueue  - LatchChildWatcher fired on path: 
 /overseer/collection-queue-work/qnr-68 state: SyncConnected type 
 NodeDeleted



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4796 - Still Failing

2014-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4796/

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta: 1) 
Thread[id=38, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta] at java.lang.Object.wait(Native 
Method) at java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta: 
   1) Thread[id=38, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([4995CC397B1EB902]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=38, 
name=Timer-0, state=WAITING, group=TGRP-TestSqlEntityProcessorDelta] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=38, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([4995CC397B1EB902]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSimplePropertiesWriter

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSimplePropertiesWriter: 1) 
Thread[id=168, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter] at java.lang.Object.wait(Native 
Method) at java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSimplePropertiesWriter: 
   1) Thread[id=168, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([4995CC397B1EB902]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSimplePropertiesWriter

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=168, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter] at java.lang.Object.wait(Native 
Method) at java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=168, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([4995CC397B1EB902]:0)




Build Log:
[...truncated 15053 lines...]
   [junit4] Suite: 
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta
   [junit4]   2> Creating dataDir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build/contrib/solr-dataimporthandler/test/J0/./temp/solr.handler.dataimport.TestSqlEntityProcessorDelta-4995CC397B1EB902-001/init-core-data-001
   [junit4]   2> 21861 T37 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2> 24248 T37 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2> 24249 T37 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 

[jira] [Commented] (LUCENE-4396) BooleanScorer should sometimes be used for MUST clauses

2014-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092619#comment-14092619
 ] 

Michael McCandless commented on LUCENE-4396:


Thanks Da, this looks like great progress.

bq. In this patch, I have fixed a bug of wrong coord counting.

Is it possible to make a test case showing what the bug was, and
that's fixed (and stays fixed)?

Also, do we have a test case that fails if DAAT and TAAT scoring
differs (as it does on trunk today)?  I know you worked hard /
iterated to get these two to produce precisely the same score, which
is awesome!  I want to make sure we don't regress in the future...

I'm a little worried about the heavy math (the matrix) used to
determine which scorer to apply, i.e. it's a little too magical when
you just come across it in the sources.  Can you add a comment to that
part in the code, linking to this issue and explaining the motivation
behind it?  It may also be over-tuned to Wikipedia... but then each of
these boolean scorers should do OK.

+1 to work on the javadocs / comments.  Make sure any now-done TODOs
are removed!

Can I commit TestBooleanUnevenly to trunk today?  Seems like there's
no reason to wait...

I'll run some perf tests on this patch too...
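
As a concrete sketch of such a consistency check (illustrative only, not from
the patch; 4.x-era APIs assumed): collected scores are compared against
explain(), which re-scores through a fresh Weight/Scorer, so a DAAT/TAAT
divergence would trip the assertion:
{code}
public void testScoreMatchesExplain() throws Exception {
  // tiny index with overlapping terms so MUST and SHOULD both contribute
  Directory dir = new RAMDirectory();
  IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(
      Version.LUCENE_4_9, new WhitespaceAnalyzer(Version.LUCENE_4_9)));
  for (String text : new String[] {"a b", "a c", "a b c"}) {
    Document doc = new Document();
    doc.add(new TextField("f", text, Field.Store.NO));
    w.addDocument(doc);
  }
  w.close();

  IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
  BooleanQuery q = new BooleanQuery();
  q.add(new TermQuery(new Term("f", "a")), BooleanClause.Occur.MUST);
  q.add(new TermQuery(new Term("f", "b")), BooleanClause.Occur.SHOULD);

  for (ScoreDoc hit : searcher.search(q, 10).scoreDocs) {
    // explain() builds a fresh Weight/Scorer, so any scorer-dependent
    // score difference shows up here
    float explained = searcher.explain(q, hit.doc).getValue();
    assertEquals(explained, hit.score, 1e-6f);
  }
}
{code}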


 BooleanScorer should sometimes be used for MUST clauses
 ---

 Key: LUCENE-4396
 URL: https://issues.apache.org/jira/browse/LUCENE-4396
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Attachments: And.tasks, And.tasks, AndOr.tasks, AndOr.tasks, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, SIZE.perf, all.perf, 
 luceneutil-score-equal.patch, luceneutil-score-equal.patch, merge.perf, 
 merge.png, perf.png, stat.cpp, stat.cpp, tasks.cpp


 Today we only use BooleanScorer if the query consists of SHOULD and MUST_NOT.
 If there is one or more MUST clauses we always use BooleanScorer2.
 But I suspect that unless the MUST clauses have very low hit count compared 
 to the other clauses, that BooleanScorer would perform better than 
 BooleanScorer2.  BooleanScorer still has some vestiges from when it used to 
 handle MUST so it shouldn't be hard to bring back this capability ... I think 
 the challenging part might be the heuristics on when to use which (likely we 
 would have to use firstDocID as proxy for total hit count).
 Likely we should also have BooleanScorer sometimes use .advance() on the subs 
 in this case, eg if suddenly the MUST clause skips 100 docs then you want 
 to .advance() all the SHOULD clauses.
 I won't have near term time to work on this so feel free to take it if you 
 are inspired!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4396) BooleanScorer should sometimes be used for MUST clauses

2014-08-11 Thread Da Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092678#comment-14092678
 ] 

Da Huang commented on LUCENE-4396:
--

Thanks for your suggestions, Mike!

{quote}
Is it possible to make a test case showing what the bug was, and
that's fixed (and stays fixed)?
{quote}
The current test cases can show the bug, if you uncomment this line:
{code}
//  scorerOrClass = BooleanArrayScorer.class;
{code}

{quote}
Also, do we have a test case that fails if DAAT and TAAT scoring
differs (as it does on trunk today)? 
{quote}
Negative. I'll add the test case to the next patch.

{quote}
Can you add a comment to that
part in the code, linking to this issue and explaining the motivation
behind it?
{quote}
Sure.

{quote}
Can I commit TestBooleanUnevenly to trunk today? Seems like there's
no reason to wait...
{quote}
Yes, sure.


 BooleanScorer should sometimes be used for MUST clauses
 ---

 Key: LUCENE-4396
 URL: https://issues.apache.org/jira/browse/LUCENE-4396
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Attachments: And.tasks, And.tasks, AndOr.tasks, AndOr.tasks, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, SIZE.perf, all.perf, 
 luceneutil-score-equal.patch, luceneutil-score-equal.patch, merge.perf, 
 merge.png, perf.png, stat.cpp, stat.cpp, tasks.cpp


 Today we only use BooleanScorer if the query consists of SHOULD and MUST_NOT.
 If there are one or more MUST clauses we always use BooleanScorer2.
 But I suspect that unless the MUST clauses have a very low hit count compared 
 to the other clauses, BooleanScorer would perform better than 
 BooleanScorer2.  BooleanScorer still has some vestiges from when it used to 
 handle MUST, so it shouldn't be hard to bring back this capability ... I think 
 the challenging part might be the heuristics on when to use which (likely we 
 would have to use firstDocID as a proxy for total hit count).
 Likely we should also have BooleanScorer sometimes use .advance() on the subs 
 in this case, e.g. if suddenly the MUST clause skips 100 docs then you want 
 to .advance() all the SHOULD clauses.
 I won't have near-term time to work on this, so feel free to take it if you 
 are inspired!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5656) Add autoAddReplicas feature for shared file systems.

2014-08-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092686#comment-14092686
 ] 

Mark Miller commented on SOLR-5656:
---

Didn't ping the issue itself, but I put up a new patch about a week and a half 
ago: https://reviews.apache.org/r/23371/

 Add autoAddReplicas feature for shared file systems.
 

 Key: SOLR-5656
 URL: https://issues.apache.org/jira/browse/SOLR-5656
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-5656.patch, SOLR-5656.patch, SOLR-5656.patch, 
 SOLR-5656.patch


 When using HDFS, the Overseer should have the ability to reassign the cores 
 from failed nodes to running nodes.
 Given that the index and transaction logs are in hdfs, it's simple for 
 surviving hardware to take over serving cores for failed hardware.
 There are some tricky issues around having the Overseer handle this for you, 
 but seems a simple first pass is not too difficult.
 This will add another alternative to replicating both with hdfs and solr.
 It shouldn't be specific to hdfs, and would be an option for any shared file 
 system Solr supports.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_65) - Build # 11000 - Still Failing!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11000/
Java: 64bit/jdk1.7.0_65 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSimplePropertiesWriter

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSimplePropertiesWriter: 1) 
Thread[id=17, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter] at java.lang.Object.wait(Native 
Method) at java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSimplePropertiesWriter: 
   1) Thread[id=17, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([F95A17B6F9A43C6]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSimplePropertiesWriter

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=17, 
name=Timer-0, state=WAITING, group=TGRP-TestSimplePropertiesWriter] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=17, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([F95A17B6F9A43C6]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta: 1) 
Thread[id=37, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta] at java.lang.Object.wait(Native 
Method) at java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta: 
   1) Thread[id=37, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([F95A17B6F9A43C6]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=37, 
name=Timer-0, state=WAITING, group=TGRP-TestSqlEntityProcessorDelta] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=37, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([F95A17B6F9A43C6]:0)




Build Log:
[...truncated 14985 lines...]
   [junit4] Suite: org.apache.solr.handler.dataimport.TestSimplePropertiesWriter
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/contrib/solr-dataimporthandler/test/J1/./temp/solr.handler.dataimport.TestSimplePropertiesWriter-F95A17B6F9A43C6-001/init-core-data-001
   [junit4]   2 1446 T16 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (true)
   [junit4]   2 2315 T16 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 2369 T16 oasc.SolrResourceLoader.init new SolrResourceLoader 
for directory: 

[jira] [Commented] (LUCENE-5875) Default page/block sizes in the FST package can cause OOMs

2014-08-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092709#comment-14092709
 ] 

Karl Wright commented on LUCENE-5875:
-

Hi Christian,

Doesn't look like anyone has commented yet on this ticket.  Would you be 
willing to attach a patch, to demonstrate what you would like to change?

Thanks!

 Default page/block sizes in the FST package can cause OOMs
 --

 Key: LUCENE-5875
 URL: https://issues.apache.org/jira/browse/LUCENE-5875
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.9
Reporter: Christian Ziech
Priority: Minor

 We are building some fairly big FSTs (the biggest one having about 500M terms 
 with an average of 20 characters per term) and that works very well so far.
 The problem is just that we can use neither the doShareSuffix nor the 
 doPackFST option from the builder, since both would cause us to get 
 exceptions: one being an OOM and the other an IllegalArgumentException for a 
 negative array size in ArrayUtil.
 The thing here is that in theory we still have far more than enough memory 
 available, but it seems that Java for some reason cannot allocate byte or long 
 arrays of the size the NodeHash needs (maybe fragmentation?).
 Reducing the constant in the NodeHash from 1<<30 to e.g. 1<<27 seems to fix the 
 issue mostly. Could e.g. the Builder pass through its bytesPageBits to the 
 NodeHash, or could we get a custom parameter for that?
 The other problem we ran into was a NegativeArraySizeException when we tried to 
 pack the FST. It seems that we overflowed to 0x8000. Unfortunately I 
 accidentally overwrote that exception, but I remember it was triggered by the 
 GrowableWriter for the inCounts in line 728 of the FST. If it helps I can try 
 to reproduce it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2357) Thread Local memory leaks on restart

2014-08-11 Thread Jaimin Patel (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092748#comment-14092748
 ] 

Jaimin Patel commented on SOLR-2357:


I am running into the same issue. Is there any solution to the problem?

It is randomly causing Tomcat to shut down.

 Thread Local memory leaks on restart
 

 Key: SOLR-2357
 URL: https://issues.apache.org/jira/browse/SOLR-2357
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction), search
Affects Versions: 1.4.1
 Environment: Windows Server 2008, Apache Tomcat 7.0.8, Java 1.6.23
Reporter: Gus Heck
  Labels: memory_leak, threadlocal

 Restarting solr (via a change to a watched resource or via the manager app, for 
 example) after submitting documents with Solr-Cell gives the following 
 message (many many times), and causes Tomcat to shut down completely.
 SEVERE: The web application [/solr] created a ThreadLocal with key of type 
 [org.apache.solr.common.util.DateUtil.ThreadLocalDateFormat] (value 
 [org.apache.solr.common.util.DateUtil$ThreadLocalDateFormat@dc30dfa]) and a 
 value of type [java.text.SimpleDateFormat] (value 
 [java.text.SimpleDateFormat@5af7aed5]) but failed to remove it when the web 
 application was stopped. Threads are going to be renewed over time to try and 
 avoid a probable memory leak.
 Feb 10, 2011 7:17:53 AM org.apache.catalina.loader.WebappClassLoader 
 checkThreadLocalMapForLeaks
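
A hedged sketch of the standard mitigation (not part of this report): register a ServletContextListener that clears such ThreadLocals when the webapp stops. The listener and the ThreadLocal stand-in below are hypothetical, not Solr's actual classes:
{code}
import java.text.SimpleDateFormat;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class ThreadLocalCleanupListener implements ServletContextListener {
  // Hypothetical stand-in for DateUtil.ThreadLocalDateFormat.
  static final ThreadLocal<SimpleDateFormat> FORMAT =
      new ThreadLocal<SimpleDateFormat>() {
        @Override protected SimpleDateFormat initialValue() {
          return new SimpleDateFormat("yyyy-MM-dd");
        }
      };

  public void contextInitialized(ServletContextEvent e) {}

  public void contextDestroyed(ServletContextEvent e) {
    // Clears only the calling thread's value; threads the container
    // manages still hold theirs, which is why Tomcat falls back to
    // renewing threads over time.
    FORMAT.remove();
  }
}
{code}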



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4396) BooleanScorer should sometimes be used for MUST clauses

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092763#comment-14092763
 ] 

ASF subversion and git services commented on LUCENE-4396:
-

Commit 1617284 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1617284 ]

LUCENE-4396: add test case

 BooleanScorer should sometimes be used for MUST clauses
 ---

 Key: LUCENE-4396
 URL: https://issues.apache.org/jira/browse/LUCENE-4396
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Attachments: And.tasks, And.tasks, AndOr.tasks, AndOr.tasks, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, SIZE.perf, all.perf, 
 luceneutil-score-equal.patch, luceneutil-score-equal.patch, merge.perf, 
 merge.png, perf.png, stat.cpp, stat.cpp, tasks.cpp


 Today we only use BooleanScorer if the query consists of SHOULD and MUST_NOT.
 If there are one or more MUST clauses we always use BooleanScorer2.
 But I suspect that unless the MUST clauses have a very low hit count compared 
 to the other clauses, BooleanScorer would perform better than 
 BooleanScorer2.  BooleanScorer still has some vestiges from when it used to 
 handle MUST, so it shouldn't be hard to bring back this capability ... I think 
 the challenging part might be the heuristics on when to use which (likely we 
 would have to use firstDocID as a proxy for total hit count).
 Likely we should also have BooleanScorer sometimes use .advance() on the subs 
 in this case, e.g. if suddenly the MUST clause skips 100 docs then you want 
 to .advance() all the SHOULD clauses.
 I won't have near-term time to work on this, so feel free to take it if you 
 are inspired!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4396) BooleanScorer should sometimes be used for MUST clauses

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092764#comment-14092764
 ] 

ASF subversion and git services commented on LUCENE-4396:
-

Commit 1617285 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1617285 ]

LUCENE-4396: add test case

 BooleanScorer should sometimes be used for MUST clauses
 ---

 Key: LUCENE-4396
 URL: https://issues.apache.org/jira/browse/LUCENE-4396
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Attachments: And.tasks, And.tasks, AndOr.tasks, AndOr.tasks, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, 
 LUCENE-4396.patch, LUCENE-4396.patch, LUCENE-4396.patch, SIZE.perf, all.perf, 
 luceneutil-score-equal.patch, luceneutil-score-equal.patch, merge.perf, 
 merge.png, perf.png, stat.cpp, stat.cpp, tasks.cpp


 Today we only use BooleanScorer if the query consists of SHOULD and MUST_NOT.
 If there are one or more MUST clauses we always use BooleanScorer2.
 But I suspect that unless the MUST clauses have a very low hit count compared 
 to the other clauses, BooleanScorer would perform better than 
 BooleanScorer2.  BooleanScorer still has some vestiges from when it used to 
 handle MUST, so it shouldn't be hard to bring back this capability ... I think 
 the challenging part might be the heuristics on when to use which (likely we 
 would have to use firstDocID as a proxy for total hit count).
 Likely we should also have BooleanScorer sometimes use .advance() on the subs 
 in this case, e.g. if suddenly the MUST clause skips 100 docs then you want 
 to .advance() all the SHOULD clauses.
 I won't have near-term time to work on this, so feel free to take it if you 
 are inspired!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6304) JsonLoader should be able to flatten an input JSON to multiple docs

2014-08-11 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6304:
-

Summary: JsonLoader should be able to flatten an input JSON to multiple 
docs  (was: Add a way to flatten an input JSON to multiple docs)

 JsonLoader should be able to flatten an input JSON to multiple docs
 ---

 Key: SOLR-6304
 URL: https://issues.apache.org/jira/browse/SOLR-6304
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6304.patch, SOLR-6304.patch


 example
 {noformat}
 curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
 {
   "id": "0001",
   "type": "donut",
   "name": "Cake",
   "ppu": 0.55,
   "batters": {
     "batter": [
       { "id": "1001", "type": "Regular" },
       { "id": "1002", "type": "Chocolate" },
       { "id": "1003", "type": "Blueberry" },
       { "id": "1004", "type": "Devil's Food" }
     ]
   }
 }'
 {noformat}
 should produce the following output docs
 {noformat}
 { "recipeId":"001", "recipeType":"donut", "id":"1001", "type":"Regular" }
 { "recipeId":"001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
 { "recipeId":"001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
 { "recipeId":"001", "recipeType":"donut", "id":"1004", "type":"Devil's food" }
 {noformat}
 The split param is the path of the element at which the input should be split 
 into multiple docs. The 'f' params are field name mappings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6304) JsonLoader should be able to flatten an input JSON to multiple docs

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092791#comment-14092791
 ] 

ASF subversion and git services commented on SOLR-6304:
---

Commit 1617287 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1617287 ]

SOLR-6304 JsonLoader should be able to flatten an input JSON to multiple docs

 JsonLoader should be able to flatten an input JSON to multiple docs
 ---

 Key: SOLR-6304
 URL: https://issues.apache.org/jira/browse/SOLR-6304
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6304.patch, SOLR-6304.patch


 example
 {noformat}
 curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
 {
   "id": "0001",
   "type": "donut",
   "name": "Cake",
   "ppu": 0.55,
   "batters": {
     "batter": [
       { "id": "1001", "type": "Regular" },
       { "id": "1002", "type": "Chocolate" },
       { "id": "1003", "type": "Blueberry" },
       { "id": "1004", "type": "Devil's Food" }
     ]
   }
 }'
 {noformat}
 should produce the following output docs
 {noformat}
 { "recipeId":"001", "recipeType":"donut", "id":"1001", "type":"Regular" }
 { "recipeId":"001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
 { "recipeId":"001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
 { "recipeId":"001", "recipeType":"donut", "id":"1004", "type":"Devil's food" }
 {noformat}
 The split param is the path of the element at which the input should be split 
 into multiple docs. The 'f' params are field name mappings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6362) TestSqlEntityProcessorDelta test bug

2014-08-11 Thread James Dyer (JIRA)
James Dyer created SOLR-6362:


 Summary: TestSqlEntityProcessorDelta test bug
 Key: SOLR-6362
 URL: https://issues.apache.org/jira/browse/SOLR-6362
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.9
Reporter: James Dyer
Assignee: James Dyer
Priority: Trivial
 Fix For: 5.0, 4.10


I stumbled on a case where TestSqlEntityProcessorDelta#testChildEntities will 
fail.  Test bug.

-Dtests.seed=B387DA5FC73441ED



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6362) TestSqlEntityProcessorDelta test bug

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092800#comment-14092800
 ] 

ASF subversion and git services commented on SOLR-6362:
---

Commit 1617289 from jd...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1617289 ]

SOLR-6362: fix test bug

 TestSqlEntityProcessorDelta test bug
 

 Key: SOLR-6362
 URL: https://issues.apache.org/jira/browse/SOLR-6362
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.9
Reporter: James Dyer
Assignee: James Dyer
Priority: Trivial
 Fix For: 5.0, 4.10


 I stumbled on a case where TestSqlEntityProcessorDelta#testChildEntities will 
 fail.  Test bug.
 -Dtests.seed=B387DA5FC73441ED



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5875) Default page/block sizes in the FST package can cause OOMs

2014-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092802#comment-14092802
 ] 

Michael McCandless commented on LUCENE-5875:


Hmm, we could simply decrease the PagedGrowableWriter page size from 1<<30 (1B 
values in each packed array) to 1<<27 (128M values)?  Asking for a single 
contiguous packed array with 1B values and a highish bpv can easily be a lot of 
RAM (8 GB in the worst case).

One thing to try is enabling doShareSuffix, but then try setting 
doShareNonSingletonNodes to false; this should be a big reduction in the RAM 
required, while making the resulting FST a bit larger than minimal.  If that's 
still too much RAM, try decreasing shareMaxTailLength from Integer.MAX_VALUE to 
smallish numbers, e.g. maybe 10 or 5 or 4 or so.  As that number gets smaller, 
the RAM required to build will decrease, and the FST will grow in size.

On packing, it looks like the FST code cannot handle > 2.1B nodes when packing 
is enabled, but this looks like something we could fix (it was just skipped 
when we did LUCENE-3298).  However, you should have hit IllegalStateException, 
not NegativeArraySizeException.  Oh, actually, I suspect this was due to 
LUCENE-5844, which will be fixed in 4.10, at which point you really should hit 
IllegalStateException.  The thing is, even if we fix packing to allow > 2.1B 
nodes, packing is additionally RAM intensive (i.e., adds to the RAM required 
for normal FST building) ... and I'm not sure how much shrinkage packing 
actually buys these days (we've made some improvements to the unpacked format). 
 Do you have any numbers from your large FSTs?
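
To make those knobs concrete, here is a minimal sketch of building an FST with the suggested settings, assuming the twelve-argument Lucene 4.9 Builder constructor (minSuffixCount1/2, doShareSuffix, doShareNonSingletonNodes, shareMaxTailLength, outputs, freezeTail, doPackFST, acceptableOverheadRatio, allowArrayArcs, bytesPageBits); the input terms are invented:
{code}
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IntsRef;
import org.apache.lucene.util.fst.Builder;
import org.apache.lucene.util.fst.FST;
import org.apache.lucene.util.fst.NoOutputs;
import org.apache.lucene.util.fst.Util;
import org.apache.lucene.util.packed.PackedInts;

public class FstBuildSketch {
  public static void main(String[] args) throws Exception {
    NoOutputs outputs = NoOutputs.getSingleton();
    // doShareSuffix=true + doShareNonSingletonNodes=false +
    // shareMaxTailLength=10: trades a slightly larger FST for much
    // less build RAM, per the comment above.
    Builder<Object> builder = new Builder<Object>(
        FST.INPUT_TYPE.BYTE1,
        0, 0,                // minSuffixCount1, minSuffixCount2
        true,                // doShareSuffix
        false,               // doShareNonSingletonNodes
        10,                  // shareMaxTailLength
        outputs,
        null,                // freezeTail (default behavior)
        false,               // doPackFST
        PackedInts.COMPACT,  // acceptableOverheadRatio
        true,                // allowArrayArcs
        15);                 // bytesPageBits
    IntsRef scratch = new IntsRef();
    // Terms must be added in sorted order.
    for (String term : new String[] {"cat", "cats", "dog"}) {
      builder.add(Util.toIntsRef(new BytesRef(term), scratch),
                  outputs.getNoOutput());
    }
    FST<Object> fst = builder.finish();
    System.out.println("FST size: " + fst.sizeInBytes() + " bytes");
  }
}
{code}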

 Default page/block sizes in the FST package can cause OOMs
 --

 Key: LUCENE-5875
 URL: https://issues.apache.org/jira/browse/LUCENE-5875
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.9
Reporter: Christian Ziech
Priority: Minor

 We are building some fairly big FSTs (the biggest one having about 500M terms 
 with an average of 20 characters per term) and that works very well so far.
 The problem is just that we can use neither the doShareSuffix nor the 
 doPackFST option from the builder since both would cause us to get 
 exceptions. One beeing an OOM and the other an IllegalArgumentException for a 
 negative array size in ArrayUtil.
 The thing here is that we in theory still have far more than enough memory 
 available but it seems that java for some reason cannot allocate byte or long 
 arrays of the size the NodeHash needs (maybe fragmentation?).
 Reducing the constant in the NodeHash from 130 to e.g. 27 seems to fix the 
 issue mostly. Could e.g. the Builder pass through its bytesPageBits to the 
 NodeHash or could we get a custom parameter for that?
 The other problem we run into was a NegativeArraySizeException when we try to 
 pack the FST. It seems that we overflowed to 0x8000. Unfortunately I 
 accidentally overwrote that exception but I remember it was triggered by the 
 GrowableWriter for the inCounts in line 728 of the FST. If it helps I can try 
 to reproduce it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-08-11 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5879:
---

Attachment: LUCENE-5879.patch

Another iteration ... still tons of nocommits.  I added 
Automata.makeBinaryInterval, so you can easily make an automaton that matches a 
min / max term range.  I plan to switch the read-time API to use a new 
CompiledAutomaton.AUTOMATON_TYPE.RANGE, and fix e.g. PrefixQuery (and 
eventually maybe the infix suggester, and numeric range query) to use this API.
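
As a hedged sketch of how the read-time side might look, assuming Automata.makeBinaryInterval takes (min, minInclusive, max, maxInclusive) as described above (everything here is subject to the remaining nocommits):
{code}
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;

public class RangeAutomatonSketch {
  public static void main(String[] args) {
    // Accepts every binary term t with "foo" <= t < "fop", i.e. roughly
    // the range of terms a PrefixQuery for "foo" would visit.
    Automaton a = Automata.makeBinaryInterval(
        new BytesRef("foo"), true,   // min, inclusive
        new BytesRef("fop"), false); // max, exclusive
    System.out.println("states: " + a.getNumStates());
  }
}
{code}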

 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch


 This cool idea to generalize numeric/trie fields came from Adrien:
 Today, when we index a numeric field (LongField, etc.) we pre-compute
 (via NumericTokenStream) outside of indexer/codec which prefix terms
 should be indexed.
 But this can be inefficient: you set a static precisionStep, and
 always add those prefix terms regardless of how the terms in the field
 are actually distributed.  Yet typically in real world applications
 the terms have a non-random distribution.
 So, it should be better if instead the terms dict decides where it
 makes sense to insert prefix terms, based on how dense the terms are
 in each region of term space.
 This way we can speed up query time for both term (e.g. infix
 suggester) and numeric ranges, and it should let us use less index
 space and get faster range queries.
  
 This would also mean that min/maxTerm for a numeric field would now be
 correct, vs today where the externally computed prefix terms are
 placed after the full precision terms, causing hairy code like
 NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become 
 feasible.
 The terms dict can also do tricks not possible if you must live on top
 of its APIs, e.g. to handle the adversary/over-constrained case when a
 given prefix has too many terms following it but finer prefixes
 have too few (what block tree calls floor term blocks).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6362) TestSqlEntityProcessorDelta test bug

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092811#comment-14092811
 ] 

ASF subversion and git services commented on SOLR-6362:
---

Commit 1617294 from jd...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1617294 ]

SOLR-6362: fix test bug

 TestSqlEntityProcessorDelta test bug
 

 Key: SOLR-6362
 URL: https://issues.apache.org/jira/browse/SOLR-6362
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.9
Reporter: James Dyer
Assignee: James Dyer
Priority: Trivial
 Fix For: 5.0, 4.10


 I stumbled on a case where TestSqlEntityProcessorDelta#testChildEntities will 
 fail.  Test bug.
 -Dtests.seed=B387DA5FC73441ED



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6304) JsonLoader should be able to flatten an input JSON to multiple docs

2014-08-11 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092818#comment-14092818
 ] 

Erik Hatcher commented on SOLR-6304:


bq. I want all paths to support it. That is why I did not use a prefix. Why 
can't CSV do it too?

Ok, cool.  As for CSV, the echo feature is for when an incoming payload is 
split into multiple documents, right?  So it doesn't have quite the same 
value/effect that it does for this flattening of JSON and XML.

 JsonLoader should be able to flatten an input JSON to multiple docs
 ---

 Key: SOLR-6304
 URL: https://issues.apache.org/jira/browse/SOLR-6304
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.10

 Attachments: SOLR-6304.patch, SOLR-6304.patch


 example
 {noformat}
 curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
 {
   "id": "0001",
   "type": "donut",
   "name": "Cake",
   "ppu": 0.55,
   "batters": {
     "batter": [
       { "id": "1001", "type": "Regular" },
       { "id": "1002", "type": "Chocolate" },
       { "id": "1003", "type": "Blueberry" },
       { "id": "1004", "type": "Devil's Food" }
     ]
   }
 }'
 {noformat}
 should produce the following output docs
 {noformat}
 { "recipeId":"001", "recipeType":"donut", "id":"1001", "type":"Regular" }
 { "recipeId":"001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
 { "recipeId":"001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
 { "recipeId":"001", "recipeType":"donut", "id":"1004", "type":"Devil's food" }
 {noformat}
 The split param is the path of the element at which the input should be split 
 into multiple docs. The 'f' params are field name mappings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6344) Wild card support for JSON parsing

2014-08-11 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-6344.
--

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

This is resolved as part of SOLR-6304.

 Wild card support for JSON parsing
 --

 Key: SOLR-6344
 URL: https://issues.apache.org/jira/browse/SOLR-6344
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.10


 additional syntaxes
 * f=/path/to/node/* : map all fields in that path to respective keys
 * f=/path/to/node/** : map all fields recursively in that path to respective keys
 * f=dest:/path/to/node/* : map all the fields to a field called 'dest'
 * f=dest:/path/to/node/** : map all the fields recursively to a field called 'dest'
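
A hypothetical invocation combining the split param with the recursive wildcard (the path is invented for illustration):
{noformat}
curl 'localhost:8983/update/json/docs?split=/batters/batter&f=/batters/batter/**'
{noformat}
This would map every field under each split-out batter object, recursively, to keys of the same names.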



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6304) JsonLoader should be able to flatten an input JSON to multiple docs

2014-08-11 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-6304.
--

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

 JsonLoader should be able to flatten an input JSON to multiple docs
 ---

 Key: SOLR-6304
 URL: https://issues.apache.org/jira/browse/SOLR-6304
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.10

 Attachments: SOLR-6304.patch, SOLR-6304.patch


 example
 {noformat}
 curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
 {
   "id": "0001",
   "type": "donut",
   "name": "Cake",
   "ppu": 0.55,
   "batters": {
     "batter": [
       { "id": "1001", "type": "Regular" },
       { "id": "1002", "type": "Chocolate" },
       { "id": "1003", "type": "Blueberry" },
       { "id": "1004", "type": "Devil's Food" }
     ]
   }
 }'
 {noformat}
 should produce the following output docs
 {noformat}
 { "recipeId":"001", "recipeType":"donut", "id":"1001", "type":"Regular" }
 { "recipeId":"001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
 { "recipeId":"001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
 { "recipeId":"001", "recipeType":"donut", "id":"1004", "type":"Devil's food" }
 {noformat}
 The split param is the path of the element at which the input should be split 
 into multiple docs. The 'f' params are field name mappings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6304) JsonLoader should be able to flatten an input JSON to multiple docs

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092819#comment-14092819
 ] 

ASF subversion and git services commented on SOLR-6304:
---

Commit 1617296 from [~noble.paul] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1617296 ]

SOLR-6304 JsonLoader should be able to flatten an input JSON to multiple docs

 JsonLoader should be able to flatten an input JSON to multiple docs
 ---

 Key: SOLR-6304
 URL: https://issues.apache.org/jira/browse/SOLR-6304
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.10

 Attachments: SOLR-6304.patch, SOLR-6304.patch


 example
 {noformat}
 curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
 {
   "id": "0001",
   "type": "donut",
   "name": "Cake",
   "ppu": 0.55,
   "batters": {
     "batter": [
       { "id": "1001", "type": "Regular" },
       { "id": "1002", "type": "Chocolate" },
       { "id": "1003", "type": "Blueberry" },
       { "id": "1004", "type": "Devil's Food" }
     ]
   }
 }'
 {noformat}
 should produce the following output docs
 {noformat}
 { "recipeId":"001", "recipeType":"donut", "id":"1001", "type":"Regular" }
 { "recipeId":"001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
 { "recipeId":"001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
 { "recipeId":"001", "recipeType":"donut", "id":"1004", "type":"Devil's food" }
 {noformat}
 The split param is the path of the element at which the input should be split 
 into multiple docs. The 'f' params are field name mappings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5875) Default page/block sizes in the FST package can cause OOMs

2014-08-11 Thread Christian Ziech (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092849#comment-14092849
 ] 

Christian Ziech commented on LUCENE-5875:
-

Exactly - when the PagedGrowableWriter in the NodeHash used 1<<27 bytes for a 
page, things worked like a charm (with maxint as suffix length, 
doShareNonSingletonNodes set to true and both of the min suffix counts set to 
0).

What numbers are you interested in? With doShareSuffix enabled the FST takes 
3.1 GB of disk space. I quickly fetched the following numbers:
- arcCount: 561802889
- nodeCount: 291569846
- arcWithOutputCount: 201469018

While in theory the nodeCount should hence be lower than 2.1B, I think we also 
got an exception when enabling packing. But I'm not sure if we tried it in 
conjunction with doShareSuffix. 

 Default page/block sizes in the FST package can cause OOMs
 --

 Key: LUCENE-5875
 URL: https://issues.apache.org/jira/browse/LUCENE-5875
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.9
Reporter: Christian Ziech
Priority: Minor

 We are building some fairly big FSTs (the biggest one having about 500M terms 
 with an average of 20 characters per term) and that works very well so far.
 The problem is just that we can use neither the doShareSuffix nor the 
 doPackFST option from the builder, since both would cause us to get 
 exceptions: one being an OOM and the other an IllegalArgumentException for a 
 negative array size in ArrayUtil.
 The thing here is that in theory we still have far more than enough memory 
 available, but it seems that Java for some reason cannot allocate byte or long 
 arrays of the size the NodeHash needs (maybe fragmentation?).
 Reducing the constant in the NodeHash from 1<<30 to e.g. 1<<27 seems to fix the 
 issue mostly. Could e.g. the Builder pass through its bytesPageBits to the 
 NodeHash, or could we get a custom parameter for that?
 The other problem we ran into was a NegativeArraySizeException when we tried to 
 pack the FST. It seems that we overflowed to 0x8000. Unfortunately I 
 accidentally overwrote that exception, but I remember it was triggered by the 
 GrowableWriter for the inCounts in line 728 of the FST. If it helps I can try 
 to reproduce it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6361) /admin/collection API reload action is not work. --Got timeout exception

2014-08-11 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-6361.


Resolution: Invalid

That node is pretty much used for internal communication and not for external 
visibility. It's used for marking and consuming the response internally.
That 3-minute TIMEOUT is the HTTP timeout, and even though the request said it 
timed out (the parent request did), the internal requests would most likely have 
gone through. That's the case with a lot of long-running Collection API calls.
I'd recommend you ask such questions on the mailing list before creating a 
JIRA.  I'm closing this one out for now.

P.S.: For long-running collection API calls, check out the ASYNC mode of the 
calls. At the same time, I'm not sure if ASYNC support for RELOAD exists at 
this time. If it doesn't, feel free to create a JIRA for that one.
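
For reference, a hedged sketch of the ASYNC pattern mentioned, shown with SPLITSHARD (where async support is known to exist in 4.x); the request id is arbitrary:
{noformat}
curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection4&shard=shard1&async=1000'
curl 'http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1000'
{noformat}
The first call returns immediately; the second polls for the stored status of request 1000.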

 /admin/collection API reload action is not work. --Got timeout exception
 

 Key: SOLR-6361
 URL: https://issues.apache.org/jira/browse/SOLR-6361
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.9
 Environment: Ubuntu 10.04+Jdk 1.7_55 
 One Zookeeper + Two shards
Reporter: Lewis Liu
Priority: Critical
  Labels: /admin/collections?action=reload
   Original Estimate: 120h
  Remaining Estimate: 120h

 I just updated the schema.xml and uploaded it into ZooKeeper. I want to make all 
 shards effective immediately, so I called the API 
 /solr/admin/collections?action=reload&name=collection4.  After 3 minutes I got 
 an exception like this:
 org.apache.solr.common.SolrException: reloadcollection the collection time 
 out:180s
   at 
 org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:368)
   at 
 org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:320)
   at 
 I checked the log and found these entries:
 [2014-08-11 17:19:55,227] [main-SendThread(localhost:2181)] DEBUG 
 org.apache.zookeeper.ClientCnxn  - Got WatchedEvent state:SyncConnected 
 type:NodeDeleted path:/overseer/collection-queue-work/qnr-68 for 
 sessionid 0x147c387a9b3000b
 [2014-08-11 17:19:55,227] [main-EventThread] INFO  
 org.apache.solr.cloud.DistributedQueue  - LatchChildWatcher fired on path: 
 /overseer/collection-queue-work/qnr-68 state: SyncConnected type 
 NodeDeleted



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3274) ZooKeeper related SolrCloud problems

2014-08-11 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092864#comment-14092864
 ] 

Per Steffensen commented on SOLR-3274:
--

bq. That's simply impossible for all 3 zookeeper instances to get offline 
simultaneously. 

Well you never know

bq. Since there's always at least 1 stable ZK node this seems like a 
communication/reliability bug in Solr.

In a 3-node ZK-cluster you need at least 2 healthy ZK-nodes connected with each 
other for the cluster to be operational. A majority of the nodes always needs to 
agree for an operation to be carried out - this way you know that at any time 
only one set of ZK-nodes in a ZK-cluster can successfully carry out operations - 
e.g. when there is no network connection between two sets of ZK-nodes (but 
connections internally between the nodes in each set are ok), only one set can 
contain a majority of the total number of ZK-nodes in the cluster.
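
A quick worked example of that majority rule (plain arithmetic, not Solr/ZK code):
{code}
public class QuorumMath {
  public static void main(String[] args) {
    for (int ensemble : new int[] {3, 4, 5}) {
      int quorum = ensemble / 2 + 1;      // smallest majority
      int tolerated = ensemble - quorum;  // failures survivable
      System.out.println(ensemble + " ZK nodes: quorum=" + quorum
          + ", tolerates " + tolerated + " failure(s)");
    }
  }
}
{code}
Note that a 4-node ensemble still tolerates only one failure, the same as a 3-node ensemble.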

 ZooKeeper related SolrCloud problems
 

 Key: SOLR-3274
 URL: https://issues.apache.org/jira/browse/SOLR-3274
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Any
Reporter: Per Steffensen

 Same setup as in SOLR-3273. Well if I have to tell the entire truth we have 7 
 Solr servers, running 28 slices of the same collection (collA) - all slices 
 have one replica (two shards all in all - leader + replica) - 56 cores all in 
 all (8 shards on each solr instance). But anyways...
 Besides the problem reported in SOLR-3273, the system seems to run fine under 
 high load for several hours, but eventually errors like the ones shown below 
 start to occur. I might be wrong, but they all seem to indicate some kind of 
 instability in the collaboration between Solr and ZooKeeper. I have to say 
 that I haven't been there to check ZooKeeper at the moment where those 
 exceptions occur, but basically I don't believe the exceptions occur because 
 ZooKeeper is not running stably - at least when I go and check ZooKeeper 
 through other channels (e.g. my Eclipse ZK plugin) it is always accepting 
 my connection and generally seems to be doing fine.
 Exception 1) Often the first error we see in solr.log is something like this
 {code}
 Mar 22, 2012 5:06:43 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Cannot talk to ZooKeeper - 
 Updates are disabled.
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.zkCheck(DistributedUpdateProcessor.java:678)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:250)
 at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:140)
 at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:80)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1540)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:407)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:256)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at 
 org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 at 
 org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at 
 org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
 at 
 org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
 at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 {code}
 I believe this error basically occurs because 

[jira] [Commented] (SOLR-3274) ZooKeeper related SolrCloud problems

2014-08-11 Thread Alexander S. (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092884#comment-14092884
 ] 

Alexander S. commented on SOLR-3274:


Hi, thanks for the response.

bq. Well you never know
I've checked the nodes' status; the 3rd node was online all the time and there 
was no load on it.

bq. In a 3-node ZK-cluster you need at least 2 healthy ZK-nodes connected with 
each other for the cluster to be operational.
That should be the problem, since the 2 other ZK instances might (theoretically) 
be unavailable because of heavy load (they share the same nodes with the Solr 
instances). Both nodes have 16 CPU cores, 48 GB of memory and RAID 10 (SSD); I 
thought it would be hard to hit performance issues there. Anyway, adding a 
separate node with a 4th ZooKeeper instance might help, right?

 ZooKeeper related SolrCloud problems
 

 Key: SOLR-3274
 URL: https://issues.apache.org/jira/browse/SOLR-3274
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Any
Reporter: Per Steffensen

 Same setup as in SOLR-3273. Well if I have to tell the entire truth we have 7 
 Solr servers, running 28 slices of the same collection (collA) - all slices 
 have one replica (two shards all in all - leader + replica) - 56 cores all in 
 all (8 shards on each solr instance). But anyways...
 Besides the problem reported in SOLR-3273, the system seems to run fine under 
 high load for several hours, but eventually errors like the ones shown below 
 start to occur. I might be wrong, but they all seem to indicate some kind of 
 instability in the collaboration between Solr and ZooKeeper. I have to say 
 that I haven't been there to check ZooKeeper at the moment where those 
 exceptions occur, but basically I don't believe the exceptions occur because 
 ZooKeeper is not running stably - at least when I go and check ZooKeeper 
 through other channels (e.g. my Eclipse ZK plugin) it is always accepting 
 my connection and generally seems to be doing fine.
 Exception 1) Often the first error we see in solr.log is something like this
 {code}
 Mar 22, 2012 5:06:43 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Cannot talk to ZooKeeper - 
 Updates are disabled.
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.zkCheck(DistributedUpdateProcessor.java:678)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:250)
 at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:140)
 at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:80)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1540)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:407)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:256)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at 
 org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 at 
 org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at 
 org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
 at 
 org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
 at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 {code}
 I believe this error basically occurs because SolrZkClient.isConnected 
 reports false, which means that its internal keeper.getState does not 
 return ZooKeeper.States.CONNECTED. 

[jira] [Commented] (LUCENE-5875) Default page/block sizes in the FST package can cause OOMs

2014-08-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092890#comment-14092890
 ] 

Michael McCandless commented on LUCENE-5875:


OK, I'll drop the constant to 1<<27.

bq. What numbers are you interested in?

I was just wondering what reduction in FST size you see from packing (when you 
can get it to succeed...), i.e. whether it's really worth investing in fixing 
packing to handle big FSTs.

bq. While in theory the nodeCount should hence be lower than 2.1B I think we 
also got an exception when enabling packing.

Hmm, something else is wrong then ... or was this just an OOME?  If not, can 
you reproduce the non-OOME when turning on packing despite node count being 
well below 2.1B?

 Default page/block sizes in the FST package can cause OOMs
 --

 Key: LUCENE-5875
 URL: https://issues.apache.org/jira/browse/LUCENE-5875
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.9
Reporter: Christian Ziech
Priority: Minor

 We are building some fairly big FSTs (the biggest one having about 500M terms 
 with an average of 20 characters per term) and that works very well so far.
 The problem is just that we can use neither the doShareSuffix nor the 
 doPackFST option from the builder, since both would cause us to get 
 exceptions: one being an OOM and the other an IllegalArgumentException for a 
 negative array size in ArrayUtil.
 The thing here is that in theory we still have far more than enough memory 
 available, but it seems that Java for some reason cannot allocate byte or long 
 arrays of the size the NodeHash needs (maybe fragmentation?).
 Reducing the constant in the NodeHash from 1<<30 to e.g. 1<<27 seems to fix the 
 issue mostly. Could e.g. the Builder pass through its bytesPageBits to the 
 NodeHash, or could we get a custom parameter for that?
 The other problem we ran into was a NegativeArraySizeException when we tried to 
 pack the FST. It seems that we overflowed to 0x8000. Unfortunately I 
 accidentally overwrote that exception, but I remember it was triggered by the 
 GrowableWriter for the inCounts in line 728 of the FST. If it helps I can try 
 to reproduce it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6363) DIH Sql Tests not properly shutting down Derby db

2014-08-11 Thread James Dyer (JIRA)
James Dyer created SOLR-6363:


 Summary: DIH Sql Tests not properly shutting down Derby db
 Key: SOLR-6363
 URL: https://issues.apache.org/jira/browse/SOLR-6363
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.9
Reporter: James Dyer
Assignee: James Dyer
Priority: Minor
 Fix For: 5.0, 4.10


The embedded Derby db used by DIH's JDBC tests is not getting shut down 
properly.  Consequently, our test runner complains about zombie threads.
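
As a hedged sketch of what a proper shutdown looks like (standard Derby behavior, not the actual patch): an embedded Derby database is shut down by connecting with shutdown=true, which signals success by throwing an SQLException:
{code}
import java.sql.DriverManager;
import java.sql.SQLException;

public class DerbyShutdown {
  public static void main(String[] args) {
    try {
      // Shuts down the whole embedded Derby system; a single database
      // can be targeted with jdbc:derby:<dbName>;shutdown=true.
      DriverManager.getConnection("jdbc:derby:;shutdown=true");
    } catch (SQLException e) {
      // Derby reports a clean shutdown via SQLState XJ015 (system-wide)
      // or 08006 (single database).
      if ("XJ015".equals(e.getSQLState()) || "08006".equals(e.getSQLState())) {
        System.out.println("Derby shut down cleanly");
      } else {
        throw new RuntimeException(e);
      }
    }
  }
}
{code}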



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_65) - Build # 11001 - Still Failing!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11001/
Java: 64bit/jdk1.7.0_65 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSimplePropertiesWriter

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSimplePropertiesWriter: 1) 
Thread[id=41, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter] at java.lang.Object.wait(Native 
Method) at java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSimplePropertiesWriter: 
   1) Thread[id=41, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([513ECE12E95D7320]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSimplePropertiesWriter

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=41, 
name=Timer-0, state=WAITING, group=TGRP-TestSimplePropertiesWriter] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=41, name=Timer-0, state=WAITING, 
group=TGRP-TestSimplePropertiesWriter]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([513ECE12E95D7320]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta: 1) 
Thread[id=12, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta] at java.lang.Object.wait(Native 
Method) at java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta: 
   1) Thread[id=12, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([513ECE12E95D7320]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=12, 
name=Timer-0, state=WAITING, group=TGRP-TestSqlEntityProcessorDelta] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=12, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([513ECE12E95D7320]:0)




Build Log:
[...truncated 14982 lines...]
   [junit4] Suite: 
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta
   [junit4]   2 log4j:WARN No such property [conversionPattern] in 
org.apache.solr.util.SolrLogLayout.
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/contrib/solr-dataimporthandler/test/J1/./temp/solr.handler.dataimport.TestSqlEntityProcessorDelta-513ECE12E95D7320-001/init-core-data-001
   [junit4]   2 1382 T11 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
   [junit4]   2 2315 T11 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 2353 T11 oasc.SolrResourceLoader.init new SolrResourceLoader 
for directory: 

[jira] [Commented] (SOLR-6363) DIH Sql Tests not properly shutting down Derby db

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092928#comment-14092928
 ] 

ASF subversion and git services commented on SOLR-6363:
---

Commit 1617314 from jd...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1617314 ]

SOLR-6363: fix test bug - shut down Derby properly

 DIH Sql Tests not properly shutting down Derby db
 -

 Key: SOLR-6363
 URL: https://issues.apache.org/jira/browse/SOLR-6363
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.9
Reporter: James Dyer
Assignee: James Dyer
Priority: Minor
 Fix For: 5.0, 4.10


 The embedded Derby db used by DIH's JDBC tests is not getting shut down 
 properly.  Consequently, our test runner is complaining about zombie threads.
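
 For reference, the conventional embedded-Derby shutdown idiom looks like 
 this (a sketch of the standard JDBC pattern, not the actual test code):
 {code}
import java.sql.DriverManager;
import java.sql.SQLException;

try {
    // Ask the embedded engine to shut down; Derby signals success by
    // throwing an SQLException with SQLState XJ015.
    DriverManager.getConnection("jdbc:derby:;shutdown=true");
} catch (SQLException e) {
    if (!"XJ015".equals(e.getSQLState())) {
        throw e; // anything else is a real failure
    }
}
 {code}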



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5875) Default page/block sizes in the FST package can cause OOMs

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092930#comment-14092930
 ] 

ASF subversion and git services commented on LUCENE-5875:
-

Commit 1617315 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1617315 ]

LUCENE-5875: reduce page size of backing packed array used by NodeHash when 
building an FST from 1B to 128M values

 Default page/block sizes in the FST package can cause OOMs
 --

 Key: LUCENE-5875
 URL: https://issues.apache.org/jira/browse/LUCENE-5875
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.9
Reporter: Christian Ziech
Priority: Minor

 We are building some fairly big FSTs (the biggest one having about 500M terms 
 with an average of 20 characters per term) and that works very well so far.
 The problem is just that we can use neither the doShareSuffix nor the 
 doPackFST option from the builder since both would cause us to get 
 exceptions. One being an OOM and the other an IllegalArgumentException for a 
 negative array size in ArrayUtil.
 The thing here is that we in theory still have far more than enough memory 
 available but it seems that Java for some reason cannot allocate byte or long 
 arrays of the size the NodeHash needs (maybe fragmentation?).
 Reducing the constant in the NodeHash from 1<<30 to e.g. 1<<27 seems to fix the 
 issue mostly. Could e.g. the Builder pass through its bytesPageBits to the 
 NodeHash or could we get a custom parameter for that?
 The other problem we ran into was a NegativeArraySizeException when we tried 
 to pack the FST. It seems that we overflowed to 0x80000000. Unfortunately I 
 accidentally overwrote that exception but I remember it was triggered by the 
 GrowableWriter for the inCounts in line 728 of the FST. If it helps I can try 
 to reproduce it.
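
 To put the page-size numbers in perspective (illustrative arithmetic only, 
 not the actual NodeHash code):
 {code}
// Values per page of the packed array backing NodeHash:
long oldPage = 1L << 30; // 1,073,741,824 values (~1B), the old constant
long newPage = 1L << 27; //   134,217,728 values (128M), the reduced size
// Each page is one contiguous allocation, so on a fragmented heap a single
// ~1B-entry page can fail to allocate even with plenty of total heap free;
// 128M-entry pages are much easier for the JVM to satisfy.
 {code}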



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6362) TestSqlEntityProcessorDelta test bug

2014-08-11 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-6362.
--

Resolution: Fixed

 TestSqlEntityProcessorDelta test bug
 

 Key: SOLR-6362
 URL: https://issues.apache.org/jira/browse/SOLR-6362
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.9
Reporter: James Dyer
Assignee: James Dyer
Priority: Trivial
 Fix For: 5.0, 4.10


 I stumbled on a case where TestSqlEntityProcessorDelta#testChildEntities will 
 fail.  Test bug.
 -Dtests.seed=B387DA5FC73441ED



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6363) DIH Sql Tests not properly shutting down Derby db

2014-08-11 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-6363:
-

Affects Version/s: (was: 4.9)
   5.0
Fix Version/s: (was: 4.10)

This is only broken in trunk.

 DIH Sql Tests not properly shutting down Derby db
 -

 Key: SOLR-6363
 URL: https://issues.apache.org/jira/browse/SOLR-6363
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 5.0
Reporter: James Dyer
Assignee: James Dyer
Priority: Minor
 Fix For: 5.0


 The embedded Derby db used by DIH's JDBC tests is not getting shut down 
 properly.  Consequently, our test runner is complaining about zombie threads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6363) DIH Sql Tests not properly shutting down Derby db

2014-08-11 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-6363.
--

Resolution: Fixed

 DIH Sql Tests not properly shutting down Derby db
 -

 Key: SOLR-6363
 URL: https://issues.apache.org/jira/browse/SOLR-6363
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 5.0
Reporter: James Dyer
Assignee: James Dyer
Priority: Minor
 Fix For: 5.0


 The embedded Derby db used by DIH's JDBC tests is not getting shut down 
 properly.  Consequently, our test runner is complaining about zombie threads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6363) DIH Sql Tests not properly shutting down Derby db

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092934#comment-14092934
 ] 

ASF subversion and git services commented on SOLR-6363:
---

Commit 1617317 from jd...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1617317 ]

SOLR-6363: no CHANGES.txt for trunk-only fix

 DIH Sql Tests not properly shutting down Derby db
 -

 Key: SOLR-6363
 URL: https://issues.apache.org/jira/browse/SOLR-6363
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 5.0
Reporter: James Dyer
Assignee: James Dyer
Priority: Minor
 Fix For: 5.0


 The embedded Derby db used by DIH's JDBC tests is not getting shut down 
 properly.  Consequently, our test runner is complaining about zombie threads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5875) Default page/block sizes in the FST package can cause OOMs

2014-08-11 Thread Christian Ziech (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092932#comment-14092932
 ] 

Christian Ziech commented on LUCENE-5875:
-

{quote}
Hmm, something else is wrong then ... or was this just an OOME? If not, can you 
reproduce the non-OOME when turning on packing despite node count being well 
below 2.1B?
{quote}

Sure - give me 1-2 days and I'll paste it here. 

 Default page/block sizes in the FST package can cause OOMs
 --

 Key: LUCENE-5875
 URL: https://issues.apache.org/jira/browse/LUCENE-5875
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.9
Reporter: Christian Ziech
Priority: Minor

 We are building some fairly big FSTs (the biggest one having about 500M terms 
 with an average of 20 characters per term) and that works very well so far.
 The problem is just that we can use neither the doShareSuffix nor the 
 doPackFST option from the builder since both would cause us to get 
 exceptions. One being an OOM and the other an IllegalArgumentException for a 
 negative array size in ArrayUtil.
 The thing here is that we in theory still have far more than enough memory 
 available but it seems that Java for some reason cannot allocate byte or long 
 arrays of the size the NodeHash needs (maybe fragmentation?).
 Reducing the constant in the NodeHash from 1<<30 to e.g. 1<<27 seems to fix the 
 issue mostly. Could e.g. the Builder pass through its bytesPageBits to the 
 NodeHash or could we get a custom parameter for that?
 The other problem we ran into was a NegativeArraySizeException when we tried 
 to pack the FST. It seems that we overflowed to 0x80000000. Unfortunately I 
 accidentally overwrote that exception but I remember it was triggered by the 
 GrowableWriter for the inCounts in line 728 of the FST. If it helps I can try 
 to reproduce it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5875) Default page/block sizes in the FST package can cause OOMs

2014-08-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092936#comment-14092936
 ] 

ASF subversion and git services commented on LUCENE-5875:
-

Commit 1617318 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1617318 ]

LUCENE-5875: reduce page size of backing packed array used by NodeHash when 
building an FST from 1B to 128M values

 Default page/block sizes in the FST package can cause OOMs
 --

 Key: LUCENE-5875
 URL: https://issues.apache.org/jira/browse/LUCENE-5875
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/FSTs
Affects Versions: 4.9
Reporter: Christian Ziech
Priority: Minor

 We are building some fairly big FSTs (the biggest one having about 500M terms 
 with an average of 20 characters per term) and that works very well so far.
 The problem is just that we can use neither the doShareSuffix nor the 
 doPackFST option from the builder since both would cause us to get 
 exceptions. One being an OOM and the other an IllegalArgumentException for a 
 negative array size in ArrayUtil.
 The thing here is that we in theory still have far more than enough memory 
 available but it seems that Java for some reason cannot allocate byte or long 
 arrays of the size the NodeHash needs (maybe fragmentation?).
 Reducing the constant in the NodeHash from 1<<30 to e.g. 1<<27 seems to fix the 
 issue mostly. Could e.g. the Builder pass through its bytesPageBits to the 
 NodeHash or could we get a custom parameter for that?
 The other problem we ran into was a NegativeArraySizeException when we tried 
 to pack the FST. It seems that we overflowed to 0x80000000. Unfortunately I 
 accidentally overwrote that exception but I remember it was triggered by the 
 GrowableWriter for the inCounts in line 728 of the FST. If it helps I can try 
 to reproduce it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (SOLR-3274) ZooKeeper related SolrCloud problems

2014-08-11 Thread Erick Erickson
bq: Anyway, adding a separate node with 4th zookeeper instance might help,
right?

no. The formula for a quorum is (num_zookeeper_nodes)/2 + 1. So adding a
fourth node requires that _three_ of them be up, i.e. only one can be
unreachable -- which is the same fault tolerance as with 3. It actually
makes failure _more_ likely to have an even number of ZK instances.
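
A quick way to see the arithmetic (a minimal Java sketch for illustration;
these helpers are hypothetical, not ZooKeeper API):

    // Quorum size and failure tolerance for an ensemble of n ZK nodes.
    static int quorum(int n)    { return n / 2 + 1; }
    static int tolerable(int n) { return n - quorum(n); }
    // quorum(3) == 2, tolerable(3) == 1
    // quorum(4) == 3, tolerable(4) == 1  <-- the 4th node buys nothing
    // quorum(5) == 3, tolerable(5) == 2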

bq: ...since they share same nodes with Solr instances

As separate processes? Or embedded? If the latter, the cure is obvious. If
the former,
consider running the ZK instances on other nodes perhaps...

Best,
Erick


On Mon, Aug 11, 2014 at 8:28 AM, Alexander S. (JIRA) j...@apache.org
wrote:


 [
 https://issues.apache.org/jira/browse/SOLR-3274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092884#comment-14092884
 ]

 Alexander S. commented on SOLR-3274:
 

 Hi, thanks for the response.

 bq. Well you never know
 I've checked the nodes' status; the 3rd node was online all the time and
 there was no load on it.

 bq. In a 3-node ZK-cluster you need at least 2 healthy ZK-nodes connected
 with each other for the cluster to be operational.
 That should be the problem, since the 2 other ZK instances might be
 (theoretically) unavailable because of heavy load (since they share the same
 nodes with Solr instances). Both nodes have 16 CPU cores, 48G of memory and
 RAID 10 (SSD); I thought it would be hard to get performance issues there.
 Anyway, adding a separate node with 4th zookeeper instance might help,
 right?

  ZooKeeper related SolrCloud problems
  
 
  Key: SOLR-3274
  URL: https://issues.apache.org/jira/browse/SOLR-3274
  Project: Solr
   Issue Type: Bug
   Components: SolrCloud
 Affects Versions: 4.0-ALPHA
  Environment: Any
 Reporter: Per Steffensen
 
  Same setup as in SOLR-3273. Well if I have to tell the entire truth we
 have 7 Solr servers, running 28 slices of the same collection (collA) - all
 slices have one replica (two shards all in all - leader + replica) - 56
 cores all in all (8 shards on each solr instance). But anyways...
  Besides the problem reported in SOLR-3273, the system seems to run fine
 under high load for several hours, but eventually errors like the ones
 shown below start to occur. I might be wrong, but they all seem to indicate
 some kind of instability in the collaboration between Solr and ZooKeeper. I
 have to say that I haven't been there to check ZooKeeper at the moment where
 those exceptions occur, but basically I don't believe the exceptions occur
 because ZooKeeper is not running stably - at least when I go and check
 ZooKeeper through other channels (e.g. my eclipse ZK plugin) it is always
 accepting my connection and generally seems to be doing fine.
  Exception 1) Often the first error we see in solr.log is something like
 this
  {code}
  Mar 22, 2012 5:06:43 AM org.apache.solr.common.SolrException log
  SEVERE: org.apache.solr.common.SolrException: Cannot talk to ZooKeeper -
 Updates are disabled.
  at
 org.apache.solr.update.processor.DistributedUpdateProcessor.zkCheck(DistributedUpdateProcessor.java:678)
  at
 org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:250)
  at
 org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:140)
  at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:80)
  at
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59)
  at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1540)
  at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:407)
  at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:256)
  at
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
  at
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
  at
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
  at
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
  at
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
  at
 org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
  at
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
  at
 org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
  at
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
  at org.mortbay.jetty.Server.handle(Server.java:326)
  at
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
  at
 

[jira] [Assigned] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-08-11 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reassigned SOLR-5986:
--

Assignee: Anshum Gupta

 Don't allow runaway queries from harming Solr cluster health or search 
 performance
 --

 Key: SOLR-5986
 URL: https://issues.apache.org/jira/browse/SOLR-5986
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 4.10


 The intent of this ticket is to have all distributed search requests stop 
 wasting CPU cycles on requests that have already timed out or are so 
 complicated that they won't be able to execute. We have come across a case 
 where a nasty wildcard query within a proximity clause was causing the 
 cluster to enumerate terms for hours even though the query timeout was set to 
 minutes. This caused a noticeable slowdown within the system, which made us 
 restart the replicas that happened to service that one request; the worst 
 case scenario is that users with a relatively low zk timeout value will have 
 nodes start dropping from the cluster due to long GC pauses.
 [~amccurry] built a mechanism into Apache Blur to help with the issue in 
 BLUR-142 (see commit comment for code, though look at the latest code on the 
 trunk for newer bug fixes).
 Solr should be able to either prevent these problematic queries from running 
 by some heuristic (possibly estimated size of heap usage) or be able to 
 execute a thread interrupt on all query threads once the time threshold is 
 met. This issue mirrors what others have discussed on the mailing list: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E
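
 For context, Solr's {{timeAllowed}} check rides on Lucene's collection-time 
 timeout, roughly like this (a minimal sketch, not the Solr internals; it 
 only fires while hits are being collected, so a query stuck enumerating 
 terms during rewrite sails right past it):
 {code}
import org.apache.lucene.search.TimeLimitingCollector;
import org.apache.lucene.search.TopScoreDocCollector;

TopScoreDocCollector topDocs = TopScoreDocCollector.create(10, true);
TimeLimitingCollector limited = new TimeLimitingCollector(
    topDocs, TimeLimitingCollector.getGlobalCounter(), 60000); // ~60s budget
try {
    searcher.search(query, limited); // searcher and query assumed in scope
} catch (TimeLimitingCollector.TimeExceededException e) {
    // Timed out during collection; partial results remain in topDocs.
}
 {code}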



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-08-11 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5986:
---

Fix Version/s: (was: 4.9)
   4.10

 Don't allow runaway queries from harming Solr cluster health or search 
 performance
 --

 Key: SOLR-5986
 URL: https://issues.apache.org/jira/browse/SOLR-5986
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Steve Davids
Assignee: Anshum Gupta
Priority: Critical
 Fix For: 4.10


 The intent of this ticket is to have all distributed search requests stop 
 wasting CPU cycles on requests that have already timed out or are so 
 complicated that they won't be able to execute. We have come across a case 
 where a nasty wildcard query within a proximity clause was causing the 
 cluster to enumerate terms for hours even though the query timeout was set to 
 minutes. This caused a noticeable slowdown within the system, which made us 
 restart the replicas that happened to service that one request; the worst 
 case scenario is that users with a relatively low zk timeout value will have 
 nodes start dropping from the cluster due to long GC pauses.
 [~amccurry] built a mechanism into Apache Blur to help with the issue in 
 BLUR-142 (see commit comment for code, though look at the latest code on the 
 trunk for newer bug fixes).
 Solr should be able to either prevent these problematic queries from running 
 by some heuristic (possibly estimated size of heap usage) or be able to 
 execute a thread interrupt on all query threads once the time threshold is 
 met. This issue mirrors what others have discussed on the mailing list: 
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: UseCompoundFile in SolrIndexConfig

2014-08-11 Thread Chris Hostetter

: Maybe we can have a new parameter alwaysUseCompoundFile to affect both
: newly flushed and newly merged segments (this will be translated into
: respective IWC and MP settings), and we make useCompoundFile affect IWC
: only? I know this is a slight change to back-compat, but it's not a serious
: change as in user indexes will still work as they did, only huge merged
: segments won't be packed in a CFS. So if anyone asks, we just tell them to
: migrate to the new API.

I think that's fine ... but I also think it's fine to make a clean break 
of it, and document in CHANGES.txt's upgrade section: "useCompoundFile no 
longer affects the MergePolicy, it only impacts IndexWriterConfig, because 
that makes the most sense by default and most users on upgrade should be 
better off in this situation -- if you really want to *always* use 
compound files, add the following settings to your <mergePolicy/> 
config..."

Happy to let you make that call -- no strong opinion about it.
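
For concreteness, "the following settings" could look something like this in
solrconfig.xml (a sketch only -- it assumes TieredMergePolicy, whose
noCFSRatio knob at 1.0 keeps every merged segment in compound format):

    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
      <double name="noCFSRatio">1.0</double>
    </mergePolicy>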

: As for keeping code in for as long as we can ... I have a problem with
: that. It's like we try to educate users about the best use of Solr,
: through WARN messages, but then don't make them do the actual cut ever,
: unless it gets in the developers' way. I'd prefer that we treat the XML
: files like any other API -- we deprecate and remove in the next major
: release. Users have all the time through 4.x to get used to the new API.
: When 5.0 is out, they have to make the cut, and since we also document that
: in the migration guide, it should be enough, no?

There's a subtle but important distinction though between Java APIs and 
XML APIs -- which is why I tend to err on the side of leaving in support 
for things as long as possible when dealing with XML parsing.

In both cases, you can add a deprecation message (from the compiler, 
or a warning message logged by the config file parsing code) but the 
question is what happens if they ignore or don't notice those warnings and 
keep upgrading -- or do a leap frog upgrade (ie: go straight from 4.5 
to 5.0, and you deprecated something in 4.6).

If you rip out that Java API in 5.0, a user who blindly upgrades will get 
a hard failure right up front at compilation and will know immediately that 
there is a problem and that they have to consult the upgrade/migration docs.

On the xml side though ... a user who does a leap frog upgrade straight 
from 4.5 to 5.0 (or didn't notice the warnings logged by 4.6) will have no 
idea that their config is now being ignored -- unless of course there is 
special logic in the 5.0 code to check for it, but if you're leaving that 
in there, then why not leave in code to try and be backcompat as well?


But like I said .. it's a fine line, and a judgement call -- there 
hasn't really been an explicit "this is what we always do."  In some 
cases the impact on users is minor but the burden on developers is heavy, 
so it's easy to make one choice ... in other cases the impact on users 
can be significant, so a little extra effort is put into backcompat, or at 
least keeping an explicit check & hard failure message in the code beyond 
the next X.0 release.


-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5098) Add REST support for adding field types to the schema

2014-08-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-5098:


Assignee: Timothy Potter  (was: Steve Rowe)

 Add REST support for adding field types to the schema
 -

 Key: SOLR-5098
 URL: https://issues.apache.org/jira/browse/SOLR-5098
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Timothy Potter
Priority: Minor

 POST to {{/schema/fieldtypes}} will add one or more new field types to the 
 schema.
 PUT to {{/schema/fieldtypes/_name_}} will add the {{_name_}}'d field type to 
 the schema.
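
 For illustration, the PUT form might look like this (the payload shape is 
 hypothetical -- this issue doesn't pin down the JSON format, and the field 
 type name {{myTextType}} is made up):
 {code}
PUT /solr/collection1/schema/fieldtypes/myTextType
{
  "class" : "solr.TextField",
  "analyzer" : {
    "tokenizer" : { "class" : "solr.StandardTokenizerFactory" }
  }
}
 {code}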



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: UseCompoundFile in SolrIndexConfig

2014-08-11 Thread Shai Erera
OK, I agree I didn't have XML APIs in mind when I wrote that. So if we
change that API such that users *must* migrate (because e.g. we add a new
mandatory parameter), then that's fine not to keep old code. But if users'
apps silently use other settings just because they didn't upgrade their XML
files (and old tags are ignored), that's not good.

Of course, if we change the semantics of useCompoundFile, we can remove
the deprecated code in 5.0 because the behavior changed and it's fine if
users don't pay attention since nothing breaks in their apps. I will handle
that under the same issue.

Shai


On Mon, Aug 11, 2014 at 7:55 PM, Chris Hostetter hossman_luc...@fucit.org
wrote:


 : Maybe we can have a new parameter alwaysUseCompoundFile to affect both
 : newly flushed and newly merged segments (this will be translated into
 : respective IWC and MP settings), and we make useCompoundFile affect IWC
 : only? I know this is a slight change to back-compat, but it's not a
 serious
 : change as in user indexes will still work as they did, only huge merged
 : segments won't be packed in a CFS. So if anyone asks, we just tell them
 to
 : migrate to the new API.

 I think that's fine ... but I also think it's fine to make a clean break
 of it, and document in CHANGES.txt's upgrade section: "useCompoundFile no
 longer affects the MergePolicy, it only impacts IndexWriterConfig, because
 that makes the most sense by default and most users on upgrade should be
 better off in this situation -- if you really want to *always* use
 compound files, add the following settings to your <mergePolicy/>
 config..."

 Happy to let you make that call -- no strong opinion about it.

 : As for keeping code in for as long as we can ... I have a problem with
 : that. It's like we try to educate users about the best use of Solr,
 : through WARN messages, but then don't make them do the actual cut ever,
 : unless it gets in the developers' way. I'd prefer that we treat the XML
 : files like any other API -- we deprecate and remove in the next major
 : release. Users have all the time through 4.x to get used to the new API.
 : When 5.0 is out, they have to make the cut, and since we also document
 that
 : in the migration guide, it should be enough, no?

 There's a subtle but important distinction though between Java APIs and
 XML APIs -- which is why I tend to err on the side of leaving in support
 for things as long as possible when dealing with XML parsing.

 In both cases, you can add a deprecation message (from the compiler,
 or a warning message logged by the config file parsing code) but the
 question is what happens if they ignore or don't notice those warnings and
 keep upgrading -- or do a leap frog upgrade (ie: go straight from 4.5
 to 5.0, and you deprecated something in 4.6).

 If you rip out that Java API in 5.0, a user who blindly upgrades will get
 a hard failure right up front at compilation and will know immediately that
 there is a problem and that they have to consult the upgrade/migration docs.

 On the xml side though ... a user who does a leap frog upgrade straight
 from 4.5 to 5.0 (or didn't notice the warnings logged by 4.6) will have no
 idea that their config is now being ignored -- unless of course there is
 special logic in the 5.0 code to check for it, but if you're leaving that
 in there, then why not leave in code to try and be backcompat as well?


 But like I said .. it's a fine line, and a judgement call -- there
 hasn't really been an explicit "this is what we always do."  In some
 cases the impact on users is minor but the burden on developers is heavy,
 so it's easy to make one choice ... in other cases the impact on users
 can be significant, so a little extra effort is put into backcompat, or at
 least keeping an explicit check & hard failure message in the code beyond
 the next X.0 release.


 -Hoss
 http://www.lucidworks.com/

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4797 - Still Failing

2014-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4797/

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestNestedChildren

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestNestedChildren: 1) Thread[id=38, 
name=Timer-0, state=WAITING, group=TGRP-TestNestedChildren] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestNestedChildren: 
   1) Thread[id=38, name=Timer-0, state=WAITING, group=TGRP-TestNestedChildren]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([C4CEFF93C1F92F7]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestNestedChildren

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=38, 
name=Timer-0, state=WAITING, group=TGRP-TestNestedChildren] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=38, name=Timer-0, state=WAITING, group=TGRP-TestNestedChildren]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([C4CEFF93C1F92F7]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta: 1) 
Thread[id=19, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta] at java.lang.Object.wait(Native 
Method) at java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta: 
   1) Thread[id=19, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([C4CEFF93C1F92F7]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=19, 
name=Timer-0, state=WAITING, group=TGRP-TestSqlEntityProcessorDelta] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:503) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=19, name=Timer-0, state=WAITING, 
group=TGRP-TestSqlEntityProcessorDelta]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
at __randomizedtesting.SeedInfo.seed([C4CEFF93C1F92F7]:0)




Build Log:
[...truncated 15046 lines...]
   [junit4] Suite: 
org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta
   [junit4]   2 Creating dataDir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build/contrib/solr-dataimporthandler/test/J1/./temp/solr.handler.dataimport.TestSqlEntityProcessorDelta-C4CEFF93C1F92F7-001/init-core-data-001
   [junit4]   2 5399 T18 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (true)
   [junit4]   2 9352 T18 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2 9513 T18 oasc.SolrResourceLoader.init new SolrResourceLoader 
for directory: 
'/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build/contrib/solr-dataimporthandler/test/J1/./temp/solr.handler.dataimport.TestSqlEntityProcessorDelta-C4CEFF93C1F92F7-001/core-home-001/collection1/'
   [junit4]   2 10386 T18 oasc.SolrConfig.init Using 

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0_20-ea-b23) - Build # 10882 - Failure!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10882/
Java: 32bit/jdk1.8.0_20-ea-b23 -server -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 52164 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:474: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:413: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:87: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:188: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* ./lucene/core/src/test/org/apache/lucene/search/TestBooleanUnevenly.java

Total time: 106 minutes 47 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_20-ea-b23 -server 
-XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6345) collapsingQParserPlugin degraded performance when using tagging

2014-08-11 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093074#comment-14093074
 ] 

David Boychuck commented on SOLR-6345:
--

Should I close this as a non-issue?

 collapsingQParserPlugin degraded performance when using tagging
 ---

 Key: SOLR-6345
 URL: https://issues.apache.org/jira/browse/SOLR-6345
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7.2
Reporter: David Boychuck
Priority: Critical
  Labels: collapsingQParserPlugin
   Original Estimate: 72h
  Remaining Estimate: 72h

 I am having a problem with degraded performance when using the 
 collapsingQParserPlugin with facet tagging.
 An example query would look something like this
 {code}
 http://host:port/solr/index/handler?facet=true&fq={!collapse 
 field=groupid}&facet.query={!ex=Width_numeric}Width_numeric:[10+TO+15]&facet.query={!ex=Width_numeric}Width_numeric:[15+TO+20]&facet.sort=index&start=0&q=36+x+42+shower+base&fq={!tag%3DWidth_numeric}Width_numeric:[35+TO+40]
 {code}
 When I either remove the grouping
 {code}
 fq={!collapse field=groupid}
 {code}
 or remove the tag
 {code}
 fq={!tag%3DWidth_numeric}Width_numeric:[35+TO+40]
 {code}
 I am getting requests orders of magnitude faster. In my production 
 environment with around 800k documents, I jump from less than 20ms to over 
 100ms, sometimes 200ms, using the collapsingQParserPlugin with tagging.
 The issue can be observed in the TestCollapseQParserPlugin tests:
 {code}
 params.add("q", "*:*");
 params.add("fq", "{!collapse field=group_s}");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("facet", "true");
 params.add("facet.field", "{!ex=test_ti}test_ti");
 params.add("fq", "{!tag=test_ti}test_ti:10");
 assertQ(req(params), "*[count(//doc)=3]",
     "//result/doc[1]/float[@name='id'][.='2.0']",
     "//result/doc[2]/float[@name='id'][.='6.0']"
 );
 {code}
 With the tagging in place I ran 10 tests and consistently got response 
 times between 23-28ms. When I removed the tag and ran 10 more tests, I 
 consistently got results between 15-18ms.
 In all cases it is only when I use the collapsingQParserPlugin together with 
 tagging that I am getting poor performance; removing the tag, or leaving the 
 tag but removing the collapse, avoids it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: UseCompoundFile in SolrIndexConfig

2014-08-11 Thread Chris Hostetter

: Of course, if we change the semantics of useCompoundFile, we can remove
: the deprecated code in 5.0 because the behavior changed and it's fine if
: users don't pay attention since nothing breaks in their apps. I will handle

exactly.



-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Review Request 23371: SOLR-5656: Add autoAddReplicas feature for shared file systems.

2014-08-11 Thread Mark Miller

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23371/
---

(Updated Aug. 11, 2014, 6:49 p.m.)


Review request for lucene.


Changes
---

Updated patch to trunk.


Bugs: SOLR-5656
https://issues.apache.org/jira/browse/SOLR-5656


Repository: lucene


Description
---

First svn patch for SOLR-5656: Add autoAddReplicas feature for shared file 
systems.


Diffs (updated)
-

  
trunk/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSolrEntityProcessorEndToEnd.java
 1617332 
  trunk/solr/core/src/java/org/apache/solr/cloud/Assign.java 1617332 
  trunk/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java PRE-CREATION 
  trunk/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java 1617332 
  trunk/solr/core/src/java/org/apache/solr/cloud/Overseer.java 1617332 
  
trunk/solr/core/src/java/org/apache/solr/cloud/OverseerAutoReplicaFailoverThread.java
 PRE-CREATION 
  
trunk/solr/core/src/java/org/apache/solr/cloud/OverseerCollectionProcessor.java 
1617332 
  trunk/solr/core/src/java/org/apache/solr/cloud/ZkController.java 1617332 
  trunk/solr/core/src/java/org/apache/solr/core/ConfigSolr.java 1617332 
  trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXml.java 1617332 
  trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXmlOld.java 1617332 
  trunk/solr/core/src/java/org/apache/solr/core/CoreContainer.java 1617332 
  trunk/solr/core/src/java/org/apache/solr/core/DirectoryFactory.java 1617332 
  trunk/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java 
1617332 
  
trunk/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java 
1617332 
  trunk/solr/core/src/java/org/apache/solr/handler/admin/CoreAdminHandler.java 
1617332 
  trunk/solr/core/src/java/org/apache/solr/request/LocalSolrQueryRequest.java 
1617332 
  trunk/solr/core/src/java/org/apache/solr/update/HdfsUpdateLog.java 1617332 
  trunk/solr/core/src/java/org/apache/solr/update/UpdateShardHandler.java 
1617332 
  trunk/solr/core/src/test-files/log4j.properties 1617332 
  trunk/solr/core/src/test-files/solr/solr-no-core.xml 1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java 
1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java 
1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/ClusterStateUpdateTest.java 
1617332 
  
trunk/solr/core/src/test/org/apache/solr/cloud/CollectionsAPIDistributedZkTest.java
 1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/CustomCollectionTest.java 
1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java 1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/MigrateRouteKeyTest.java 
1617332 
  
trunk/solr/core/src/test/org/apache/solr/cloud/OverseerCollectionProcessorTest.java
 1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/OverseerRolesTest.java 1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java 1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/ShardRoutingCustomTest.java 
1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/ShardSplitTest.java 1617332 
  
trunk/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverTest.java
 PRE-CREATION 
  
trunk/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverUtilsTest.java
 PRE-CREATION 
  trunk/solr/core/src/test/org/apache/solr/cloud/ZkControllerTest.java 1617332 
  trunk/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java 1617332 
  trunk/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java 
1617332 
  
trunk/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerBackup.java
 1617332 
  trunk/solr/core/src/test/org/apache/solr/search/TestRecoveryHdfs.java 1617332 
  trunk/solr/core/src/test/org/apache/solr/util/MockConfigSolr.java 
PRE-CREATION 
  trunk/solr/example/solr/solr.xml 1617332 
  
trunk/solr/solrj/src/java/org/apache/solr/client/solrj/request/CollectionAdminRequest.java
 1617332 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/ClosableThread.java 
1617332 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java 
1617332 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterStateUtil.java 
PRE-CREATION 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/DocCollection.java 
1617332 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/SolrZkClient.java 
1617332 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java 
1617332 
  
trunk/solr/solrj/src/test/org/apache/solr/client/solrj/SolrExampleTestBase.java 
1617332 
  
trunk/solr/solrj/src/test/org/apache/solr/client/solrj/TestLBHttpSolrServer.java
 1617332 
  
trunk/solr/solrj/src/test/org/apache/solr/client/solrj/embedded/JettyWebappTest.java
 1617332 
  

[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.8.0) - Build # 1728 - Still Failing!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1728/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:58405/gfy/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:58405/gfy/collection1
at 
__randomizedtesting.SeedInfo.seed([254B9B6C41F3D4A6:A4AD157436ACB49A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
at 
org.apache.solr.schema.TestCloudSchemaless.doTest(TestCloudSchemaless.java:140)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
   

Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0_20-ea-b23) - Build # 10882 - Failure!

2014-08-11 Thread Michael McCandless
Woops, I'll fix ... sorry for the noise.  New dev box: Haswell 4790K!
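
For anyone else who hits this, the usual fix is to set the property on the
new file before committing:

    svn propset svn:eol-style native \
        lucene/core/src/test/org/apache/lucene/search/TestBooleanUnevenly.java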

Mike McCandless

http://blog.mikemccandless.com


On Mon, Aug 11, 2014 at 1:52 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10882/
 Java: 32bit/jdk1.8.0_20-ea-b23 -server -XX:+UseParallelGC

 All tests passed

 Build Log:
 [...truncated 52164 lines...]
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:474: The following 
 error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:413: The following 
 error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:87: The 
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:188: The 
 following files are missing svn:eol-style (or binary svn:mime-type):
 * ./lucene/core/src/test/org/apache/lucene/search/TestBooleanUnevenly.java

 Total time: 106 minutes 47 seconds
 Build step 'Invoke Ant' marked build as failure
 [description-setter] Description set: Java: 32bit/jdk1.8.0_20-ea-b23 -server 
 -XX:+UseParallelGC
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure - Any
 Sending email for trigger: Failure - Any




 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6345) collapsingQParserPlugin degraded performance when using tagging

2014-08-11 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093204#comment-14093204
 ] 

Joel Bernstein commented on SOLR-6345:
--

Let's keep this open and investigate further. Still trying to clear some things 
off my plate, and I'll look at SOLR-6066 first.

 collapsingQParserPlugin degraded performance when using tagging
 ---

 Key: SOLR-6345
 URL: https://issues.apache.org/jira/browse/SOLR-6345
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7.2
Reporter: David Boychuck
Priority: Critical
  Labels: collapsingQParserPlugin
   Original Estimate: 72h
  Remaining Estimate: 72h

 I am having a problem with degraded performance when using the 
 collapsingQParserPlugin with facet tagging.
 An example query would look something like this
 {code}
 http://host:port/solr/index/handler?facet=true&fq={!collapse 
 field=groupid}&facet.query={!ex=Width_numeric}Width_numeric:[10+TO+15]&facet.query={!ex=Width_numeric}Width_numeric:[15+TO+20]&facet.sort=index&start=0&q=36+x+42+shower+base&fq={!tag%3DWidth_numeric}Width_numeric:[35+TO+40]
 {code}
 When I either remove the grouping
 {code}
 fq={!collapse field=groupid}
 {code}
 or remove the tag
 {code}
 fq={!tag%3DWidth_numeric}Width_numeric:[35+TO+40]
 {code}
 I am getting requests orders of magnitude faster. In my production 
 environment with around 800k documents, I jump from less than 20ms to over 
 100ms, sometimes 200ms, using the collapsingQParserPlugin with tagging.
 The issue can be observed in the TestCollapseQParserPlugin tests:
 {code}
 params.add("q", "*:*");
 params.add("fq", "{!collapse field=group_s}");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("facet", "true");
 params.add("facet.field", "{!ex=test_ti}test_ti");
 params.add("fq", "{!tag=test_ti}test_ti:10");
 assertQ(req(params), "*[count(//doc)=3]",
     "//result/doc[1]/float[@name='id'][.='2.0']",
     "//result/doc[2]/float[@name='id'][.='6.0']"
 );
 {code}
 With the tagging in place I ran 10 tests and consistently got response 
 times between 23-28ms. When I removed the tag and ran 10 more tests, I 
 consistently got results between 15-18ms.
 In all cases it is only when I use the collapsingQParserPlugin together with 
 tagging that I am getting poor performance; removing the tag, or leaving the 
 tag but removing the collapse, avoids it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 2054 - Failure

2014-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2054/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.SyncSliceTest.testDistribSearch

Error Message:
shard1 is not consistent.  Got 305 from 
http://127.0.0.1:38217/collection1lastClient and got 5 from 
http://127.0.0.1:55509/collection1

Stack Trace:
java.lang.AssertionError: shard1 is not consistent.  Got 305 from 
http://127.0.0.1:38217/collection1lastClient and got 5 from 
http://127.0.0.1:55509/collection1
at 
__randomizedtesting.SeedInfo.seed([33A21A85E71BD267:B244949D9044B25B]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1118)
at org.apache.solr.cloud.SyncSliceTest.doTest(SyncSliceTest.java:235)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Updated] (SOLR-5656) Add autoAddReplicas feature for shared file systems.

2014-08-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5656:
--

Description: 
When using HDFS, the Overseer should have the ability to reassign the cores 
from failed nodes to running nodes.

Given that the index and transaction logs are in hdfs, it's simple for 
surviving hardware to take over serving cores for failed hardware.

There are some tricky issues around having the Overseer handle this for you, 
but it seems a simple first pass is not too difficult.

This will add another alternative to replicating both with hdfs and solr.

It shouldn't be specific to hdfs, and would be an option for any shared file 
system Solr supports.

https://reviews.apache.org/r/23371/

  was:
When using HDFS, the Overseer should have the ability to reassign the cores 
from failed nodes to running nodes.

Given that the index and transaction logs are in hdfs, it's simple for 
surviving hardware to take over serving cores for failed hardware.

There are some tricky issues around having the Overseer handle this for you, 
but it seems a simple first pass is not too difficult.

This will add another alternative to replicating both with hdfs and solr.

It shouldn't be specific to hdfs, and would be an option for any shared file 
system Solr supports.


 Add autoAddReplicas feature for shared file systems.
 

 Key: SOLR-5656
 URL: https://issues.apache.org/jira/browse/SOLR-5656
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-5656.patch, SOLR-5656.patch, SOLR-5656.patch, 
 SOLR-5656.patch


 When using HDFS, the Overseer should have the ability to reassign the cores 
 from failed nodes to running nodes.
 Given that the index and transaction logs are in hdfs, it's simple for 
 surviving hardware to take over serving cores for failed hardware.
 There are some tricky issues around having the Overseer handle this for you, 
 but it seems a simple first pass is not too difficult.
 This will add another alternative to replicating both with hdfs and solr.
 It shouldn't be specific to hdfs, and would be an option for any shared file 
 system Solr supports.
 https://reviews.apache.org/r/23371/
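A hedged sketch of the idea (illustrative interfaces, not the patch itself): because the index and transaction log live on the shared file system, "re-adding" a replica is just asking a surviving node to open the failed node's core.

{code}
import java.util.List;

interface Cluster {
    List<String> coresOnDeadNodes();            // cores whose host is down
    String pickLiveNode();                      // a surviving node with capacity
    void addReplica(String core, String node);  // node reopens the core from HDFS
}

class AutoAddReplicas {
    // Overseer-side loop: nothing is copied, only reassigned.
    static void run(Cluster cluster) {
        for (String core : cluster.coresOnDeadNodes()) {
            cluster.addReplica(core, cluster.pickLiveNode());
        }
    }
}
{code}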



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20-ea-b23) - Build # 11002 - Still Failing!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11002/
Java: 64bit/jdk1.8.0_20-ea-b23 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([7D1FDEA59F68E006]:0)


REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([7D1FDEA59F68E006]:0)




Build Log:
[...truncated 12040 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2> Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/./temp/solr.cloud.ChaosMonkeySafeLeaderTest-7D1FDEA59F68E006-001/init-core-data-001
   [junit4]   2> 27486 T77 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (true)
   [junit4]   2> 27486 T77 oas.BaseDistributedSearchTestCase.initHostContext 
Setting hostContext system property: /_qv/t
   [junit4]   2> 27500 T77 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 27505 T77 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 27508 T78 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 27708 T77 oasc.ZkTestServer.run start zk server on port:53385
   [junit4]   2> 27751 T77 oascc.ConnectionManager.waitForConnected Waiting for 
client to connect to ZooKeeper
   [junit4]   2> 27832 T79 oazs.NIOServerCnxn.doIO WARN Exception causing close 
of session 0x0 due to java.io.IOException: ZooKeeperServer not running
   [junit4]   2> 29223 T84 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@47a96857 
name:ZooKeeperConnection Watcher:127.0.0.1:53385 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 29224 T77 oascc.ConnectionManager.waitForConnected Client is 
connected to ZooKeeper
   [junit4]   2> 29226 T77 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 29261 T77 oascc.ConnectionManager.waitForConnected Waiting for 
client to connect to ZooKeeper
   [junit4]   2> 29264 T86 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@4b0c8fba 
name:ZooKeeperConnection Watcher:127.0.0.1:53385/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 29264 T77 oascc.ConnectionManager.waitForConnected Client is 
connected to ZooKeeper
   [junit4]   2> 29268 T77 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 29301 T77 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 29308 T77 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2> 29313 T77 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2> 29320 T77 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 29332 T77 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2> 29340 T77 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 29341 T77 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2> 29346 T77 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 29346 T77 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 29351 T77 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 29352 T77 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2> 29356 T77 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 29356 T77 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/protwords.txt
   [junit4]   2> 29360 T77 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to 

[jira] [Created] (SOLR-6364) _version_ value too big for javascript clients causing reported _version_ never matching internal _version_ == suggested resolution: json should communicate _version_ as

2014-08-11 Thread Marc Portier (JIRA)
Marc Portier created SOLR-6364:
--

 Summary: _version_ value too big for javascript clients causing 
reported _version_ never matching internal _version_ == suggested resolution: 
json should communicate _version_ as string! 
 Key: SOLR-6364
 URL: https://issues.apache.org/jira/browse/SOLR-6364
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
 Environment: ubuntu 14.04 desktop 32bit

Oracle Corporation Java HotSpot(TM) Server VM (1.7.0_45 24.45-b08)

lucene-spec 4.9.0
lucene-impl 4.9.0 1604085 - rmuir - 2014-06-20 06:22:23

solr-spec  4.9.0
solr-impl  4.9.0 1604085 - rmuir - 2014-06-20 06:34:03

Reporter: Marc Portier


There seems to be a 100 based rounding active in the output/rendition/return of 
the _version_ field of added documents.  Internally on the solr side however 
the real number (non-rounded) is effective and introduces conflicts with the 
optimistic concurrency logic.

Apparently this is to be expected in all Javascript clients, since the 
_version_ numbers used are too big to fit into Javascript Number variables 
without loss of precision.

Here is what one can do to see this in action - all steps below with 
1/ using the solr4 admin UI on 
http://localhost:8983/solr/#/mycore/documents
2/ the request-handler box set to 
/update?commit=true&versions=true
3/ by adding the following into the "documents" section on the page:

[1] for create 
Using:

{ "id": "tst-abcd", "version": 1, "type": "test", "title": ["title"], 
"_version_": -1 }

Response:

{ "responseHeader": { "status": 0, "QTime": 1882 },
  "adds": [ "tst-abcd", 1476172747866374100 ]
}

=> see the returned _version_ is a multiple of 100, always!


[2] update
Using:

{ "id": "tst-abcd", "version": 2, "type": "test", "title": ["title update"], 
"_version_": 1476172747866374100 }

Response Error:
{ "responseHeader": { "status": 409, "QTime": 51 },
  "error": { "msg": "version conflict for tst-abcd 
expected=1476172747866374100 actual=1476172747866374144",
"code": 409 }}

=> notice how the error-message string correctly mentions the real actual 
_version_ that is effective (not rounded to 100)


[3] corrected update, using that effective number

{ "id": "tst-abcd", "version": 2, "type": "test", "title": ["title update"], 
"_version_": 1476172747866374144 }

Response:

{ "responseHeader": { "status": 0, "QTime": 597 },
  "adds": [ "tst-abcd", 1476173026894545000 ] }



Oddly, at first this behaviour is not shown with curl on the command line...

[1] create

$ curl "$solrbase/update?commit=true&versions=true" -H 
'Content-type:application/json' -d '[{ "id": "tst-1234", "version": 1, "type": 
"test", "title": ["title"], "_version_": -1 }]'


response: 

{"responseHeader":{"status":0,"QTime":587},"adds":["tst-1234",1476163269470191616]}

=> number is not rounded, looks good!

[2] update 

$ curl "$solrbase/update?commit=true&versions=true" -H 
'Content-type:application/json' -d '[{ "id": "tst-1234", "version": 2, "type": 
"test", "title": ["title updated"], "_version_": 1476163269470191616 }]'


response: 

{"responseHeader":{"status":0,"QTime":512},"adds":["tst-1234",1476163320472928256]}



All this was pretty much a mystery to me until I came across this:

http://stackoverflow.com/questions/15689790/parse-json-in-javascript-long-numbers-get-rounded


This looks like passing down the too-big numbers in _version_ as strings 
should avoid the issue. Or use numbers that aren't that big, since apparently 
"The largest number JavaScript can handle without loss of precision is 
9007199254740992" -- quoted from that stackoverflow page.
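A minimal Java demonstration of that limit (illustrative only, not Solr code; the constants are the _version_ values from above): two distinct longs collapse to the same IEEE-754 double, which is all a JavaScript Number can hold.

{code}
public class VersionPrecision {
    public static void main(String[] args) {
        long v1 = 1476172747866374144L;  // the actual _version_ Solr stored
        long v2 = v1 + 100L;             // a nearby, different _version_
        // At this magnitude doubles are spaced 256 apart, so both longs
        // round to the same double - exactly what happens inside JSON.parse.
        System.out.println((double) v1 == (double) v2);  // true
        System.out.println((long) (double) v2);          // prints v1, not v2
    }
}
{code}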

There are more references (below) talking about this being a JavaScript 
limitation rather than a pure json-spec issue; nevertheless... it might be 
easier to adapt solr to deal with this known JavaScript limitation and thus 
help out the JavaScript clients out there?

- 
http://stackoverflow.com/questions/307179/what-is-javascripts-max-int-whats-the-highest-integer-value-a-number-can-go-t
- http://stackoverflow.com/questions/13502398/json-integers-limit-on-size

In terms of backwards compatibility I don't see an easy way out for the moment. 
 
- clients that expect _version_ to be numeric might not handle the string
- in existing deployments it might be hard to reduce all the already existing 
_version_ values to stay under the limit...

I still have to investigate receiving and parsing XML replies from SOLR 
instead - making sure I keep the returned _version_ info in a JavaScript 
string.  Hoping that might work as a timely (if less elegant) workaround.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-3957) Remove response WARNING of "This response format is experimental"

2014-08-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-3957:
--

Assignee: Erik Hatcher

 Remove response WARNING of "This response format is experimental"
 ---

 Key: SOLR-3957
 URL: https://issues.apache.org/jira/browse/SOLR-3957
 Project: Solr
  Issue Type: Wish
Affects Versions: 4.0
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0


 Remove all the useless (which I daresay is all of them) response WARNINGs 
 stating "This response format is experimental".
 At this point, all of these are more than just experimental, and even if some 
 things are subject to change, in most cases that can be done in a compatible 
 manner anyway.
 Less noise.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5656) Add autoAddReplicas feature for shared file systems.

2014-08-11 Thread Scott Lindner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093272#comment-14093272
 ] 

Scott Lindner commented on SOLR-5656:
-

Does this depend on SOLR-6237? Your comments above seem to imply that it's not, 
but conceptually can't you end up with multiple replicas pointing to the same 
location on disk (admittedly I haven't dug into your code)?  If so, what impact 
does this have with multiple solr instances (i.e. multiple replicas of a given 
shard) pointing to the same location on disk where each could be accepting 
write changes? 



 Add autoAddReplicas feature for shared file systems.
 

 Key: SOLR-5656
 URL: https://issues.apache.org/jira/browse/SOLR-5656
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-5656.patch, SOLR-5656.patch, SOLR-5656.patch, 
 SOLR-5656.patch


 When using HDFS, the Overseer should have the ability to reassign the cores 
 from failed nodes to running nodes.
 Given that the index and transaction logs are in hdfs, it's simple for 
 surviving hardware to take over serving cores for failed hardware.
 There are some tricky issues around having the Overseer handle this for you, 
 but it seems a simple first pass is not too difficult.
 This will add another alternative to replicating both with hdfs and solr.
 It shouldn't be specific to hdfs, and would be an option for any shared file 
 system Solr supports.
 https://reviews.apache.org/r/23371/



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6364) _version_ value too big for javascript clients causing reported _version_ never matching internal _version_ == suggested resolution: json should communicate _version_ as

2014-08-11 Thread Marc Portier (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marc Portier updated SOLR-6364:
---

Description: 
There seems to be a 100 based rounding active in the output/rendition/return of 
the _version_ field of added documents.  Internally on the solr side however 
the real number (non-rounded) is effective and introduces conflicts with the 
optimistic concurrency logic.

Apparently this is to be expected in all Javascript clients, since the 
_version_ numbers used are too big to fit into Javascript Number variables 
without loss of precision.

Here is what one can do to see this in action - all steps below with 
1/ using the solr4 admin UI on 
http://localhost:8983/solr/#/mycore/documents
2/ the request-handler box set to 
/update?commit=true&versions=true
3/ by adding the following into the "documents" section on the page:

[1] for create 
Using:

{ "id": "tst-abcd", "version": 1, "type": "test", "title": ["title"], 
"_version_": -1 }

Response:

{ "responseHeader": { "status": 0, "QTime": 1882 },
  "adds": [ "tst-abcd", 1476172747866374100 ]
}

=> see the returned _version_ is a multiple of 100, always!


[2] update
Using:

{ "id": "tst-abcd", "version": 2, "type": "test", "title": ["title update"], 
"_version_": 1476172747866374100 }

Response Error:
{ "responseHeader": { "status": 409, "QTime": 51 },
  "error": { "msg": "version conflict for tst-abcd 
expected=1476172747866374100 actual=1476172747866374144",
"code": 409 }}

=> notice how the error-message string correctly mentions the real actual 
_version_ that is effective (not rounded to 100)


[3] corrected update, using that effective number

{ "id": "tst-abcd", "version": 2, "type": "test", "title": ["title update"], 
"_version_": 1476172747866374144 }

Response:

{ "responseHeader": { "status": 0, "QTime": 597 },
  "adds": [ "tst-abcd", 1476173026894545000 ] }



Oddly, at first this behaviour is not shown with curl on the command line...

[1] create

$ curl "$solrbase/update?commit=true&versions=true" -H 
'Content-type:application/json' -d '[{ "id": "tst-1234", "version": 1, "type": 
"test", "title": ["title"], "_version_": -1 }]'


response: 

{"responseHeader":{"status":0,"QTime":587},"adds":["tst-1234",1476163269470191616]}

=> number is not rounded, looks good!

[2] update 

$ curl "$solrbase/update?commit=true&versions=true" -H 
'Content-type:application/json' -d '[{ "id": "tst-1234", "version": 2, "type": 
"test", "title": ["title updated"], "_version_": 1476163269470191616 }]'


response: 

{"responseHeader":{"status":0,"QTime":512},"adds":["tst-1234",1476163320472928256]}



All this was pretty much a mystery to me until I came across this:

http://stackoverflow.com/questions/15689790/parse-json-in-javascript-long-numbers-get-rounded


This looks like passing down the too-big numbers in _version_ as strings 
should avoid the issue. Or use numbers that aren't that big, since apparently 
"The largest number JavaScript can handle without loss of precision is 
9007199254740992" -- quoted from that stackoverflow page.

There are more references (below) talking about this being a JavaScript 
limitation rather than a pure json-spec issue; nevertheless... it might be 
easier to adapt solr to deal with this known JavaScript limitation and thus 
help out the JavaScript clients out there?

- 
http://stackoverflow.com/questions/307179/what-is-javascripts-max-int-whats-the-highest-integer-value-a-number-can-go-t
- http://stackoverflow.com/questions/13502398/json-integers-limit-on-size

In terms of backwards compatibility I don't see an easy way out for the moment. 
 
- clients that expect _version_ to be numeric might not handle the string
- in existing deployments it might be hard to reduce all the already existing 
_version_ values to stay under the limit...

I still have to investigate receiving and parsing XML replies from SOLR 
instead - making sure I keep the returned _version_ info in a JavaScript 
string.  Hoping that might work as a timely (if less elegant) workaround.

  was:
There seems to be a 100 based rounding active in the output/rendition/return of 
the _version_ field of added documents.  Internally on the solr side however 
the real number (non-rounded) is effective and introduces conflicts with the 
optimistic concurrency logic.

Apparently this is to be expected in all Javascript clients, since the 
_version_ numbers used are too big to fit into Javascript Number variables 
without loss of precision.

Here is what one can do to see this in action - all steps below with 
1/ using the solr4 admin UI on 
http://localhost:8983/solr/#/mycore/documents
2/ the request-handler box set to 
/update?commit=true&versions=true
3/ by adding the following into the "documents" section on the page:

[1] for create 
Using:

{ "id": "tst-abcd", "version": 1, "type": "test", "title": ["title"], 
"_version_": -1 }

Response:

{ "responseHeader": { "status": 0, "QTime": 1882 },
  "adds": [ "tst-abcd", 1476172747866374100 ]
}

 

[jira] [Updated] (SOLR-3957) Remove response WARNING of "This response format is experimental"

2014-08-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-3957:
---

Attachment: SOLR-3957.patch

patch that removes the experimental warning from both the DIH responses and 
replication handler details command response.  Also removed 
RequestHandlerUtils.addExperimentalFormatWarning(rsp) method, though maybe it 
should be kept, or maybe removed on trunk and deprecated on 4x if ported back 
there?

 Remove response WARNING of "This response format is experimental"
 ---

 Key: SOLR-3957
 URL: https://issues.apache.org/jira/browse/SOLR-3957
 Project: Solr
  Issue Type: Wish
Affects Versions: 4.0
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-3957.patch


 Remove all the useless (which I daresay is all of them) response WARNINGs 
 stating "This response format is experimental".
 At this point, all of these are more than just experimental, and even if some 
 things are subject to change, in most cases that can be done in a compatible 
 manner anyway.
 Less noise.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5656) Add autoAddReplicas feature for shared file systems.

2014-08-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093309#comment-14093309
 ] 

Mark Miller commented on SOLR-5656:
---

bq. Does this depend on SOLR-6237? 

No, that is a separate feature.

bq. but conceptually can't you end up with multiple replicas pointing to the 
same location on disk 

There are efforts to prevent this; I don't think it's likely, but we will 
improve this over time. There are settings that can be tuned around timing as 
well. I have done a fair amount of manual testing already, but the unit tests 
will be expanded over time.

bq. If so, what impact does this have with multiple solr instances (i.e. 
multiple replicas of a given shard) pointing to the same location on disk where 
each could be accepting write changes?

You should be running with the hdfs lock impl so that one of the SolrCores 
would fail to start.
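For reference, a minimal solrconfig.xml fragment for that (the standard HDFS-on-Solr lock setting, as I understand it; adjust placement to your config):

{code}
<indexConfig>
  <!-- HdfsLockFactory: a second SolrCore opening the same HDFS index
       directory fails to acquire the lock instead of double-writing -->
  <lockType>hdfs</lockType>
</indexConfig>
{code}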

I plan on committing this soon so that I don't have to keep it up to date with 
trunk. We can still iterate if there are further comments. The feature is 
optional per collection and defaults to off.

 Add autoAddReplicas feature for shared file systems.
 

 Key: SOLR-5656
 URL: https://issues.apache.org/jira/browse/SOLR-5656
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-5656.patch, SOLR-5656.patch, SOLR-5656.patch, 
 SOLR-5656.patch


 When using HDFS, the Overseer should have the ability to reassign the cores 
 from failed nodes to running nodes.
 Given that the index and transaction logs are in hdfs, it's simple for 
 surviving hardware to take over serving cores for failed hardware.
 There are some tricky issues around having the Overseer handle this for you, 
 but it seems a simple first pass is not too difficult.
 This will add another alternative to replicating both with hdfs and solr.
 It shouldn't be specific to hdfs, and would be an option for any shared file 
 system Solr supports.
 https://reviews.apache.org/r/23371/



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6261) Run ZK watch event callbacks in parallel to the event thread

2014-08-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6261.
---

Resolution: Fixed

Thanks Shalin and Ramkumar!

 Run ZK watch event callbacks in parallel to the event thread
 

 Key: SOLR-6261
 URL: https://issues.apache.org/jira/browse/SOLR-6261
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.9
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: thread-compare.jpg


 Currently checking for leadership (due to the leader's ephemeral node going 
 away) happens in ZK's event thread. If there are many cores and all of them 
 are due for leadership, then they would have to serially go through the two-way 
 sync and leadership takeover.
 For tens of cores, this could mean 30-40s without leadership before the last 
 in the list even gets to start the leadership process. If the leadership 
 process happens in a separate thread, then the cores could all take over in 
 parallel.
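A hedged sketch of the pattern this change introduces (illustrative names, not Solr's actual classes): hand each watch callback off to an executor so the single ZK event thread never serializes the leader takeovers.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

// Wraps a real watcher; process() returns immediately so the ZK event
// thread stays free, and the heavy callback (e.g. leader election) runs
// on the pool, letting many cores take over leadership in parallel.
class ParallelWatcher implements Watcher {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final Watcher delegate;

    ParallelWatcher(Watcher delegate) { this.delegate = delegate; }

    @Override
    public void process(WatchedEvent event) {
        pool.submit(() -> delegate.process(event));
    }
}
{code}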



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6336) DistributedQueue (and its use in OCP) leaks ZK Watches

2014-08-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6336.
---

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

Thanks Ramkumar!

 DistributedQueue (and its use in OCP) leaks ZK Watches
 ---

 Key: SOLR-6336
 URL: https://issues.apache.org/jira/browse/SOLR-6336
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
 Fix For: 5.0, 4.10


 The current {{DistributedQueue}} implementation leaks ZK watches whenever it 
 finds children or times out on finding one. OCP uses this in its event loop 
 and can loop tight in some conditions (when exclusivity checks fail), leading 
 to lots of watches which get triggered together on the next event (could be a 
 while for some activities like shard splitting).
 This gets exposed by SOLR-6261 which spawns a new thread for every parallel 
 watch event.
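A minimal sketch of the leak pattern being described (illustrative code, not the actual DistributedQueue): ZooKeeper removes a watch only when it fires, so each poll that returns children, or times out and loops, leaves one more watch registered on the znode.

{code}
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class LeakyQueuePeek {
    // Waits for children of a queue znode, the way a distributed queue might.
    static List<String> peekChildren(ZooKeeper zk, String path, long waitMs)
            throws KeeperException, InterruptedException {
        while (true) {
            CountDownLatch fired = new CountDownLatch(1);
            // Registers a brand-new watch on every iteration...
            List<String> kids = zk.getChildren(path, event -> fired.countDown());
            if (!kids.isEmpty()) {
                return kids; // ...never cancelled: one watch leaked per call
            }
            // On timeout we loop and register yet another watch on the same
            // znode; they accumulate and all fire together on the next event.
            fired.await(waitMs, TimeUnit.MILLISECONDS);
        }
    }
}
{code}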



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6364) _version_ value too big for javascript clients causing reported _version_ never matching internal _version_ == suggested resolution: json should communicate _version_ a

2014-08-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093395#comment-14093395
 ] 

Hoss Man commented on SOLR-6364:


bq. There seems to be a 100 based rounding active in the 
output/rendition/return of the version field of added documents.

No such rounding is occurring in solr (which is why you can't reproduce when 
you hit solr from curl).  What you are seeing appears to be the behavior of the 
javascript engine in your browser when it parses JSON containing numeric values 
larger than the javascript spec allows for numbers. (i see the same behavior in 
firefox)

bq. There are more references (below) talking about this being a Javascript 
limitation rather then a pure json-spec issue...

correct, the JSON spec has no limitation on the size of a numeric value.

bq. ...it might be easier to adapt solr to deal with this know Javascript 
limitation and thus helping out the Javascript clients out there?

we certainly should *not* change the default behavior of solr's JSON response 
format, since many languages have no problems with java long values in JSON 
-- but it does seem like it might be a good idea to add an option for dealing 
with this better, since it would certainly affect any javascript client parsing 
any long value (not just {{_version_}})

---

My suggestion would be a new "json.long" param (following the naming convention 
of the "json.nl" and "json.wrf" params) that would control whether java long 
values should be returned as numerics or strings in the JSON response.
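For illustration only ("json.long" here is the proposal in this comment, not an existing Solr parameter), a response carrying the version as a string might look like:

{code}
{"responseHeader":{"status":0,"QTime":587},
 "adds":["tst-1234","1476163269470191616"]}
{code}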



 _version_ value too big for javascript clients causing reported _version_ 
 never matching internal _version_ == suggested resolution: json should 
 communicate _version_ as string! 
 ---

 Key: SOLR-6364
 URL: https://issues.apache.org/jira/browse/SOLR-6364
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
 Environment: ubuntu 14.04 desktop 32bit
 Oracle Corporation Java HotSpot(TM) Server VM (1.7.0_45 24.45-b08)
 lucene-spec 4.9.0
 lucene-impl 4.9.0 1604085 - rmuir - 2014-06-20 06:22:23
 solr-spec  4.9.0
 solr-impl  4.9.0 1604085 - rmuir - 2014-06-20 06:34:03
Reporter: Marc Portier

 There seems to be a 100 based rounding active in the output/rendition/return 
 of the _version_ field of added documents.  Internally on the solr side 
 however the real number (non-rounded) is effective and introduces conflicts 
 with the optimistic concurrency logic.
 Apparently this is to be expected in all Javascript clients, since the 
 _version_ numbers used are too big to fit into Javascript Number variables 
 without loss of precision.
 Here is what one can do to see this in action - all steps below with 
 1/ using the solr4 admin UI on 
 http://localhost:8983/solr/#/mycore/documents
 2/ the request-handler box set to 
 /update?commit=true&versions=true
 3/ by adding the following into the "documents" section on the page:
 [1] for create 
 Using:
 { "id": "tst-abcd", "version": 1, "type": "test", "title": ["title"], 
 "_version_": -1 }
 Response:
 { "responseHeader": { "status": 0, "QTime": 1882 },
   "adds": [ "tst-abcd", 1476172747866374100 ]
 }
 => see the returned _version_ is a multiple of 100, always!
 [2] update
 Using:
 { "id": "tst-abcd", "version": 2, "type": "test", "title": ["title update"], 
 "_version_": 1476172747866374100 }
 Response Error:
 { "responseHeader": { "status": 409, "QTime": 51 },
   "error": { "msg": "version conflict for tst-abcd 
 expected=1476172747866374100 actual=1476172747866374144",
 "code": 409 }}
 => notice how the error-message string correctly mentions the real actual 
 _version_ that is effective (not rounded to 100)
 [3] corrected update, using that effective number
 { "id": "tst-abcd", "version": 2, "type": "test", "title": ["title update"], 
 "_version_": 1476172747866374144 }
 Response:
 { "responseHeader": { "status": 0, "QTime": 597 },
   "adds": [ "tst-abcd", 1476173026894545000 ] }
 Oddly, at first this behaviour is not shown with curl on the command line...
 [1] create
 $ curl "$solrbase/update?commit=true&versions=true" -H 
 'Content-type:application/json' -d '[{ "id": "tst-1234", "version": 1, 
 "type": "test", "title": ["title"], "_version_": -1 }]'
 response: 
 {"responseHeader":{"status":0,"QTime":587},"adds":["tst-1234",1476163269470191616]}
 => number is not rounded, looks good!
 [2] update 
 $ curl "$solrbase/update?commit=true&versions=true" -H 
 'Content-type:application/json' -d '[{ "id": "tst-1234", "version": 2, 
 "type": "test", "title": ["title updated"], "_version_": 1476163269470191616 
 }]'
 response: 
 {"responseHeader":{"status":0,"QTime":512},"adds":["tst-1234",1476163320472928256]}
 All this was pretty much a mystery to me until I came across this:
 

[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2014-08-11 Thread Andrew Muldowney (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093410#comment-14093410
 ] 

Andrew Muldowney commented on SOLR-2894:


Initial reports are that the newest version is a fair bit slower. 

Caveats: We're still on 4.2 in production. So I've backported this to our 4.2 
for testing. After going down that rabbit hole for a few days I've got the 
.wars so I can better test tomorrow, but a small sample of 400 production 
queries on 166,343,278 documents had the following results:

Old Patch
Average Query Time: 20.56ms

New Refactor
Average Query Time: 63.47ms

I'm using SolrMeter to run these at 200 qpm on a set of five slaves. Tomorrow 
I'll give each version a much larger burn-in (the query file is 651 MB of 
queries). I'm not sure these numbers are statistically significant yet, but I 
wanted to share what I'm seeing at the moment.

 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
Assignee: Hoss Man
 Fix For: 4.9, 5.0

 Attachments: SOLR-2894-mincount-minification.patch, 
 SOLR-2894-reworked.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894_cloud_test.patch, dateToObject.patch, 
 pivot_mincount_problem.sh


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2014-08-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093449#comment-14093449
 ] 

Hoss Man commented on SOLR-2894:


Damn... that is unfortunate.

Which older patch are you comparing with?
Do you have any idea where the slowdown may have been introduced? 
Can you post some details about the structure of the requests?  (any chance the 
speed diff is just due to legitimate bugs in the older patch that have been 
fixed and now result in additional refinement?)

 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
Assignee: Hoss Man
 Fix For: 4.9, 5.0

 Attachments: SOLR-2894-mincount-minification.patch, 
 SOLR-2894-reworked.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894_cloud_test.patch, dateToObject.patch, 
 pivot_mincount_problem.sh


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6261) Run ZK watch event callbacks in parallel to the event thread

2014-08-11 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093462#comment-14093462
 ] 

Ramkumar Aiyengar commented on SOLR-6261:
-

[~markrmil...@gmail.com], I had posted a patch for one missed case with the 
session watcher (just above Shalin's comment) - could you commit that as well, 
or should that be a separate issue?

 Run ZK watch event callbacks in parallel to the event thread
 

 Key: SOLR-6261
 URL: https://issues.apache.org/jira/browse/SOLR-6261
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.9
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: thread-compare.jpg


 Currently checking for leadership (due to the leader's ephemeral node going 
 away) happens in ZK's event thread. If there are many cores and all of them 
 are due for leadership, then they would have to serially go through the two-way 
 sync and leadership takeover.
 For tens of cores, this could mean 30-40s without leadership before the last 
 in the list even gets to start the leadership process. If the leadership 
 process happens in a separate thread, then the cores could all take over in 
 parallel.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 11003 - Still Failing!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11003/
Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

2 tests failed.
REGRESSION:  
org.apache.lucene.codecs.lucene49.TestLucene49NormsFormat.testRamBytesUsed

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([C056FF86402C9880:32F5EDC68A5387D6]:0)
at 
org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer$NormMap.getOrd(Lucene49NormsConsumer.java:249)
at 
org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer.addNumericField(Lucene49NormsConsumer.java:150)
at 
org.apache.lucene.codecs.DocValuesConsumer.mergeNumericField(DocValuesConsumer.java:129)
at 
org.apache.lucene.index.SegmentMerger.mergeNorms(SegmentMerger.java:253)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:131)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3989)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3584)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1872)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1689)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1642)
at 
org.apache.lucene.index.BaseIndexFileFormatTestCase.testRamBytesUsed(BaseIndexFileFormatTestCase.java:227)
at 
org.apache.lucene.index.BaseNormsFormatTestCase.testRamBytesUsed(BaseNormsFormatTestCase.java:44)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093490#comment-14093490
 ] 

Hoss Man commented on SOLR-6312:


Steve: can you elaborate a bit more on what exactly your code looks like, and 
what behavior you are seeing that you think is incorrect? (it's not clear if 
you are saying all requests, including queries, are only being sent to leaders 
when updatesToLeaders==true; or if you are saying that regardless of whether 
updatesToLeaders==true, updates are only going to the leaders.)

from what i can tell, updatesToLeaders is completely ignored in 4.9, and i 
_think_ should have been marked deprecated a while ago.

from what i remember of the history, updatesToLeaders was a feature in early 
versions of CloudSolrServer that, instead of picking a random server from the 
pool of all servers, would cause the code to pick a random server from one of 
the current leaders - which would increase the odds of saving a hop in the 
case where we were sending a commit or deleteByQuery, or we got lucky and 
picked the right leader for a doc add/delete.

But once CloudSolrServer became smart enough to be able to ask ZooKeeper for 
the DocRouter being used, we no longer needed to randomly pick a leader - we 
know exactly which leader to use for each update -- making that setting 
unnecessary...

https://svn.apache.org/viewvc?view=revisionrevision=r1521713
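A hedged sketch of that difference (made-up types standing in for SolrJ's ZooKeeper-backed cluster state, not the real API):

{code}
interface ClusterView {
    int shardFor(String collection, String docId);  // stand-in for the DocRouter
    String leaderUrl(String collection, int shard); // read from ZK cluster state
}

class UpdateRouting {
    // Old updatesToLeaders behavior: pick a *random* leader and hope.
    // Current behavior: compute the one correct leader for the document id,
    // so there is no choice left for the flag to influence.
    static String targetFor(ClusterView view, String collection, String docId) {
        return view.leaderUrl(collection, view.shardFor(collection, docId));
    }
}
{code}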

So, if my understanding is correct, what you should be seeing is that _queries_ 
are randomly distributed to any 1 solr node, while updates are always targeted 
at the correct leader.

does that jibe with what you are seeing?



I think the fix here is to mark the updatesToLeaders option deprecated, and 
remove it in trunk.  (either that, or: if there is still some code path where 
CloudSolrServer may not be able to figure out the DocRouter, then in _that_ 
situation i guess it might still make sense to round robin just among the known 
leaders?)


 CloudSolrServer doesn't honor updatesToLeaders constructor argument
 ---

 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


 The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
 requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-11 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093503#comment-14093503
 ] 

Steve Rowe commented on SOLR-6312:
--

bq. I think the bug here is mark the updatesToLeaders option deprecated, and 
remove it in trunk.

The updatesToLeaders option has been disconnected from the way CloudSolrServer 
operates for 11 months now, 5 feature releases ago - what's the point of 
deprecating non-existent functionality?

(Jessica Cheng noted this problem a ways back [on 
SOLR-4816|https://issues.apache.org/jira/browse/SOLR-4816?focusedCommentId=13792911&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13792911].)
 

I vote for outright removal - that should have happened when SOLR-4816 was 
committed in [r1521713|http://svn.apache.org/r1521713].


 CloudSolrServer doesn't honor updatesToLeaders constructor argument
 ---

 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


 The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
 requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093508#comment-14093508
 ] 

Hoss Man commented on SOLR-6312:


bq. what's the point of deprecating non-existent functionality?

correct compilation w/o modification of existing client code on upgrade.



 CloudSolrServer doesn't honor updatesToLeaders constructor argument
 ---

 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


 The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
 requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-11 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093524#comment-14093524
 ] 

Anshum Gupta commented on SOLR-6312:


bq. correct compilation w/o modification of existing client code on upgrade
+1 for that. The public methods/constructor shouldn't be broken (even if they 
were superficial) in a single release.

 CloudSolrServer doesn't honor updatesToLeaders constructor argument
 ---

 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


 The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
 requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-11 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093533#comment-14093533
 ] 

Steve Davids commented on SOLR-6312:


bq. So, if my understanding is correct, what you should be seeing is that 
queries are randomly distributed to any 1 solr node, while updates are always 
targeted at the correct leader.

[~hossman] You are correct, that is what I am seeing as well. Though I have a 
re-indexing use-case where I would actually like to distribute update requests 
to more than just the leader. I am currently performing XPath extraction logic 
in the update chain before distributing the requests to replicas; the problem I 
am running into is that the leader's CPU is completely pegged running the 
XPaths while the replicas are almost idle (~20%). I looked to this feature to 
allow more throughput by load balancing the extraction logic to the replicas 
and just forwarding the complete/hydrated document to the leader. I know this 
is a somewhat fringe case but still think it can be useful.

 CloudSolrServer doesn't honor updatesToLeaders constructor argument
 ---

 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


 The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
 requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 2055 - Still Failing

2014-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2055/

No tests ran.

Build Log:
[...truncated 25 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0_20-ea-b23) - Build # 10884 - Failure!

2014-08-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10884/
Java: 32bit/jdk1.8.0_20-ea-b23 -client -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.schema.TestCloudSchemaless.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:46293/e_dm/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:46293/e_dm/collection1
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
at 
org.apache.solr.schema.TestCloudSchemaless.doTest(TestCloudSchemaless.java:140)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)

[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4798 - Still Failing

2014-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4798/

No tests ran.

Build Log:
[...truncated 193 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4798 - Still Failing

2014-08-11 Thread Steve Rowe
From the build log:

/tmp/hudson6964778820318780872.sh: line 3:
/home/jenkins/tools/java/latest1.7/bin/java: No such file or directory

There was a tweet from Infrastructure saying they were going to restart a
bunch of VMs - maybe they upgraded lucene.zones.apache.org but didn't put
the JVMs back?

I'll investigate.

Steve

On Mon, Aug 11, 2014 at 8:46 PM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4798/

 No tests ran.

 Build Log:
 [...truncated 193 lines...]



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6261) Run ZK watch event callbacks in parallel to the event thread

2014-08-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093582#comment-14093582
 ] 

Mark Miller commented on SOLR-6261:
---

Ah, confused it with the second patch on the other issue when I glanced at it. 
Yeah, I can commit it here.

 Run ZK watch event callbacks in parallel to the event thread
 

 Key: SOLR-6261
 URL: https://issues.apache.org/jira/browse/SOLR-6261
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.9
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: thread-compare.jpg


 Currently, checking for leadership (triggered by the leader's ephemeral node 
 going away) happens in ZK's event thread. If there are many cores and all of 
 them are due to take over leadership, they have to go through the two-way 
 sync and leadership takeover serially.
 For tens of cores, this could mean 30-40s without a leader before the last 
 core in the list even gets to start the leadership process. If the leadership 
 process happened in a separate thread, the cores could all take over in 
 parallel (a sketch of the idea follows).
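
 A minimal sketch of the idea (illustrative only, not the attached patch): wrap the 
 application's Watcher so that each callback is handed off to a thread pool, freeing 
 ZooKeeper's event thread immediately. The class name and pool choice below are 
 assumptions, not Solr's actual implementation.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

/** Delegates watch callbacks to a pool so slow handlers don't block ZK's event thread. */
public class ParallelWatcher implements Watcher {
  private final ExecutorService executor = Executors.newCachedThreadPool();
  private final Watcher delegate;

  public ParallelWatcher(Watcher delegate) {
    this.delegate = delegate;
  }

  @Override
  public void process(final WatchedEvent event) {
    // Return immediately so ZK can deliver the next event; the potentially
    // slow sync/leadership-takeover logic runs on a pool thread instead.
    executor.submit(new Runnable() {
      @Override
      public void run() {
        delegate.process(event);
      }
    });
  }
}
{code}
 With something like this in place, tens of cores losing a leader at once would each 
 start their takeover on separate pool threads rather than queueing behind one another.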



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 602 - Failure

2014-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/602/

No tests ran.

Build Log:
[...truncated 139 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Artifacts-trunk - Build # 2585 - Failure

2014-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-trunk/2585/

No tests ran.

Build Log:
[...truncated 277 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Solr-Artifacts-trunk - Build # 2481 - Failure

2014-08-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-trunk/2481/

No tests ran.

Build Log:
[...truncated 1005 lines...]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: [JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4798 - Still Failing

2014-08-11 Thread Steve Rowe
I couldn't log in to lucene.zones.apache.org directly - connections timed
out for some reason.  But I was able to log in by first logging into
people.apache.org and from there into lucene.zones.apache.org.  Once logged
in, I could see that /home/jenkins/ did not exist - instead /home/hudson/
has its expected contents.  So I symlinked /home/jenkins to /home/hudson,
and hopefully that'll allow things to work.


On Mon, Aug 11, 2014 at 8:54 PM, Steve Rowe sar...@gmail.com wrote:

 From the build log:

 /tmp/hudson6964778820318780872.sh: line 3: 
 /home/jenkins/tools/java/latest1.7/bin/java: No such file or directory

 There was a tweet from Infrastructure saying they were going to restart a
 bunch of VMs - maybe they upgraded lucene.zones.apache.org but didn't put
 the JVMs back?

 I'll investigate.

 Steve

 On Mon, Aug 11, 2014 at 8:46 PM, Apache Jenkins Server 
 jenk...@builds.apache.org wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4798/

 No tests ran.

 Build Log:
 [...truncated 193 lines...]



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Commented] (SOLR-6364) _version_ value too big for javascript clients causing reported _version_ never matching internal _version_ == suggested resolution: json should communicate _version_ as string!

2014-08-11 Thread Marc Portier (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093750#comment-14093750
 ] 

Marc Portier commented on SOLR-6364:


bq. My suggestion would be a new json.long param (following the naming 
convention of the json.nl and json.wrf params) that would control whether Java 
long values should be returned as numerics or strings in the JSON response.

Yep, this surely sounds like the elegant way out I was still trying to get at.  
The param would then take "num" (the default, for backwards compatibility) or 
"text" as its values? And indeed, it should affect other long fields/values in 
the JSON as well.
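
To make the precision loss concrete, here is a small standalone Java demo (not 
Solr code; the long value is the real _version_ from the report below). 
JavaScript clients parse every JSON number into an IEEE-754 double, and past 
2^53 neighbouring longs collapse onto the same double:
{code}
public class VersionPrecisionDemo {
  public static void main(String[] args) {
    long actual = 1476172747866374144L;   // _version_ as stored by Solr
    double asJsNumber = (double) actual;  // what a JavaScript client ends up with

    // Beyond 2^53 (9007199254740992) not every long has its own double,
    // so adjacent values become indistinguishable:
    System.out.println((double) (actual + 1) == asJsNumber);  // prints: true

    // Emitting the value as a quoted string would preserve full precision:
    System.out.println("\"" + actual + "\"");  // prints: "1476172747866374144"
  }
}
{code}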

 _version_ value too big for javascript clients causing reported _version_ 
 never matching internal _version_ == suggested resolution: json should 
 communicate _version_ as string! 
 ---

 Key: SOLR-6364
 URL: https://issues.apache.org/jira/browse/SOLR-6364
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
 Environment: ubuntu 14.04 desktop 32bit
 Oracle Corporation Java HotSpot(TM) Server VM (1.7.0_45 24.45-b08)
 lucene-spec 4.9.0
 lucene-impl 4.9.0 1604085 - rmuir - 2014-06-20 06:22:23
 solr-spec  4.9.0
 solr-impl  4.9.0 1604085 - rmuir - 2014-06-20 06:34:03
Reporter: Marc Portier

 There seems to be a rounding to multiples of 100 active in the returned 
 _version_ field of added documents.  Internally on the Solr side, however, 
 the real (non-rounded) number is effective, which introduces conflicts 
 with the optimistic concurrency logic.
 Apparently this is to be expected in all JavaScript clients, since the 
 _version_ numbers used are too big to fit into JavaScript Number variables 
 without loss of precision.
 Here is what one can do to see this in action - all steps below done with:
 1/ the Solr 4 admin UI on 
 http://localhost:8983/solr/#/mycore/documents
 2/ the request-handler box set to 
 /update?commit=true&versions=true
 3/ the following added into the "documents" section on the page:
 [1] create 
 Using:
 { "id": "tst-abcd", "version": 1, "type": "test", "title": ["title"], 
 "_version_": -1 }
 Response:
 { "responseHeader": { "status": 0, "QTime": 1882 },
   "adds": [ "tst-abcd", 1476172747866374100 ]
 }
 -- see how the returned _version_ is always a multiple of 100!
 [2] update
 Using:
 { "id": "tst-abcd", "version": 2, "type": "test", "title": ["title update"], 
 "_version_": 1476172747866374100 }
 Response Error:
 { "responseHeader": { "status": 409, "QTime": 51 },
   "error": { "msg": "version conflict for tst-abcd 
 expected=1476172747866374100 actual=1476172747866374144",
 "code": 409 } }
 -- notice how the error message correctly mentions the real actual 
 _version_ that is effective (not rounded to 100)
 [3] corrected update, using that effective number
 { "id": "tst-abcd", "version": 2, "type": "test", "title": ["title update"], 
 "_version_": 1476172747866374144 }
 Response:
 { "responseHeader": { "status": 0, "QTime": 597 },
   "adds": [ "tst-abcd", 1476173026894545000 ] }
 Oddly, at first this behaviour is not shown with curl on the command line...
 [1] create
 $ curl "$solrbase/update?commit=true&versions=true" -H 
 'Content-type:application/json' -d '[{ "id": "tst-1234", "version": 1, 
 "type": "test", "title": ["title"], "_version_": -1 }]'
 response: 
 {"responseHeader":{"status":0,"QTime":587},"adds":["tst-1234",1476163269470191616]}
 -- the number is not rounded, looks good!
 [2] update 
 $ curl "$solrbase/update?commit=true&versions=true" -H 
 'Content-type:application/json' -d '[{ "id": "tst-1234", "version": 2, 
 "type": "test", "title": ["title updated"], "_version_": 1476163269470191616 
 }]'
 response: 
 {"responseHeader":{"status":0,"QTime":512},"adds":["tst-1234",1476163320472928256]}
 All this was pretty much a mystery to me until I came across this:
 http://stackoverflow.com/questions/15689790/parse-json-in-javascript-long-numbers-get-rounded
 It looks like passing down the too-big numbers in _version_ as 
 strings should avoid the issue.  Or use numbers that aren't that big, since 
 apparently "The largest number JavaScript can handle without loss of 
 precision is 9007199254740992" -- quoted from that stackoverflow page.
 There are more references (below) describing this as a JavaScript 
 limitation rather than a pure JSON-spec issue; nevertheless, it might be 
 easier to adapt Solr to deal with this known JavaScript limitation and thus 
 help out the JavaScript clients out there:
 - 
 http://stackoverflow.com/questions/307179/what-is-javascripts-max-int-whats-the-highest-integer-value-a-number-can-go-t
 - http://stackoverflow.com/questions/13502398/json-integers-limit-on-size
 In terms of backwards compatibility I don't see an easy way out 

[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-11 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093752#comment-14093752
 ] 

Ramkumar Aiyengar commented on SOLR-6312:
-

It's unlikely you will see a difference by sending updates to all replicas instead 
of targeting the leaders: the replicas will internally forward your requests to 
the leader anyway, since that's the way SolrCloud is designed -- updates always go 
to the leader. All CloudSolrServer is trying to do is save you the extra hop (and 
some CPU/latency in the process). A sketch of the flag in question follows.
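
For reference, a minimal SolrJ 4.x sketch using the constructor flag this issue is 
about (the ZooKeeper address and collection name are placeholders); whether the 
flag actually changes routing is exactly what's in dispute here:
{code}
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class UpdatesToLeadersExample {
  public static void main(String[] args) throws Exception {
    // Second constructor argument is updatesToLeaders; false asks the client
    // not to target shard leaders, but per this report it is ignored.
    CloudSolrServer server = new CloudSolrServer("localhost:9983", false);
    server.setDefaultCollection("collection1");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "example-1");
    server.add(doc);   // routed to a leader regardless of the flag (the bug)
    server.commit();
    server.shutdown();
  }
}
{code}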

 CloudSolrServer doesn't honor updatesToLeaders constructor argument
 ---

 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


 The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
 requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6304) Transforming and Indexing custom JSON data

2014-08-11 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6304:
-

Summary: Transforming and Indexing custom JSON data  (was: JsonLoader 
should be able to flatten an input JSON to multiple docs)

 Transforming and Indexing custom JSON data
 --

 Key: SOLR-6304
 URL: https://issues.apache.org/jira/browse/SOLR-6304
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.10

 Attachments: SOLR-6304.patch, SOLR-6304.patch


 Example:
 {noformat}
 curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
 {
   "id": "0001",
   "type": "donut",
   "name": "Cake",
   "ppu": 0.55,
   "batters": {
     "batter": [
       { "id": "1001", "type": "Regular" },
       { "id": "1002", "type": "Chocolate" },
       { "id": "1003", "type": "Blueberry" },
       { "id": "1004", "type": "Devil'\''s Food" }
     ]
   }
 }'
 {noformat}
 should produce the following output docs:
 {noformat}
 { "recipeId":"0001", "recipeType":"donut", "id":"1001", "type":"Regular" }
 { "recipeId":"0001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
 { "recipeId":"0001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
 { "recipeId":"0001", "recipeType":"donut", "id":"1004", "type":"Devil's Food" }
 {noformat}
 The split param is the path of the element in the tree at which the input should 
 be split into multiple docs; the 'f' params are field-name mappings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


