[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b85) - Build # 14626 - Failure!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14626/
Java: 32bit/jdk1.9.0-ea-b85 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithKerberosAlt

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt:
   1) Thread[id=10415, name=groupCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:747)
   2) Thread[id=10416, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:747)
   3) Thread[id=10417, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:747)
   4) Thread[id=10414, name=apacheds, state=WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   5) Thread[id=10418, name=ou=system.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt:
   1) Thread[id=10415, name=groupCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at 

[jira] [Commented] (SOLR-8180) Missing commons-logging dependency in solrj-lib for SolrJ

2015-10-22 Thread David Smiley (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969542#comment-14969542 ]

David Smiley commented on SOLR-8180:


Yes; +1 to that.  I'll keep this on my backlog until I have time to do it, in 
case someone doesn't get it done first.  Should be simple.
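Until the jar ships in solrj-lib, a quick way to check whether a client classpath already has the class whose absence causes the exception quoted below is a reflective probe. This is a hypothetical helper sketch, not part of Solr or the attached patch:

```java
// Hypothetical classpath probe, not part of Solr: checks for the class whose
// absence triggers the NoClassDefFoundError in the stack trace quoted below.
public class CommonsLoggingProbe {
    // Returns true when the named class is loadable from the current classpath.
    static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (isPresent("org.apache.commons.logging.LogFactory")) {
            System.out.println("commons-logging is on the classpath");
        } else {
            System.out.println("commons-logging is missing; add it alongside the solrj-lib jars");
        }
    }
}
```

Running this inside the JDBC tool's JVM (or with the same `-cp`) tells you whether the workaround jar is actually being picked up.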

> Missing commons-logging dependency in solrj-lib for SolrJ
> -
>
> Key: SOLR-8180
> URL: https://issues.apache.org/jira/browse/SOLR-8180
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8180.patch
>
>
> When using DBVisualizer, SquirrelSQL, or Java JDBC with the Solr JDBC driver, 
> an additional dependency on commons-logging must be added, otherwise the 
> following exception occurs:
> {code}
> org.apache.solr.common.SolrException: Unable to create HttpClient instance. 
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:393)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:124)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.&lt;init&gt;(CloudSolrClient.java:196)
>   at 
> org.apache.solr.client.solrj.io.SolrClientCache.getCloudSolrClient(SolrClientCache.java:47)
>   at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.&lt;init&gt;(ConnectionImpl.java:51)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:108)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.onseven.dbvis.h.B.D.ᅣチ(Z:1548)
>   at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:391)
>   ... 16 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/commons/logging/LogFactory
>   at 
> org.apache.http.impl.client.CloseableHttpClient.&lt;init&gt;(CloseableHttpClient.java:58)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.&lt;init&gt;(AbstractHttpClient.java:287)
>   at 
> org.apache.http.impl.client.DefaultHttpClient.&lt;init&gt;(DefaultHttpClient.java:128)
>   at 
> org.apache.http.impl.client.SystemDefaultHttpClient.&lt;init&gt;(SystemDefaultHttpClient.java:116)
>   ... 21 more
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-7843) Importing Deltal create a memory leak

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-7843:
-
  Assignee: Shalin Shekhar Mangar

> Importing Deltal create a memory leak
> -
>
> Key: SOLR-7843
> URL: https://issues.apache.org/jira/browse/SOLR-7843
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.2.1
>Reporter: Pablo Lozano
>Assignee: Shalin Shekhar Mangar
>  Labels: memory-leak
>
> The org.apache.solr.handler.dataimport.SolrWriter does not correctly clean up 
> after itself when a delta import finishes: the "Set deltaKeys" is not cleared 
> after the process has finished.
> When using a custom importer or DataSource, as in my case, I need to add 
> additional parameters to the delta keys.
> When the data import finishes, deltaKeys is not set back to null, so the 
> DataImporter, DocBuilder, and SolrWriter are kept as live objects because 
> they are referenced by the "infoRegistry" of the SolrCore, which appears to 
> be used for JMX information.
> Starting a second delta import did not free the memory, which may cause an 
> OutOfMemoryError in the long run; I have not checked whether starting a full 
> import would break the references and free the memory.
> An easy fix would be to set "deltaKeys = null;" in SolrWriter's close() 
> method, or to nullify the writer in DocBuilder after it is used in the 
> execute() method.
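The one-line fix the reporter proposes can be sketched like this; the class is a stand-in with names taken from the issue text, not the actual DataImportHandler source:

```java
import java.util.HashSet;
import java.util.Set;

// Stand-in for org.apache.solr.handler.dataimport.SolrWriter; the field and
// method names follow the issue text, the rest is illustrative.
class SolrWriterSketch {
    Set<Object> deltaKeys = new HashSet<>();

    void close() {
        // Proposed fix: drop the reference so the delta keys can be garbage-
        // collected even while the writer itself stays reachable through the
        // SolrCore's JMX infoRegistry.
        deltaKeys = null;
    }
}
```

With this change, a writer that remains pinned by the infoRegistry after a delta import no longer holds on to its (potentially large) key set.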






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_60) - Build # 5353 - Still Failing!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5353/
Java: 32bit/jdk1.8.0_60 -client -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 54569 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:775: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:655: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:638: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* ./lucene/sandbox/src/java/org/apache/lucene/util/GeoRect.java
* ./lucene/sandbox/src/test/org/apache/lucene/util/BaseGeoPointTestCase.java
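The usual remedy for this precommit failure is to set the missing property on the flagged files and commit; a sketch, run from the checkout root (paths are the ones listed in the build output, the commands themselves are the standard Subversion property workflow):

```shell
# Set the missing eol-style property on the two files the precommit check
# flagged, then verify it took effect before committing.
svn propset svn:eol-style native \
    lucene/sandbox/src/java/org/apache/lucene/util/GeoRect.java \
    lucene/sandbox/src/test/org/apache/lucene/util/BaseGeoPointTestCase.java
svn propget svn:eol-style \
    lucene/sandbox/src/java/org/apache/lucene/util/GeoRect.java
```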

Total time: 93 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 993 - Still Failing

2015-10-22 Thread Shalin Shekhar Mangar
This is a valid bug caused by assertions added in SOLR-8069. See
https://issues.apache.org/jira/browse/SOLR-8069?focusedCommentId=14968664&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14968664

On Thu, Oct 22, 2015 at 12:17 PM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/993/
>
> 1 tests failed.
> FAILED:  
> org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=43491, 
> name=coreZkRegister-5997-thread-1, state=RUNNABLE, 
> group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=43491, 
> name=coreZkRegister-5997-thread-1, state=RUNNABLE, 
> group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]
> Caused by: java.lang.AssertionError
> at __randomizedtesting.SeedInfo.seed([7F78F76DDF75FAD1]:0)
> at 
> org.apache.solr.cloud.ZkController.updateLeaderInitiatedRecoveryState(ZkController.java:2133)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:434)
> at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:197)
> at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:157)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:346)
> at 
> org.apache.solr.cloud.ZkController.joinElection(ZkController.java:1113)
> at org.apache.solr.cloud.ZkController.register(ZkController.java:926)
> at org.apache.solr.cloud.ZkController.register(ZkController.java:881)
> at org.apache.solr.core.ZkContainer$2.run(ZkContainer.java:183)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:231)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>
> Build Log:
> [...truncated 11227 lines...]
>[junit4] Suite: 
> org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest
>[junit4]   2> Creating dataDir: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J2/temp/solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest_7F78F76DDF75FAD1-001/init-core-data-001
>[junit4]   2> 2804605 INFO  
> (SUITE-LeaderInitiatedRecoveryOnShardRestartTest-seed#[7F78F76DDF75FAD1]-worker)
>  [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system 
> property: /
>[junit4]   2> 2804608 INFO  
> (TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
>  [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
>[junit4]   2> 2804609 INFO  (Thread-33860) [] o.a.s.c.ZkTestServer 
> client port:0.0.0.0/0.0.0.0:0
>[junit4]   2> 2804609 INFO  (Thread-33860) [] o.a.s.c.ZkTestServer 
> Starting server
>[junit4]   2> 2804709 INFO  
> (TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
>  [] o.a.s.c.ZkTestServer start zk server on port:52462
>[junit4]   2> 2804709 INFO  
> (TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
>  [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
>[junit4]   2> 2804709 INFO  
> (TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
>  [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
>[junit4]   2> 2804712 INFO  (zkCallback-2280-thread-1) [] 
> o.a.s.c.c.ConnectionManager Watcher 
> org.apache.solr.common.cloud.ConnectionManager@7a70d2b6 
> name:ZooKeeperConnection Watcher:127.0.0.1:52462 got event WatchedEvent 
> state:SyncConnected type:None path:null path:null type:None
>[junit4]   2> 2804712 INFO  
> (TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
>  [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
>[junit4]   2> 2804712 INFO  
> (TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
>  [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
>[junit4]   2> 2804712 INFO  
> (TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
>  [] o.a.s.c.c.SolrZkClient makePath: /solr
>[junit4]   2> 2804715 INFO  
> (TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
>  [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
>[junit4]   2> 2804715 INFO  
> 

[jira] [Reopened] (SOLR-8069) Ensure that only the valid ZooKeeper registered leader can put a replica into Leader Initiated Recovery.

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-8069:
-

There's a reproducible failure in the test added by SOLR-8075, caused by an 
assertion added in this issue.

{code}
1 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR

Error Message:
Captured an uncaught exception in thread: Thread[id=43491, 
name=coreZkRegister-5997-thread-1, state=RUNNABLE, 
group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=43491, name=coreZkRegister-5997-thread-1, 
state=RUNNABLE, group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([7F78F76DDF75FAD1]:0)
at 
org.apache.solr.cloud.ZkController.updateLeaderInitiatedRecoveryState(ZkController.java:2133)
at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:434)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:197)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:157)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:346)
at 
org.apache.solr.cloud.ZkController.joinElection(ZkController.java:1113)
at org.apache.solr.cloud.ZkController.register(ZkController.java:926)
at org.apache.solr.cloud.ZkController.register(ZkController.java:881)
at org.apache.solr.core.ZkContainer$2.run(ZkContainer.java:183)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:231)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The assertion that leaderCd != null fails because 
ShardLeaderElectionContext.runLeaderProcess calls 
ZkController.updateLeaderInitiatedRecoveryState with a null core descriptor. 
That is by design: if you are marking a replica as 'active', you don't 
necessarily need to be the leader.
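A minimal sketch of the scoping that would resolve this, with names mirroring the discussion (leaderCd, the 'active' state) rather than the actual Solr source:

```java
// Hedged sketch: scope the SOLR-8069 assertion so it skips the legal
// null-descriptor case described above. Names and signature are illustrative.
public class LirAssertSketch {
    static void updateLeaderInitiatedRecoveryState(Object leaderCd, String stateToSet) {
        if (!"active".equals(stateToSet)) {
            // Only non-'active' transitions require the caller to be the
            // registered leader, so only they need a core descriptor.
            assert leaderCd != null : "null core descriptor while setting LIR state " + stateToSet;
        }
        // ... update the LIR znode in ZooKeeper here ...
    }

    public static void main(String[] args) {
        // Marking a replica 'active' with a null descriptor must not trip the assert.
        updateLeaderInitiatedRecoveryState(null, "active");
        System.out.println("ok");
    }
}
```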

> Ensure that only the valid ZooKeeper registered leader can put a replica into 
> Leader Initiated Recovery.
> 
>
> Key: SOLR-8069
> URL: https://issues.apache.org/jira/browse/SOLR-8069
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8069.patch, SOLR-8069.patch
>
>
> I've seen this twice now. Need to work on a test.
> When some issues hit all the replicas at once, you can end up in a situation 
> where the rightful leader was put or put itself into LIR. Even on restart, 
> this rightful leader won't take leadership and you have to manually clear the 
> LIR nodes.
> It seems that if all the replicas participate in election on startup, LIR 
> should just be cleared.






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b85) - Build # 14619 - Still Failing!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14619/
Java: 32bit/jdk1.9.0-ea-b85 -server -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithKerberosAlt

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt:
   1) Thread[id=13923, name=ou=system.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:747)
   2) Thread[id=13922, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:747)
   3) Thread[id=13921, name=apacheds, state=WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   4) Thread[id=13925, name=groupCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:747)
   5) Thread[id=13924, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt:
   1) Thread[id=13923, name=ou=system.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithKerberosAlt]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at 

[jira] [Resolved] (SOLR-7843) Importing Deltal create a memory leak

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-7843.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.4

Thanks for the nudge, Joseph, and thanks to Pablo for reporting. This fix will 
be released in 5.4.

> Importing Deltal create a memory leak
> -
>
> Key: SOLR-7843
> URL: https://issues.apache.org/jira/browse/SOLR-7843
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.2.1
>Reporter: Pablo Lozano
>Assignee: Shalin Shekhar Mangar
>  Labels: memory-leak
> Fix For: 5.4, Trunk
>
>
> The org.apache.solr.handler.dataimport.SolrWriter does not correctly clean up 
> after itself when a delta import finishes: the "Set deltaKeys" is not cleared 
> after the process has finished.
> When using a custom importer or DataSource, as in my case, I need to add 
> additional parameters to the delta keys.
> When the data import finishes, deltaKeys is not set back to null, so the 
> DataImporter, DocBuilder, and SolrWriter are kept as live objects because 
> they are referenced by the "infoRegistry" of the SolrCore, which appears to 
> be used for JMX information.
> Starting a second delta import did not free the memory, which may cause an 
> OutOfMemoryError in the long run; I have not checked whether starting a full 
> import would break the references and free the memory.
> An easy fix would be to set "deltaKeys = null;" in SolrWriter's close() 
> method, or to nullify the writer in DocBuilder after it is used in the 
> execute() method.






[jira] [Updated] (SOLR-7843) Importing Delta create a memory leak

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-7843:

Summary: Importing Delta create a memory leak  (was: Importing Deltal 
create a memory leak)

> Importing Delta create a memory leak
> 
>
> Key: SOLR-7843
> URL: https://issues.apache.org/jira/browse/SOLR-7843
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.2.1
>Reporter: Pablo Lozano
>Assignee: Shalin Shekhar Mangar
>  Labels: memory-leak
> Fix For: 5.4, Trunk
>
>
> The org.apache.solr.handler.dataimport.SolrWriter does not correctly clean up 
> after itself when a delta import finishes: the "Set deltaKeys" is not cleared 
> after the process has finished.
> When using a custom importer or DataSource, as in my case, I need to add 
> additional parameters to the delta keys.
> When the data import finishes, deltaKeys is not set back to null, so the 
> DataImporter, DocBuilder, and SolrWriter are kept as live objects because 
> they are referenced by the "infoRegistry" of the SolrCore, which appears to 
> be used for JMX information.
> Starting a second delta import did not free the memory, which may cause an 
> OutOfMemoryError in the long run; I have not checked whether starting a full 
> import would break the references and free the memory.
> An easy fix would be to set "deltaKeys = null;" in SolrWriter's close() 
> method, or to nullify the writer in DocBuilder after it is used in the 
> execute() method.






[jira] [Commented] (SOLR-7843) Importing Deltal create a memory leak

2015-10-22 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969705#comment-14969705 ]

ASF subversion and git services commented on SOLR-7843:
---

Commit 1710079 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1710079 ]

SOLR-7843: DataImportHandler's delta imports leak memory because the delta keys 
are kept in memory and not cleared after the process is finished

> Importing Deltal create a memory leak
> -
>
> Key: SOLR-7843
> URL: https://issues.apache.org/jira/browse/SOLR-7843
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.2.1
>Reporter: Pablo Lozano
>Assignee: Shalin Shekhar Mangar
>  Labels: memory-leak
> Fix For: 5.4, Trunk
>
>
> The org.apache.solr.handler.dataimport.SolrWriter does not correctly clean up 
> after itself when a delta import finishes: the "Set deltaKeys" is not cleared 
> after the process has finished.
> When using a custom importer or DataSource, as in my case, I need to add 
> additional parameters to the delta keys.
> When the data import finishes, deltaKeys is not set back to null, so the 
> DataImporter, DocBuilder, and SolrWriter are kept as live objects because 
> they are referenced by the "infoRegistry" of the SolrCore, which appears to 
> be used for JMX information.
> Starting a second delta import did not free the memory, which may cause an 
> OutOfMemoryError in the long run; I have not checked whether starting a full 
> import would break the references and free the memory.
> An easy fix would be to set "deltaKeys = null;" in SolrWriter's close() 
> method, or to nullify the writer in DocBuilder after it is used in the 
> execute() method.






[jira] [Updated] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6273:
-
Attachment: forShalin.patch

[~shalinmangar] Attached a patch for you that should apply cleanly to trunk. It 
rolls up all the intermediate changes we've made AND has some special logging 
in IndexFetcher to show which of the chained calls generates the null pointer 
exception. It's solrcore.getUpdateHandler().getUpdateLog() that generates the 
exception.

Just look for the initials EOE around line 290. Obviously that shouldn't be 
committed ;)

This patch, applied to trunk, should be used as the base for ongoing work. I've 
been meaning to commit it for a while but haven't gotten to the bottom of the 
test failures we were having before the null pointer issue cropped up. I'll be 
happy to coordinate that whenever.

> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk-testfix1.patch, 
> SOLR-6273-trunk-testfix2.patch, SOLR-6273-trunk-testfix3.patch, 
> SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, SOLR-6273.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch, forShalin.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://heliosearch.org/solr-cross-data-center-replication/






[jira] [Commented] (SOLR-8180) Missing logging dependency in solrj-lib for SolrJ

2015-10-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969631#comment-14969631
 ] 

Steve Rowe commented on SOLR-8180:
--

bq. I'm not sure if changing ivy.xml would also change the SolrJ POM.

It's supposed to happen automagically, but there has been special handling of 
logging jars in the past, so I'm not sure it will work without undoing that 
handling if it's still there.

> Missing logging dependency in solrj-lib for SolrJ
> -
>
> Key: SOLR-8180
> URL: https://issues.apache.org/jira/browse/SOLR-8180
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8180.patch
>
>
> When using DBVisualizer, SquirrelSQL, or Java JDBC with the Solr JDBC driver, 
> an additional logging dependency must be added otherwise the following 
> exception occurs:
> {code}
> org.apache.solr.common.SolrException: Unable to create HttpClient instance. 
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:393)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:124)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.(CloudSolrClient.java:196)
>   at 
> org.apache.solr.client.solrj.io.SolrClientCache.getCloudSolrClient(SolrClientCache.java:47)
>   at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.(ConnectionImpl.java:51)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:108)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.onseven.dbvis.h.B.D.ᅣチ(Z:1548)
>   at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:391)
>   ... 16 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/commons/logging/LogFactory
>   at 
> org.apache.http.impl.client.CloseableHttpClient.(CloseableHttpClient.java:58)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.(AbstractHttpClient.java:287)
>   at 
> org.apache.http.impl.client.DefaultHttpClient.(DefaultHttpClient.java:128)
>   at 
> org.apache.http.impl.client.SystemDefaultHttpClient.(SystemDefaultHttpClient.java:116)
>   ... 21 more
> {code} 






[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969683#comment-14969683
 ] 

Shalin Shekhar Mangar commented on SOLR-6273:
-

Hi [~erickerickson], please post the patch and I can take a look.







[jira] [Commented] (SOLR-8180) Missing logging dependency in solrj-lib for SolrJ

2015-10-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969620#comment-14969620
 ] 

Shawn Heisey commented on SOLR-8180:


If those jars are included, jul-to-slf4j and the true log4j jar are also very 
likely required.







[jira] [Commented] (SOLR-7843) Importing Deltal create a memory leak

2015-10-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969701#comment-14969701
 ] 

ASF subversion and git services commented on SOLR-7843:
---

Commit 1710078 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1710078 ]

SOLR-7843: DataImportHandler's delta imports leak memory because the delta keys 
are kept in memory and not cleared after the process is finished

> Importing Deltal create a memory leak
> -
>
> Key: SOLR-7843
> URL: https://issues.apache.org/jira/browse/SOLR-7843
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.2.1
>Reporter: Pablo Lozano
>Assignee: Shalin Shekhar Mangar
>  Labels: memory-leak
>
> The org.apache.solr.handler.dataimport.SolrWriter does not correctly clean up 
> after a delta import finishes: the "Set deltaKeys" is not cleared once the 
> process has completed.
> In my case, when using a custom importer or DataSource, I need to add 
> additional parameters to the delta keys.
> When the data import finishes, deltaKeys is not set back to null, and the 
> DataImporter, DocBuilder, and SolrWriter remain live objects because they are 
> referenced by the SolrCore's "infoRegistry", which appears to be used for JMX 
> information.
> Starting a second delta import does not appear to free the memory, which may 
> eventually cause an OutOfMemoryError; I have not checked whether starting a 
> full import would break the references and free the memory.
> An easy fix would be to set "deltaKeys = null;" in SolrWriter's close() 
> method, or to nullify the writer in DocBuilder after it is used in the 
> execute() method.
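The one-line fix described in the report can be sketched as follows. This is a minimal stand-in class, not the actual SolrWriter; the field and method names simply mirror the description.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal stand-in for SolrWriter illustrating the proposed fix: release the
// delta keys when the writer is closed, so a long-lived reference to the
// writer (e.g. via the core's infoRegistry used for JMX) no longer pins the
// key set in memory.
public class DeltaWriterSketch {
    private Set<Map<String, Object>> deltaKeys = new HashSet<>();

    public void addDeltaKey(Map<String, Object> key) {
        deltaKeys.add(key);
    }

    public void close() {
        deltaKeys = null; // the proposed one-line fix
    }

    public boolean keysReleased() {
        return deltaKeys == null;
    }

    public static void main(String[] args) {
        DeltaWriterSketch writer = new DeltaWriterSketch();
        writer.addDeltaKey(new HashMap<>());
        writer.close();
        System.out.println("keys released: " + writer.keysReleased());
    }
}
```

Even if the writer object itself stays reachable through JMX, nulling the field makes the (potentially large) key set collectable.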






[jira] [Commented] (LUCENE-6842) No way to limit the fields cached in memory and leads to OOM when there are thousand of fields (thousands)

2015-10-22 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969567#comment-14969567
 ] 

David Smiley commented on LUCENE-6842:
--

FWIW, one of my clients has ~94 thousand fields and it wasn't a problem; that's 
on some mid-to-late Solr 4.x version. Solr's schema browser became unusable, 
though ;-)

> No way to limit the fields cached in memory and leads to OOM when there are 
> thousand of fields (thousands)
> --
>
> Key: LUCENE-6842
> URL: https://issues.apache.org/jira/browse/LUCENE-6842
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.6.1
> Environment: Linux, openjdk 1.6.x
>Reporter: Bala Kolla
> Attachments: HistogramOfHeapUsage.png
>
>
> I am opening this defect to get guidance on how to handle a server running 
> out of memory; it seems related to how we index. Before we look into reducing 
> the number of fields, we want to know whether there is any way to reduce the 
> impact on memory usage. Basically, we have many thousands of fields being 
> indexed, which causes a large amount of memory to be used (25GB) and 
> eventually leads the application to hang, forcing us to restart every few 
> minutes.






[jira] [Commented] (SOLR-8180) Missing logging dependency in solrj-lib for SolrJ

2015-10-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969566#comment-14969566
 ] 

Kevin Risden commented on SOLR-8180:


In solr/solrj/ivy.xml there are two slf4j dependencies tagged for test only:

{code}


{code}

Based on what was discussed above, should they both be included in solrj-lib, 
or only jcl-over-slf4j? Either way, it would mean changing the conf to compile.

I'm not sure what issues would come up if slf4j-log4j12 were included. It would 
make log4j the default logging implementation, but at least there would be a 
concrete one.

I'm not sure if changing ivy.xml would also change the SolrJ POM.
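For orientation, the empty {code} block above would contain Ivy dependency lines roughly along these lines. This is a hedged reconstruction for illustration only; the rev properties and conf values are assumptions, not the actual file contents.

```xml
<!-- Hypothetical reconstruction of the two test-scoped slf4j dependencies.
     Moving them from a test conf to a compile conf is what would pull the
     jars into solrj-lib. -->
<dependency org="org.slf4j" name="jcl-over-slf4j" rev="${/org.slf4j/jcl-over-slf4j}" conf="test"/>
<dependency org="org.slf4j" name="slf4j-log4j12" rev="${/org.slf4j/slf4j-log4j12}" conf="test"/>
```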







[jira] [Updated] (SOLR-8180) Missing logging dependency in solrj-lib for SolrJ

2015-10-22 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8180:
---
Description: 
When using DBVisualizer, SquirrelSQL, or Java JDBC with the Solr JDBC driver, 
an additional logging dependency must be added otherwise the following 
exception occurs:

{code}
org.apache.solr.common.SolrException: Unable to create HttpClient instance. 
at 
org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:393)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:124)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.(CloudSolrClient.java:196)
at 
org.apache.solr.client.solrj.io.SolrClientCache.getCloudSolrClient(SolrClientCache.java:47)
at 
org.apache.solr.client.solrj.io.sql.ConnectionImpl.(ConnectionImpl.java:51)
at 
org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:108)
at 
org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.onseven.dbvis.h.B.D.ᅣチ(Z:1548)
at com.onseven.dbvis.h.B.F$A.call(Z:1369)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:391)
... 16 more
Caused by: java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
at 
org.apache.http.impl.client.CloseableHttpClient.(CloseableHttpClient.java:58)
at 
org.apache.http.impl.client.AbstractHttpClient.(AbstractHttpClient.java:287)
at 
org.apache.http.impl.client.DefaultHttpClient.(DefaultHttpClient.java:128)
at 
org.apache.http.impl.client.SystemDefaultHttpClient.(SystemDefaultHttpClient.java:116)
... 21 more
{code} 

  was:
When using DBVisualizer, SquirrelSQL, or Java JDBC with the Solr JDBC driver, 
an additional dependency on commons-logging must be added otherwise the 
following exception occurs:

{code}
org.apache.solr.common.SolrException: Unable to create HttpClient instance. 
at 
org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:393)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:124)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.(CloudSolrClient.java:196)
at 
org.apache.solr.client.solrj.io.SolrClientCache.getCloudSolrClient(SolrClientCache.java:47)
at 
org.apache.solr.client.solrj.io.sql.ConnectionImpl.(ConnectionImpl.java:51)
at 
org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:108)
at 
org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.onseven.dbvis.h.B.D.ᅣチ(Z:1548)
at com.onseven.dbvis.h.B.F$A.call(Z:1369)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
  

[jira] [Commented] (LUCENE-6845) Merge Spans and SpanScorer

2015-10-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14968845#comment-14968845
 ] 

ASF subversion and git services commented on LUCENE-6845:
-

Commit 1709964 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1709964 ]

LUCENE-6845: Merge SpanScorer into Spans

> Merge Spans and SpanScorer
> --
>
> Key: LUCENE-6845
> URL: https://issues.apache.org/jira/browse/LUCENE-6845
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6845.patch, LUCENE-6845_norenames.patch, 
> LUCENE-6845_norenames.patch, LUCENE-6845_norenames.patch.txt
>
>
> SpanScorer and Spans currently share the burden of scoring span queries, with 
> SpanScorer delegating to Spans for most operations.  Spans is essentially a 
> Scorer, just with the ability to iterate through positions as well, and no 
> SimScorer to use for scoring.  This seems overly complicated.  We should 
> merge the two classes into one.






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14622 - Still Failing!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14622/
Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
Captured an uncaught exception in thread: Thread[id=4844, 
name=RecoveryThread-source_collection_shard1_replica1, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=4844, 
name=RecoveryThread-source_collection_shard1_replica1, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]
at 
__randomizedtesting.SeedInfo.seed([BAB715B7F0212BCB:1DF3AD139D9A3872]:0)
Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
at __randomizedtesting.SeedInfo.seed([BAB715B7F0212BCB]:0)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:232)
Caused by: org.apache.solr.common.SolrException: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_BAB715B7F0212BCB-001/jetty-002/cores/source_collection_shard1_replica1/data/tlog/tlog.007.1515729009540857856
 (No such file or directory)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:244)
at 
org.apache.solr.update.CdcrTransactionLog.incref(CdcrTransactionLog.java:173)
at 
org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1079)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1579)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:877)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:534)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:225)
Caused by: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_BAB715B7F0212BCB-001/jetty-002/cores/source_collection_shard1_replica1/data/tlog/tlog.007.1515729009540857856
 (No such file or directory)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.(RandomAccessFile.java:243)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:236)
... 7 more




Build Log:
[...truncated 10308 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrReplicationHandlerTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_BAB715B7F0212BCB-001/init-core-data-001
   [junit4]   2> 733292 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[BAB715B7F0212BCB]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 733292 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[BAB715B7F0212BCB]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /ja/
   [junit4]   2> 733293 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[BAB715B7F0212BCB]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 733293 INFO  (Thread-1575) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 733293 INFO  (Thread-1575) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 733393 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[BAB715B7F0212BCB]) [] 
o.a.s.c.ZkTestServer start zk server on port:37924
   [junit4]   2> 733394 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[BAB715B7F0212BCB]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 733394 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[BAB715B7F0212BCB]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 733396 INFO  (zkCallback-474-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@65bebd12 
name:ZooKeeperConnection Watcher:127.0.0.1:37924 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 733396 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[BAB715B7F0212BCB]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 733396 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[BAB715B7F0212BCB]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 733396 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[BAB715B7F0212BCB]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 733397 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[BAB715B7F0212BCB]) [] 

[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14968987#comment-14968987
 ] 

Renaud Delbru commented on SOLR-6273:
-

The tlog replication is only relevant to the source cluster, as it ensures that 
tlogs are replicated between a master and slaves in case of a recovery (with a 
snappull). Without it, there are scenarios where a slave can end up with an 
incomplete update log; if that slave then becomes the master, we will miss some 
updates and the target cluster will become inconsistent with the source cluster.








[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969010#comment-14969010
 ] 

Shalin Shekhar Mangar commented on SOLR-6273:
-

bq. The tlog replication is only relevant to the source cluster, as it ensures 
that tlogs are replicated between a master and slaves in case of a recovery 
(with a snappull)

Ah, I see, thanks for explaining. Am I correct in assuming that, since the 
current tlog is not in the logs deque, this does not interfere with the 
replaying of buffered updates?







[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969015#comment-14969015
 ] 

Shalin Shekhar Mangar commented on SOLR-6273:
-

Any idea why this might happen? Looks like the state is null. This started 
happening after I reloaded the source collection and re-indexed the JSON 
documents.

{code}
339784408 ERROR (cdcr-replicator-41-thread-155-processing-n:127.0.1.1:8001_solr 
x:cdcr_source_shard1_replica1 s:shard1 c:cdcr_source r:core_node1) 
[c:cdcr_source s:shard1 r:core_node1 x:cdcr_source_shard1_replica1] 
o.a.s.c.u.ExecutorUtil Uncaught exception java.lang.NullPointerException thrown 
by thread: cdcr-replicator-41-thread-155-processing-n:127.0.1.1:8001_solr 
x:cdcr_source_shard1_replica1 s:shard1 c:cdcr_source r:core_node1
java.lang.Exception: Submitter stack trace
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:204)
at 
org.apache.solr.handler.CdcrReplicatorScheduler$1.run(CdcrReplicatorScheduler.java:80)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "cdcr-replicator-41-thread-155" 
java.lang.NullPointerException
at 
java.util.concurrent.ConcurrentLinkedQueue.checkNotNull(ConcurrentLinkedQueue.java:914)
at 
java.util.concurrent.ConcurrentLinkedQueue.offer(ConcurrentLinkedQueue.java:327)
at 
org.apache.solr.handler.CdcrReplicatorScheduler$1$1.run(CdcrReplicatorScheduler.java:89)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:231)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
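For reference, the NPE at ConcurrentLinkedQueue.offer is consistent with a null element being offered: ConcurrentLinkedQueue rejects null elements up front via its checkNotNull helper. A minimal stdlib-only sketch (not Solr code) reproducing that behavior:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class NullOfferDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        queue.offer("update");       // fine: non-null elements are accepted
        try {
            queue.offer(null);       // ConcurrentLinkedQueue forbids null elements
        } catch (NullPointerException e) {
            System.out.println("offer(null) throws NPE");
        }
    }
}
```

So the interesting question is why CdcrReplicatorScheduler ends up offering a null state to the queue after the collection reload.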

> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk-testfix1.patch, 
> SOLR-6273-trunk-testfix2.patch, SOLR-6273-trunk-testfix3.patch, 
> SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, SOLR-6273.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://heliosearch.org/solr-cross-data-center-replication/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14968998#comment-14968998
 ] 

Shalin Shekhar Mangar commented on SOLR-6273:
-

Sorry, you are right. I wasn't using the 1ms delay -- I had uploaded the new 
config but forgot to reload the source collection, so it was still using 1000ms 
as the schedule, which explains the slowness.

bq. In terms of moving from a batch model to a pure streaming one, this might 
probably simplify the configuration on the user side, but in terms of 
performance, I am not sure...

Yeah, I now see that it probably won't affect performance much. But I would 
still prefer streaming, because the batch size and schedule are really 
achieving the same thing, i.e. streaming. Furthermore, as you said, schedule 
and batchSize are two more things for the user to configure, whereas setting a 
transfer rate is much easier for the user.









[jira] [Resolved] (LUCENE-6845) Merge Spans and SpanScorer

2015-10-22 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-6845.
---
Resolution: Fixed

> Merge Spans and SpanScorer
> --
>
> Key: LUCENE-6845
> URL: https://issues.apache.org/jira/browse/LUCENE-6845
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6845.patch, LUCENE-6845_norenames.patch, 
> LUCENE-6845_norenames.patch, LUCENE-6845_norenames.patch.txt
>
>
> SpanScorer and Spans currently share the burden of scoring span queries, with 
> SpanScorer delegating to Spans for most operations.  Spans is essentially a 
> Scorer, just with the ability to iterate through positions as well, and no 
> SimScorer to use for scoring.  This seems overly complicated.  We should 
> merge the two classes into one.






[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 143 - Still Failing!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/143/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([F36CB3BD696503AA:1A360885F7FC9302]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:766)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: 

00


request was:q=id:2=standard=0=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:759)
... 40 more




Build Log:
[...truncated 9713 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969006#comment-14969006
 ] 

Renaud Delbru commented on SOLR-6273:
-

Yes, I think we should probably change the default value of the scheduler to 
1ms unless we change the model to a streaming one. 1000ms is way too high as a 
default value.







[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14968970#comment-14968970
 ] 

Renaud Delbru commented on SOLR-6273:
-

[~shalinmangar] thanks for looking into this.

Regarding performance (2 and 3), it is true that the right batch size and 
scheduler delay are very important for optimal performance. With the proper 
batch sizes and scheduler delays, we have seen very low update latency between 
the source and target clusters. In your setup, one document was approximately 
0.2kb in size, therefore the batch size was ~14kb, which should correspond to 
~14mb/s of transfer rate. With such a transfer rate, the replication should 
have been done in a few seconds or minutes, not hours. Could you give more 
information about your setup / benchmark? Was replication turned off while 
you were indexing on the source, or did you turn it on afterwards?

In terms of moving from a batch model to a pure streaming one, this might 
probably simplify the configuration on the user side, but in terms of 
performance, I am not sure -- maybe some other people can give their opinion 
here. Batch size might not use that much memory (if properly configured), and 
the same goes for transfer speed (if the batch size is properly configured 
too). One way to also simplify the configuration for the user is, as you 
proposed, a configurable transfer rate, with some logic to automatically 
adjust the batch size and scheduler delay based on that rate.

About 5, I think transfer rate is a good addition. Latency could also be 
computed, since the QUEUES monitoring action returns the last document 
timestamp.








[jira] [Commented] (LUCENE-6845) Merge Spans and SpanScorer

2015-10-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14968990#comment-14968990
 ] 

ASF subversion and git services commented on LUCENE-6845:
-

Commit 1709993 from [~romseygeek] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1709993 ]

LUCENE-6845: Merge SpanScorer into Spans







[jira] [Commented] (SOLR-8180) Missing logging dependency in solrj-lib for SolrJ

2015-10-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970032#comment-14970032
 ] 

Kevin Risden commented on SOLR-8180:


If there are no test dependencies in solr/solrj/ivy.xml, I receive the 
following error stating that "solr/solrj/test-lib does not exist" when running 
"ant test" in solr/solrj:

{code}
BUILD FAILED
/home/travis/build/risdenk/lucene-solr/solr/build.xml:246: The following error 
occurred while executing this line:
/home/travis/build/risdenk/lucene-solr/solr/common-build.xml:516: The following 
error occurred while executing this line:
/home/travis/build/risdenk/lucene-solr/lucene/common-build.xml:796: The 
following error occurred while executing this line:
/home/travis/build/risdenk/lucene-solr/lucene/common-build.xml:810: The 
following error occurred while executing this line:
/home/travis/build/risdenk/lucene-solr/lucene/common-build.xml:1944: 
/home/travis/build/risdenk/lucene-solr/solr/solrj/test-lib does not exist.
{code}

If I duplicate the two slf4j test dependencies and also add them as compile 
dependencies, then "ant test" works.

> Missing logging dependency in solrj-lib for SolrJ
> -
>
> Key: SOLR-8180
> URL: https://issues.apache.org/jira/browse/SOLR-8180
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8180.patch
>
>
> When using DBVisualizer, SquirrelSQL, or Java JDBC with the Solr JDBC driver, 
> an additional logging dependency must be added otherwise the following 
> exception occurs:
> {code}
> org.apache.solr.common.SolrException: Unable to create HttpClient instance. 
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:393)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:124)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.(CloudSolrClient.java:196)
>   at 
> org.apache.solr.client.solrj.io.SolrClientCache.getCloudSolrClient(SolrClientCache.java:47)
>   at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.(ConnectionImpl.java:51)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:108)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.onseven.dbvis.h.B.D.ᅣチ(Z:1548)
>   at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:391)
>   ... 16 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/commons/logging/LogFactory
>   at 
> org.apache.http.impl.client.CloseableHttpClient.(CloseableHttpClient.java:58)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.(AbstractHttpClient.java:287)
>   at 
> org.apache.http.impl.client.DefaultHttpClient.(DefaultHttpClient.java:128)
>   at 
> org.apache.http.impl.client.SystemDefaultHttpClient.(SystemDefaultHttpClient.java:116)
>   ... 21 more
> {code} 






[jira] [Commented] (SOLR-8164) Debug "parsedquery" output no longer handles boosts correctly

2015-10-22 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969805#comment-14969805
 ] 

Yonik Seeley commented on SOLR-8164:


Hmmm, I did have a test case, but it seems it didn't make it into the commit. 
I just added another case that doesn't act like I want (I'm getting some 
paren "doubling"), so I'll commit after I figure out a fix.
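The paren "doubling" is the kind of artifact a naive boost formatter produces when it unconditionally wraps a sub-query that is already parenthesized (an illustrative stdlib-only sketch, not the actual Solr debug code):

```java
public class BoostStringSketch {
    // Naive formatter: always parenthesize before appending the boost.
    static String boostString(String inner, float boost) {
        return "(" + inner + ")^" + boost;
    }

    public static void main(String[] args) {
        String once  = boostString("foo_s:a", 3.0f);
        String twice = boostString(once, 4.0f);
        System.out.println(once);   // (foo_s:a)^3.0
        System.out.println(twice);  // ((foo_s:a)^3.0)^4.0 -- the doubled parens
    }
}
```

A fix along these lines would only add parentheses when the inner string is not already atomic, instead of wrapping unconditionally.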

> Debug "parsedquery" output no longer handles boosts correctly
> -
>
> Key: SOLR-8164
> URL: https://issues.apache.org/jira/browse/SOLR-8164
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-8164.patch
>
>
> Since Lucene's removal of boosts on every query, Solr's debug output has been 
> somewhat broken.
> {code}
> http://localhost:8983/solr/techproducts/query?debugQuery=true&q=(foo_s:a^3)^4
> shows "parsedquery":"BoostQuery(foo_s:a^3.0)",
> and
> http://localhost:8983/solr/techproducts/query?debugQuery=true&q=foo_s:a^=2
> shows "parsedquery":"ConstantScore(foo_s:a)",
> {code}
> Since boosts are now explicit (i.e. BoostQuery), we should probably just move 
> to always showing boosts instead of having logic that tries to be smart about 
> it.






[jira] [Commented] (SOLR-8164) Debug "parsedquery" output no longer handles boosts correctly

2015-10-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969943#comment-14969943
 ] 

ASF subversion and git services commented on SOLR-8164:
---

Commit 1710106 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1710106 ]

SOLR-8164: fix parsedquery debug output double-parens, add tests







[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 145 - Failure!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/145/
Java: multiarch/jdk1.7.0 -d64 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.LargeVolumeBinaryJettyTest

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [TransactionLog]
at __randomizedtesting.SeedInfo.seed([5BAB1011C0E67C4D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11841 lines...]
   [junit4] Suite: 
org.apache.solr.client.solrj.embedded.LargeVolumeBinaryJettyTest
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.embedded.LargeVolumeBinaryJettyTest_5BAB1011C0E67C4D-001/init-core-data-001
   [junit4]   2> 174413 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[5BAB1011C0E67C4D]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 174529 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[5BAB1011C0E67C4D]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 174529 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[5BAB1011C0E67C4D]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore end
   [junit4]   2> 174530 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[5BAB1011C0E67C4D]-worker) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.embedded.LargeVolumeBinaryJettyTest_5BAB1011C0E67C4D-001/tempDir-002/cores/core
   [junit4]   2> 174533 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[5BAB1011C0E67C4D]-worker) [] 
o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2> 174535 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[5BAB1011C0E67C4D]-worker) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@52528399{/solr,null,AVAILABLE}
   [junit4]   2> 174540 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[5BAB1011C0E67C4D]-worker) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@273ddc08{HTTP/1.1}{127.0.0.1:49550}
   [junit4]   2> 174540 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[5BAB1011C0E67C4D]-worker) [] 
o.e.j.s.Server Started @177912ms
   [junit4]   2> 174540 INFO  

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14628 - Failure!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14628/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.component.DistributedTermsComponentTest.test

Error Message:
Error from server at http://127.0.0.1:60755//collection1: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:37760//collection1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:60755//collection1: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:37760//collection1
at 
__randomizedtesting.SeedInfo.seed([6BE7CD85FB543FBF:E3B3F25F55A85247]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:561)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:609)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:591)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:570)
at 
org.apache.solr.handler.component.DistributedTermsComponentTest.test(DistributedTermsComponentTest.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 

[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 994 - Still Failing

2015-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/994/

2 tests failed.
FAILED:  
org.apache.lucene.codecs.lucene42.TestLucene42TermVectorsFormat.testRamBytesUsed

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([57A52995A507B6FB]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.lucene42.TestLucene42TermVectorsFormat

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([57A52995A507B6FB]:0)




Build Log:
[...truncated 5140 lines...]
   [junit4] Suite: 
org.apache.lucene.codecs.lucene42.TestLucene42TermVectorsFormat
   [junit4]   2> X 23, 2015 10:37:57 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> WARNING: Suite execution timed out: 
org.apache.lucene.codecs.lucene42.TestLucene42TermVectorsFormat
   [junit4]   2>1) Thread[id=144, 
name=SUITE-TestLucene42TermVectorsFormat-seed#[57A52995A507B6FB], 
state=RUNNABLE, group=TGRP-TestLucene42TermVectorsFormat]
   [junit4]   2> at java.lang.Thread.getStackTrace(Thread.java:1589)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$4.run(ThreadLeakControl.java:688)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$4.run(ThreadLeakControl.java:685)
   [junit4]   2> at java.security.AccessController.doPrivileged(Native 
Method)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.getStackTrace(ThreadLeakControl.java:685)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.getThreadsWithTraces(ThreadLeakControl.java:701)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.formatThreadStacksFull(ThreadLeakControl.java:681)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.access$1000(ThreadLeakControl.java:64)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:414)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:676)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:140)
   [junit4]   2> at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:587)
   [junit4]   2>2) Thread[id=8, name=JUnit4-serializer-daemon, 
state=TIMED_WAITING, group=main]
   [junit4]   2> at java.lang.Thread.sleep(Native Method)
   [junit4]   2> at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$1.run(Serializer.java:49)
   [junit4]   2>3) Thread[id=145, 
name=TEST-TestLucene42TermVectorsFormat.testRamBytesUsed-seed#[57A52995A507B6FB],
 state=RUNNABLE, group=TGRP-TestLucene42TermVectorsFormat]
   [junit4]   2> at 
org.apache.lucene.util.packed.BlockPackedReaderIterator.skip(BlockPackedReaderIterator.java:145)
   [junit4]   2> at 
org.apache.lucene.codecs.lucene42.Lucene42TermVectorsReader.readPositions(Lucene42TermVectorsReader.java:584)
   [junit4]   2> at 
org.apache.lucene.codecs.lucene42.Lucene42TermVectorsReader.get(Lucene42TermVectorsReader.java:398)
   [junit4]   2> at 
org.apache.lucene.codecs.TermVectorsWriter.merge(TermVectorsWriter.java:194)
   [junit4]   2> at 
org.apache.lucene.index.SegmentMerger.mergeVectors(SegmentMerger.java:187)
   [junit4]   2> at 
org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:127)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4068)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3648)
   [junit4]   2> at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1929)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4712)
   [junit4]   2> at 
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:695)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4738)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4729)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1476)
   [junit4]   2> at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1254)
   [junit4]   2> at 
org.apache.lucene.index.BaseIndexFileFormatTestCase.testRamBytesUsed(BaseIndexFileFormatTestCase.java:262)
   [junit4]   2> at 

[jira] [Commented] (SOLR-7843) Importing Delta create a memory leak

2015-10-22 Thread Joseph Lawson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970114#comment-14970114
 ] 

Joseph Lawson commented on SOLR-7843:
-

Does this affect 5.3+ as well? If I'm using DIH I'm currently assuming 5.2.0 is 
the only safe version. Is that a correct assumption?

> Importing Delta create a memory leak
> 
>
> Key: SOLR-7843
> URL: https://issues.apache.org/jira/browse/SOLR-7843
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.2.1
>Reporter: Pablo Lozano
>Assignee: Shalin Shekhar Mangar
>  Labels: memory-leak
> Fix For: 5.4, Trunk
>
>
> The org.apache.solr.handler.dataimport.SolrWriter does not clean up 
> correctly after a delta import finishes: its "deltaKeys" set is never 
> cleared once the process has completed. 
> In my case, using a custom importer or DataSource, I need to add 
> additional parameters to the delta keys.
> When the data import finishes, deltaKeys is not set back to null, and the 
> DataImporter, DocBuilder and SolrWriter are kept as live objects 
> because they are referenced by the "infoRegistry" of the SolrCore, 
> which appears to be used for JMX information.
> Starting a second delta import did not free the memory, which over the 
> long run may cause an OutOfMemoryError. I have not checked whether starting 
> a full import would break the references and free the memory.
> An easy fix would be to add "deltaKeys = null;" to the SolrWriter's 
> close() method, or to nullify the writer in DocBuilder after it has been 
> used in the execute() method.
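
The one-line fix proposed above can be sketched generically. The class below (DeltaWriterSketch) is a hypothetical stand-in, not the actual Solr source; it only illustrates why nulling the reference in close() lets the key set become garbage-collectable even while the writer itself stays pinned by a long-lived registry:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the proposed fix (not the actual Solr code):
// a writer whose close() drops its deltaKeys reference, so a registry
// that keeps the writer alive no longer pins the (potentially large) set.
class DeltaWriterSketch {
    Set<String> deltaKeys = new HashSet<>();

    void importDelta(String key) {
        deltaKeys.add(key); // the set grows during a delta import
    }

    void close() {
        // The suggested one-line fix: release the reference on close so the
        // set can be collected even if the writer object itself lives on.
        deltaKeys = null;
    }
}
```

Even if DocBuilder also nullified the writer, clearing the set in close() is the more local fix, since it works regardless of who else holds the writer.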



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2015-10-22 Thread Damien Kamerman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970255#comment-14970255
 ] 

Damien Kamerman commented on SOLR-7191:
---

After 2 minutes, around 100 collections are all green. This is with a 3-node 
ensemble. Ten minutes would be great, and I guess with 3K collections I would 
be close to that mark.

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.






[jira] [Updated] (SOLR-7669) Add SelectStream to Streaming API

2015-10-22 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-7669:
--
Attachment: SOLR-7669.patch

Deleted the EditStream as its functionality (the removal of fields from a 
tuple) is superseded by the SelectStream. Updated the SQLHandler to use the 
SelectStream instead of the EditStream.

All relevant tests pass. 

> Add SelectStream to Streaming API
> -
>
> Key: SOLR-7669
> URL: https://issues.apache.org/jira/browse/SOLR-7669
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Dennis Gove
>Priority: Minor
>  Labels: Streaming
> Attachments: SOLR-7669.patch, SOLR-7669.patch, SOLR-7669.patch, 
> SOLR-7669.patch
>
>
> Adds a new stream called SelectStream which can be used for two purposes.
>  1. Limit the set of fields included in an outgoing tuple to remove unwanted 
> fields
>  2. Provide aliases for fields. In this way it acts as an alternative to the 
> CloudSolrStream's 'aliases' option.
>  For example, in a simple case
> {code}
> select(
>   id, 
>   fieldA_i as fieldA, 
>   fieldB_s as fieldB,
>   search(collection1, q="*:*", fl="id,fieldA_i,fieldB_s", sort="fieldA_i asc, 
> fieldB_s asc, id asc")
> )
> {code}
> This can also be used as part of complex expressions to help keep track of 
> what is being worked on. This is particularly useful when merging/joining 
> multiple collections which share field names. For example, the following 
> results in a set of tuples including only the fields id, left.ident, and 
> right.ident even though the total set of fields required to perform the 
> search and join is much larger than just those three fields.
> {code}
> select(
>   id, left.ident, right.ident,
>   innerJoin(
> select(
>   id, join1_i as left.join1, join2_s as left.join2, ident_s as left.ident,
>   search(collection1, q="side_s:left", fl="id,join1_i,join2_s,ident_s", 
> sort="join1_i asc, join2_s asc, id asc")
> ),
> select(
>   join3_i as right.join1, join2_s as right.join2, ident_s as right.ident,
>   search(collection1, q="side_s:right", fl="join3_i,join2_s,ident_s", 
> sort="join3_i asc, join2_s asc"),
> ),
> on="left.join1=right.join1, left.join2=right.join2"
>   )
> )
> {code}
> This depends on SOLR-7584.






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_60) - Build # 5355 - Failure!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5355/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.util.TestSolrCLIRunExample.testInteractiveSolrCloudExample

Error Message:
Could not find a healthy node to handle the request.

Stack Trace:
org.apache.solr.common.SolrException: Could not find a healthy node to handle 
the request.
at 
__randomizedtesting.SeedInfo.seed([6599685C56C898C:DD28764FF2194CEA]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1084)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:485)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.util.TestSolrCLIRunExample.testInteractiveSolrCloudExample(TestSolrCLIRunExample.java:443)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2015-10-22 Thread Damien Kamerman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970253#comment-14970253
 ] 

Damien Kamerman commented on SOLR-7191:
---

1. Hmm, cancel that. Initially I noticed a very slow shutdown (around 60 
minutes total) in JmxMonitoredMap.clear(). I went back to test it and was 
unable to reproduce it; I had updated trunk in the meantime. A partial stack 
trace is all I've saved:
at org.apache.solr.core.JmxMonitoredMap.clear(JmxMonitoredMap.java:144)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1263)
at org.apache.solr.core.SolrCores.close(SolrCores.java:124)
at org.apache.solr.core.CoreContainer.shutdown(CoreContainer.java:564)
at 
org.apache.solr.servlet.SolrDispatchFilter.destroy(SolrDispatchFilter.java:172)

2. OK, will look into that.

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.






[jira] [Created] (SOLR-8188) Add hash style joins to the Streaming API and Streaming Expressions

2015-10-22 Thread Dennis Gove (JIRA)
Dennis Gove created SOLR-8188:
-

 Summary: Add hash style joins to the Streaming API and Streaming 
Expressions
 Key: SOLR-8188
 URL: https://issues.apache.org/jira/browse/SOLR-8188
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Dennis Gove
Priority: Minor


Add HashJoinStream and OuterHashJoinStream to the Streaming API to allow for 
optimized joining between sub-streams.

HashJoinStream is similar to an InnerJoinStream except that it does not insist 
on any particular order, and it will read all values from the stream being 
hashed (hashStream) when open() is called. During read() it returns the next 
tuple from the stream not being hashed (fullStream) that has at least one 
matching record in hashStream, merged with that matching tuple. If a tuple 
from the fullStream matches more than one tuple from the hashStream, 
subsequent calls to read() return the merge with each further matching tuple. 
The order of the resulting stream is the order of the fullStream.

OuterHashJoinStream is like a HashJoinStream with LeftOuterJoinStream 
semantics: a tuple from fullStream is returned even if it has no matching 
record in hashStream. All other behavior is identical.

In expression form
{code}
hashJoin(
  search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...),
  hashed=search(collection2, q=*:*, fl="fieldA, fieldB, fieldE", ...),
  on="fieldA, fieldB"
)
{code}

{code}
outerHashJoin(
  search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...),
  hashed=search(collection2, q=*:*, fl="fieldA, fieldB, fieldE", ...),
  on="fieldA, fieldB"
)
{code}

As you can see, the hashStream is a named parameter, which makes it very clear 
which stream should be hashed.
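
The semantics described above can be sketched as a plain in-memory hash join. This is a hypothetical illustration (HashJoinSketch, with tuples as Maps), not the actual HashJoinStream code; it shows the hash-then-scan shape: read the hashed stream fully up front into a map keyed on the join fields, then scan the full stream once, emitting merged tuples in fullStream order:

```java
import java.util.*;

// Hypothetical sketch of the hash-join semantics described above
// (not the actual HashJoinStream implementation).
class HashJoinSketch {
    static List<Map<String, Object>> hashJoin(List<Map<String, Object>> fullStream,
                                              List<Map<String, Object>> hashStream,
                                              List<String> on) {
        // Phase 1 (open()): hash every tuple of hashStream on the join fields.
        Map<List<Object>, List<Map<String, Object>>> hashed = new HashMap<>();
        for (Map<String, Object> t : hashStream) {
            List<Object> key = new ArrayList<>();
            for (String f : on) key.add(t.get(f));
            hashed.computeIfAbsent(key, k -> new ArrayList<>()).add(t);
        }
        // Phase 2 (read()): scan fullStream in order; each matching pair
        // yields one merged tuple, so multiple matches yield multiple tuples.
        List<Map<String, Object>> out = new ArrayList<>();
        for (Map<String, Object> t : fullStream) {
            List<Object> key = new ArrayList<>();
            for (String f : on) key.add(t.get(f));
            for (Map<String, Object> match : hashed.getOrDefault(key, List.of())) {
                Map<String, Object> merged = new HashMap<>(match);
                merged.putAll(t); // fullStream values win on field-name clashes
                out.add(merged);
            }
        }
        return out;
    }
}
```

An outer variant would additionally emit the fullStream tuple unmerged when no match exists, which is the only behavioral difference described for OuterHashJoinStream.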






[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b85) - Build # 14621 - Still Failing!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14621/
Java: 64bit/jdk1.9.0-ea-b85 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithKerberosAlt

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 1) Thread[id=12013, 
name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=12011, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=12012, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=12010, 
name=apacheds, state=WAITING, group=TGRP-TestSolrCloudWithKerberosAlt] 
at java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)5) Thread[id=12014, 
name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 
   1) Thread[id=12013, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 

[jira] [Updated] (SOLR-8157) Dead link to replicas in AngularUI

2015-10-22 Thread Upayavira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upayavira updated SOLR-8157:

Attachment: SOLR-8157.patch

Patch attached that adds the concept of a root path and uses it in all 
collection-related links. 

Also, it points back to the same tab on the other instance, rather than to the 
root UI, which I always found a bit jarring.

> Dead link to replicas in AngularUI
> --
>
> Key: SOLR-8157
> URL: https://issues.apache.org/jira/browse/SOLR-8157
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Upayavira
>Priority: Minor
>  Labels: angularjs
> Attachments: SOLR-8157.patch
>
>
> Dead link to shard replica admin UI - missing # in URL.
> Reproduce:
> # Start Solr in cloud mode {{bin/solr start -e cloud -noprompt}}
> # Go to Angular UI, collection overview:
>http://localhost:8983/solr/index.html#/gettingstarted/collection-overview
> # For one of the shards, expand one of its replicas
> # Click the core name, e.g. 
>http://192.168.127.63:8983/solr/gettingstarted_shard1_replica2
> This link is not valid. It should have had a {{#}} after {{solr/}}
> Another issue is that it points to the OLD UI, perhaps it should stay in the 
> new?






[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14968920#comment-14968920
 ] 

Shalin Shekhar Mangar commented on SOLR-7191:
-

Thanks Damien.

# What is the purpose of the fastClose in SolrCore.close()? It only disables 
clearing the JMX registry. Have you found that to be very slow in practice?
# Your schema cache will cache schemas indefinitely and won't reload on 
changes made via the schema API or manually. You need to use the znode 
version of the schema file as part of the cache key to ensure that schemas 
can be reloaded.

I'll have to test your change of moving the updateClusterState to CoreContainer.
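
The caching suggestion in point 2 can be sketched generically. SchemaCacheSketch below is a hypothetical illustration, not Solr code; the point is that putting the znode version into the cache key turns any schema edit (which bumps the version in ZooKeeper) into a cache miss, and hence a reload, while unchanged schemas keep hitting the cached entry:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch of the suggestion above (not actual Solr code):
// key the schema cache by (file name, znode version) so edits invalidate
// naturally instead of being served stale forever.
class SchemaCacheSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String get(String schemaFile, int znodeVersion, Supplier<String> loader) {
        String key = schemaFile + "@" + znodeVersion;         // version is part of the key
        return cache.computeIfAbsent(key, k -> loader.get()); // load only on a miss
    }
}
```

Old versions linger in the map under this scheme, so a real implementation would also want some eviction; the sketch only shows the keying idea.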

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.






[jira] [Commented] (SOLR-7858) Make Angular UI default

2015-10-22 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14968925#comment-14968925
 ] 

Upayavira commented on SOLR-7858:
-

[~markrmil...@gmail.com] the ticket is here: SOLR-8074

> Make Angular UI default
> ---
>
> Key: SOLR-7858
> URL: https://issues.apache.org/jira/browse/SOLR-7858
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Upayavira
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7858.patch, new ui link.png, original UI link.png
>
>
> Angular UI is very close to feature complete. Once SOLR-7856 is dealt with, 
> it should function well in most cases. I propose that, as soon as 5.3 has 
> been released, we make the Angular UI default, ready for the 5.4 release. We 
> can then fix any more bugs as they are found, but more importantly start 
> working on the features that were the reason for doing this work in the first 
> place.






[jira] [Commented] (SOLR-8074) LoadAdminUIServlet directly references admin.html

2015-10-22 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14968928#comment-14968928
 ] 

Upayavira commented on SOLR-8074:
-

To add a little more detail:

The original admin UI is rendered by admin.html. web.xml causes this to be 
served by the o.a.s.servlet.LoadAdminUIServlet, which does a few things like 
replacing ${version} tags and setting anti-clickjacking headers.

Therefore, we need to also use this servlet to serve index.html, which is the 
new UI.

However, this servlet includes this line:

InputStream in = getServletContext().getResourceAsStream("/admin.html");

The change is trivial: we just need to take the actual URL that was requested 
from the request, rather than hardwiring it, and load that filename from 
disk. Then I can add index.html to web.xml, and we can make the new UI the 
default in trunk.
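
The change described above can be sketched as a small pure function. AdminUiPathSketch is hypothetical, not the actual patch, and real servlet request URIs may include a context path; it only shows deriving the resource name from the request instead of hardcoding it:

```java
// Hypothetical sketch of the change described above (not the actual patch):
// derive the resource to load from the request path instead of hardcoding
// "/admin.html", so web.xml can map additional pages (e.g. index.html)
// without further Java changes. Shown as a plain function for clarity.
class AdminUiPathSketch {
    // e.g. "/solr/admin.html" -> "/admin.html", "/solr/index.html" -> "/index.html"
    static String resourcePath(String requestUri) {
        int slash = requestUri.lastIndexOf('/');
        return requestUri.substring(slash); // keep the leading '/'
    }
}
```

In the servlet, this would replace the hardcoded string, roughly: `getServletContext().getResourceAsStream(resourcePath(request.getRequestURI()))` — with a whitelist of allowed names if arbitrary paths must not be served.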


> LoadAdminUIServlet directly references admin.html
> -
>
> Key: SOLR-8074
> URL: https://issues.apache.org/jira/browse/SOLR-8074
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Upayavira
>Priority: Minor
> Fix For: 5.4
>
>
> The LoadAdminUIServlet class loads up, and serves back, "admin.html", meaning 
> it cannot be used in its current state to serve up the new admin UI.
> An update is needed to this class to make it serve back whatever html file 
> was requested in the URL. There will likely only ever be two of them 
> mentioned in web.xml, but it would be really useful for changes to web.xml 
> not to also require Java code changes.
> I'm hoping that someone with an up-and-running Java coding setup can make 
> this pretty trivial tweak. Any volunteers?






[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969218#comment-14969218
 ] 

Shalin Shekhar Mangar commented on SOLR-6273:
-

I used the collection reload API and then added new documents. Since my JSON 
documents do not have an 'id' field and I am using a data-driven schema, there 
is no overwriting and the same docs are added again with new unique keys.

> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk-testfix1.patch, 
> SOLR-6273-trunk-testfix2.patch, SOLR-6273-trunk-testfix3.patch, 
> SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, SOLR-6273.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://heliosearch.org/solr-cross-data-center-replication/






[jira] [Commented] (SOLR-8164) Debug "parsedquery" output no longer handles boosts correctly

2015-10-22 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969230#comment-14969230
 ] 

Erik Hatcher commented on SOLR-8164:


[~yo...@apache.org] - good fix, thanks.  But how about a test case to go along 
with it?  It's always good to increase test coverage when a "Bug" is found.

> Debug "parsedquery" output no longer handles boosts correctly
> -
>
> Key: SOLR-8164
> URL: https://issues.apache.org/jira/browse/SOLR-8164
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-8164.patch
>
>
> Since Lucene's removal of boosts on every query, Solr's debug output has been 
> somewhat broken.
> {code}
> http://localhost:8983/solr/techproducts/query?debugQuery=true&q=(foo_s:a^3)^4
> shows "parsedquery":"BoostQuery(foo_s:a^3.0)",
> and
> http://localhost:8983/solr/techproducts/query?debugQuery=true&q=foo_s:a^=2
> shows "parsedquery":"ConstantScore(foo_s:a)",
> {code}
> Since boosts are now explicit (i.e. BoostQuery), we should probably just move 
> to always showing boosts instead of having logic that tries to be smart about 
> it.
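The "always show boosts" idea can be illustrated with a toy sketch. These are simplified stand-ins written for this note, not the real Lucene TermQuery/BoostQuery classes: once the boost lives in an explicit wrapper, toString can always render it, with no logic that tries to decide whether the boost is worth showing.

```java
// Simplified stand-ins (NOT the real Lucene classes) illustrating why an
// explicit BoostQuery wrapper makes debug output straightforward: toString
// always renders the boost explicitly.
public class BoostToStringDemo {
    public interface Query { }

    public static final class TermQuery implements Query {
        final String field, term;
        public TermQuery(String field, String term) { this.field = field; this.term = term; }
        @Override public String toString() { return field + ":" + term; }
    }

    public static final class BoostQuery implements Query {
        final Query in;
        final float boost;
        public BoostQuery(Query in, float boost) { this.in = in; this.boost = boost; }
        // Always print the boost, mirroring the proposal above.
        @Override public String toString() { return "(" + in + ")^" + boost; }
    }

    public static void main(String[] args) {
        // Roughly corresponds to q=(foo_s:a^3)^4 in the report above.
        Query q = new BoostQuery(new BoostQuery(new TermQuery("foo_s", "a"), 3.0f), 4.0f);
        System.out.println(q); // ((foo_s:a)^3.0)^4.0
    }
}
```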






[jira] [Commented] (SOLR-8180) Missing commons-logging dependency in solrj-lib for SolrJ

2015-10-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969236#comment-14969236
 ] 

Steve Rowe commented on SOLR-8180:
--

bq. +1 to jcl-over-slf4j. 

To be clear, we're talking about adding this in two places, right?

# {{dist/solrj-lib/}} in the binary distribution
# As a non-optional dependency in the solrj POM

+1 to the above
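For concreteness, the non-optional entry in the solrj POM could look roughly like the following. The org.slf4j coordinates are the standard ones for jcl-over-slf4j; the version shown is only illustrative, not taken from this issue:

```xml
<!-- Illustrative sketch; version number is a placeholder, not from SOLR-8180 -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
  <version>1.7.7</version>
</dependency>
```

The same jar would then also be copied into {{dist/solrj-lib/}} in the binary distribution.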


> Missing commons-logging dependency in solrj-lib for SolrJ
> -
>
> Key: SOLR-8180
> URL: https://issues.apache.org/jira/browse/SOLR-8180
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8180.patch
>
>
> When using DBVisualizer, SquirrelSQL, or Java JDBC with the Solr JDBC driver, 
> an additional dependency on commons-logging must be added otherwise the 
> following exception occurs:
> {code}
> org.apache.solr.common.SolrException: Unable to create HttpClient instance. 
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:393)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:124)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.(CloudSolrClient.java:196)
>   at 
> org.apache.solr.client.solrj.io.SolrClientCache.getCloudSolrClient(SolrClientCache.java:47)
>   at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.(ConnectionImpl.java:51)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:108)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.onseven.dbvis.h.B.D.ᅣチ(Z:1548)
>   at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:391)
>   ... 16 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/commons/logging/LogFactory
>   at 
> org.apache.http.impl.client.CloseableHttpClient.(CloseableHttpClient.java:58)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.(AbstractHttpClient.java:287)
>   at 
> org.apache.http.impl.client.DefaultHttpClient.(DefaultHttpClient.java:128)
>   at 
> org.apache.http.impl.client.SystemDefaultHttpClient.(SystemDefaultHttpClient.java:116)
>   ... 21 more
> {code} 






[jira] [Commented] (LUCENE-6780) GeoPointDistanceQuery doesn't work with a large radius?

2015-10-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969238#comment-14969238
 ] 

ASF subversion and git services commented on LUCENE-6780:
-

Commit 1710027 from [~nknize] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1710027 ]

LUCENE-6780: Improves GeoPointDistanceQuery accuracy with large radius. 
Improves testing rigor for GeoPointField

> GeoPointDistanceQuery doesn't work with a large radius?
> ---
>
> Key: LUCENE-6780
> URL: https://issues.apache.org/jira/browse/LUCENE-6780
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Attachments: LUCENE-6780-heap-used-hack.patch, LUCENE-6780.patch, 
> LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, 
> LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, LUCENE-6780.patch, 
> LUCENE-6780.patch, LUCENE-6780.patch
>
>
> I'm working on LUCENE-6698 but struggling with test failures ...
> Then I noticed that TestGeoPointQuery's test never tests on large distances, 
> so I modified the test to sometimes do so (like TestBKDTree) and hit test 
> failures.






[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2015-10-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969247#comment-14969247
 ] 

Shawn Heisey commented on SOLR-8186:


The start script should continue to capture the console to a separate logfile 
when running in the background.  That will capture startup errors and any 
stdout/stderr logging that Solr might do when debugging code changes.

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2015-10-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969263#comment-14969263
 ] 

Erick Erickson commented on SOLR-8186:
--

Yeah, it's surprising when your disk fills up!

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Created] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2015-10-22 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-8186:
--

 Summary: Solr start scripts -- only log to console when running in 
foreground
 Key: SOLR-8186
 URL: https://issues.apache.org/jira/browse/SOLR-8186
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 5.3.1
Reporter: Shawn Heisey


Currently the log4j.properties file logs to the console, and the start scripts 
capture console output to a logfile that never rotates.  This can fill up the 
disk, and when the logfile is removed, the user might be alarmed by the way 
their memory statistics behave -- the "cached" memory might have a sudden and 
very large drop, making it appear to a novice that the huge logfile was hogging 
their memory.

The logfile created by log4j is rotated when it gets big enough, so that 
logfile is unlikely to fill up the disk.

I propose that we copy the current log4j.properties file to something like 
log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
file, and have the start script use the alternate config file when running in 
the foreground.  This way users will see the logging output when running in the 
foreground, but it will be absent when running normally.
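The proposal amounts to two config files whose root loggers differ only in their appenders. A minimal sketch, with the caveat that the appender names below are assumed from a typical Solr 5.x log4j.properties rather than taken from the actual file:

```properties
# log4j.properties -- used for background starts; file appender only,
# so nothing is written to the console logfile that never rotates
log4j.rootLogger=INFO, file

# log4j-foreground.properties -- used when bin/solr runs in the foreground
log4j.rootLogger=INFO, file, CONSOLE
```

The start script would then pass the foreground variant via -Dlog4j.configuration only when not daemonizing.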






Re: Welcome Nick Knize as Lucene/Solr committer

2015-10-22 Thread Karl Wright
Congratulations on joining the Lucene team!

Karl

On Thu, Oct 22, 2015 at 10:42 AM, Erik Hatcher 
wrote:

> Welcome, Nick!
>
> We were so close last week while many of us were at the Lucene Revolution
> in Austin, TX.  You got mentioned and big kudos during David Smiley’s geo
> talk - it’s great to have you aboard.
>
> Erik
>
>
>
>
> On Oct 20, 2015, at 1:40 PM, Nicholas Knize  wrote:
>
> Thanks for the honor to join such a talented group!
>
> Brief Bio:  I started as a Meteorology major in undergrad. After the
> Meteorology program bored me with colored pencils and paper maps (but
> rocking a FORTRAN class) I switched to Computer Science. I received a CS
> Bachelor's and Master's focused on Computer Vision. Interestingly, I was
> first exposed to Lucene/Solr in 2006 while working on a proprietary Remote
> Geospatial Imaging System. After a "fun" detour through the land of Oracle
> Spatial integration and MongoDB and Accumulo core development, I finished a
> PhD in GIS focused on high-dimensional spatial indexing. With sights set on
> working with the open source community, I joined Elasticsearch one year ago
> this November, which brings me here. I currently live in Dallas, TX with my
> wife, 3 kids, and a dog, and when my face is not behind a monitor I plant it
> either behind a drum kit, on a hockey rink, or in the park with the family.
>
> I look forward to continuing to virtually work with many of you and hope
> to meet many at conferences and meetups.
>
> Thanks again!
>
> - Nick
>
> On Tue, Oct 20, 2015 at 11:50 AM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> I'm pleased to announce that Nick Knize has accepted the PMC's
>> invitation to become a committer.
>>
>> Nick, it's tradition that you introduce yourself with a brief bio /
>> origin story, explaining how you arrived here.
>>
>> Your handle "nknize" has already been added to the "lucene" LDAP group, so
>> you now have commit privileges.
>>
>> Please celebrate this rite of passage, and confirm that the right
>> karma has in fact been enabled, by embarking on the challenge of adding
>> yourself to the committers section of the Who We Are page on the
>> website: http://lucene.apache.org/whoweare.html (use the ASF CMS
>> bookmarklet
>> at the bottom of the page here: https://cms.apache.org/#bookmark -
>> more info here http://www.apache.org/dev/cms.html).
>>
>> Congratulations and welcome!
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>
>
>


Re: [JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b85) - Build # 14620 - Still Failing!

2015-10-22 Thread Michael McCandless
I'll dig.

Mike McCandless

http://blog.mikemccandless.com


On Thu, Oct 22, 2015 at 3:44 AM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14620/
> Java: 32bit/jdk1.9.0-ea-b85 -client -XX:+UseConcMarkSweepGC
>
> 1 tests failed.
> FAILED:  org.apache.lucene.bkdtree.TestBKDTree.testRandomMedium
>
> Error Message:
> refusing to delete any files: this IndexWriter hit an unrecoverable exception
>
> Stack Trace:
> org.apache.lucene.store.AlreadyClosedException: refusing to delete any files: 
> this IndexWriter hit an unrecoverable exception
> at 
> org.apache.lucene.index.IndexFileDeleter.ensureOpen(IndexFileDeleter.java:341)
> at 
> org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:725)
> at 
> org.apache.lucene.index.IndexFileDeleter.deletePendingFiles(IndexFileDeleter.java:514)
> at 
> org.apache.lucene.index.IndexFileDeleter.decRef(IndexFileDeleter.java:622)
> at 
> org.apache.lucene.index.IndexFileDeleter.checkpoint(IndexFileDeleter.java:564)
> at 
> org.apache.lucene.index.IndexWriter.checkpointNoSIS(IndexWriter.java:2315)
> at 
> org.apache.lucene.index.IndexWriter$ReaderPool.commit(IndexWriter.java:632)
> at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2795)
> at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2947)
> at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1066)
> at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1109)
> at 
> org.apache.lucene.util.BaseGeoPointTestCase.verify(BaseGeoPointTestCase.java:591)
> at 
> org.apache.lucene.util.BaseGeoPointTestCase.doTestRandom(BaseGeoPointTestCase.java:399)
> at 
> org.apache.lucene.util.BaseGeoPointTestCase.testRandomMedium(BaseGeoPointTestCase.java:322)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:520)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> 

[jira] [Commented] (SOLR-8180) Missing commons-logging dependency in solrj-lib for SolrJ

2015-10-22 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969190#comment-14969190
 ] 

David Smiley commented on SOLR-8180:


I agree with you, Kevin.  Throughout Solr's releases, my SolrJ-using search apps 
have had to monkey with their logging dependencies, either to supply missing log 
dependencies or to correct erroneous ones.  +1 to jcl-over-slf4j.  
What do you think [~steve_rowe]?

> Missing commons-logging dependency in solrj-lib for SolrJ
> -
>
> Key: SOLR-8180
> URL: https://issues.apache.org/jira/browse/SOLR-8180
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8180.patch
>
>
> When using DBVisualizer, SquirrelSQL, or Java JDBC with the Solr JDBC driver, 
> an additional dependency on commons-logging must be added otherwise the 
> following exception occurs:
> {code}
> org.apache.solr.common.SolrException: Unable to create HttpClient instance. 
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:393)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:124)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.(CloudSolrClient.java:196)
>   at 
> org.apache.solr.client.solrj.io.SolrClientCache.getCloudSolrClient(SolrClientCache.java:47)
>   at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.(ConnectionImpl.java:51)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:108)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.onseven.dbvis.h.B.D.ᅣチ(Z:1548)
>   at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:391)
>   ... 16 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/commons/logging/LogFactory
>   at 
> org.apache.http.impl.client.CloseableHttpClient.(CloseableHttpClient.java:58)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.(AbstractHttpClient.java:287)
>   at 
> org.apache.http.impl.client.DefaultHttpClient.(DefaultHttpClient.java:128)
>   at 
> org.apache.http.impl.client.SystemDefaultHttpClient.(SystemDefaultHttpClient.java:116)
>   ... 21 more
> {code} 






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 829 - Still Failing

2015-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/829/

2 tests failed.
FAILED:  org.apache.lucene.bkdtree.TestBKDTree.testRandomBig

Error Message:
background merge hit exception: _3(6.0.0):C262144/353:delGen=1 
_2(6.0.0):C262144/499:delGen=1 _1(6.0.0):C262144/812:delGen=1 
_0(6.0.0):C262144/2598:delGen=1 _4(6.0.0):C145234/109:delGen=1 into _5 
[maxNumSegments=1]

Stack Trace:
java.io.IOException: background merge hit exception: 
_3(6.0.0):C262144/353:delGen=1 _2(6.0.0):C262144/499:delGen=1 
_1(6.0.0):C262144/812:delGen=1 _0(6.0.0):C262144/2598:delGen=1 
_4(6.0.0):C145234/109:delGen=1 into _5 [maxNumSegments=1]
at 
__randomizedtesting.SeedInfo.seed([9866FC3266415119:1F3181BDF7182D99]:0)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1765)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1705)
at 
org.apache.lucene.util.BaseGeoPointTestCase.verify(BaseGeoPointTestCase.java:588)
at 
org.apache.lucene.util.BaseGeoPointTestCase.doTestRandom(BaseGeoPointTestCase.java:399)
at 
org.apache.lucene.util.BaseGeoPointTestCase.testRandomBig(BaseGeoPointTestCase.java:327)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: cannot delete file: _51874724033.sort, a virus 
scanner has it open
   

Re: [JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 829 - Still Failing

2015-10-22 Thread Michael McCandless
Already fixed by my prior commit...

Mike McCandless

http://blog.mikemccandless.com


On Thu, Oct 22, 2015 at 10:03 AM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/829/
>
> 2 tests failed.
> FAILED:  org.apache.lucene.bkdtree.TestBKDTree.testRandomBig
>
> Error Message:
> background merge hit exception: _3(6.0.0):C262144/353:delGen=1 
> _2(6.0.0):C262144/499:delGen=1 _1(6.0.0):C262144/812:delGen=1 
> _0(6.0.0):C262144/2598:delGen=1 _4(6.0.0):C145234/109:delGen=1 into _5 
> [maxNumSegments=1]
>
> Stack Trace:
> java.io.IOException: background merge hit exception: 
> _3(6.0.0):C262144/353:delGen=1 _2(6.0.0):C262144/499:delGen=1 
> _1(6.0.0):C262144/812:delGen=1 _0(6.0.0):C262144/2598:delGen=1 
> _4(6.0.0):C145234/109:delGen=1 into _5 [maxNumSegments=1]
> at 
> __randomizedtesting.SeedInfo.seed([9866FC3266415119:1F3181BDF7182D99]:0)
> at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1765)
> at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1705)
> at 
> org.apache.lucene.util.BaseGeoPointTestCase.verify(BaseGeoPointTestCase.java:588)
> at 
> org.apache.lucene.util.BaseGeoPointTestCase.doTestRandom(BaseGeoPointTestCase.java:399)
> at 
> org.apache.lucene.util.BaseGeoPointTestCase.testRandomBig(BaseGeoPointTestCase.java:327)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 

[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969223#comment-14969223
 ] 

Shalin Shekhar Mangar commented on SOLR-6273:
-

In that case, this can easily lead to lost updates. We should add a test which 
does constant indexing, triggers a recovery on a replica, and asserts that all 
replicas are consistent at steady state. 

> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk-testfix1.patch, 
> SOLR-6273-trunk-testfix2.patch, SOLR-6273-trunk-testfix3.patch, 
> SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, SOLR-6273.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://heliosearch.org/solr-cross-data-center-replication/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Nick Knize as Lucene/Solr committer

2015-10-22 Thread Erik Hatcher
Welcome, Nick!   

We were so close last week while many of us were at the Lucene Revolution in 
Austin, TX.  You got mentioned and big kudos during David Smiley’s geo talk - 
it’s great to have you aboard.

Erik




> On Oct 20, 2015, at 1:40 PM, Nicholas Knize  wrote:
> 
> Thanks for the honor to join such a talented group!
> 
> Brief Bio:  I started as a Meteorology major in undergrad. After the 
> Meteorology program bored me with colored pencils and paper maps (but rocking 
> a FORTRAN class) I switched to Computer Science. I received a CS Bachelor's 
> and Master's focused on Computer Vision. Interestingly I was first exposed to 
> Lucene/Solr in 2006 while working on a proprietary Remote Geospatial Imaging 
> System. After a "fun" detour through the land of Oracle Spatial integration 
> and MongoDB and Accumulo core development I finished a PhD in GIS focused on 
> high-dimensional spatial indexing. With sights set on working with the open 
> source community I joined Elasticsearch 1 year ago this November which brings 
> me here. I currently live in Dallas, TX with my Wife, 3 Kids, and a Dog and 
> when my face is not behind a monitor I plant it either behind a drum kit, on 
> a hockey rink, or in the park with the family.
> 
> I look forward to continuing to virtually work with many of you and hope to 
> meet many at conferences and meetups.
> 
> Thanks again!
> 
> - Nick 
> 
> On Tue, Oct 20, 2015 at 11:50 AM, Michael McCandless 
> > wrote:
> I'm pleased to announce that Nick Knize has accepted the PMC's
> invitation to become a committer.
> 
> Nick, it's tradition that you introduce yourself with a brief bio /
> origin story, explaining how you arrived here.
> 
> Your handle "nknize" has already added to the “lucene" LDAP group, so
> you now have commit privileges.
> 
> Please celebrate this rite of passage, and confirm that the right
> karma has in fact been enabled, by embarking on the challenge of adding
> yourself to the committers section of the Who We Are page on the
> website: http://lucene.apache.org/whoweare.html 
>  (use the ASF CMS
> bookmarklet
> at the bottom of the page here: https://cms.apache.org/#bookmark 
>  -
> more info here http://www.apache.org/dev/cms.html 
> ).
> 
> Congratulations and welcome!
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com 
> 



[jira] [Updated] (SOLR-8188) Add hash style joins to the Streaming API and Streaming Expressions

2015-10-22 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-8188:
--
Attachment: SOLR-8188.patch

Added a field separator to the hash calculation. This is to prevent a situation 
where two tuples have the same hashed value when they shouldn't.

t1.fieldA = "foo"
t1.fieldB = "bar"

t2.fieldA = "foob"
t2.fieldB = "ar"

With this change the hash will be different for t1 and t2.
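The collision described above can be demonstrated with a small, self-contained sketch (illustrative only, not the actual SOLR-8188 patch code): without a separator, concatenating field values makes the tuples ("foo", "bar") and ("foob", "ar") produce identical join keys.

```java
import java.util.Arrays;
import java.util.List;

class HashKeyDemo {
    // Naive key: plain concatenation of field values.
    static String naiveKey(List<String> values) {
        return String.join("", values);
    }

    // Safer key: join with a separator character that cannot
    // appear between field values, so boundaries are preserved.
    static String separatedKey(List<String> values) {
        return String.join("\u0000", values);
    }

    public static void main(String[] args) {
        List<String> t1 = Arrays.asList("foo", "bar");
        List<String> t2 = Arrays.asList("foob", "ar");

        // Without a separator the two distinct tuples collide...
        System.out.println(naiveKey(t1).equals(naiveKey(t2)));       // true
        // ...with a separator they no longer do.
        System.out.println(separatedKey(t1).equals(separatedKey(t2))); // false
    }
}
```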

> Add hash style joins to the Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8188
> URL: https://issues.apache.org/jira/browse/SOLR-8188
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8188.patch, SOLR-8188.patch
>
>
> Add HashJoinStream and OuterHashJoinStream to the Streaming API to allow for 
> optimized joining between sub-streams.
> HashJoinStream is similar to an InnerJoinStream except that it does not 
> insist on any particular order and will read all values from the stream being 
> hashed (hashStream) when open() is called. During read() it will return the 
> next tuple from the stream not being hashed (fullStream) which has at least 
> one matching record in hashStream. It will return a tuple which is the merge 
> of both tuples. If the tuple from the fullStream matches with more than one 
> tuple from the hashStream then calling read() will return the merge with the 
> next matching tuple. The order of the resulting stream is the order of the 
> fullStream.
> OuterHashJoinStream is similar to a HashJoinStream and LeftOuterJoinStream in 
> that a tuple from fullStream will be returned even if it doesn't have a 
> matching record in hashStream. All other pieces are identical.
> In expression form
> {code}
> hashJoin(
>   search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...),
>   hashed=search(collection2, q=*:*, fl="fieldA, fieldB, fieldE", ...),
>   on="fieldA, fieldB"
> )
> {code}
> {code}
> outerHashJoin(
>   search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...),
>   hashed=search(collection2, q=*:*, fl="fieldA, fieldB, fieldE", ...),
>   on="fieldA, fieldB"
> )
> {code}
> As you can see, the hashStream is a named parameter, which makes it very clear 
> which stream should be hashed.
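The hash-join behavior described above (read the whole hashed stream into a map at open(), then emit merged tuples in fullStream order) can be sketched in plain Java. This is an illustrative sketch with Maps standing in for Solr tuples, not the actual HashJoinStream code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class HashJoinSketch {
    // Join key with a separator so ("foo","bar") != ("foob","ar").
    static String key(Map<String, String> tuple, List<String> on) {
        StringBuilder sb = new StringBuilder();
        for (String field : on) sb.append(tuple.get(field)).append('\u0000');
        return sb.toString();
    }

    // Build a multimap from join key -> hashed tuples (done once, at open()).
    static Map<String, List<Map<String, String>>> buildIndex(
            List<Map<String, String>> hashStream, List<String> on) {
        Map<String, List<Map<String, String>>> index = new HashMap<>();
        for (Map<String, String> t : hashStream) {
            index.computeIfAbsent(key(t, on), k -> new ArrayList<>()).add(t);
        }
        return index;
    }

    // Inner hash join: emit one merged tuple per match, in fullStream order;
    // fullStream tuples with no match in the index are skipped.
    static List<Map<String, String>> hashJoin(
            List<Map<String, String>> fullStream,
            List<Map<String, String>> hashStream,
            List<String> on) {
        Map<String, List<Map<String, String>>> index = buildIndex(hashStream, on);
        List<Map<String, String>> out = new ArrayList<>();
        for (Map<String, String> full : fullStream) {
            for (Map<String, String> hashed :
                    index.getOrDefault(key(full, on), Collections.emptyList())) {
                Map<String, String> merged = new HashMap<>(hashed);
                merged.putAll(full); // fullStream values win on conflicts
                out.add(merged);
            }
        }
        return out;
    }
}
```

The outer variant would differ only in also emitting the unmerged fullStream tuple when the index lookup comes back empty.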



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 525 - Failure

2015-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/525/

1 tests failed.
FAILED:  org.apache.solr.cloud.SyncSliceTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([AE3D6A9E87B84F38:26695544294422C0]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SyncSliceTest.waitTillAllNodesActive(SyncSliceTest.java:239)
at org.apache.solr.cloud.SyncSliceTest.test(SyncSliceTest.java:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9569 lines...]
   

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14630 - Failure!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14630/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestDynamicLoading

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.core.TestDynamicLoading:   
  1) Thread[id=11582, name=qtp85907293-11582, state=RUNNABLE, 
group=TGRP-TestDynamicLoading] at 
java.util.WeakHashMap.get(WeakHashMap.java:403) at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102)
 at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
 at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) 
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)  
   at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
 at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)   
  at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)   
  at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)  
   at org.eclipse.jetty.server.Server.handle(Server.java:499) at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) 
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=11445, 
name=qtp85907293-11445, state=RUNNABLE, group=TGRP-TestDynamicLoading] 
at java.util.WeakHashMap.get(WeakHashMap.java:403) at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:102)
 at 
org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:224)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:452)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
 at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) 
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
 at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)  
   at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
 at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)   
  at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)   
  at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)  
   at org.eclipse.jetty.server.Server.handle(Server.java:499) at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at 

[jira] [Updated] (SOLR-8188) Add hash style joins to the Streaming API and Streaming Expressions

2015-10-22 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-8188:
--
Attachment: SOLR-8188.patch

All tests pass.

> Add hash style joins to the Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8188
> URL: https://issues.apache.org/jira/browse/SOLR-8188
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8188.patch
>
>
> Add HashJoinStream and OuterHashJoinStream to the Streaming API to allow for 
> optimized joining between sub-streams.
> HashJoinStream is similar to an InnerJoinStream except that it does not 
> insist on any particular order and will read all values from the stream being 
> hashed (hashStream) when open() is called. During read() it will return the 
> next tuple from the stream not being hashed (fullStream) which has at least 
> one matching record in hashStream. It will return a tuple which is the merge 
> of both tuples. If the tuple from the fullStream matches with more than one 
> tuple from the hashStream then calling read() will return the merge with the 
> next matching tuple. The order of the resulting stream is the order of the 
> fullStream.
> OuterHashJoinStream is similar to a HashJoinStream and LeftOuterJoinStream in 
> that a tuple from fullStream will be returned even if it doesn't have a 
> matching record in hashStream. All other pieces are identical.
> In expression form
> {code}
> hashJoin(
>   search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...),
>   hashed=search(collection2, q=*:*, fl="fieldA, fieldB, fieldE", ...),
>   on="fieldA, fieldB"
> )
> {code}
> {code}
> outerHashJoin(
>   search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...),
>   hashed=search(collection2, q=*:*, fl="fieldA, fieldB, fieldE", ...),
>   on="fieldA, fieldB"
> )
> {code}
> As you can see, the hashStream is a named parameter, which makes it very clear 
> which stream should be hashed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8188) Add hash style joins to the Streaming API and Streaming Expressions

2015-10-22 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14970311#comment-14970311
 ] 

Dennis Gove edited comment on SOLR-8188 at 10/23/15 2:43 AM:
-

Added a field separator to the hash calculation. This is to prevent a situation 
where two tuples have the same hashed value where they shouldn't.

t1.fieldA = "foo"
t1.fieldB = "bar"

t2.fieldA = "foob"
t2.fieldB = "ar"

With this change the hash will be different for t1 and t2.


was (Author: dpgove):
Added a field seperator to the hash calculation. This is to prevent a situation 
where two tuples have the same hashed value where they shoudn't.

t1.fieldA = "foo"
t1.fieldB = "bar"

t2.fieldA = "foob"
t2.fieldB = "ar"

With this change the hash will be different for t1 and t2.

> Add hash style joins to the Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8188
> URL: https://issues.apache.org/jira/browse/SOLR-8188
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8188.patch, SOLR-8188.patch
>
>
> Add HashJoinStream and OuterHashJoinStream to the Streaming API to allow for 
> optimized joining between sub-streams.
> HashJoinStream is similar to an InnerJoinStream except that it does not 
> insist on any particular order and will read all values from the stream being 
> hashed (hashStream) when open() is called. During read() it will return the 
> next tuple from the stream not being hashed (fullStream) which has at least 
> one matching record in hashStream. It will return a tuple which is the merge 
> of both tuples. If the tuple from the fullStream matches with more than one 
> tuple from the hashStream then calling read() will return the merge with the 
> next matching tuple. The order of the resulting stream is the order of the 
> fullStream.
> OuterHashJoinStream is similar to a HashJoinStream and LeftOuterJoinStream in 
> that a tuple from fullStream will be returned even if it doesn't have a 
> matching record in hashStream. All other pieces are identical.
> In expression form
> {code}
> hashJoin(
>   search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...),
>   hashed=search(collection2, q=*:*, fl="fieldA, fieldB, fieldE", ...),
>   on="fieldA, fieldB"
> )
> {code}
> {code}
> outerHashJoin(
>   search(collection1, q=*:*, fl="fieldA, fieldB, fieldC", ...),
>   hashed=search(collection2, q=*:*, fl="fieldA, fieldB, fieldE", ...),
>   on="fieldA, fieldB"
> )
> {code}
> As you can see, the hashStream is a named parameter, which makes it very clear 
> which stream should be hashed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14631 - Still Failing!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14631/
Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([BEAACA570ADFB58C:245EB7B5944529B0]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:766)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529=standard=0=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:759)
... 40 more




Build Log:
[...truncated 9204 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   

[jira] [Updated] (SOLR-7569) Create an API to force a leader election between nodes

2015-10-22 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7569:
---
Attachment: SOLR-7569.patch

Thanks Shalin for looking into the patch and your review.

bq. ForceLeaderTest.testReplicasInLIRNoLeader has a 5 second sleep, why? Isn't 
waitForRecoveriesToFinish() enough?
Fixed. This was a leftover from a previous patch. I think I intended to use 
waitForRecoveriesToFinish() but forgot to remove the 5 second sleep.

bq. Similarly, ForceLeaderTest.testLeaderDown has a 15 second sleep for steady 
state to be reached? What is this steady state, is there a better way than 
waiting for an arbitrary amount of time? In general, Thread.sleep should be 
avoided as much as possible as a way to reach steady state.
In this case, waiting those 15 seconds results in one of the down replicas 
becoming the leader (while staying down). This is the situation I'm using 
FORCELEADER to recover from. Instead of a flat 15 second wait, I've added 
polling that wakes up earlier when the condition is met, while increasing the 
timeout from 15s to 25s.
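The poll-with-timeout pattern mentioned above (check a condition periodically and return early instead of sleeping a fixed 15 seconds) can be sketched as follows; the names are illustrative, not the actual ForceLeaderTest code.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

class PollUntil {
    // Poll `condition` every `intervalMs` milliseconds until it holds or
    // `timeoutMs` elapses; returns early as soon as the condition is met.
    static boolean waitFor(BooleanSupplier condition, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) return true; // woke up early
            Thread.sleep(intervalMs);
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }
}
```

A test would call something like `waitFor(() -> clusterHasStableLeader(), 25000, 250)` and fail fast with a clear assertion message if it returns false.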


bq. Can you please add some javadocs on the various test methods describing the 
scenario that they test?
Sure, added.

bq. minor nit - can you use assertEquals when testing equality of state etc 
instead of assertTrue. The advantage with assertEquals is that it logs the 
mismatched values in the exception messages.
Used assertEquals() now.

bq. In OverseerCollectionMessageHandler, lirPath can never be null. The lir 
path should probably be logged in debug rather than INFO.
Thanks for the pointer; I've removed the null check. I feel this should be INFO 
rather than DEBUG, so that if a user reports that FORCELEADER didn't work, the 
logs would show whether there was any LIR state that got cleared out. But 
please feel free to remove it if this doesn't make sense.

bq. minor nit - you can compare enums directly using == instead of .equals
Fixed.

bq. Referring to the following, what is the thinking behind it? when can this 
happen? is there a test which specifically exercises this scenario? seems like 
this can interfere with the leader election if the leader election was taking 
some time? 

I modified the comment text to make it clearer. This is for the situation where 
all replicas are down/recovering (but not in LIR), perhaps due to a bug, and 
there is no leader even though many replicas' nodes are live; I don't know 
whether this ever happens (the LIR case does happen, I know). The 
testAllReplicasDownNoLeader test exercises this scenario. This is more or less 
the scenario you described (with the difference that there is no leader 
either): {{Leader is not live: Replicas are live but 'down' or 'recovering' -> 
mark them 'active'}}.

As you point out, it can indeed interfere with an ongoing leader election; my 
thinking was that FORCELEADER is only issued because the election isn't 
producing a stable leader, so force-marking the queue-head replica as leader is 
acceptable. But I defer to your judgement on whether this is fine, and I can 
remove (or feel free to remove) that code path from the patch if you think it 
is not right.

> Create an API to force a leader election between nodes
> --
>
> Key: SOLR-7569
> URL: https://issues.apache.org/jira/browse/SOLR-7569
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>  Labels: difficulty-medium, impact-high
> Attachments: SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, 
> SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, 
> SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, SOLR-7569.patch, 
> SOLR-7569_lir_down_state_test.patch
>
>
> There are many reasons why Solr will not elect a leader for a shard, e.g. all 
> replicas' last published state was recovery, or bugs which cause a leader to 
> be marked as 'down'. While the best solution is that shards never get into 
> this state, we need a manual way to fix it when they do. Right now the fix is 
> a dance involving bouncing nodes (since the recovery paths for bouncing and 
> REQUESTRECOVERY are different), which is difficult when running a large 
> cluster. Such a manual API may lead to some data loss, but in some cases it 
> is the only option to restore availability.
> This issue proposes to build a new collection API which can be used to force 
> a shard's replicas to elect a leader, while avoiding data loss on a 
> best-effort basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (SOLR-8113) Accept replacement strings in CloneFieldUpdateProcessorFactory

2015-10-22 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969112#comment-14969112
 ] 

Gus Heck commented on SOLR-8113:


Any thoughts on my latest patch, [~hossman]? Others? Comments welcome. 

> Accept replacement strings in CloneFieldUpdateProcessorFactory
> --
>
> Key: SOLR-8113
> URL: https://issues.apache.org/jira/browse/SOLR-8113
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 5.3
>Reporter: Gus Heck
> Attachments: SOLR-8113.patch, SOLR-8113.patch
>
>
> Presently CloneFieldUpdateProcessorFactory accepts regular expressions to 
> select source fields, which mirrors wildcards in the source for copyField in 
> the schema. This patch adds a counterpart to copyField's wildcards in the 
> dest attribute by interpreting the dest parameter as a regex replacement 
> string.
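A configuration sketch of what the description above suggests: the source regex selects fields, and capture groups feed a replacement string in dest. The parameter syntax and field names here are assumptions for illustration, not taken from the attached patch.

```xml
<!-- Hypothetical update-chain config: clone every field matching the
     source regex into a destination derived by regex replacement.
     Exact syntax is an assumption based on the issue description. -->
<processor class="solr.CloneFieldUpdateProcessorFactory">
  <str name="source">text_(.*)</str>
  <str name="dest">copy_of_$1</str>
</processor>
```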




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6850) BooleanWeight should not use BS1 when there is a single non-null clause

2015-10-22 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969098#comment-14969098
 ] 

Adrien Grand commented on LUCENE-6850:
--

bq. Another possible optimization here is to check the number of docs in the 
segment, and if it's below a certain size then don't use the bulk scorer.

I like the idea and I was just going to try it out but I'm a bit concerned we 
would lose significant test coverage of BS1. So I'd rather experiment with this 
idea in a different issue where we also make sure to keep good coverage for BS1.

> BooleanWeight should not use BS1 when there is a single non-null clause
> ---
>
> Key: LUCENE-6850
> URL: https://issues.apache.org/jira/browse/LUCENE-6850
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6850.patch, LUCENE-6850.patch
>
>
> When a disjunction has a single non-null scorer, we still use BS1 for 
> bulk-scoring, which first collects matches into a bit set and then calls the 
> collector. This is inefficient: we should just call the inner bulk scorer 
> directly and wrap the scorer to apply the coord factor (like 
> BooleanTopLevelScorers.BoostedScorer does).






[jira] [Commented] (SOLR-8173) CLONE - Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover as well as lose updates that should have been reco

2015-10-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969121#comment-14969121
 ] 

Mark Miller commented on SOLR-8173:
---

Are you doing this test with version 5.2.1?

> CLONE - Leader recovery process can select the wrong leader if all replicas 
> for a shard are down and trying to recover as well as lose updates that 
> should have been recovered.
> ---
>
> Key: SOLR-8173
> URL: https://issues.apache.org/jira/browse/SOLR-8173
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Matteo Grolla
>Assignee: Mark Miller
>Priority: Critical
>  Labels: leader, recovery
> Fix For: 5.2.1
>
> Attachments: solr_8983.log, solr_8984.log
>
>
> I'm doing this test:
> collection test is replicated on two Solr nodes running on 8983 and 8984, 
> using external ZK
> 1) turn off solr 8984
> 2) add and commit a doc x on solr 8983
> 3) turn off solr 8983
> 4) turn on solr 8984
> 5) shortly after (leader still not elected), turn on solr 8983
> 6) 8984 is elected as leader
> 7) doc x is present on 8983 but not on 8984 (check by issuing a query)
> Attached are the solr.log files of both instances






[jira] [Commented] (SOLR-8180) Missing commons-logging dependency in solrj-lib for SolrJ

2015-10-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969134#comment-14969134
 ] 

Kevin Risden commented on SOLR-8180:


I understand that configuring logging can be specific to one's application. 
However, using the JDBC part of SolrJ out of the box with something like 
DBVisualizer/SquirrelSQL shouldn't require hunting down more logging jars just 
to get started.

It looks like commons-logging can be replaced with jcl-over-slf4j, so that 
everything out of the box goes through slf4j. Currently with SolrJ, part of 
the logging goes through commons-logging (from httpclient) and the rest 
through slf4j.

Would it make sense to include jcl-over-slf4j by default in solrj-lib instead 
of commons-logging? That would match more closely what the Solr server does 
out of the box.
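For users assembling their own classpath with Maven, the substitution suggested above would look roughly like this (artifact coordinates are the real ones for httpclient and jcl-over-slf4j; the versions are placeholders, not taken from this issue):

```xml
<!-- Exclude commons-logging pulled in transitively by httpclient... -->
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.4.1</version>
  <exclusions>
    <exclusion>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- ...and route JCL calls through slf4j instead -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
  <version>1.7.7</version>
</dependency>
```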

> Missing commons-logging dependency in solrj-lib for SolrJ
> -
>
> Key: SOLR-8180
> URL: https://issues.apache.org/jira/browse/SOLR-8180
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Kevin Risden
> Attachments: SOLR-8180.patch
>
>
> When using DBVisualizer, SquirrelSQL, or Java JDBC with the Solr JDBC driver, 
> an additional dependency on commons-logging must be added otherwise the 
> following exception occurs:
> {code}
> org.apache.solr.common.SolrException: Unable to create HttpClient instance. 
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:393)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:124)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.(CloudSolrClient.java:196)
>   at 
> org.apache.solr.client.solrj.io.SolrClientCache.getCloudSolrClient(SolrClientCache.java:47)
>   at 
> org.apache.solr.client.solrj.io.sql.ConnectionImpl.(ConnectionImpl.java:51)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:108)
>   at 
> org.apache.solr.client.solrj.io.sql.DriverImpl.connect(DriverImpl.java:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.onseven.dbvis.h.B.D.ᅣチ(Z:1548)
>   at com.onseven.dbvis.h.B.F$A.call(Z:1369)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil$HttpClientFactory.createHttpClient(HttpClientUtil.java:391)
>   ... 16 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/commons/logging/LogFactory
>   at 
> org.apache.http.impl.client.CloseableHttpClient.(CloseableHttpClient.java:58)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.(AbstractHttpClient.java:287)
>   at 
> org.apache.http.impl.client.DefaultHttpClient.(DefaultHttpClient.java:128)
>   at 
> org.apache.http.impl.client.SystemDefaultHttpClient.(SystemDefaultHttpClient.java:116)
>   ... 21 more
> {code} 






[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969188#comment-14969188
 ] 

Renaud Delbru commented on SOLR-6273:
-

This is the first time I've seen this issue.
How did you perform the reload? Did you delete the source collection before 
the reload, or just reload and overwrite the existing documents?

> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk-testfix1.patch, 
> SOLR-6273-trunk-testfix2.patch, SOLR-6273-trunk-testfix3.patch, 
> SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, SOLR-6273.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://heliosearch.org/solr-cross-data-center-replication/






[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969182#comment-14969182
 ] 

Renaud Delbru commented on SOLR-6273:
-

That's a good point, and I think the current implementation might interfere 
with the replay of the buffered updates. The current tlog replication works as 
follows:
1) fetch the tlog files from the master
2) reset the update log before switching the tlog directory
3) switch the tlog directory and re-initialise the update log with the new 
directory.
Currently there is no logic to preserve buffered updates while resetting and 
re-initialising the update log. It looks like the tlog replication still needs 
some work.

> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk-testfix1.patch, 
> SOLR-6273-trunk-testfix2.patch, SOLR-6273-trunk-testfix3.patch, 
> SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, SOLR-6273.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://heliosearch.org/solr-cross-data-center-replication/






[jira] [Created] (LUCENE-6851) Allow using other implementations of Format with NumericConfig

2015-10-22 Thread Trejkaz (JIRA)
Trejkaz created LUCENE-6851:
---

 Summary: Allow using other implementations of Format with 
NumericConfig
 Key: LUCENE-6851
 URL: https://issues.apache.org/jira/browse/LUCENE-6851
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/queryparser
Affects Versions: 5.2.1
Reporter: Trejkaz


NumericConfig forces me to pass in a java.text.NumberFormat.

Our own internationalisation guidelines say to avoid java.text.NumberFormat and 
always use com.ibm.icu.text.NumberFormat instead, because the latter does the 
right thing for Arabic.

I wish NumericConfig would allow me to pass in any Format, so that I didn't 
have to write the horrible adapter class which I am about to write. :)







[jira] [Assigned] (SOLR-8074) LoadAdminUIServlet directly references admin.html

2015-10-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-8074:
-

Assignee: Mark Miller

> LoadAdminUIServlet directly references admin.html
> -
>
> Key: SOLR-8074
> URL: https://issues.apache.org/jira/browse/SOLR-8074
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Upayavira
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.4
>
>
> The LoadAdminUIServlet class loads up, and serves back, "admin.html", meaning 
> it cannot be used in its current state to serve up the new admin UI.
> An update is needed to this class to make it serve back whatever html file 
> was requested in the URL. There will, likely, only ever be two of them 
> mentioned in web.xml, but it would be really useful for changes to web.xml 
> not to require Java code changes also.
> I'm hoping that someone with an up-and-running Java coding setup can make 
> this pretty trivial tweak. Any volunteers?






[jira] [Updated] (SOLR-8074) LoadAdminUIServlet directly references admin.html

2015-10-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8074:
--
Attachment: SOLR-8074.patch

Simple patch: it just uses the resource name from the URL.

> LoadAdminUIServlet directly references admin.html
> -
>
> Key: SOLR-8074
> URL: https://issues.apache.org/jira/browse/SOLR-8074
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Upayavira
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.4
>
> Attachments: SOLR-8074.patch
>
>
> The LoadAdminUIServlet class loads up, and serves back, "admin.html", meaning 
> it cannot be used in its current state to serve up the new admin UI.
> An update is needed to this class to make it serve back whatever html file 
> was requested in the URL. There will, likely, only ever be two of them 
> mentioned in web.xml, but it would be really useful for changes to web.xml 
> not to require Java code changes also.
> I'm hoping that someone with an up-and-running Java coding setup can make 
> this pretty trivial tweak. Any volunteers?






[jira] [Created] (SOLR-8187) Show how much Physical RAM is available for application use on the admin UI.

2015-10-22 Thread Mark Miller (JIRA)
Mark Miller created SOLR-8187:
-

 Summary: Show how much Physical RAM is available for application 
use on the admin UI.
 Key: SOLR-8187
 URL: https://issues.apache.org/jira/browse/SOLR-8187
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Mark Miller
Priority: Minor


The admin UI is a bit misleading when showing used physical memory, as it 
includes things like the OS filesystem cache. This is nice to see, but it's 
confusing if you cannot also see how much physical RAM is actually available 
(at the expense of the fs cache).

I've only checked this on Linux, not Windows.

We should be able to surface this in the system stats.
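One hedged sketch of how such a number could be surfaced from the JVM. The com.sun.management MXBean used below is JDK-specific, and it reports "free" rather than "available" memory; on Linux the latter would also count reclaimable page cache, which is exactly the distinction this issue is about:

```java
import java.lang.management.ManagementFactory;

public class PhysicalRam {
    /**
     * Returns free physical memory in bytes, or -1 if the platform bean is
     * not the com.sun.management variant (a JDK-specific assumption).
     */
    public static long freePhysicalBytes() {
        java.lang.management.OperatingSystemMXBean os =
                ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.OperatingSystemMXBean) {
            return ((com.sun.management.OperatingSystemMXBean) os)
                    .getFreePhysicalMemorySize();
        }
        return -1;
    }
}
```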






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 993 - Still Failing

2015-10-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/993/

1 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR

Error Message:
Captured an uncaught exception in thread: Thread[id=43491, 
name=coreZkRegister-5997-thread-1, state=RUNNABLE, 
group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=43491, name=coreZkRegister-5997-thread-1, 
state=RUNNABLE, group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([7F78F76DDF75FAD1]:0)
at 
org.apache.solr.cloud.ZkController.updateLeaderInitiatedRecoveryState(ZkController.java:2133)
at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:434)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:197)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:157)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:346)
at 
org.apache.solr.cloud.ZkController.joinElection(ZkController.java:1113)
at org.apache.solr.cloud.ZkController.register(ZkController.java:926)
at org.apache.solr.cloud.ZkController.register(ZkController.java:881)
at org.apache.solr.core.ZkContainer$2.run(ZkContainer.java:183)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:231)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11227 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J2/temp/solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest_7F78F76DDF75FAD1-001/init-core-data-001
   [junit4]   2> 2804605 INFO  
(SUITE-LeaderInitiatedRecoveryOnShardRestartTest-seed#[7F78F76DDF75FAD1]-worker)
 [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system 
property: /
   [junit4]   2> 2804608 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
 [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2804609 INFO  (Thread-33860) [] o.a.s.c.ZkTestServer 
client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2804609 INFO  (Thread-33860) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2804709 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
 [] o.a.s.c.ZkTestServer start zk server on port:52462
   [junit4]   2> 2804709 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
 [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2804709 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
 [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2804712 INFO  (zkCallback-2280-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@7a70d2b6 
name:ZooKeeperConnection Watcher:127.0.0.1:52462 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2804712 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
 [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 2804712 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
 [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 2804712 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
 [] o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 2804715 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
 [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2804715 INFO  
(TEST-LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR-seed#[7F78F76DDF75FAD1])
 [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2804716 INFO  (zkCallback-2281-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@5128e657 
name:ZooKeeperConnection Watcher:127.0.0.1:52462/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2804716 INFO  

[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b85) - Build # 14620 - Still Failing!

2015-10-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14620/
Java: 32bit/jdk1.9.0-ea-b85 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.bkdtree.TestBKDTree.testRandomMedium

Error Message:
refusing to delete any files: this IndexWriter hit an unrecoverable exception

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: refusing to delete any files: 
this IndexWriter hit an unrecoverable exception
at 
org.apache.lucene.index.IndexFileDeleter.ensureOpen(IndexFileDeleter.java:341)
at 
org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:725)
at 
org.apache.lucene.index.IndexFileDeleter.deletePendingFiles(IndexFileDeleter.java:514)
at 
org.apache.lucene.index.IndexFileDeleter.decRef(IndexFileDeleter.java:622)
at 
org.apache.lucene.index.IndexFileDeleter.checkpoint(IndexFileDeleter.java:564)
at 
org.apache.lucene.index.IndexWriter.checkpointNoSIS(IndexWriter.java:2315)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.commit(IndexWriter.java:632)
at 
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2795)
at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2947)
at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1066)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1109)
at 
org.apache.lucene.util.BaseGeoPointTestCase.verify(BaseGeoPointTestCase.java:591)
at 
org.apache.lucene.util.BaseGeoPointTestCase.doTestRandom(BaseGeoPointTestCase.java:399)
at 
org.apache.lucene.util.BaseGeoPointTestCase.testRandomMedium(BaseGeoPointTestCase.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (SOLR-7843) Importing Deltas creates a memory leak

2015-10-22 Thread Joseph Lawson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969374#comment-14969374
 ] 

Joseph Lawson commented on SOLR-7843:
-

Why is this not a problem?

> Importing Deltas creates a memory leak
> -
>
> Key: SOLR-7843
> URL: https://issues.apache.org/jira/browse/SOLR-7843
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 5.2.1
>Reporter: Pablo Lozano
>  Labels: memory-leak
>
> org.apache.solr.handler.dataimport.SolrWriter does not correctly clean up 
> after finishing a delta import: the deltaKeys set is not cleared after the 
> process has finished. 
> When using a custom importer or DataSource, in my case I need to add 
> additional parameters to the delta keys.
> When the data import finishes, deltaKeys is not set back to null, and the 
> DataImporter, DocBuilder, and SolrWriter are kept alive as live objects 
> because they are referenced by the "infoRegistry" of the SolrCore, which 
> seems to be used for JMX information.
> Starting a second delta import does not free the memory, which in the long 
> run may cause an OutOfMemoryError; I have not checked whether starting a 
> full import would break the references and free the memory.
> An easy fix is possible: set "deltaKeys = null;" in SolrWriter's close 
> method, or nullify the writer in DocBuilder after it is used in execute().






[jira] [Commented] (SOLR-8074) LoadAdminUIServlet directly references admin.html

2015-10-22 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969399#comment-14969399
 ] 

Upayavira commented on SOLR-8074:
-

Perfect. All I was asking for. Thx!

> LoadAdminUIServlet directly references admin.html
> -
>
> Key: SOLR-8074
> URL: https://issues.apache.org/jira/browse/SOLR-8074
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Reporter: Upayavira
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.4
>
> Attachments: SOLR-8074.patch
>
>
> The LoadAdminUIServlet class loads up, and serves back, "admin.html", meaning 
> it cannot be used in its current state to serve up the new admin UI.
> An update is needed to this class to make it serve back whatever html file 
> was requested in the URL. There will, likely, only ever be two of them 
> mentioned in web.xml, but it would be really useful for changes to web.xml 
> not to require Java code changes also.
> I'm hoping that someone with an up-and-running Java coding setup can make 
> this pretty trivial tweak. Any volunteers?






[jira] [Commented] (SOLR-6273) Cross Data Center Replication

2015-10-22 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969445#comment-14969445
 ] 

Erick Erickson commented on SOLR-6273:
--

[~shalinmangar] Renaud and I have been trying to figure out what in the test 
framework is giving us trouble getting the existing tests to pass. We (well, 
mostly Renaud) have reworked some of the tests but are still having problems. 
I have several changes on my local machine that help isolate the problems but 
don't fix them. Some recent changes have also caused a 100% failure case, so 
I'm not going to commit anything yet. If you (or anyone else) want to play 
with the changes, I can attach a patch that applies to trunk.

We're getting an NPE that wasn't there before, and I won't have time until 
this weekend at best to look any more deeply.

Let me know if you'd like to see the current patch; I've been waiting until I 
could get a better idea of what is wonky in the current tests before checking 
anything in.


> Cross Data Center Replication
> -
>
> Key: SOLR-6273
> URL: https://issues.apache.org/jira/browse/SOLR-6273
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Erick Erickson
> Attachments: SOLR-6273-trunk-testfix1.patch, 
> SOLR-6273-trunk-testfix2.patch, SOLR-6273-trunk-testfix3.patch, 
> SOLR-6273-trunk.patch, SOLR-6273-trunk.patch, SOLR-6273.patch, 
> SOLR-6273.patch, SOLR-6273.patch, SOLR-6273.patch
>
>
> This is the master issue for Cross Data Center Replication (CDCR)
> described at a high level here: 
> http://heliosearch.org/solr-cross-data-center-replication/






[jira] [Commented] (SOLR-8187) Show how much Physical RAM is available for application use on the admin UI.

2015-10-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14969449#comment-14969449
 ] 

Shawn Heisey commented on SOLR-8187:


See SOLR-3969 ... up to you which one to mark duplicate.

> Show how much Physical RAM is available for application use on the admin UI.
> 
>
> Key: SOLR-8187
> URL: https://issues.apache.org/jira/browse/SOLR-8187
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Mark Miller
>Priority: Minor
>
> The admin UI is a bit misleading when showing used physical memory as it 
> includes things like the OS filesystem cache. This is nice to see, but it's 
> just confusing if you cannot also see how much physical RAM is actually 
> available (at the expense of the fs cache).
> I've only checked this on linux and not windows.
> We should be able to surface this in the system stats.






[jira] [Closed] (SOLR-8187) Show how much Physical RAM is available for application use on the admin UI.

2015-10-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller closed SOLR-8187.
-
Resolution: Duplicate

> Show how much Physical RAM is available for application use on the admin UI.
> 
>
> Key: SOLR-8187
> URL: https://issues.apache.org/jira/browse/SOLR-8187
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Reporter: Mark Miller
>Priority: Minor
>
> The admin UI is a bit misleading when showing used physical memory as it 
> includes things like the OS filesystem cache. This is nice to see, but it's 
> just confusing if you cannot also see how much physical RAM is actually 
> available (at the expense of the fs cache).
> I've only checked this on linux and not windows.
> We should be able to surface this in the system stats.


