One of those filters can be removed -- the thread that was previously
calling System.exit can now be interrupted (it should fail with an
access-denied SecurityException when it calls System.exit :).
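A minimal sketch of the idea (hypothetical class name, not the actual
test-framework code): a security manager whose checkExit denies JVM
shutdown, so a leaked thread calling System.exit fails with a
SecurityException instead of killing the test JVM.

```java
// Hypothetical sketch: deny System.exit so a stray thread can't kill the JVM.
// Not the real test-framework filter; names here are illustrative only.
public class NoExitSecurityManager extends SecurityManager {
    @Override
    public void checkExit(int status) {
        // System.exit(status) consults this before halting the VM.
        throw new SecurityException("access denied: System.exit(" + status + ")");
    }

    // Permit everything else so the tests themselves run unaffected.
    @Override
    public void checkPermission(java.security.Permission perm) { }

    public static void main(String[] args) {
        NoExitSecurityManager sm = new NoExitSecurityManager();
        try {
            sm.checkExit(1); // what a System.exit(1) call would trigger
            throw new AssertionError("exit was not denied");
        } catch (SecurityException expected) {
            System.out.println("denied: " + expected.getMessage());
        }
    }
}
```

With a manager like this installed, the offending thread gets a
SecurityException it can handle (or die from) rather than taking the
whole JVM down mid-suite.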

D.

On Fri, Mar 20, 2015 at 9:29 PM, Mark Miller <[email protected]> wrote:
> I think most of the Solr tests don’t leak threads at this point. Whatever is 
> left should be very easy / minor to address. The HDFS tests do still have 
> thread leaks of underlying HDFS stuff. Eventually we should be on a version 
> that doesn’t do that, but at about open source speed :)
>
> - Mark
>
> http://about.me/markrmiller
>
>> On Mar 20, 2015, at 4:24 PM, Dawid Weiss <[email protected]> 
>> wrote:
>>
>> This is the same issue Hoss reported earlier. Thread leak detection
>> is largely ignored in Solr tests -- it should be fixed, obviously, but
>> I don't know what the scope of the changes would be if we removed the
>> offending threads from the filters.
>>
>> https://issues.apache.org/jira/browse/SOLR-7215
>>
>> Dawid
>>
>> On Fri, Mar 20, 2015 at 9:17 PM, Yonik Seeley <[email protected]> wrote:
>>> Just got a failure from a test that doesn't have any output at all....
>>>
>>>   <testcase classname="junit.framework.TestSuite"
>>> name="org.apache.solr.search.TestDocSet" time="0.0">
>>>
>>>      <failure message="The test or suite printed 10982 bytes to
>>> stdout and stderr, even though the limit was set to 8192 bytes.
>>> Increase the limit with @Limit, ignore it completely with
>>> @SuppressSysoutChecks or run with -Dtests.verbose=true"
>>> type="java.lang.AssertionError">java.lang.AssertionError: The test or
>>> suite printed 10982 bytes to stdout and stderr, even though the limit
>>> was set to 8192 bytes. Increase the limit with @Limit, ignore it
>>> completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
>>>
>>>        at __randomizedtesting.SeedInfo.seed([63638DD5324A94A2]:0)
>>>
>>>
>>>
>>>
>>> Looking at tests-report.txt though, perhaps it's just thread leaks
>>> from other tests?
>>>
>>>
>>>
>>> [15:51:04.358] OK      0.11s J1 | TestDocSet.testFilter
>>>
>>>  2> 1365229 T1109 oahh.LeaseRenewer.run WARN Failed to renew lease
>>> for [DFSClient_NONMAPREDUCE_-144622376_992] for 1130 seconds.  Will
>>> retry shortly ... java.net.ConnectException: Call From odin/127.0.1.1
>>> to localhost:33373 failed on connection exception:
>>> java.net.ConnectException: Connection refused; For more details see:
>>> http://wiki.apache.org/hadoop/ConnectionRefused
>>>
>>>  2>    at sun.reflect.GeneratedConstructorAccessor232.newInstance(Unknown Source)
>>>  2>    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>  2>    at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>>>  2>    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
>>>  2>    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
>>>  2>    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
>>>  2>    at org.apache.hadoop.ipc.Client.call(Client.java:1359)
>>>  2>    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>>>  2>    at com.sun.proxy.$Proxy42.renewLease(Unknown Source)
>>>  2>    at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
>>>  2>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>  2>    at java.lang.reflect.Method.invoke(Method.java:483)
>>>  2>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>>>  2>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>>  2>    at com.sun.proxy.$Proxy42.renewLease(Unknown Source)
>>>  2>    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:519)
>>>  2>    at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:773)
>>>  2>    at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
>>>  2>    at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
>>>  2>    at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
>>>  2>    at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
>>>  2>    at java.lang.Thread.run(Thread.java:745)
>>>  2> Caused by: java.net.ConnectException: Connection refused
>>>  2>    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>  2>    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
>>>  2>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>>  2>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>>>  2>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>>>  2>    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:601)
>>>  2>    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:696)
>>>  2>    at org.apache.hadoop.ipc.Client$Connection.access$2700(Client.java:367)
>>>  2>    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1458)
>>>  2>    at org.apache.hadoop.ipc.Client.call(Client.java:1377)
>>>  2>    ... 16 more
>>>
>>> -Yonik
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: [email protected]
>>> For additional commands, e-mail: [email protected]
>>>
>>
>
