[ https://issues.apache.org/jira/browse/HBASE-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508240#comment-14508240 ]

Hadoop QA commented on HBASE-13532:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12727361/HBASE-13532.patch
  against master branch at commit afd7a8f4742ddbc575b45fc141a75551b55c56f5.
  ATTACHMENT ID: 12727361

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 hadoop versions{color}. The patch compiles with all supported hadoop versions (2.4.1 2.5.2 2.6.0).

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 protoc{color}.  The applied patch does not increase the total number of protoc compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 checkstyle{color}.  The applied patch does not increase the total number of checkstyle errors.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 lineLengths{color}.  The patch does not introduce lines longer than 100 characters.

    {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

    {color:green}+1 core tests{color}.  The patch passed unit tests.

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/13782//testReport/
Release Findbugs (version 2.0.3) warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/13782//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: https://builds.apache.org/job/PreCommit-HBASE-Build/13782//artifact/patchprocess/checkstyle-aggregate.html

  Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/13782//console

This message is automatically generated.

> Make UnknownScannerException logging less scary
> -----------------------------------------------
>
>                 Key: HBASE-13532
>                 URL: https://issues.apache.org/jira/browse/HBASE-13532
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Apekshit Sharma
>            Priority: Trivial
>         Attachments: HBASE-13532.patch
>
>
> A customer reported seeing client-side UnknownScannerExceptions after an 
> HBase upgrade/restart. Restarting an RS expires leases on the server side, 
> so there was no actual problem and everything was working as it should. This 
> issue reworks the exception so that it is logged more appropriately (an 
> illustrative sketch follows the stack trace below).
> {code}
> org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 10092964, already closed?
> at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3043)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
> at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:724)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:284)
> at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:287)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:153)
> at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:57)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
> at org.apache.hadoop.hbase.client.ClientScanner.close(ClientScanner.java:431)
> at com.squareup.moco.persistence.TransactionTable.scan(TransactionTable.java:207)
> at com.squareup.moco.persistence.TransactionTable$$EnhancerByGuice$$a12c1766.CGLIB$scan$9(<generated>)
> at com.squareup.moco.persistence.TransactionTable$$EnhancerByGuice$$a12c1766$$FastClassByGuice$$606c8773.invoke(<generated>)
> at com.google.inject.internal.cglib.proxy.$MethodProxy.invokeSuper(MethodProxy.java:228)
> at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:75)
> at com.squareup.common.metrics.TimedHistogramInterceptor.invoke(TimedHistogramInterceptor.java:29)
> at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:75)
> at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:55)
> at com.squareup.moco.persistence.TransactionTable$$EnhancerByGuice$$a12c1766.scan(<generated>)
> at com.squareup.moco.persistence.TransactionTable$1.run(TransactionTable.java:180)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Name: 10092964, already closed?
> at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3043)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
> at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:724)
> {code}
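
For context, a minimal sketch of what "more appropriate logging" could look like on the client-side close path. This is illustrative only, not the attached HBASE-13532.patch; the class, interface, and method names below are hypothetical stand-ins rather than actual HBase identifiers.

{code:java}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.UnknownScannerException;

// Hypothetical sketch: downgrade a benign UnknownScannerException hit while
// closing a scanner to a quiet DEBUG message instead of a scary stack trace.
class QuietScannerClose {
  private static final Log LOG = LogFactory.getLog(QuietScannerClose.class);

  /** Hypothetical stand-in for the RPC that closes a scanner on the region server. */
  interface ScannerRpc {
    void closeScanner(long scannerId) throws IOException;
  }

  /**
   * Close a scanner without alarming the caller when the server no longer
   * knows about it (e.g. the lease expired across a region server restart).
   */
  void close(ScannerRpc rpc, long scannerId) {
    try {
      rpc.closeScanner(scannerId);
    } catch (UnknownScannerException e) {
      // Already gone on the server side; closing is a no-op, so log quietly.
      LOG.debug("Scanner " + scannerId + " already closed on the server, likely "
          + "because its lease expired (e.g. after a region server restart)", e);
    } catch (IOException e) {
      LOG.warn("Failed to close scanner " + scannerId + ", ignoring", e);
    }
  }
}
{code}

The point of the sketch is simply that an UnknownScannerException raised while closing a scanner is benign once the lease has already expired, so it can be logged at DEBUG rather than surfaced as a full stack trace.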



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
