[ https://issues.apache.org/jira/browse/HBASE-17170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15705089#comment-15705089 ]

Hadoop QA commented on HBASE-17170:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 1s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 37s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 0s {color} | {color:red} hbase-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s {color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 8s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 46s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-client |
|  |  org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.<static initializer for RemoteWithExtrasException>() creates a org.apache.hadoop.hbase.util.DynamicClassLoader classloader, which should be performed within a doPrivileged block  At RemoteWithExtrasException.java:a org.apache.hadoop.hbase.util.DynamicClassLoader classloader, which should be performed within a doPrivileged block  At RemoteWithExtrasException.java:[line 51] |
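The new FindBugs warning above is about creating a class loader outside of a doPrivileged block. A minimal sketch of the pattern FindBugs expects, assuming the patch keeps the DynamicClassLoader approach (the holder class, field name, and the bare new Configuration() below are illustrative, not the actual patch):
{code}
import java.security.AccessController;
import java.security.PrivilegedAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.util.DynamicClassLoader;

public class DynamicLoaderHolder {
  static final ClassLoader CLASS_LOADER;
  static {
    final ClassLoader parent = DynamicLoaderHolder.class.getClassLoader();
    // Creating the class loader inside doPrivileged keeps the creation legal when a
    // SecurityManager restricts the code that triggers class initialization.
    CLASS_LOADER = AccessController.doPrivileged(new PrivilegedAction<ClassLoader>() {
      @Override
      public ClassLoader run() {
        return new DynamicClassLoader(new Configuration(), parent);
      }
    });
  }
}
{code}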
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840846/HBASE-17170.master.001.patch |
| JIRA Issue | HBASE-17170 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3eb141ed1159 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh |
| git revision | master / 7bcbac9 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HBASE-Build/4679/artifact/patchprocess/new-findbugs-hbase-client.html |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4679/testReport/ |
| modules | C: hbase-client U: hbase-client |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4679/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> HBase is also retrying DoNotRetryIOException because of class loader 
> differences.
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-17170
>                 URL: https://issues.apache.org/jira/browse/HBASE-17170
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ankit Singhal
>            Assignee: Ankit Singhal
>         Attachments: HBASE-17170.master.001.patch
>
>
> The class loader used by the API exposed by Hadoop and the context class loader set up by RunJar (bin/hadoop jar phoenix-client.jar …. ) are different, so classes loaded from the jar are not visible to the class loader that the API uses (a minimal sketch after the log below illustrates the split).
> {code}
> 16/04/26 21:18:00 INFO client.RpcRetryingCaller: Call exception, tries=32, retries=35, started=491541 ms ago, cancelled=false, msg=
> 16/04/26 21:18:21 INFO client.RpcRetryingCaller: Call exception, tries=33, retries=35, started=511747 ms ago, cancelled=false, msg=
> 16/04/26 21:18:41 INFO client.RpcRetryingCaller: Call exception, tries=34, retries=35, started=531820 ms ago, cancelled=false, msg=
> Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=35, exceptions:
> Tue Apr 26 21:09:49 UTC 2016, RpcRetryingCaller{globalStartTime=1461704989282, pause=100, retries=35}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NamespaceExistException): org.apache.hadoop.hbase.NamespaceExistException: SYSTEM
> at org.apache.hadoop.hbase.master.TableNamespaceManager.create(TableNamespaceManager.java:156)
> at org.apache.hadoop.hbase.master.TableNamespaceManager.create(TableNamespaceManager.java:131)
> at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:2553)
> at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:447)
> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58043)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2115)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
> {code}
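> A minimal, self-contained sketch of that split (the jar path and class name below are made up for illustration, they are not from the Phoenix or HBase code): a class that only the thread's context class loader can see is invisible to a plain Class.forName() call, because that call resolves against the calling class's own loader.
> {code}
> import java.net.URL;
> import java.net.URLClassLoader;
>
> public class LoaderSplitDemo {
>   public static void main(String[] args) throws Exception {
>     // Assumption: /tmp/extra.jar contains com.example.OnlyInJar and is not on the classpath.
>     URLClassLoader jarLoader = new URLClassLoader(
>         new URL[] { new URL("file:/tmp/extra.jar") },
>         LoaderSplitDemo.class.getClassLoader());
>     Thread.currentThread().setContextClassLoader(jarLoader);
>
>     // Resolves fine through the context class loader...
>     Class.forName("com.example.OnlyInJar", true, Thread.currentThread().getContextClassLoader());
>
>     // ...but this throws ClassNotFoundException: Class.forName(String) uses the loader of
>     // LoaderSplitDemo, just as the unwrapping code shown further down uses the loader of
>     // the Hadoop ipc classes.
>     Class.forName("com.example.OnlyInJar");
>   }
> }
> {code}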
> The actual problem is described in this comment: 
> https://issues.apache.org/jira/browse/PHOENIX-3495?focusedCommentId=15677081&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15677081
> If the HBase classes are not loaded from the Hadoop classpath (the one the hadoop jars are loaded from), the RemoteException does not get unwrapped because of a ClassNotFoundException, and the client keeps retrying even when the cause of the exception is a DoNotRetryIOException.
> RunJar#main() sets the context class loader:
> {code}
> ClassLoader loader = createClassLoader(file, workDir);
> Thread.currentThread().setContextClassLoader(loader);
> Class<?> mainClass = Class.forName(mainClassName, true, loader);
> Method main = mainClass.getMethod("main", new Class[] {
>   Array.newInstance(String.class, 0).getClass()
> });
> {code}
> HBase classes can be loaded from the jar (phoenix-client.jar):
> {code}
> hadoop --config /etc/hbase/conf/ jar ~/git/apache/phoenix/phoenix-client/target/phoenix-4.9.0-HBase-1.2-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table GIGANTIC_TABLE --input /tmp/b.csv --zookeeper localhost:2181
> {code}
> The API that unwraps the remote exception (it resolves the class with the calling class's own class loader):
> {code}
> // org.apache.hadoop.ipc.RemoteException#unwrapRemoteException(), reached via RpcRetryingCaller
> public IOException unwrapRemoteException() {
>   try {
>     Class<?> realClass = Class.forName(getClassName());
>     return instantiateException(realClass.asSubclass(IOException.class));
>   } catch (Exception e) {
>     // cannot instantiate the original exception, just return this
>   }
>   return this;
> }
> {code}
> *Possible solution:*
> Could we create our own HBaseRemoteWithExtrasException (an extension of RemoteWithExtrasException), so that its default class loader is the one that loaded the HBase classes, and override unwrapRemoteException() to throw an exception when the unwrapping does not happen because of a ClassNotFoundException?
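> A rough sketch of that idea (this is not the attached patch; the constructor shape and the inline instantiation below are assumptions made for illustration):
> {code}
> import java.io.IOException;
>
> import org.apache.hadoop.hbase.ipc.RemoteWithExtrasException;
>
> public class HBaseRemoteWithExtrasException extends RemoteWithExtrasException {
>
>   public HBaseRemoteWithExtrasException(String className, String msg, boolean doNotRetry) {
>     super(className, msg, doNotRetry);
>   }
>
>   @Override
>   public IOException unwrapRemoteException() {
>     try {
>       // Resolve the remote class with the loader that loaded the HBase classes
>       // (this class's own loader) instead of the loader of the Hadoop ipc classes.
>       Class<? extends IOException> realClass = Class
>           .forName(getClassName(), false, HBaseRemoteWithExtrasException.class.getClassLoader())
>           .asSubclass(IOException.class);
>       // Assumes the remote exception type has a (String message) constructor.
>       IOException unwrapped = realClass.getConstructor(String.class).newInstance(getMessage());
>       unwrapped.initCause(this);
>       return unwrapped;
>     } catch (ClassNotFoundException e) {
>       // Fail loudly instead of silently returning the wrapper, which is what makes
>       // the client keep retrying a DoNotRetryIOException today.
>       throw new RuntimeException("Could not unwrap remote exception " + getClassName(), e);
>     } catch (Exception e) {
>       return this; // keep the old best-effort behaviour for other failures
>     }
>   }
> }
> {code}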



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
