[
https://issues.apache.org/jira/browse/HBASE-17170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ankit Singhal updated HBASE-17170:
----------------------------------
Description:
The class loader used by the API exposed by Hadoop and the context class loader
used by RunJar (bin/hadoop jar phoenix-client.jar ….) are different, so classes
loaded from the jar are not visible to the current class loader used by the
API.
The actual problem is stated in the comment below:
https://issues.apache.org/jira/browse/PHOENIX-3495?focusedCommentId=15677081&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15677081
If HBase classes are not loaded from the Hadoop classpath (from where the
Hadoop jars are loaded), then the RemoteException will not be unwrapped because
of a ClassNotFoundException, and the client will keep retrying even if the
cause of the exception is a DoNotRetryIOException.
RunJar#main() sets the context class loader:
{code}
// RunJar creates a class loader over the user jar and installs it as the
// thread context class loader before invoking the jar's main class:
ClassLoader loader = createClassLoader(file, workDir);
Thread.currentThread().setContextClassLoader(loader);
Class<?> mainClass = Class.forName(mainClassName, true, loader);
Method main = mainClass.getMethod("main", new Class[] {
    Array.newInstance(String.class, 0).getClass()
});
{code}
HBase classes can be loaded from the jar (phoenix-client.jar):
{code}
hadoop --config /etc/hbase/conf/ jar \
    ~/git/apache/phoenix/phoenix-client/target/phoenix-4.9.0-HBase-1.2-client.jar \
    org.apache.phoenix.mapreduce.CsvBulkLoadTool --table GIGANTIC_TABLE \
    --input /tmp/b.csv --zookeeper localhost:2181
{code}
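To make the mismatch concrete, here is a minimal sketch (the Demo class is
hypothetical, not part of any patch) that prints the two loaders from inside a
main class launched via bin/hadoop jar:
{code}
import org.apache.hadoop.ipc.RemoteException;

public class Demo {
  public static void main(String[] args) {
    // The loader RunJar installed for the user jar.
    ClassLoader context = Thread.currentThread().getContextClassLoader();
    // The loader that defined the Hadoop IPC classes (launcher classpath).
    ClassLoader defining = RemoteException.class.getClassLoader();
    // Under "hadoop jar" these typically differ, so classes bundled only in
    // the jar are invisible to code resolving names via the defining loader.
    System.out.println("context  loader: " + context);
    System.out.println("defining loader: " + defining);
    System.out.println("same loader?     " + (context == defining));
  }
}
{code}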
API (using the current class loader):
{code}
// org.apache.hadoop.ipc.RemoteException (reached via RpcRetryingCaller):
public IOException unwrapRemoteException() {
  try {
    // Class.forName(String) resolves against the loader that defined this
    // class (the Hadoop launcher classpath), not the thread context class
    // loader, so exception classes that exist only in phoenix-client.jar
    // are not found here.
    Class<?> realClass = Class.forName(getClassName());
    return instantiateException(realClass.asSubclass(IOException.class));
  } catch (Exception e) {
    // cannot instantiate the original exception, just return this
  }
  return this;
}
{code}
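The failure mode can be reproduced directly (the exception class name below is
real; the demo class itself is hypothetical). Assuming the HBase classes are
present only inside phoenix-client.jar, resolving through Hadoop's defining
loader throws ClassNotFoundException while the context class loader succeeds:
{code}
import org.apache.hadoop.ipc.RemoteException;

public class UnwrapDemo {
  public static void main(String[] args) throws Exception {
    String name = "org.apache.hadoop.hbase.DoNotRetryIOException";
    try {
      // Resolve the way unwrapRemoteException() effectively does: through
      // the loader that defined the Hadoop IPC classes.
      Class.forName(name, true, RemoteException.class.getClassLoader());
      System.out.println("visible to Hadoop's loader");
    } catch (ClassNotFoundException e) {
      // This CNF is what makes unwrapRemoteException() swallow the real
      // type and return the raw RemoteException, which is then retried.
      System.out.println("not visible to Hadoop's loader");
    }
    // The same name resolves fine through the thread context class loader
    // (RunJar's loader over phoenix-client.jar):
    Class<?> real = Class.forName(name, true,
        Thread.currentThread().getContextClassLoader());
    System.out.println("context loader sees " + real.getName());
  }
}
{code}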
*Possible solution:*
Could we create our own HBaseRemoteWithExtrasException (an extension of
RemoteWithExtrasException), so that the default class loader is the one that
loaded the HBase classes, and extend unwrapRemoteException() to throw an
exception when unwrapping fails with a ClassNotFoundException?
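A minimal sketch of what that could look like, assuming the three-argument
RemoteWithExtrasException constructor and that unwrapRemoteException() may be
overridden; since RemoteException#instantiateException is private, the
reflective instantiation is inlined here (details are illustrative, not the
final patch):
{code}
import java.io.IOException;
import java.lang.reflect.Constructor;

import org.apache.hadoop.hbase.ipc.RemoteWithExtrasException;

public class HBaseRemoteWithExtrasException extends RemoteWithExtrasException {

  public HBaseRemoteWithExtrasException(String className, String msg,
      boolean doNotRetry) {
    super(className, msg, doNotRetry);
  }

  @Override
  public IOException unwrapRemoteException() {
    try {
      // This class ships with HBase, so its own defining class loader can
      // always see the HBase exception classes, regardless of what is on
      // the Hadoop launcher classpath.
      Class<? extends IOException> realClass = Class
          .forName(getClassName(), true, getClass().getClassLoader())
          .asSubclass(IOException.class);
      Constructor<? extends IOException> cn =
          realClass.getConstructor(String.class);
      IOException unwrapped = cn.newInstance(getMessage());
      unwrapped.initCause(this);
      return unwrapped;
    } catch (Exception e) {
      // Fail loudly instead of silently returning the wrapper, so that a
      // wrapped DoNotRetryIOException is never retried merely because it
      // could not be instantiated.
      throw new RuntimeException(
          "Could not unwrap remote exception " + getClassName(), e);
    }
  }
}
{code}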
> HBase is also retrying DoNotRetryIOException because of class loader
> differences.
> ---------------------------------------------------------------------------------
>
> Key: HBASE-17170
> URL: https://issues.apache.org/jira/browse/HBASE-17170
> Project: HBase
> Issue Type: Bug
> Reporter: Ankit Singhal
> Assignee: Ankit Singhal
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)