[ https://issues.apache.org/jira/browse/PHOENIX-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15120492#comment-15120492 ]

Samarth Jain commented on PHOENIX-2636:
---------------------------------------

[~mujtabachohan] verified that this issue happens when compiling against 0.98.16.1 
and running against 0.98.17. He also verified that compiling and running 
against 0.98.17 works fine. An easier workaround might be to simply upgrade our 
pom to 0.98.17, but that doesn't prevent pain for folks who are managing their 
own forked Phoenix versions that compile against the 0.98.16 version of HBase. 
FWIW, we at Salesforce are on the 0.98.16.1 version of HBase, and this would 
essentially stop us from upgrading to 0.98.17 on the server side.
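
To make the failure mode concrete: the JVM resolves a field reference by both its 
name and its descriptor, so if a protected superclass field such as the "in" field 
used by the decoder is removed, or its declared type changes, between the HBase 
version Phoenix was compiled against and the version running on the server, any 
subclass compiled against the old class fails with NoSuchFieldError at runtime. 
The sketch below only illustrates that mechanism; OldParentDecoder, 
NewParentDecoder, and MyDecoder are made-up names rather than HBase or Phoenix 
classes, and the exact nature of the 0.98.17 change is an assumption here.

{code}
// Minimal illustration (hypothetical classes, not Phoenix/HBase source) of why a
// compile-time vs. run-time version mismatch surfaces as NoSuchFieldError.
import java.io.InputStream;
import java.io.PushbackInputStream;

// Parent class as it looked when the subclass was compiled.
abstract class OldParentDecoder {
    protected final InputStream in;                 // descriptor Ljava/io/InputStream;
    protected OldParentDecoder(InputStream in) { this.in = in; }
}

// Parent class as it might look in a newer release: the field's declared type changed.
abstract class NewParentDecoder {
    protected final PushbackInputStream in;         // descriptor Ljava/io/PushbackInputStream;
    protected NewParentDecoder(PushbackInputStream in) { this.in = in; }
}

// A subclass compiled against OldParentDecoder bakes the old name + descriptor of
// `in` into its bytecode. If only the newer parent is on the classpath at run time,
// field resolution finds no matching field and throws java.lang.NoSuchFieldError: in.
class MyDecoder extends OldParentDecoder {
    MyDecoder(InputStream in) {
        super(in);
        System.out.println("stream: " + this.in);   // getfield in : Ljava/io/InputStream;
    }
}
{code}

This also explains the verification above: recompiling against 0.98.17 bakes the 
new field reference into the Phoenix classes, which is why compiling and running 
against the same version works.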

> Figure out a workaround for java.lang.NoSuchFieldError: in when compiling 
> against HBase < 0.98.17
> --------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-2636
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2636
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Samarth Jain
>            Assignee: Samarth Jain
>            Priority: Critical
>
> Working on PHOENIX-2629 revealed that when compiling against an HBase version 
> prior to 0.98.17 and running against 0.98.17, region assignment fails to 
> complete because of the following error:
> {code}
> java.lang.NoSuchFieldError: in
> 	at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$PhoenixBaseDecoder.<init>(IndexedWALEditCodec.java:111)
> 	at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueDecoder.<init>(IndexedWALEditCodec.java:126)
> 	at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec.getDecoder(IndexedWALEditCodec.java:68)
> 	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:253)
> 	at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:86)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:129)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:91)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:668)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:577)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:282)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:225)
> 	at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:143)
> 	at org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:82)
> 	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
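
Regarding a possible workaround beyond bumping the pom: since the problem is the 
field reference compiled into the IndexedWALEditCodec decoders, one option worth 
exploring is to stop referencing the inherited field directly and resolve it 
reflectively by name, which is version-agnostic. The helper below is only a sketch 
of that idea; ParentFieldAccess and readInField are hypothetical names, and this is 
not necessarily the approach that will be committed for this issue.

{code}
// Sketch of a reflection-based access pattern (hypothetical helper, not a committed
// fix): resolve the protected "in" field by name at run time so that no field
// descriptor gets compiled into the Phoenix decoder classes.
import java.io.InputStream;
import java.lang.reflect.Field;

final class ParentFieldAccess {
    private ParentFieldAccess() {}

    // Reads the protected "in" field declared on the given ancestor class of decoder.
    static InputStream readInField(Object decoder, Class<?> declaringClass) {
        try {
            Field f = declaringClass.getDeclaredField("in"); // looked up by name only, no descriptor
            f.setAccessible(true);
            return (InputStream) f.get(decoder);             // OK as long as the runtime field is still an InputStream
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not read field 'in' from " + declaringClass, e);
        }
    }
}
{code}

The cast still assumes the newer field type remains an InputStream subclass, so 
reflection only trades the compile-time binding for a runtime assumption; that 
trade-off is part of why simply moving the pom to 0.98.17 may end up being the 
simpler answer.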



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
