[
https://issues.apache.org/jira/browse/FLINK-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119427#comment-17119427
]
ShenDa commented on FLINK-17678:
--------------------------------
We encountered a problem: the shaded HBase jar we built can't pass the
verification in test_sql_client.sh:
{code:shell}
if ! [[ $EXTRACTED_FILE = "$EXTRACTED_JAR/org/apache/flink"* ]] && \
   ! [[ $EXTRACTED_FILE = "$EXTRACTED_JAR/META-INF"* ]] && \
   ! [[ $EXTRACTED_FILE = "$EXTRACTED_JAR/LICENSE"* ]] && \
   ! [[ $EXTRACTED_FILE = "$EXTRACTED_JAR/NOTICE"* ]] ; then
  echo "Bad file in JAR: $EXTRACTED_FILE"
  exit 1
fi
{code}
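For reference, this check runs once per extracted file. The sketch below is how we reproduce the failure locally against our jar; the unzip step, the find loop, the jar name, and the extraction directory are our own scaffolding for illustration, not code taken from test_sql_client.sh:
{code:shell}
# Illustration only: reproduce the whitelist check locally.
# The jar name and extraction directory are placeholders, not from the real script.
EXTRACTED_JAR=/tmp/extracted-sql-hbase-jar
mkdir -p "$EXTRACTED_JAR"
unzip -q flink-sql-connector-hbase-*.jar -d "$EXTRACTED_JAR"

for EXTRACTED_FILE in $(find "$EXTRACTED_JAR" -type f); do
  if ! [[ $EXTRACTED_FILE = "$EXTRACTED_JAR/org/apache/flink"* ]] && \
     ! [[ $EXTRACTED_FILE = "$EXTRACTED_JAR/META-INF"* ]] && \
     ! [[ $EXTRACTED_FILE = "$EXTRACTED_JAR/LICENSE"* ]] && \
     ! [[ $EXTRACTED_FILE = "$EXTRACTED_JAR/NOTICE"* ]] ; then
    echo "Bad file in JAR: $EXTRACTED_FILE"
    exit 1
  fi
done
{code}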
To avoid exceptions thrown by the HBase region server, we didn't shade
org.apache.hadoop.hbase.codec.*, because the HBase server cannot find the
shaded codec classes when decoding data (byte[]). So we shade the HBase
dependencies as below:
{code:xml}
<relocation>
  <pattern>org.apache.hadoop.hbase</pattern>
  <shadedPattern>org.apache.flink.hbase.shaded.org.apache.hadoop.hbase</shadedPattern>
  <excludes>
    <exclude>org.apache.hadoop.hbase.codec.*</exclude>
  </excludes>
</relocation>
{code}
But in this way, the shaded HBase jar still contains a directory named
org/apache/hadoop/hbase, so those entries violate the rule
{{! [[ $EXTRACTED_FILE = "$EXTRACTED_JAR/org/apache/flink"* ]]}}, as the
listing below shows.
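The unrelocated codec classes are easy to spot in the jar listing. This is only an illustrative command with a placeholder jar name; the exact entries depend on the HBase version being bundled:
{code:shell}
# Illustration only: list the entries that fall outside the whitelist (placeholder jar name).
jar tf flink-sql-connector-hbase-*.jar | grep '^org/apache/hadoop/hbase/' | head
# In our build this prints entries such as:
#   org/apache/hadoop/hbase/codec/Codec.class
#   org/apache/hadoop/hbase/codec/KeyValueCodec.class
{code}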
So how can I solve this problem?
> Support flink-sql-connector-hbase
> --------------------------------
>
> Key: FLINK-17678
> URL: https://issues.apache.org/jira/browse/FLINK-17678
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / HBase
> Reporter: ShenDa
> Assignee: ShenDa
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently, Flink doesn't contain an HBase uber jar, so users have to add
> the HBase dependency manually.
> Could I create a new module called flink-sql-connector-hbase, like the
> Elasticsearch and Kafka SQL connectors?
--
This message was sent by Atlassian Jira
(v8.3.4#803005)