[
https://issues.apache.org/jira/browse/FLINK-17384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17108059#comment-17108059
]
jackylau commented on FLINK-17384:
----------------------------------
Hi [~liyu], I have committed my code, but the CI log shows:
2020-05-14T13:46:32.9352627Z [ERROR] Failures:
2020-05-14T13:46:32.9361371Z [ERROR]
KafkaProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceRegularSink:309->KafkaProducerTestBase.testExactlyOnce:370
Test failed: Job execution failed
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests)
on project flink-metrics-availability-test: Unable to generate classpath:
org.apache.maven.artifact.resolver.ArtifactResolutionException: Could not
transfer artifact org.apache.maven.surefire:surefire-grouper:jar:2.22.1 from/to
alicloud-mvn-mirror
(http://mavenmirror.alicloud.dak8s.net:8888/repository/maven-central/): Entry
[id:18][route:{}->http://mavenmirror.alicloud.dak8s.net:8888][state:null] has
not been leased from this pool.
How can I solve this, and why does it happen? Also, how can I make
[flinkbot|https://github.com/flinkbot] rerun the Azure build?
> support read hbase conf dir from flink.conf just like hadoop_conf
> -----------------------------------------------------------------
>
> Key: FLINK-17384
> URL: https://issues.apache.org/jira/browse/FLINK-17384
> Project: Flink
> Issue Type: Bug
> Components: Connectors / HBase, Deployment / Scripts
> Affects Versions: 1.10.0
> Reporter: jackylau
> Assignee: jackylau
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Hi all:
> When a user interacts with HBase via SQL, they currently have to do 2 things:
> # export HBASE_CONF_DIR
> # add the HBase libs to flink/lib (because the HBase connector does not ship the
> client and other jars)
> I think this needs to be optimised.
> For 1), we should support reading the HBase conf dir from the Flink configuration,
> just like hadoop_conf in config.sh (see the sketch after this list).
> For 2), we should support HBASE_CLASSPATH in config.sh. To handle jar conflicts
> such as Guava, we should also provide a flink-hbase-shaded module, just like
> Hadoop does.
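> 
> A minimal sketch of what the config.sh changes could look like, mirroring how
> HADOOP_CONF_DIR is resolved today; the key name env.hbase.conf.dir, the
> /etc/hbase/conf fallback, and the FLINK_CLASSPATH variable (standing in for
> wherever the scripts assemble the classpath) are assumptions for illustration,
> not a committed design:
> {code:bash}
> # Resolve HBASE_CONF_DIR analogously to HADOOP_CONF_DIR in config.sh:
> # 1) respect an explicit HBASE_CONF_DIR from the environment,
> # 2) otherwise read it from the Flink configuration (key name assumed),
> # 3) otherwise fall back to a conventional default location, if present.
> if [ -z "${HBASE_CONF_DIR}" ]; then
>     HBASE_CONF_DIR=$(readFromConfig "env.hbase.conf.dir" "" "${YAML_CONF}")
> fi
> if [ -z "${HBASE_CONF_DIR}" ] && [ -d "/etc/hbase/conf" ]; then
>     HBASE_CONF_DIR="/etc/hbase/conf"
> fi
> export HBASE_CONF_DIR
> 
> # For 2): let users append HBASE_CLASSPATH to the classpath the scripts build,
> # analogous to how the Hadoop classpath is picked up.
> if [ -n "${HBASE_CLASSPATH}" ]; then
>     FLINK_CLASSPATH="${FLINK_CLASSPATH}:${HBASE_CLASSPATH}"
> fi
> {code}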
--
This message was sent by Atlassian Jira
(v8.3.4#803005)