[ https://issues.apache.org/jira/browse/FLINK-3067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055758#comment-15055758 ]

ASF GitHub Bot commented on FLINK-3067:
---------------------------------------

Github user uce commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1451#discussion_r47480476
  
    --- Diff: flink-streaming-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/ZookeeperOffsetHandler.java ---
    @@ -98,30 +97,42 @@ public void seekFetcherToInitialOffsets(List<KafkaTopicPartitionLeader> partitio
     
        @Override
        public void close() throws IOException {
    -           zkClient.close();
    +           curatorClient.close();
        }
     
        // ------------------------------------------------------------------------
        //  Communication with Zookeeper
        // ------------------------------------------------------------------------
        
    -   public static void setOffsetInZooKeeper(ZkClient zkClient, String groupId, String topic, int partition, long offset) {
    -           TopicAndPartition tap = new TopicAndPartition(topic, partition);
    -           ZKGroupTopicDirs topicDirs = new ZKGroupTopicDirs(groupId, tap.topic());
    -           ZkUtils.updatePersistentPath(zkClient, topicDirs.consumerOffsetDir() + "/" + tap.partition(), Long.toString(offset));
    +   public static void setOffsetInZooKeeper(CuratorFramework curatorClient, String groupId, String topic, int partition, long offset) throws Exception {
    +           ZKGroupTopicDirs topicDirs = new ZKGroupTopicDirs(groupId, topic);
    +           String path = topicDirs.consumerOffsetDir() + "/" + partition;
    +           ensureExists(curatorClient, path);
    +           byte[] data = Long.toString(offset).getBytes();
    +           curatorClient.setData().forPath(path, data);
        }
     
    -   public static long getOffsetFromZooKeeper(ZkClient zkClient, String groupId, String topic, int partition) {
    -           TopicAndPartition tap = new TopicAndPartition(topic, partition);
    -           ZKGroupTopicDirs topicDirs = new ZKGroupTopicDirs(groupId, tap.topic());
    -
    -           scala.Tuple2<Option<String>, Stat> data = ZkUtils.readDataMaybeNull(zkClient,
    -                           topicDirs.consumerOffsetDir() + "/" + tap.partition());
    -
    -           if (data._1().isEmpty()) {
    +   public static long getOffsetFromZooKeeper(CuratorFramework curatorClient, String groupId, String topic, int partition) throws Exception {
    +           ZKGroupTopicDirs topicDirs = new ZKGroupTopicDirs(groupId, topic);
    +           String path = topicDirs.consumerOffsetDir() + "/" + partition;
    +           ensureExists(curatorClient, path);
    +           byte[] data = curatorClient.getData().forPath(path);
    +           if(data == null) {
                        return OFFSET_NOT_SET;
                } else {
    -                   return Long.valueOf(data._1().get());
    +                   String asString = new String(data);
    +                   if(asString.length() == 0) {
    +                           return OFFSET_NOT_SET;
    +                   } else {
    +                           return Long.valueOf(asString);
    +                   }
    +           }
    +   }
    +
    +   private static void ensureExists(CuratorFramework curatorClient, String path) throws Exception {
    --- End diff --
    
    There is an ensure method in `CuratorFramework`, which does something similar in a retry loop.
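As a side note, the null/empty handling in the new `getOffsetFromZooKeeper` can be isolated into a small sketch. The class name and sentinel value below are illustrative only; the connector defines its own `OFFSET_NOT_SET` constant elsewhere:

```java
// Illustrative sketch: how the patch maps raw ZooKeeper node data to an offset.
// A missing node (null) or an empty payload both mean "no offset committed yet".
class OffsetParsing {
    // Hypothetical sentinel; the real connector defines its own OFFSET_NOT_SET.
    static final long OFFSET_NOT_SET = Long.MIN_VALUE;

    // Mirrors the null/empty branches of the patched getOffsetFromZooKeeper.
    static long parseOffset(byte[] data) {
        if (data == null) {
            return OFFSET_NOT_SET;          // node has no data attached
        }
        String asString = new String(data);
        if (asString.isEmpty()) {
            return OFFSET_NOT_SET;          // node created by ensureExists but never written
        }
        return Long.parseLong(asString);    // stored as a decimal string by setOffsetInZooKeeper
    }

    // Mirrors how setOffsetInZooKeeper serializes the offset before setData().
    static byte[] encodeOffset(long offset) {
        return Long.toString(offset).getBytes();
    }
}
```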


> Kafka source fails during checkpoint notifications with NPE
> -----------------------------------------------------------
>
>                 Key: FLINK-3067
>                 URL: https://issues.apache.org/jira/browse/FLINK-3067
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 0.10.0
>            Reporter: Gyula Fora
>            Assignee: Robert Metzger
>             Fix For: 1.0.0, 0.10.2
>
>
> While running a job with many Kafka sources, I experienced the following error during the checkpoint notifications:
> java.lang.RuntimeException: Error while confirming checkpoint
>       at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:935)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>       at org.I0Itec.zkclient.ZkConnection.writeDataReturnStat(ZkConnection.java:115)
>       at org.I0Itec.zkclient.ZkClient$10.call(ZkClient.java:817)
>       at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
>       at org.I0Itec.zkclient.ZkClient.writeDataReturnStat(ZkClient.java:813)
>       at org.I0Itec.zkclient.ZkClient.writeData(ZkClient.java:808)
>       at org.I0Itec.zkclient.ZkClient.writeData(ZkClient.java:777)
>       at kafka.utils.ZkUtils$.updatePersistentPath(ZkUtils.scala:332)
>       at kafka.utils.ZkUtils.updatePersistentPath(ZkUtils.scala)
>       at org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandler.setOffsetInZooKeeper(ZookeeperOffsetHandler.java:112)
>       at org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandler.commit(ZookeeperOffsetHandler.java:80)
>       at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.notifyCheckpointComplete(FlinkKafkaConsumer.java:563)
>       at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.notifyOfCompletedCheckpoint(AbstractUdfStreamOperator.java:176)
>       at org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:478)
>       at org.apache.flink.runtime.taskmanager.Task$2.run(Task.java:931)
>       ... 5 more
> 06:23:28,373 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       - Source: Kafka[event.rakdos.log] (1/1) (d79e6b7a25b1ac307d2e0c8
> This resulted in the job crashing and getting stuck during cancelling, which subsequently led to having to restart the cluster.
> This might be a ZooKeeper issue, but we should be able to handle it (perhaps by catching the exception).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
