[ 
https://issues.apache.org/jira/browse/FLINK-20569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20569:
-----------------------------------
    Labels: auto-deprioritized-major stale-assigned test-stability  (was: 
auto-deprioritized-major test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed within 7 days, 
the issue will be automatically unassigned.


> testKafkaSourceSinkWithMetadata hangs
> -------------------------------------
>
>                 Key: FLINK-20569
>                 URL: https://issues.apache.org/jira/browse/FLINK-20569
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka, Table SQL / Ecosystem
>    Affects Versions: 1.12.0, 1.13.0
>            Reporter: Huang Xingbo
>            Assignee: Timo Walther
>            Priority: Minor
>              Labels: auto-deprioritized-major, stale-assigned, test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10781&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=f266c805-9429-58ed-2f9e-482e7b82f58b]
> {code:java}
> 2020-12-10T23:10:46.7788275Z Test testKafkaSourceSinkWithMetadata[legacy = 
> false, format = 
> csv](org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase) is 
> running.
> 2020-12-10T23:10:46.7789360Z 
> --------------------------------------------------------------------------------
> 2020-12-10T23:10:46.7790602Z 23:10:46,776 [                main] INFO  
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl [] - 
> Creating topic metadata_topic_csv
> 2020-12-10T23:10:47.1145296Z 23:10:47,112 [                main] WARN  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Property 
> [transaction.timeout.ms] not specified. Setting it to 3600000 ms
> 2020-12-10T23:10:47.1683896Z 23:10:47,166 [Sink: 
> Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, 
> physical_2, physical_3, headers, timestamp]) (1/1)#0] WARN  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Using 
> AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE 
> semantic.
> 2020-12-10T23:10:47.2087733Z 23:10:47,206 [Sink: 
> Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, 
> physical_2, physical_3, headers, timestamp]) (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Starting 
> FlinkKafkaInternalProducer (1/1) to produce into default topic 
> metadata_topic_csv
> 2020-12-10T23:10:47.5157133Z 23:10:47,513 [Source: 
> TableSourceScan(table=[[default_catalog, default_database, kafka]], 
> fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
> leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
> physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
> timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
> partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
> Sink: Select table sink (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 has no restore state.
> 2020-12-10T23:10:47.5233388Z 23:10:47,521 [Source: 
> TableSourceScan(table=[[default_catalog, default_database, kafka]], 
> fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
> leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
> physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
> timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
> partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
> Sink: Select table sink (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 will start reading the following 1 partitions from the 
> earliest offsets: [KafkaTopicPartition{topic='metadata_topic_csv', 
> partition=0}]
> 2020-12-10T23:10:47.5387239Z 23:10:47,537 [Legacy Source Thread - Source: 
> TableSourceScan(table=[[default_catalog, default_database, kafka]], 
> fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
> leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
> physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
> timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
> partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
> Sink: Select table sink (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 creating fetcher with offsets 
> {KafkaTopicPartition{topic='metadata_topic_csv', partition=0}=-915623761775}.
> 2020-12-11T02:34:02.6860452Z ##[error]The operation was canceled.
> {code}
> This test started at 2020-12-10T23:10:46.7788275Z and still had not finished 
> when the build was canceled at 2020-12-11T02:34:02.6860452Z, roughly 3 hours 
> 23 minutes later.
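
A hang like this is only bounded by the multi-hour Azure stage timeout, so the 
whole build stalls until it is canceled. Purely as an illustration (the class 
name, rule field, and the 2-minute bound are hypothetical and not the actual 
KafkaTableITCase configuration), a JUnit 4 Timeout rule is one way such a hang 
could surface as a fast per-test failure instead of a canceled build:

{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

import java.util.concurrent.TimeUnit;

public class HangGuardExample {

    // Applies to every test method in the class: a hang is reported as a
    // TestTimedOutException after 2 minutes instead of blocking the CI job.
    @Rule
    public final Timeout perTestTimeout = new Timeout(2, TimeUnit.MINUTES);

    @Test
    public void testThatMightHang() throws Exception {
        // ... test body that reads from Kafka and could block indefinitely ...
    }
}
{code}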



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
