Re: Flink 1.17.2 planned?

2023-08-21 Thread liu ron
Hi, Christian

We released 1.17.1 [1] in May, and the main focus of the community is
currently on the 1.18 release, so 1.17.2 should be planned for after the
1.18 release!

[1]
https://flink.apache.org/2023/05/25/apache-flink-1.17.1-release-announcement/


Best,
Ron

Christian Lorenz via user wrote on Mon, Aug 21, 2023 at 17:33:

> Hi team,
>
>
>
> Is there any information available about a 1.17.2 bugfix release? E.g. will
> there be another bugfix release of 1.17, and if so, what is the approximate
> timing?
>
> We are hit by https://issues.apache.org/jira/browse/FLINK-32296, which
> leads to incorrect SQL results in some circumstances.
>
>
>
> Kind regards,
>
> Christian
>
> This e-mail is from Mapp Digital Group and its international legal
> entities and may contain information that is confidential.
> If you are not the intended recipient, do not read, copy or distribute the
> e-mail or any attachments. Instead, please notify the sender and delete the
> e-mail and any attachments.
>


Flink 1.17.2 planned?

2023-08-21 Thread Christian Lorenz via user
Hi team,

Is there any information available about a 1.17.2 bugfix release? E.g. will there be
another bugfix release of 1.17, and if so, what is the approximate timing?
We are hit by https://issues.apache.org/jira/browse/FLINK-32296, which leads to
incorrect SQL results in some circumstances.

Kind regards,
Christian
This e-mail is from Mapp Digital Group and its international legal entities and 
may contain information that is confidential.
If you are not the intended recipient, do not read, copy or distribute the 
e-mail or any attachments. Instead, please notify the sender and delete the 
e-mail and any attachments.


Re: TaskManagers Crashing

2023-08-21 Thread Shammon FY
Hi,

It seems that the node `tef-prod-flink-04/10.11.0.51:37505 [
tef-prod-flink-04:38835-e3ca4d ]` exited unexpectedly. You can check whether
there are any errors in the TaskManager or Kubernetes logs.

Best,
Shammon FY


On Sun, Aug 20, 2023 at 5:42 PM Kenan Kılıçtepe wrote:

> Hi,
>
> Nothing interesting on the Kafka side, just some partition delete/create logs.
> Also, I can't understand why all the task managers stop at the same time
> without any error log.
>
> Thanks
> Kenan
>
>
>
> On Sun, Aug 20, 2023 at 10:49 AM liu ron  wrote:
>
>> Hi,
>>
>> Maybe you need to check what changed on the Kafka side at that time.
>>
>> Best,
>> Ron
>>
>> Kenan Kılıçtepe wrote on Sun, Aug 20, 2023 at 08:51:
>>
>>> Hi,
>>>
>>> I have 4 task managers working on 4 servers.
>>> They all crash at the same time without any useful error logs.
>>> The only logs I can see are some disconnections from Kafka, for both
>>> consumers and producers.
>>> Any idea or help is appreciated.
>>>
>>> Some logs from all task managers:
>>>
>>> I think server 4 crashes first, and that causes all the task managers to
>>> crash.
>>>
>>> JobManager:
>>>
>>> 2023-08-18 15:16:46,528 INFO  org.apache.kafka.clients.NetworkClient
>>>   [] - [AdminClient clientId=47539-enumerator-admin-client]
>>> Node 2 disconnected.
>>> 2023-08-18 15:19:00,303 INFO  org.apache.kafka.clients.NetworkClient
>>>   [] - [AdminClient
>>> clientId=tf_25464-enumerator-admin-client] Node 4 disconnected.
>>> 2023-08-18 15:19:16,668 INFO  org.apache.kafka.clients.NetworkClient
>>>   [] - [AdminClient
>>> clientId=cpu_59942-enumerator-admin-client] Node 1 disconnected.
>>> 2023-08-18 15:19:16,764 INFO  org.apache.kafka.clients.NetworkClient
>>>   [] - [AdminClient
>>> clientId=cpu_55128-enumerator-admin-client] Node 3 disconnected.
>>> 2023-08-18 15:19:27,913 WARN  akka.remote.transport.netty.NettyTransport
>>>   [] - Remote connection to [/10.11.0.51:42778] failed
>>> with java.io.IOException: Connection reset by peer
>>> 2023-08-18 15:19:27,963 WARN  akka.remote.ReliableDeliverySupervisor
>>>   [] - Association with remote system
>>> [akka.tcp://flink@tef-prod-flink-04:38835] has failed, address is now
>>> gated for [50] ms. Reason: [Disassociated]
>>> 2023-08-18 15:19:27,967 WARN  akka.remote.ReliableDeliverySupervisor
>>>   [] - Association with remote system
>>> [akka.tcp://flink-metrics@tef-prod-flink-04:46491] has failed, address
>>> is now gated for [50] ms. Reason: [Disassociated]
>>> 2023-08-18 15:19:29,225 INFO
>>>  org.apache.flink.runtime.executiongraph.ExecutionGraph   [] -
>>> RouterReplacementAlgorithm -> kafkaSink_sinkFaultyRouter_windowMode: Writer
>>> -> kafkaSink_sinkFaultyRouter_windowMode: Committer (3/4)
>>> (f6fd65e3fc049bd9021093d8f532bbaf_a47f4a3b960228021159de8de51dbb1f_2_0)
>>> switched from RUNNING to FAILED on
>>> injection-assia-3-pro-cloud-tef-gcp-europe-west1:39011-b24b1d @
>>> injection-assia-3-pro-cloud-tef-gcp-europe-west1 (dataPort=35223).
>>> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException:
>>> Connection unexpectedly closed by remote task manager 'tef-prod-flink-04/
>>> 10.11.0.51:37505 [ tef-prod-flink-04:38835-e3ca4d ] '. This might
>>> indicate that the remote task manager was lost.
>>> at
>>> org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.channelInactive(CreditBasedPartitionRequestClientHandler.java:134)
>>> ~[flink-dist-1.16.2.jar:1.16.2]
>>> at
>>> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
>>> ~[flink-dist-1.16.2.jar:1.16.2]
>>> at
>>> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
>>> ~[flink-dist-1.16.2.jar:1.16.2]
>>> at
>>> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:241)
>>> ~[flink-dist-1.16.2.jar:1.16.2]
>>> at
>>> org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:81)
>>> ~[flink-dist-1.16.2.jar:1.16.2]
>>> at
>>> org.apache.flink.runtime.io.network.netty.NettyMessageClientDecoderDelegate.channelInactive(NettyMessageClientDecoderDelegate.java:94)
>>> ~[flink-dist-1.16.2.jar:1.16.2]
>>> at
>>> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
>>> ~[flink-dist-1.16.2.jar:1.16.2]
>>> at
>>> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
>>> ~[flink-dist-1.16.2.jar:1.16.2]
>>> at
>>> 

Re: Splitting in Stream Formats for File Source

2023-08-21 Thread Chirag Dewan via user
Thanks Ron.

For HDFS, a reasonable level of parallelism is reading multiple blocks in parallel.
Of course, that could mean losing the ordering that a file usually guarantees. If I
understand correctly, this may become a problem for watermarking, but with smaller
files the watermarks stay bounded, so I feel this is a good tradeoff.

So if every new line in my Avro-encoded Parquet file in HDFS is an Avro record, do
you think a splittable StreamFormat is a possibility?

Thanks
On Sunday, 20 August 2023 at 01:11:42 pm IST, liu ron wrote:
 
Hi,

Regarding the CSV and AvroParquet stream formats not supporting splits, some hints
may be available in [1]. Personally, I think the main considerations are how a row
format could find a reasonable split point, and into how many splits a file should
be sliced. For ORC and other columnar formats, a file can be further split along
RowGroup, Page, etc. boundaries. Row formats, however, carry no such information,
so there may be no suitable basis for splitting.


[1] 
https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/src/reader/StreamFormat.java#L57
Best,
Ron
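
For reference, here is a rough, abridged view of the StreamFormat contract that the
javadoc in [1] describes. The comments paraphrase the discussion above rather than
the official documentation, so treat [1] as authoritative.

// Abridged sketch of org.apache.flink.connector.file.src.reader.StreamFormat;
// restoreReader(...) and getProducedType() are omitted here, see [1].
public interface StreamFormat<T> extends java.io.Serializable {

    // Creates a reader for one split of a file. For a splittable format the
    // reader must only emit the records of its own split (roughly, records that
    // start before splitEnd), which is where the "where can we cut the file"
    // question from above shows up.
    Reader<T> createReader(
            org.apache.flink.configuration.Configuration config,
            org.apache.flink.core.fs.FSDataInputStream stream,
            long fileLen,
            long splitEnd) throws java.io.IOException;

    // Whether a single file may be divided into several splits at all. Columnar
    // formats (ORC, Parquet) have RowGroup/Page boundaries to cut at; plain row
    // formats carry no such markers, which is why they currently answer false.
    boolean isSplittable();

    // The per-split reader; the file source keeps calling read() until it
    // returns null.
    interface Reader<T> extends java.io.Closeable {
        T read() throws java.io.IOException;
    }
}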
Chirag Dewan via user wrote on Thu, Aug 17, 2023 at 12:00:

Hi,

I am trying to collect files from HDFS in my DataStream job. I need to collect two
types of files: CSV and Parquet.

I understand that Flink supports both formats, but in streaming mode Flink doesn't
support splitting them; splitting is only supported in the Table API.

I wanted to understand the thought process behind this: why is splitting not
supported in the CSV and AvroParquet stream formats? As far as my understanding
goes, splitting would work fine with HDFS blocks, and multiple blocks could be read
in parallel.

Maybe I am missing some fundamental aspect here. I would like to understand more if
someone can point me in the right direction.

Thanks
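
For completeness, below is a minimal sketch of how such CSV and Parquet sources are
typically wired up in the DataStream API, assuming Flink 1.16/1.17 with
flink-connector-files, flink-parquet and the Avro dependencies on the classpath.
The class name, paths and Avro schema are placeholders. With the current
non-splittable stream formats, parallelism comes from distributing whole files
across parallel readers rather than from splitting a single file.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.AvroParquetReaders;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.time.Duration;

public class HdfsFileSourceSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // CSV files read line by line. CsvReaderFormat.forPojo(...) from flink-csv
        // could be used instead to map each line to a POJO.
        FileSource<String> csvSource =
                FileSource.forRecordStreamFormat(
                                new TextLineInputFormat(),
                                new Path("hdfs:///data/csv-input"))      // placeholder path
                        .monitorContinuously(Duration.ofSeconds(30))     // streaming mode: keep watching the directory
                        .build();

        // Parquet files read as Avro GenericRecords. The schema below is a placeholder.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Rec\","
                        + "\"fields\":[{\"name\":\"f\",\"type\":\"string\"}]}");
        FileSource<GenericRecord> parquetSource =
                FileSource.forRecordStreamFormat(
                                AvroParquetReaders.forGenericRecord(schema),
                                new Path("hdfs:///data/parquet-input"))  // placeholder path
                        .monitorContinuously(Duration.ofSeconds(30))
                        .build();

        DataStream<String> csvLines =
                env.fromSource(csvSource, WatermarkStrategy.noWatermarks(), "csv-files");
        DataStream<GenericRecord> parquetRecords =
                env.fromSource(parquetSource, WatermarkStrategy.noWatermarks(), "parquet-files");

        csvLines.print();
        parquetRecords.print();

        env.execute("hdfs-file-source-sketch");
    }
}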