[ANNOUNCE] Apache Flink 1.16.3 released

2023-11-29, by Rui Fan
The Apache Flink community is very happy to announce the release of
Apache Flink 1.16.3, which is the
third bugfix release for the Apache Flink 1.16 series.



Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.



The release is available for download at:

https://flink.apache.org/downloads.html



Please check out the release blog post for an overview of the
improvements for this bugfix release:

https://flink.apache.org/2023/11/29/apache-flink-1.16.3-release-announcement/



The full release notes are available in Jira:

https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353259



We would like to thank all contributors of the Apache Flink community
who made this release possible!



Feel free to reach out to the release managers (or respond to this
thread) with feedback on the release process. Our goal is to
constantly improve it, so feedback on what could be improved or what
didn't go so well is appreciated.



Regards,

Release Manager


Re: Re: How to parse JSON string data in Flink SQL?

2023-11-29, by casel.chen
None of the JSON functions that ship with community Flink can parse a JSON string and return one or more ROWs.

At 2023-11-23 15:24:33, "junjie.m...@goupwith.com" wrote:
>You can take a look at the JSON functions:
>https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/functions/systemfunctions/#json-functions
>
>
>
>Junjie.M
> 
>From: casel.chen
>Date: 2023-11-22 20:54
>To: user-zh@flink.apache.org
>Subject: How to parse JSON string data in Flink SQL?
>Input:
> 
>{
> 
>  "uuid":"",
> 
>  "body_data": 
> "[{\"fild1\":\"1231\",\"fild2\":\"2341\"},{\"fild1\":\"abc\",\"fild2\":\"cdf\"}]"
> 
>}
> 
> 
> 
> 
>Output:
> 
>[
> 
>  {
> 
>"uuid": "",
> 
>"body_data": null,
> 
>"body_data.fild1": "123",
> 
>"body_data.fild2": "234"
> 
>  },
> 
>  {
> 
>"uuid": "",
> 
>"body_data": null,
> 
>"body_data.fild1": "abc",
> 
>"body_data.fild2": "cdf"
> 
>  }
> 
>]
> 
> 
> 
> 
>When the format is invalid:
> 
> 
> 
> 
>Input:
> 
>{
> 
>"uuid": "",
> 
>"body_data": "abc"
> 
>}
> 
>Output:
> 
>{
> 
>"uuid": "",
> 
>"body_data": "abc",
> 
>"body_data.fild1": null,
> 
>"body_data.fild2": null
> 
>}
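Outside of Flink, the transformation specified above (including the invalid-format fallback) can be sketched in plain Python. This is only an illustration of the desired behavior; the function name and the exact null-handling are assumptions based on the input/output examples above:

```python
import json

def explode_body_data(record):
    """Flatten record["body_data"] (a JSON-encoded array string) into
    one output dict per array element, per the spec above."""
    uuid = record.get("uuid", "")
    try:
        items = json.loads(record["body_data"])
        if not isinstance(items, list):
            raise ValueError("body_data is not a JSON array")
    except (ValueError, KeyError):
        # Invalid format: keep the raw string and emit null sub-fields.
        return [{"uuid": uuid,
                 "body_data": record.get("body_data"),
                 "body_data.fild1": None,
                 "body_data.fild2": None}]
    # Valid format: one row per array element, body_data itself nulled out.
    return [{"uuid": uuid,
             "body_data": None,
             "body_data.fild1": item.get("fild1"),
             "body_data.fild2": item.get("fild2")}
            for item in items]

rows = explode_body_data({
    "uuid": "",
    "body_data": '[{"fild1":"1231","fild2":"2341"},{"fild1":"abc","fild2":"cdf"}]',
})
# rows[0]["body_data.fild1"] == "1231", rows[1]["body_data.fild2"] == "cdf"
```

In Flink this logic would live in a user-defined table function, since (as noted above) the built-in JSON functions return scalars rather than rows.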


Re: Re: How to parse JSON string data in Flink SQL?

2023-11-29, by casel.chen



The number of fields is fixed, but the number of elements the body_data array contains is not, so

Insert into SinkT (result) select Array[ROW(uuid, null, body_data[1].fild1 as 
body_data.fild1, body_data[1].fild2 as body_data.fild2), ROW(uuid, 
null, body_data[2].fild1, body_data[2].fild2)] as result




a SQL statement like this, with body_data[X] hard-coded, shouldn't work.
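The variable-length problem is easy to see outside SQL: indexing fixed positions breaks as soon as the array has a different number of elements, while iterating handles any length. A minimal sketch (in Flink SQL terms, the iteration corresponds to exploding the array with CROSS JOIN UNNEST or a user-defined table function instead of indexing body_data[1], body_data[2], ...):

```python
# Hard-coding indices (like body_data[1], body_data[2] in the SQL above)
# raises IndexError for shorter arrays and silently drops extra elements.
def flatten_fixed(body_data):
    return [body_data[0], body_data[1]]

# Iterating produces one output row per element, for any array length.
def flatten_any(body_data):
    return [(item["fild1"], item["fild2"]) for item in body_data]

three = [{"fild1": "a", "fild2": "b"},
         {"fild1": "c", "fild2": "d"},
         {"fild1": "e", "fild2": "f"}]
assert len(flatten_any(three)) == 3   # all elements survive
assert len(flatten_fixed(three)) == 2  # third element silently lost
```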








At 2023-11-23 15:10:00, "jinzhuguang" wrote:
>Flink SQL is better suited to structured data. I'm not sure whether the number of 
>fields in your body_data is fixed; if it is, you can write the source and sink formats as tables.
>For example:
>
>SourceT: (
>   uuid String,
>   body_data ARRAY<ROW<fild1 String, fild2 String>>
>)
>
>SinkT (
>   result ARRAY<ROW<uuid String, body_data String, body_data.fild1 String, 
>   body_data.fild2 String>>
>)
>
>Insert into SinkT (result)  select Array[ROW(uuid, null, body_data[1].fild1 
>as body_data.fild1, body_data[1].fild2 as body_data.fild2), ROW(uuid, 
>null, body_data[2].fild1, body_data[2].fild2)] as result
>
>Hope this helps.
>
>> On 2023-11-22 at 20:54, casel.chen wrote: