I tried using a regular join:
String sql2 = "insert into dws_b2b_trade_year_index\n" +
"WITH temp AS (\n" +
"select \n" +
" ta.gmtStatistical as gmtStatistical,\n" +
" ta.paymentMethod as paymentMethod,\n" +
" tb.CORP_ID as outCorpId,\n" +
" tc.CORP_ID as inCorpId,\n" +
" sum(ta.tradeAmt) as tranAmount,\n" +
Use a regular join, don't use a lookup join.
Zhiwen Sun
On Fri, Nov 11, 2022 at 11:10 AM Jason_H wrote:
>
>
> Hi everyone,
>
> I am implementing a dimension-table join with Flink SQL. The source is Kafka (trade data) and the dimension table is MySQL (accounts). The problem: when a record arrives from Kafka and its account is not found in the dimension table, I manually insert that account; the next record then matches the account info, but the accumulated output is missing the record that failed to match. For example:
> Kafka input:
> Account  Amount  Count
> 100 1
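As a rough sketch of the two join flavors discussed above (the table and column names are placeholders, not the original job's schema): a regular join treats the MySQL table as an ordinary input and keeps both sides in Flink state, so a trade arriving before its account row is buffered rather than dropped; whether later-inserted accounts are ever seen depends on the connector (a plain JDBC scan reads the table once, a CDC source keeps streaming changes). A lookup join instead probes MySQL per record at processing time.

// Hedged sketch, written as Flink SQL embedded in Java strings like the snippet above.
// Regular join: both inputs are kept in Flink state; a trade with no matching
// account yet simply waits in state instead of being discarded.
String regularJoin =
    "SELECT t.account, d.CORP_ID, t.tradeAmt " +
    "FROM trades AS t " +
    "JOIN dim_account AS d ON t.account = d.account";

// Lookup join: each record probes the dimension table at its processing time;
// a record whose account is missing at that moment is dropped (inner join)
// and is never re-evaluated when the account is inserted later.
String lookupJoin =
    "SELECT t.account, d.CORP_ID, t.tradeAmt " +
    "FROM trades AS t " +
    "JOIN dim_account FOR SYSTEM_TIME AS OF t.proc_time AS d " +
    "  ON t.account = d.account";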
Unsubscribe
Hi Vidya Sagar,
Could you please share the reason for the TaskManager restart? If the machine
or JVM process of the TaskManager crashes, the `RocksDBKeyedStateBackend` cannot
be disposed/closed normally, so the existing RocksDB instance directory
would remain.
BTW, if you use Application Mode on k8s, if
Hi everyone,
I am implementing a dimension-table join with Flink SQL. The source is Kafka (trade data) and the dimension table is MySQL (accounts). The problem: when a record arrives from Kafka and its account is not found in the dimension table, I manually insert that account; the next record then matches the account info, but the accumulated output is missing the record that failed to match. For example:
Kafka input:
Account  Amount  Count
100  1  -> not matched
100  1  -> not matched
100  1  -> matched
Dimension table:
Account  Company
-> account info inserted afterwards
Actual output:
Company
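One hedged workaround sketch for the behavior described above (table and column names are placeholders): with the default inner lookup join, the two unmatched trades are dropped and never re-evaluated, so the running total stays short. A LEFT lookup join keeps them, emitting NULL for the company, so they still reach the aggregation and can be re-attributed downstream once the account exists.

// Hedged sketch: keep trades whose account has not been inserted yet.
String leftLookupJoin =
    "SELECT t.account, d.CORP_ID, t.tradeAmt " +
    "FROM trades AS t " +
    "LEFT JOIN dim_account FOR SYSTEM_TIME AS OF t.proc_time AS d " +
    "  ON t.account = d.account";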
Hi Davide,
I suppose it would be fine. The only difference I can figure out that may
matter is the serialization. Flink uses KryoSerializer as the fallback
serializer if the TypeInformation of the records is not provided, which can
properly process abstract classes. This works well in most cases.
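If falling back to Kryo is a concern, one way to make the serializer explicit is to supply the TypeInformation yourself; a minimal sketch (MyEvent and the inline data are made up for illustration):

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TypeInfoSketch {
    // Hypothetical event type, only for illustration.
    public static class MyEvent {
        public String id;
        public long value;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Without an explicit TypeInformation, types Flink cannot analyze fall
        // back to KryoSerializer; .returns(...) makes the choice explicit.
        DataStream<MyEvent> events = env
                .fromElements("a,1", "b,2")
                .map(line -> {
                    MyEvent e = new MyEvent();
                    e.id = line.split(",")[0];
                    e.value = Long.parseLong(line.split(",")[1]);
                    return e;
                })
                .returns(TypeInformation.of(MyEvent.class));

        events.print();
        env.execute("type-info-sketch");
    }
}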
Hi All,
Is anyone looking for a spark scala contract role inside the USA? A company
called Maxonic has an open spark scala contract position (100% remote)
inside the USA if anyone is interested, please send your CV to
kali.tumm...@gmail.com.
Thanks & Regards
Sri Tummala
Hi,
I am using the RocksDB state backend for incremental checkpointing with Flink
1.11.
Question:
--
For a given job ID, intermediate RocksDB checkpoints are stored under the
path defined with ""
The files are stored with "_jobID + random UUID" prefixed to the location.
Case : 1
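For reference, a minimal sketch of how incremental RocksDB checkpoints are typically enabled on Flink 1.11 (the checkpoint URI is a placeholder; the equivalent flink-conf.yaml settings are state.backend: rocksdb and state.backend.incremental: true):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalRocksDbSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The second constructor argument enables incremental checkpoints, so each
        // checkpoint uploads only the RocksDB SST files created since the last one.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
        env.enableCheckpointing(60_000); // checkpoint every 60 seconds

        // Trivial pipeline just so the sketch runs end to end.
        env.fromElements(1, 2, 3).print();
        env.execute("incremental-rocksdb-sketch");
    }
}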
Excellent news -- welcome to the new era of easier, more timely and more
feature-rich releases for everyone!
Great job! Ryan
On Thu, Nov 10, 2022 at 3:15 PM Leonard Xu wrote:
> Thanks Chesnay and Martijn for the great work! I believe the
> flink-connector-shared-utils[1] you built will help
Thanks Chesnay and Martijn for the great work! I believe the
flink-connector-shared-utils[1] you built will help Flink connector developers
a lot.
Best,
Leonard
[1] https://github.com/apache/flink-connector-shared-utils
> On Nov 10, 2022, at 9:53 PM, Martijn Visser wrote:
>
> Really happy with the
Really happy with the first externalized connector for Flink. Thanks a lot
to all of you involved!
On Thu, Nov 10, 2022 at 12:51 PM Chesnay Schepler
wrote:
> The Apache Flink community is very happy to announce the release of
> Apache Flink Elasticsearch Connector 3.0.0.
>
> Apache Flink® is an
Hi Etienne,
Nice blog! Thanks for sharing!
Best regards,
Jing
On Wed, Nov 9, 2022 at 5:49 PM Etienne Chauchot
wrote:
> Hi Yun Gao,
>
> FYI I just updated the article after your review:
> https://echauchot.blogspot.com/2022/11/flink-howto-migrate-real-life-batch.html
>
> Best
>
> Etienne
> Le
The Apache Flink community is very happy to announce the release of
Apache Flink Elasticsearch Connector 3.0.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
The release is available