Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Hequn Cheng
Congratulations Rong!

Best, Hequn

On Fri, Jul 12, 2019 at 12:19 PM Jeff Zhang  wrote:

> Congrats, Rong!
>
>
> On Fri, Jul 12, 2019 at 10:08 AM, vino yang wrote:
>
>> congratulations Rong Rong!
>>
>> On Thu, Jul 11, 2019 at 10:25 PM, Fabian Hueske wrote:
>>
>>> Hi everyone,
>>>
>>> I'm very happy to announce that Rong Rong accepted the offer of the
>>> Flink PMC to become a committer of the Flink project.
>>>
>>> Rong has been contributing to Flink for many years, mainly working on
>>> SQL and Yarn security features. He's also frequently helping out on the
>>> user@f.a.o mailing lists.
>>>
>>> Congratulations Rong!
>>>
>>> Best, Fabian
>>> (on behalf of the Flink PMC)
>>>
>>
>
> --
> Best Regards
>
> Jeff Zhang
>


Re: A question about computing Top K on streaming data

2019-07-11 Thread Caizhi Weng
Hi Tony!

> None of this data needs to be materialized in memory, so scalability is not
> limited by memory; only the records that enter the Top-N are read out and
> kept in an in-memory heap as an optimization to speed up the computation.


In the first two cases, since there is no need to fetch old records back into
the Top-N, the state in fact only has to hold the Top-N data. The Top-N data
is maintained in memory first and synced to state at checkpoint time.

In the third case, what you describe is what becomes possible once the
community introduces SortedMapState. Since SortedMapState has not been
introduced yet, every read still loads all the data from state (concretely
there are two states: a ValueState that stores, in order, the number of
records per sort key, and a MapState that fetches the concrete list of records
per sort key, unordered), so the current implementation is indeed limited by
memory... Once SortedMapState is introduced it will no longer be limited by
memory, but fetching data from state will still be costly every time.

> If a new record is added, the ranks of several records may need to be
> incremented or decremented. Does that also require traversing the whole map
> to detect the changes, and notifying downstream of the changed parts?


Yes. Flink currently notifies downstream of every record whose rank changed
(see the processElementWithRowNumber method of AppendOnlyTopNFunction for the
concrete implementation). This is mainly for cases such as row_number() that
need the exact rank. For a plain `limit 3` query, only the two records that
enter and leave the Top-N are sent downstream (the
processElementWithoutRowNumber method).
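
(For intuition, a minimal DataStream-style sketch of the `limit 3` behavior
described above -- my own illustration, not the actual AppendOnlyTopNFunction
code. `Score` is a made-up input type, and a real implementation would keep
the buffer in keyed state rather than in an instance field:)

import java.util.TreeMap;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Emits only the record that enters and the record that leaves the Top-N.
public class Top3Sketch extends KeyedProcessFunction<String, Score, String> {
    private static final int N = 3;
    // score -> user; equal scores are ignored for brevity.
    private final TreeMap<Long, String> topN = new TreeMap<>();

    @Override
    public void processElement(Score s, Context ctx, Collector<String> out) throws Exception {
        if (topN.size() < N) {
            topN.put(s.score, s.user);
            out.collect("+ " + s.user);                // record enters the Top-N
        } else if (s.score > topN.firstKey()) {
            String evicted = topN.remove(topN.firstKey());
            topN.put(s.score, s.user);
            out.collect("+ " + s.user);                // new record enters
            out.collect("- " + evicted);               // old record leaves
        }
        // Scores below the current Top-N are dropped, which is exactly why a
        // later *decrease* of a stored score cannot be repaired from this buffer.
    }
}

// Made-up input type for the sketch.
class Score {
    String user;
    long score;
}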

On Fri, Jul 12, 2019 at 12:15 PM, Tony Wei wrote:

> Hi Caizhi,
>
> Thanks for your answer. Your third point was quite inspiring. The scenario I
> originally had in mind was whether we could avoid keeping every user's data
> in state to solve this problem, but it sounds like that part is unavoidable.
> If I understand correctly, your approach is more like keeping every user's
> ranking information in state: with the rocksdb state backend, none of this
> data needs to be materialized in memory, so scalability is not limited by
> memory, and only the records that enter the Top-N are read out and kept in
> an in-memory heap as an optimization to speed up the computation.
>
> In our current use case the exact rank is not essential information, and
> there may be some soft requirements that would let us relax the constraints
> of the problem. I'm not very confident yet, but perhaps we can build an
> optimization tailored to our scenario based on your idea.
>
> That special case aside, I'm also curious about the details of keeping exact
> ranks in the map state from your first point. If updates arrive one by one
> and a new record is added, the ranks of several records may need to be
> incremented or decremented. Does that also require traversing the whole map
> to detect the changes, and notifying downstream of the changed parts?
> Best Regards,
> Tony Wei
>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Jeff Zhang
Congrats, Rong!


On Fri, Jul 12, 2019 at 10:08 AM, vino yang wrote:

> congratulations Rong Rong!
>
> On Thu, Jul 11, 2019 at 10:25 PM, Fabian Hueske wrote:
>
>> Hi everyone,
>>
>> I'm very happy to announce that Rong Rong accepted the offer of the Flink
>> PMC to become a committer of the Flink project.
>>
>> Rong has been contributing to Flink for many years, mainly working on SQL
>> and Yarn security features. He's also frequently helping out on the
>> user@f.a.o mailing lists.
>>
>> Congratulations Rong!
>>
>> Best, Fabian
>> (on behalf of the Flink PMC)
>>
>

-- 
Best Regards

Jeff Zhang


Re: A question about computing Top K on streaming data

2019-07-11 Thread Tony Wei
Hi Caizhi,

Thanks for your answer. Your third point was quite inspiring. The scenario I
originally had in mind was whether we could avoid keeping every user's data in
state to solve this problem, but it sounds like that part is unavoidable. If I
understand correctly, your approach is more like keeping every user's ranking
information in state: with the rocksdb state backend, none of this data needs
to be materialized in memory, so scalability is not limited by memory, and
only the records that enter the Top-N are read out and kept in an in-memory
heap as an optimization to speed up the computation.

In our current use case the exact rank is not essential information, and there
may be some soft requirements that would let us relax the constraints of the
problem. I'm not very confident yet, but perhaps we can build an optimization
tailored to our scenario based on your idea.

That special case aside, I'm also curious about the details of keeping exact
ranks in the map state from your first point. If updates arrive one by one and
a new record is added, the ranks of several records may need to be incremented
or decremented. Does that also require traversing the whole map to detect the
changes, and notifying downstream of the changed parts?

Best Regards,
Tony Wei

On Fri, Jul 12, 2019 at 11:36 AM, Caizhi Weng wrote:

> Hi Tony!
>
> Actually Flink's implementation of the Top-N problem is not that fancy...
> Flink divides the Top-N problem into three cases:
>
> 1. Records are only added, never updated or deleted (like batch mode).
> This case is implemented by AppendOnlyTopNFunction and, just as you said, is
> maintained with a Map. The reason a heap cannot be used directly is that the
> exact rank of every record has to be reported downstream.
>
> 2. Records may be added and updated.
> This case is implemented by UpdatableTopNFunction, but the comment at the
> top of the class states that it only applies under these special conditions:
> * after an update, a record's rank can only improve, never drop;
> * the records' sort keys must be unique;
> * records cannot be deleted or retracted.
> These conditions rule out the case you described, where a rank drops and the
> record falls out of the Top-N. A Map still suffices here.
>
> 3. Records may be added, updated, and deleted.
> This case is implemented by RetractableTopNFunction. Because an update /
> delete may push a record out of the Top-N and a replacement has to be
> brought back in, the only place to fetch it from is state. Since the
> community has no SortedMapState implementation yet, a ValueState> is
> currently used to store the state. Every read loads the entire state, so
> with a large data volume it is effectively unusable... Once the community
> introduces SortedMapState, an iterator can read just the first few records
> we want to bring back in.
>


Re: A question about computing Top K on streaming data

2019-07-11 Thread Caizhi Weng
Hi Tony!

Actually Flink's implementation of the Top-N problem is not that fancy...
Flink divides the Top-N problem into three cases:

1. Records are only added, never updated or deleted (like batch mode).
This case is implemented by AppendOnlyTopNFunction and, just as you said, is
maintained with a Map. The reason a heap cannot be used directly is that the
exact rank of every record has to be reported downstream.

2. Records may be added and updated.
This case is implemented by UpdatableTopNFunction, but the comment at the top
of the class states that it only applies under these special conditions:
* after an update, a record's rank can only improve, never drop;
* the records' sort keys must be unique;
* records cannot be deleted or retracted.
These conditions rule out the case you described, where a rank drops and the
record falls out of the Top-N. A Map still suffices here.

3. Records may be added, updated, and deleted.
This case is implemented by RetractableTopNFunction. Because an update /
delete may push a record out of the Top-N and a replacement has to be brought
back in, the only place to fetch it from is state. Since the community has no
SortedMapState implementation yet, a ValueState> is currently used to store
the state. Every read loads the entire state, so with a large data volume it
is effectively unusable... Once the community introduces SortedMapState, an
iterator can read just the first few records we want to bring back in.
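
(To make case 3 concrete, a rough sketch of the refill step -- my own
simplification for illustration, not the actual RetractableTopNFunction code:)

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.TreeMap;

public class RetractableRefillSketch {
    // Records per sort key (e.g. score), highest first. Today this whole map
    // sits in a single ValueState and is deserialized on every access; a
    // SortedMapState would let us iterate over just the leading entries.
    private final TreeMap<Long, List<String>> state =
            new TreeMap<>(Comparator.reverseOrder());
    private final List<String> topN = new ArrayList<>();

    // Called after a retraction shrank the Top-N below n: bring the
    // next-best records back in.
    void refill(int n) {
        for (List<String> records : state.values()) {
            for (String record : records) {
                if (topN.size() >= n) {
                    return;                  // an iterator could stop early here
                }
                if (!topN.contains(record)) {
                    topN.add(record);        // replacement re-enters the Top-N
                }
            }
        }
    }
}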

On Thu, Jul 11, 2019 at 9:49 AM, Tony Wei wrote:

> Hi,
>
> I've been studying the Top K problem recently, and found that Blink SQL can
> optimize SQL like the one below by maintaining a "heap" that stores the K
> largest records. However, I believe this only works when `score` increases
> and never decreases.
>
> > SELECT user_id, score
> > FROM (
> >   SELECT *,
> > ROW_NUMBER() OVER (ORDER BY score DESC) AS row_num
> >   FROM user_scores)
> > WHERE row_num <= 3
> >
> >
> My question is: if such a computation runs on streaming data, and `score`
> may both increase and "decrease" over time, as in the SQL below, what
> optimizations are possible?
>
> > SELECT user_id, score
> > FROM (
> >   SELECT *,
> > ROW_NUMBER() OVER (ORDER BY score DESC) AS row_num
> >   FROM (
> >   SELECT user_id, LAST_VAL(score) AS score
> >   FROM user_scores
> >   GROUP BY user_id))
> > WHERE row_num <= 3
> >
> `user_scores` in the SQL can be regarded as a Dynamic Table converted
> directly from a DataStream, and `LAST_VAL` is assumed to be a UDAF that
> picks the latest value. So you can imagine that a user's `score` in this
> table rises and falls over time.
>
> The heap optimization described above cannot handle this; here is an
> example. Suppose a top-3 heap already holds three users A, B, C with scores
> 4, 3, 2. If we then receive a user D with score 1, the algorithm simply
> ignores D, because D is outside the top 3. But if the next record we receive
> updates user A's score to 0, then in theory we know the top 3 should become
> B, C, D, yet the heap that maintains the top 3 has no way to recover the
> ignored user D. This optimization is fine in batch mode, because the latest
> score over a finite dataset is fixed.
>
> For streaming data, however, the only approach I have come up with so far is
> that this application may ultimately have to fall back to storing every
> user's score in order to compute the correct top-k at any time. So I would
> like to ask whether there is any optimization that can handle this problem
> without keeping the full dataset in state. The question is not limited to
> SQL; any optimization implemented on the DataStream API is acceptable.
> Thanks for your help.
>
> Best Regards,
> Tony Wei
>


Re: Question in the tutorial

2019-07-11 Thread Xintong Song
Hi Karthik,

I think more information is needed to diagnose the problem, and I would
suggest the following:
- Check whether the task managers are configured in the file 'conf/slaves'.
  - If configured, you should see some hosts in the file, one per task
manager.
- Check whether the task managers are running, by executing the command 'ps
-ef | grep TaskManagerRunner'.
  - If there are task managers running, you should be able to see the
corresponding processes.
- Check the jobmanager / taskmanager logs to find out why the task managers
were not started, or were started but not found by the cluster.
  - You can find the logs in 'log/flink-*-standalonesession-*.log'.
  - If you cannot find the problem from the logs, you can also post them to
this ML for help.

Thank you~

Xintong Song



On Fri, Jul 12, 2019 at 1:57 AM Karthik Guru  wrote:

> Hey Flink team,
>
> Novice here. I just about started using Flink. I have a slight issue with
> respect to the following instruction in the tutorial.
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.8/tutorials/local_setup.html
>
> In this, when I try to 'Start a Local Flink Cluster' and then connect to
> localhost, I am not getting an update on the task manager. It still shows
> 0 for all three categories. Could you please help?
>
> Thanks for your time
> Karthik
>


Re: Graceful Task Manager Termination and Replacement

2019-07-11 Thread Paul Lam
Hi,

Maybe the region restart strategy can help: it restarts only the minimum set
of tasks required. Note that it's recommended to use it only after the 1.9
release (see [1]), unless you're running a stateless job.

[1] https://issues.apache.org/jira/browse/FLINK-10712 
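
(If you want to try it, the strategy is selected in flink-conf.yaml -- a
sketch, assuming the 1.9 configuration key:)

jobmanager.execution.failover-strategy: region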


Best,
Paul Lam

> On Jul 12, 2019, at 03:38, Aaron Levin wrote:
> 
> Hello,
> 
> Is there a way to gracefully terminate a Task Manager beyond just killing it 
> (this seems to be what `./taskmanager.sh stop` does)? Specifically I'm 
> interested in a way to replace a Task Manager that has currently-running 
> tasks. It would be great if it was possible to terminate a Task Manager 
> without restarting the job, though I'm not sure if this is possible.
> 
> Context: at my work we regularly cycle our hosts for maintenance and 
> security. Each time we do this we stop the task manager running on the host 
> being cycled. This causes the entire job to restart, resulting in downtime 
> for the job. I'd love to decrease this downtime if at all possible.
> 
> Thanks! Any insight is appreciated!
> 
> Best,
> 
> Aaron Levin



Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread vino yang
congratulations Rong Rong!

On Thu, Jul 11, 2019 at 10:25 PM, Fabian Hueske wrote:

> Hi everyone,
>
> I'm very happy to announce that Rong Rong accepted the offer of the Flink
> PMC to become a committer of the Flink project.
>
> Rong has been contributing to Flink for many years, mainly working on SQL
> and Yarn security features. He's also frequently helping out on the
> user@f.a.o mailing lists.
>
> Congratulations Rong!
>
> Best, Fabian
> (on behalf of the Flink PMC)
>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Paul Lam
Congrats to Rong! Rong has contributed a lot to the community and well deserves 
it.

Best,
Paul Lam

> On Jul 12, 2019, at 09:40, JingsongLee wrote:
> 
> Congratulations Rong.
> Rong Rong has done a lot of nice work for the Flink community.
> 
> Best, JingsongLee
> 
> --
> From:Rong Rong 
> Send Time: Fri, Jul 12, 2019 08:09
> To:Hao Sun 
> Cc:Xuefu Z ; dev ; Flink ML 
> 
> Subject:Re: [ANNOUNCE] Rong Rong becomes a Flink committer
> 
> Thank you all for the warm welcome!
> 
> It's my honor to become an Apache Flink committer. 
> I will continue to work on this great project and contribute more to the 
> community.
> 
> Cheers,
> Rong
> 



Re:Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Haibo Sun
Congrats Rong!

Best,
Haibo

At 2019-07-12 09:40:26, "JingsongLee"  wrote:

Congratulations Rong.
Rong Rong has done a lot of nice work for the Flink community.


Best, JingsongLee


--
From:Rong Rong 
Send Time: Fri, Jul 12, 2019 08:09
To:Hao Sun 
Cc:Xuefu Z ; dev ; Flink ML 

Subject:Re: [ANNOUNCE] Rong Rong becomes a Flink committer


Thank you all for the warm welcome!


It's my honor to become an Apache Flink committer. 
I will continue to work on this great project and contribute more to the 
community.



Cheers,
Rong




Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread JingsongLee
Congratulations Rong.
Rong Rong has done a lot of nice work for the Flink community.

Best, JingsongLee


--
From:Rong Rong 
Send Time: Fri, Jul 12, 2019 08:09
To:Hao Sun 
Cc:Xuefu Z ; dev ; Flink ML 

Subject:Re: [ANNOUNCE] Rong Rong becomes a Flink committer

Thank you all for the warm welcome!

It's my honor to become an Apache Flink committer. 
I will continue to work on this great project and contribute more to the 
community.

Cheers,
Rong


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread jincheng sun
Congratulations Rong, Well deserved!

Cheers,
Jincheng

On Fri, Jul 12, 2019 at 9:06 AM, Dian Fu wrote:

>
> Congrats Rong!
>
>
> On Jul 12, 2019, at 8:47 AM, Chen YuZhao wrote:
>
> congratulations!
>
> Get Outlook for iOS
>
> --
> *From:* Rong Rong
> *Sent:* Friday, July 12, 2019 8:09 AM
> *To:* Hao Sun
> *Cc:* Xuefu Z; dev; Flink ML
> *Subject:* Re: [ANNOUNCE] Rong Rong becomes a Flink committer
>
> Thank you all for the warm welcome!
>
> It's my honor to become an Apache Flink committer.
> I will continue to work on this great project and contribute more to the
> community.
>
> Cheers,
> Rong
>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Dian Fu

Congrats Rong!


> On Jul 12, 2019, at 8:47 AM, Chen YuZhao wrote:
> 
> congratulations!
> 
> Get Outlook for iOS
>  
> From: Rong Rong
> Sent: Friday, July 12, 2019 8:09 AM
> To: Hao Sun
> Cc: Xuefu Z; dev; Flink ML
> Subject: Re: [ANNOUNCE] Rong Rong becomes a Flink committer
>  
> Thank you all for the warm welcome!
> 
> It's my honor to become an Apache Flink committer. 
> I will continue to work on this great project and contribute more to the 
> community.
> 
> Cheers,
> Rong
> 



Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Chen YuZhao
congratulations!

Get Outlook for iOS


From: Rong Rong
Sent: Friday, July 12, 2019 8:09 AM
To: Hao Sun
Cc: Xuefu Z; dev; Flink ML
Subject: Re: [ANNOUNCE] Rong Rong becomes a Flink committer

Thank you all for the warm welcome!

It's my honor to become an Apache Flink committer.
I will continue to work on this great project and contribute more to the 
community.

Cheers,
Rong



Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Rong Rong
Thank you all for the warm welcome!

It's my honor to become an Apache Flink committer.
I will continue to work on this great project and contribute more to the
community.

Cheers,
Rong

On Thu, Jul 11, 2019 at 1:05 PM Hao Sun  wrote:

> Congratulations Rong.
>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Hao Sun
Congratulations Rong.

On Thu, Jul 11, 2019, 11:39 Xuefu Z  wrote:

> Congratulations, Rong!
>


Re: Graceful Task Manager Termination and Replacement

2019-07-11 Thread Hao Sun
I have a common interest in this topic. My k8s setup recycles hosts, and I am
facing the same issue. Flink can tolerate this situation, but I am wondering
if I can do better.

On Thu, Jul 11, 2019, 12:39 Aaron Levin  wrote:

> Hello,
>
> Is there a way to gracefully terminate a Task Manager beyond just killing
> it (this seems to be what `./taskmanager.sh stop` does)? Specifically I'm
> interested in a way to replace a Task Manager that has currently-running
> tasks. It would be great if it was possible to terminate a Task Manager
> without restarting the job, though I'm not sure if this is possible.
>
> Context: at my work we regularly cycle our hosts for maintenance and
> security. Each time we do this we stop the task manager running on the host
> being cycled. This causes the entire job to restart, resulting in downtime
> for the job. I'd love to decrease this downtime if at all possible.
>
> Thanks! Any insight is appreciated!
>
> Best,
>
> Aaron Levin
>


Graceful Task Manager Termination and Replacement

2019-07-11 Thread Aaron Levin
Hello,

Is there a way to gracefully terminate a Task Manager beyond just killing
it (this seems to be what `./taskmanager.sh stop` does)? Specifically I'm
interested in a way to replace a Task Manager that has currently-running
tasks. It would be great if it was possible to terminate a Task Manager
without restarting the job, though I'm not sure if this is possible.

Context: at my work we regularly cycle our hosts for maintenance and
security. Each time we do this we stop the task manager running on the host
being cycled. This causes the entire job to restart, resulting in downtime
for the job. I'd love to decrease this downtime if at all possible.

Thanks! Any insight is appreciated!

Best,

Aaron Levin


Flink SQL API: Extra columns added from order by

2019-07-11 Thread Morrisa Brenner
Hi Flink folks,

We have a custom date formatting function that we use to format the output
of columns containing dates. Ideally what we want is to format the output
in the select statement but be able to order by the underlying datetime (so
that an output with formatted dates "February 2019" and "April 2019" is
guaranteed to have the rows sorted in time order rather than alphabetical
order).

When I go to add the unformatted column to the order by, however, that gets
appended as an extra column to the select statement during the query
planning process within Calcite. (In the order by parsing, it's considering
this a different column from the one in the select statement.) When the
group by column is different in the same way but there's no order by
column, the extra column isn't added. I've included a couple of simple
examples below.

Is this the intended behavior of the query planner? Does anyone know of a
way around this without needing to change the formatting so that it makes
the output dates correctly sortable?

Thanks for your help!

Morrisa



Example query and output with order by using the formatted date:

SELECT
  formatDate(floor(`testTable`.`timestamp` TO MONTH), 'MONTH'),
  sum(`testTable`.`count`)
FROM `testTable`
GROUP BY formatDate(floor(`testTable`.`timestamp` TO MONTH), 'MONTH')
ORDER BY formatDate(floor(`testTable`.`timestamp` TO MONTH), 'MONTH') ASC

Month         | SUM VALUE
--------------+----------
April 2019    | 1052
February 2019 | 1


Example query and output without order by, but group by using the unformatted
date:

SELECT
  formatDate(floor(`testTable`.`timestamp` TO MONTH), 'MONTH'),
  sum(`testTable`.`count`)
FROM `testTable`
GROUP BY floor(`testTable`.`timestamp` TO MONTH)

Month         | SUM VALUE
--------------+----------
February 2019 | 1
April 2019    | 1052

We would like to enforce the ordering, so although this output is what we
want, I don't think we can use this solution.

Example query and output with order by using the unformatted date:

SELECT
  formatDate(floor(`testTable`.`timestamp` TO MONTH), 'MONTH'),
  sum(`testTable`.`count`)
FROM `testTable`
GROUP BY floor(`testTable`.`timestamp` TO MONTH)
ORDER BY floor(`testTable`.`timestamp` TO MONTH) ASC

Month         | SUM VALUE | (extra column appended by the planner)
--------------+-----------+---------------------------------------
February 2019 | 1         | 2/1/2019 12:00 AM
April 2019    | 1052      | 4/1/2019 12:00 AM
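
(A possible workaround sketch: if the custom formatDate can emit a lexically
sortable pattern such as 'YYYY-MM' -- an assumption about the UDF, not its
current behavior -- then the ORDER BY expression is identical to the SELECT
expression and still sorts chronologically, so no extra column should be
appended:)

SELECT
  formatDate(floor(`testTable`.`timestamp` TO MONTH), 'YYYY-MM'),
  sum(`testTable`.`count`)
FROM `testTable`
GROUP BY formatDate(floor(`testTable`.`timestamp` TO MONTH), 'YYYY-MM')
ORDER BY formatDate(floor(`testTable`.`timestamp` TO MONTH), 'YYYY-MM') ASC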


-- 
Morrisa Brenner
Software Engineer
225 Franklin St, Boston, MA 02110
klaviyo.com 


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Xuefu Z
Congratulations, Rong!

On Thu, Jul 11, 2019 at 10:59 AM Bowen Li  wrote:

> Congrats, Rong!
>
>


-- 
Xuefu Zhang

"In Honey We Trust!"


Re: Apache Flink - Relation between stream time characteristic and timer triggers

2019-07-11 Thread M Singh
Thanks Fabian/Xingcan/Yun for all your help.

Mans

On Thursday, July 11, 2019, 11:46:42 AM EDT, Fabian Hueske wrote:

Hi,

ProcessingTime timers are always supported.
EventTime timers are only supported for EventTime and IngestionTime.

Best, Fabian

On Thu, Jul 11, 2019 at 17:44, M Singh wrote:

Thanks Fabian for your response.

Just to clarify then - regardless of the time characteristics, if a processor
or window trigger registers ProcessingTime and EventTime timers, they will
all fire when the appropriate watermarks arrive.

Thanks again.

On Thursday, July 11, 2019, 05:41:54 AM EDT, Fabian Hueske wrote:

Hi Mans,

IngestionTime uses the same internal mechanisms as EventTime (record
timestamps and watermarks).

The difference is that instead of extracting a timestamp from the record
(using a custom timestamp extractor & wm assigner), Flink will assign
timestamps based on the machine clock of the machine that runs the source
task and will also automatically generate watermarks. If you ask for my
opinion, IngestionTime combines the disadvantages of ProcessingTime and
EventTime: you pay the latency / performance penalty of EventTime for the
non-determinism of ProcessingTime.

So, if you enable IngestionTime, you can use EventTime timers and
ProcessingTime timers.

Best, Fabian

On Wed, Jul 10, 2019 at 09:37, M Singh wrote:

Thanks for your answer Xingcan.

Just to clarify - if the characteristic is set to IngestionTime or
ProcessingTime, the event time triggers will be ignored and will not fire.

Mans

On Tuesday, July 9, 2019, 04:32:00 PM EDT, Xingcan Cui wrote:

Yes, Mans. You can use both processing-time and event-time timers if you set
the time characteristic to event-time. They'll be triggered by their own time
semantics, separately. (Actually, there's no watermark for processing time.)

Cheers,
Xingcan

On Jul 9, 2019, at 11:40 AM, M Singh wrote:

Thanks Yun for your answers.

Does this mean that we can use processing and event timers (in processors or
triggers) regardless of the time characteristic? Also, is it possible to use
both together, and will they both fire at the appropriate watermarks for
processing and event times?

Mans

On Tuesday, July 9, 2019, 12:18:30 AM EDT, Yun Gao wrote:

Hi,

For the three questions:

1. The processing time timer will be triggered. IMO you may think of the
processing time timer as working in parallel with the event time timer; they
are processed separately underneath. The processing time timer is triggered
according to the wall-clock time.

2. I'm not very clear on what "changed later in the application" means. Do
you mean calling `StreamExecutionEnvironment#setStreamTimeCharacteristic`
multiple times? If so, the last call takes effect for all the operators
declared before or after it, since the setting only takes effect in
`StreamExecutionEnvironment#execute`.

3. 'assignTimestampsAndWatermarks' changes the timestamp of the record. IMO
you may think of each record as containing a timestamp field that is set when
ingesting; 'assignTimestampsAndWatermarks' overwrites the value of this
field, so the following operators relying on the timestamp will see the
updated value.

Best,
Yun

From: M Singh
Send Time: 2019 Jul. 9 (Tue.) 09:42
To: User
Subject: Apache Flink - Relation between stream time characteristic and
timer triggers

Hi:

I have a few questions about the stream time characteristics:

1. If the time characteristic is set to TimeCharacteristic.EventTime, but the
timers in a processor or trigger are set using registerProcessingTimeTimer
(or vice versa), will that timer fire?

2. Once the time characteristic is set on the stream environment and changed
later in the application, which one is applied - the first one or the last
one?

3. If the stream time characteristic is set to IngestionTime, is there any
adverse effect of assigning the timestamp using assignTimestampsAndWatermarks
to a stream later in the application?

Thanks

Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Bowen Li
Congrats, Rong!


On Thu, Jul 11, 2019 at 10:48 AM Oytun Tez  wrote:

> Congratulations Rong!
>
> ---
> Oytun Tez
>
> *M O T A W O R D*
> The World's Fastest Human Translation Platform.
> oy...@motaword.com — www.motaword.com
>
>



Question in the tutorial

2019-07-11 Thread Karthik Guru
Hey Flink team,

Novice here. I just about started using Flink. I have a slight issue with
respect to the following instruction in the tutorial.
https://ci.apache.org/projects/flink/flink-docs-release-1.8/tutorials/local_setup.html

In this, when I try to 'Start a Local Flink Cluster' and then connect to
localhost, I am not getting an update on the task manager. It still shows
0 for all three categories. Could you please help?

Thanks for your time
Karthik


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Oytun Tez
Congratulations Rong!

---
Oytun Tez

*M O T A W O R D*
The World's Fastest Human Translation Platform.
oy...@motaword.com — www.motaword.com


On Thu, Jul 11, 2019 at 1:44 PM Peter Huang 
wrote:

> Congrats Rong!
>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Peter Huang
Congrats Rong!

On Thu, Jul 11, 2019 at 10:40 AM Becket Qin  wrote:

> Congrats, Rong!
>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Becket Qin
Congrats, Rong!

On Fri, Jul 12, 2019 at 1:13 AM Xingcan Cui  wrote:

> Congrats Rong!
>
> Best,
> Xingcan
>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Xingcan Cui
Congrats Rong!

Best,
Xingcan

> On Jul 11, 2019, at 1:08 PM, Shuyi Chen  wrote:
> 
> Congratulations, Rong!
> 



Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Shuyi Chen
Congratulations, Rong!

On Thu, Jul 11, 2019 at 8:26 AM Yu Li  wrote:

> Congratulations Rong!
>
> Best Regards,
> Yu
>
>


Issue starting Flink job with with Avro class

2019-07-11 Thread Steven Nelson
Hello! We are working on a Flink application and came across this
error. The "Record" class is a class generated from an Avro Schema.
It's actually used by a second "Operation" class which doesn't seem to
have this problem. Has anyone seen this before?


org.apache.flink.streaming.runtime.tasks.StreamTaskException: Could
not instantiate chained outputs.
at 
org.apache.flink.streaming.api.graph.StreamConfig.getChainedOutputs(StreamConfig.java:324)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:292)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:133)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:267)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.InvalidClassException: com.x...Record;
Serializable incompatible with Externalizable
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:711)
at 
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
at java.io.ObjectInputStream.readClass(ObjectInputStream.java:1716)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1556)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1975)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at java.util.ArrayList.readObject(ArrayList.java:797)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at java.util.ArrayList.readObject(ArrayList.java:797)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at 
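
The "Serializable incompatible with Externalizable" error from
ObjectStreamClass.initNonProxy means the class bytes found at deserialization
time disagree with the serialized stream descriptor about whether the class
implements Externalizable. For Avro-generated classes this is typically a
version mismatch between the Avro runtime/generated code in the job jar and
the one on the cluster classpath. A quick diagnostic sketch (illustrative,
not from the original thread; "com.example.Record" is a hypothetical stand-in
for the redacted class name):

public class CheckRecordClass {
    public static void main(String[] args) throws Exception {
        // hypothetical stand-in for the real generated Avro class
        Class<?> c = Class.forName("com.example.Record");
        // where was this class actually loaded from?
        System.out.println("loaded from: "
                + c.getProtectionDomain().getCodeSource().getLocation());
        // does the local version implement Externalizable?
        System.out.println("implements Externalizable: "
                + java.io.Externalizable.class.isAssignableFrom(c));
    }
}

Running this against the cluster classpath and against the job jar should
reveal whether the two resolve to different generated versions of the class.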

Re: Apache Flink - Relation between stream time characteristic and timer triggers

2019-07-11 Thread Fabian Hueske
Hi,

ProcessingTime timers are always supported.
EventTime timers are only supported when the time characteristic is EventTime or IngestionTime.

Best, Fabian
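
A minimal sketch of this rule in a single function (illustrative; the class
and message names are made up, but the TimerService calls are the standard
Flink API):

import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class BothTimers extends KeyedProcessFunction<String, String, String> {

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        // always allowed: fires on the machine clock
        ctx.timerService().registerProcessingTimeTimer(
                ctx.timerService().currentProcessingTime() + 10_000L);
        // only fires when a watermark passes the timestamp, i.e. under
        // EventTime or IngestionTime; ctx.timestamp() is null under
        // ProcessingTime, so guard against that
        if (ctx.timestamp() != null) {
            ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 10_000L);
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        out.collect("timer fired at " + timestamp);
    }
}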

On Thu., July 11, 2019 at 17:44, M Singh wrote:

> Thanks Fabian for your response.
>
> Just to clarify then - regardless of the time characteristic, if a
> processor or window trigger registers ProcessingTime and EventTime
> timers - they will all fire when the appropriate watermarks arrive.
>
> Thanks again.
>
> On Thursday, July 11, 2019, 05:41:54 AM EDT, Fabian Hueske <
> fhue...@gmail.com> wrote:
>
>
> Hi Mans,
>
> IngestionTime uses the same internal mechanisms as EventTime (record
> timestamps and watermarks).
>
> The difference is that instead of extracting a timestamp from the record
> (using a custom timestamp extractor & wm assigner), Flink will assign
> timestamps based on the machine clock of the machine that runs the source
> task and will also automatically generate watermarks. If you ask for my
> opinion, IngestionTime combines the disadvantages of ProcessingTime and
> EventTime. You pay the latency / performance penalty of EventTime for the
> non-determinism of ProcessingTime.
>
> So, if you enable IngestionTime, you can use EventTime timers and
> ProcessingTime timers.
>
> Best, Fabian
>
> On Wed., July 10, 2019 at 09:37, M Singh wrote:
>
> Thanks for your answer Xingcan.
>
> Just to clarify - if the characteristic is set to IngestionTime or
> ProcessingTime, the event time triggers will be ignored and not fire.
>
> Mans
>
> On Tuesday, July 9, 2019, 04:32:00 PM EDT, Xingcan Cui 
> wrote:
>
>
> Yes, Mans. You can use both processing-time and event-time timers if you
> set the time characteristic to event-time. They'll be triggered by their
> own time semantics, separately. (actually there’s no watermark for
> processing time)
>
> Cheers,
> Xingcan
>
> On Jul 9, 2019, at 11:40 AM, M Singh  wrote:
>
> Thanks Yun for your answers.
>
> Does this mean that we can use processing and event timers (in processors
> or triggers) regardless of the time characteristic ?  Also, is possible to
> use both together and will they both fire at the appropriate watermarks for
> processing and event times ?
>
> Mans
>
> On Tuesday, July 9, 2019, 12:18:30 AM EDT, Yun Gao 
> wrote:
>
>
> Hi,
> For the three questions,
>   1. The processing time timer will be triggered. IMO you may think of the
> processing time timer as working in parallel with the event time timer. They
> are processed separately under the hood. The processing time timer will be
> triggered according to the real (wall-clock) time.
>   2. I'm not very clear on what "changed later in the application" means. Do
> you mean calling `StreamExecutionEnvironment#setStreamTimeCharacteristic`
> multiple times? If so, then the last call will take effect for all the
> operators before or after the last call, since the setting only takes
> effect in `StreamExecutionEnvironment#execute`.
>   3. `assignTimestampsAndWatermarks` will change the timestamp of the
> record. IMO you may think of each record as containing a timestamp field that
> is set when ingesting; `assignTimestampsAndWatermarks` will change
> the value of this field, so the following operators relying on the
> timestamp will see the updated value.
>
> Best,
> Yun
>
>
>
> --
> From:M Singh 
> Send Time:2019 Jul. 9 (Tue.) 09:42
> To:User 
> Subject:Apache Flink - Relation between stream time characteristic and
> timer triggers
>
> Hi:
>
> I have a few questions about the stream time characteristics:
>
> 1. If the time characteristic is set to TimeCharacteristic.EventTime, but
> the timers in a processor or trigger is set using
> registerProcessingTimeTimer (or vice versa), then will that timer fire ?
>
> 2.  Once the time characteristic is set on the stream environment, and changed
> later in the application, which one is applied, the first one or the last
> one ?
>
> 3.  If the stream time characteristic is set to IngestionTime, then is
> there any adverse effect of assigning the timestamp using
> assignTimestampsAndWatermarks to a stream later in the application?
>
> Thanks
>
>
>
>


Re: Apache Flink - Relation between stream time characteristic and timer triggers

2019-07-11 Thread M Singh
Thanks Fabian for your response.

Just to clarify then - regardless of the time characteristic, if a processor
or window trigger registers ProcessingTime and EventTime timers - they
will all fire when the appropriate watermarks arrive.

Thanks again.

On Thursday, July 11, 2019, 05:41:54 AM EDT, Fabian Hueske wrote:

Hi Mans,

IngestionTime uses the same internal mechanisms as EventTime (record
timestamps and watermarks).

The difference is that instead of extracting a timestamp from the record (using 
a custom timestamp extractor & wm assigner), Flink will assign timestamps based 
on the machine clock of the machine that runs the source task and will also 
automatically generate watermarks. If you ask for my opinion, IngestionTime 
combines the disadvantages of ProcessingTime and EventTime. You pay the latency 
/ performance penalty of EventTime for the non-determinism of ProcessingTime.

So, if you enable IngestionTime, you can use EventTime timers and 
ProcessingTime timers.
Best, Fabian

On Wed., July 10, 2019 at 09:37, M Singh wrote:

Thanks for your answer Xingcan.

Just to clarify - if the characteristic is set to IngestionTime or
ProcessingTime, the event time triggers will be ignored and not fire.

Mans

On Tuesday, July 9, 2019, 04:32:00 PM EDT, Xingcan Cui wrote:

Yes, Mans. You can use both processing-time and event-time timers if you set
the time characteristic to event-time. They'll be triggered by their own time
semantics, separately. (actually there's no watermark for processing time)

Cheers,
Xingcan

On Jul 9, 2019, at 11:40 AM, M Singh wrote:

Thanks Yun for your answers.

Does this mean that we can use processing and event timers (in processors or
triggers) regardless of the time characteristic? Also, is it possible to use
both together and will they both fire at the appropriate watermarks for
processing and event times?

Mans

On Tuesday, July 9, 2019, 12:18:30 AM EDT, Yun Gao wrote:

Hi,

For the three questions:
1. The processing time timer will be triggered. IMO you may think of the
processing time timer as working in parallel with the event time timer. They
are processed separately under the hood. The processing time timer will be
triggered according to the real (wall-clock) time.
2. I'm not very clear on what "changed later in the application" means. Do
you mean calling `StreamExecutionEnvironment#setStreamTimeCharacteristic`
multiple times? If so, then the last call will take effect for all the
operators before or after the last call, since the setting only takes
effect in `StreamExecutionEnvironment#execute`.
3. `assignTimestampsAndWatermarks` will change the timestamp of the record.
IMO you may think of each record as containing a timestamp field that is set
when ingesting; `assignTimestampsAndWatermarks` will change the value of this
field, so the following operators relying on the timestamp will see the
updated value.

Best,
Yun

--
From: M Singh
Send Time: 2019 Jul. 9 (Tue.) 09:42
To: User
Subject: Apache Flink - Relation between stream time characteristic and
timer triggers

Hi:

I have a few questions about the stream time characteristics:

1. If the time characteristic is set to TimeCharacteristic.EventTime, but the
timers in a processor or trigger are set using registerProcessingTimeTimer (or
vice versa), then will that timer fire?
2. Once the time characteristic is set on the stream environment, and changed
later in the application, which one is applied, the first one or the last one?
3. If the stream time characteristic is set to IngestionTime, then is there
any adverse effect of assigning the timestamp using
assignTimestampsAndWatermarks to a stream later in the application?

Thanks

Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Yu Li
Congratulations Rong!

Best Regards,
Yu


On Thu, 11 Jul 2019 at 22:54, zhijiang  wrote:

> Congratulations Rong!
>
> Best,
> Zhijiang
>
> --
> From:Kurt Young 
> Send Time: Thu, July 11, 2019, 22:54
> To:Kostas Kloudas 
> Cc:Jark Wu ; Fabian Hueske ; dev <
> d...@flink.apache.org>; user 
> Subject:Re: [ANNOUNCE] Rong Rong becomes a Flink committer
>
> Congratulations Rong!
>
> Best,
> Kurt
>
>
> On Thu, Jul 11, 2019 at 10:53 PM Kostas Kloudas 
> wrote:
> Congratulations Rong!
>
> On Thu, Jul 11, 2019 at 4:40 PM Jark Wu  wrote:
> Congratulations Rong Rong!
> Welcome on board!
>
> On Thu, 11 Jul 2019 at 22:25, Fabian Hueske  wrote:
> Hi everyone,
>
> I'm very happy to announce that Rong Rong accepted the offer of the Flink
> PMC to become a committer of the Flink project.
>
> Rong has been contributing to Flink for many years, mainly working on SQL
> and Yarn security features. He's also frequently helping out on the
> user@f.a.o mailing lists.
>
> Congratulations Rong!
>
> Best, Fabian
> (on behalf of the Flink PMC)
>
>
>


Re: How are kafka consumer offsets handled if sink fails?

2019-07-11 Thread John Smith
Ok cool. I will try to make my stored proc idempotent. So there's no chance
that a checkpoint happens after the 5th record and the 5th record
is missed?

On Thu, 11 Jul 2019 at 05:20, Fabian Hueske  wrote:

> Hi John,
>
> let's say Flink performed a checkpoint after the 2nd record (by injecting
> a checkpoint marker into the data flow) and the sink fails on the 5th
> record.
> When Flink restarts the application, it resets the offset to just after the 2nd
> record (it will read the 3rd record first). Hence, the 3rd and 4th record
> will be emitted again.
>
> Best, Fabian
>
>
> On Tue., July 9, 2019 at 21:11, John Smith <
> java.dev@gmail.com> wrote:
>
>> Ok so when the sink fails on the 5th record then there's no chance that
>> the checkpoint can be at 6th event right?
>>
>> On Tue, 9 Jul 2019 at 13:51, Konstantin Knauf 
>> wrote:
>>
>>> Hi John,
>>>
>>> this depends on your checkpoint interval. When enabled checkpoints are
>>> triggered periodically [1].
>>>
>>> Cheers,
>>>
>>> Konstantin
>>>
>>> [1]
>>> https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/checkpointing.html
>>>
>>>
>>>
>>> On Tue, Jul 9, 2019 at 7:30 PM John Smith 
>>> wrote:
>>>
 Ok so just to be clear. Let's say we started at day 0...

 1- Producer inserted 10 records into Kafka.
 2- Kafka Flink Consumer consumed 5 records.
 3- Some transformations applied to those records.
 4- 4 records written to the sink, 1 failed.
 5- Flink Job restarts because of above failure.

 When does the checkpoint happen above?
 And does it mean in the above case that it will start back at 0 or will
 it start at the 4th record and continue, or wherever the checkpoint
 happened. Example: 3rd record?
 My stored proc will be idempotent and I understand what to do if messages
 get replayed.
 Just want to try to understand when and where the checkpointing will
 happen.

 On Mon, 8 Jul 2019 at 22:23, Rong Rong  wrote:

> Hi John,
>
> I think what Konstantin is trying to say is: Flink's Kafka consumer
> does not start consuming from the Kafka commit offset when starting the
> consumer, it would actually start with the offset that's last checkpointed
> to external DFS. (e.g. the starting point of the consumer has no relevance
> with Kafka committed offset whatsoever - if checkpoint is enabled.)
>
> This is to quote:
> "*the Flink Kafka Consumer does only commit offsets back to Kafka on
> a best-effort basis after every checkpoint. Internally Flink "commits" the
> [checkpoints]->[current Kafka offset] as part of its periodic 
> checkpoints.*
> "
>
> However if you do not enable checkpointing, I think your consumer will
> by default restart from the default Kafka offset (which I think is your
> committed group offset).
>
> --
> Rong
>
>
> On Mon, Jul 8, 2019 at 6:39 AM John Smith 
> wrote:
>
>> So when we say a sink is at least once. It's because internally it's
>> not checking any kind of state and it sends what it has regardless,
>> correct? Cause I will build a sink that calls stored procedures.
>>
>> On Sun., Jul. 7, 2019, 4:03 p.m. Konstantin Knauf, <
>> konstan...@ververica.com> wrote:
>>
>>> Hi John,
>>>
>>> in case of a failure (e.g. in the SQL Sink) the Flink Job will be
>>> restarted from the last checkpoint. This means the offset of all Kafka
>>> partitions will be reset to that point in the stream along with state of
>>> all operators. To enable checkpointing you need to call
>>> StreamExecutionEnvironment#enableCheckpointing(). If you are using the
>>> JDBCSinkFunction (which is an at-least-once sink), the output will be
>>> duplicated in the case of failures.
>>>
>>> To answer your questions:
>>>
>>> * For this the FlinkKafkaConsumer handles the offsets manually (no
>>> auto-commit).
>>> * No, the Flink Kafka Consumer does only commit offsets back to
>>> Kafka on a best-effort basis after every checkpoint. Internally Flink
>>> "commits" the checkpoints as part of its periodic checkpoints.
>>> * Yes, along with all other events between the last checkpoint and
>>> the failure.
>>> * It will continue from the last checkpoint.
>>>
>>> Hope this helps.
>>>
>>> Cheers,
>>>
>>> Konstantin
>>>
>>> On Fri, Jul 5, 2019 at 8:37 PM John Smith 
>>> wrote:
>>>
 Hi using Apache Flink 1.8.0

 I'm consuming events from Kafka using nothing fancy...

 Properties props = new Properties();
 props.setProperty("bootstrap.servers", kafkaAddress);
 props.setProperty("group.id", kafkaGroup);

 FlinkKafkaConsumer<String> consumer =
         new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), props);


 Do some JSON transforms and then push to my SQL 

Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread zhijiang
Congratulations Rong!

Best,
Zhijiang
--
From:Kurt Young 
Send Time: Thu, July 11, 2019, 22:54
To:Kostas Kloudas 
Cc:Jark Wu ; Fabian Hueske ; dev 
; user 
Subject:Re: [ANNOUNCE] Rong Rong becomes a Flink committer

Congratulations Rong!

Best,
Kurt


On Thu, Jul 11, 2019 at 10:53 PM Kostas Kloudas  wrote:
Congratulations Rong!
On Thu, Jul 11, 2019 at 4:40 PM Jark Wu  wrote:
Congratulations Rong Rong! 
Welcome on board!
On Thu, 11 Jul 2019 at 22:25, Fabian Hueske  wrote:
Hi everyone,

I'm very happy to announce that Rong Rong accepted the offer of the Flink PMC 
to become a committer of the Flink project.

Rong has been contributing to Flink for many years, mainly working on SQL and 
Yarn security features. He's also frequently helping out on the user@f.a.o 
mailing lists.

Congratulations Rong!

Best, Fabian 
(on behalf of the Flink PMC)



Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Shaoxuan Wang
 Congratulations Rong!

On Thu, Jul 11, 2019 at 10:40 PM Jark Wu  wrote:

> Congratulations Rong Rong!
> Welcome on board!
>
> On Thu, 11 Jul 2019 at 22:25, Fabian Hueske  wrote:
>
>> Hi everyone,
>>
>> I'm very happy to announce that Rong Rong accepted the offer of the Flink
>> PMC to become a committer of the Flink project.
>>
>> Rong has been contributing to Flink for many years, mainly working on SQL
>> and Yarn security features. He's also frequently helping out on the
>> user@f.a.o mailing lists.
>>
>> Congratulations Rong!
>>
>> Best, Fabian
>> (on behalf of the Flink PMC)
>>
>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Kurt Young
Congratulations Rong!

Best,
Kurt


On Thu, Jul 11, 2019 at 10:53 PM Kostas Kloudas  wrote:

> Congratulations Rong!
>
> On Thu, Jul 11, 2019 at 4:40 PM Jark Wu  wrote:
>
>> Congratulations Rong Rong!
>> Welcome on board!
>>
>> On Thu, 11 Jul 2019 at 22:25, Fabian Hueske  wrote:
>>
>>> Hi everyone,
>>>
>>> I'm very happy to announce that Rong Rong accepted the offer of the
>>> Flink PMC to become a committer of the Flink project.
>>>
>>> Rong has been contributing to Flink for many years, mainly working on
>>> SQL and Yarn security features. He's also frequently helping out on the
>>> user@f.a.o mailing lists.
>>>
>>> Congratulations Rong!
>>>
>>> Best, Fabian
>>> (on behalf of the Flink PMC)
>>>
>>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Kostas Kloudas
Congratulations Rong!

On Thu, Jul 11, 2019 at 4:40 PM Jark Wu  wrote:

> Congratulations Rong Rong!
> Welcome on board!
>
> On Thu, 11 Jul 2019 at 22:25, Fabian Hueske  wrote:
>
>> Hi everyone,
>>
>> I'm very happy to announce that Rong Rong accepted the offer of the Flink
>> PMC to become a committer of the Flink project.
>>
>> Rong has been contributing to Flink for many years, mainly working on SQL
>> and Yarn security features. He's also frequently helping out on the
>> user@f.a.o mailing lists.
>>
>> Congratulations Rong!
>>
>> Best, Fabian
>> (on behalf of the Flink PMC)
>>
>


Re: [ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Jark Wu
Congratulations Rong Rong!
Welcome on board!

On Thu, 11 Jul 2019 at 22:25, Fabian Hueske  wrote:

> Hi everyone,
>
> I'm very happy to announce that Rong Rong accepted the offer of the Flink
> PMC to become a committer of the Flink project.
>
> Rong has been contributing to Flink for many years, mainly working on SQL
> and Yarn security features. He's also frequently helping out on the
> user@f.a.o mailing lists.
>
> Congratulations Rong!
>
> Best, Fabian
> (on behalf of the Flink PMC)
>


[ANNOUNCE] Rong Rong becomes a Flink committer

2019-07-11 Thread Fabian Hueske
Hi everyone,

I'm very happy to announce that Rong Rong accepted the offer of the Flink
PMC to become a committer of the Flink project.

Rong has been contributing to Flink for many years, mainly working on SQL
and Yarn security features. He's also frequently helping out on the
user@f.a.o mailing lists.

Congratulations Rong!

Best, Fabian
(on behalf of the Flink PMC)


CEP Pattern limit

2019-07-11 Thread Pedro Saraiva
Hello,

I'm using CEP to match a stream against around 1000 different patterns.

To do this I create the patterns and then iterate and call CEP.pattern() for
each. Later on, I merge the PatternStreams into one using
datastream.union().

The problem is that I'm getting this exception:
AskTimeoutException: Ask timed out on Actor... after 1ms. Sender null
sent message of type LocalRpcInvocation.

I noticed that this exception is thrown when I reach around 500 patterns.

Is there a way to overcome this limit?

Kind regards,

Pedro Saraiva
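
One mitigation that sometimes helps with this symptom is raising the RPC
timeouts in flink-conf.yaml, since submitting a job graph with hundreds of
patterns can exceed the default ask timeout (the values below are
illustrative placeholders, not recommendations):

akka.ask.timeout: 60 s
web.timeout: 60000

If a larger timeout only delays the failure, the real limit is likely the
size of the generated job graph rather than the timeout itself.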


Re: Apache Flink - Relation between stream time characteristic and timer triggers

2019-07-11 Thread Fabian Hueske
Hi Mans,

IngestionTime uses the same internal mechanisms as EventTime (record
timestamps and watermarks).

The difference is that instead of extracting a timestamp from the record
(using a custom timestamp extractor & wm assigner), Flink will assign
timestamps based on the machine clock of the machine that runs the source
task and will also automatically generate watermarks. If you ask for my
opinion, IngestionTime combines the disadvantages of ProcessingTime and
EventTime. You pay the latency / performance penalty of EventTime for the
non-determinism of ProcessingTime.

So, if you enable IngestionTime, you can use EventTime timers and
ProcessingTime timers.

Best, Fabian
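
A minimal sketch of the setup described above (illustrative; imports from
org.apache.flink.streaming.api.* are omitted, and the 5-second bound and the
Long elements are placeholders):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
// sources now attach wall-clock timestamps and emit watermarks automatically

DataStream<Long> events = env.fromElements(1L, 2L, 3L);
// optional: re-assigning timestamps later overrides the ingestion timestamps
// for everything downstream of this call
events.assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor<Long>(Time.seconds(5)) {
            @Override
            public long extractTimestamp(Long element) {
                // here the element itself serves as the event time
                return element;
            }
        });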

On Wed., July 10, 2019 at 09:37, M Singh wrote:

> Thanks for your answer Xingcan.
>
> Just to clarify - if the characteristic is set to IngestionTime or
> ProcessingTime, the event time triggers will be ignored and not fire.
>
> Mans
>
> On Tuesday, July 9, 2019, 04:32:00 PM EDT, Xingcan Cui 
> wrote:
>
>
> Yes, Mans. You can use both processing-time and event-time timers if you
> set the time characteristic to event-time. They'll be triggered by their
> own time semantics, separately. (actually there’s no watermark for
> processing time)
>
> Cheers,
> Xingcan
>
> On Jul 9, 2019, at 11:40 AM, M Singh  wrote:
>
> Thanks Yun for your answers.
>
> Does this mean that we can use processing and event timers (in processors
> or triggers) regardless of the time characteristic? Also, is it possible to
> use both together and will they both fire at the appropriate watermarks for
> processing and event times?
>
> Mans
>
> On Tuesday, July 9, 2019, 12:18:30 AM EDT, Yun Gao 
> wrote:
>
>
> Hi,
> For the three questions,
>   1. The processing time timer will be triggered. IMO you may think of the
> processing time timer as working in parallel with the event time timer. They
> are processed separately under the hood. The processing time timer will be
> triggered according to the real (wall-clock) time.
>   2. I'm not very clear on what "changed later in the application" means. Do
> you mean calling `StreamExecutionEnvironment#setStreamTimeCharacteristic`
> multiple times? If so, then the last call will take effect for all the
> operators before or after the last call, since the setting only takes
> effect in `StreamExecutionEnvironment#execute`.
>   3. `assignTimestampsAndWatermarks` will change the timestamp of the
> record. IMO you may think of each record as containing a timestamp field that
> is set when ingesting; `assignTimestampsAndWatermarks` will change
> the value of this field, so the following operators relying on the
> timestamp will see the updated value.
>
> Best,
> Yun
>
>
>
> --
> From:M Singh 
> Send Time:2019 Jul. 9 (Tue.) 09:42
> To:User 
> Subject:Apache Flink - Relation between stream time characteristic and
> timer triggers
>
> Hi:
>
> I have a few questions about the stream time characteristics:
>
> 1. If the time characteristic is set to TimeCharacteristic.EventTime, but
> the timers in a processor or trigger is set using
> registerProcessingTimeTimer (or vice versa), then will that timer fire ?
>
> 2.  Once the time characteristic is set on the stream environment, and changed
> later in the application, which one is applied, the first one or the last
> one ?
>
> 3.  If the stream time characteristic is set to IngestionTime, then is
> there any adverse effect of assigning the timestamp using
> assignTimestampsAndWatermarks to a stream later in the application?
>
> Thanks
>
>
>
>


Re: new user does not run job use flink cli

2019-07-11 Thread Biao Liu
Hi,
Do you mean job submission is OK with local user name "flink", but not for
other users?
Have you ever checked the authorization of "hdfs://user/flink/recovery"? I
guess other users do not have access rights.
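
A quick way to verify that guess from a shell (the path comes from the config
quoted below; the chmod is just one possible fix - follow your own security
policy):

hdfs dfs -ls -d /user/flink/recovery          # check owner and permissions
hdfs dfs -chmod -R 775 /user/flink/recovery   # e.g. open it up to the group

Note also that "hdfs://user/flink/recovery" in the config has no namenode
authority, unlike the checkpoint paths ("hdfs://10.1.1.5:8020/..."), so
"user" may be parsed as a hostname - worth double-checking as well.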



&&&&&❤ <799326...@qq.com> wrote on Thu, July 11, 2019 at 11:55 AM:

> flink-conf.yaml
> jobmanager.heap.size: 1024m
> taskmanager.heap.size: 6144m
> taskmanager.numberOfTaskSlots: 3
> parallelism.default: 1
> high-availability: zookeeper
> high-availability.zookeeper.quorum: 10.1.1.15:2181,10.1.1.16:2181,
> 10.1.1.17:2181
> high-availability.zookeeper.path.root: /flink
> high-availability.cluster-id: /flink_one
> high-availability.storageDir: hdfs://user/flink/recovery
> state.checkpoints.dir: hdfs://10.1.1.5:8020/user/flink/flink-checkpoints
> state.savepoints.dir: hdfs://10.1.1.5:8020/user/flink/flink-checkpoints
>
> masters
> 10.1.1.12:8081
> 10.1.1.13:8081
> 10.1.1.14:8081
>
> slaves
> 10.1.1.12
> 10.1.1.13
> 10.1.1.14
>
> The Flink cluster is started as user "flink". The "flink" user can run any
> job OK, but another user, such as a newly added user "test", gets an error
> when running FLINK_HOME/examples/batch/WordCount.jar.
>
>


Re: Cannot catch exceptions thrown by kafka consumer with JSONKeyValueDeserializationSchema

2019-07-11 Thread Fabian Hueske
Hi,

I'd suggest to implement your own custom deserialization schema for example
by extending JSONKeyValueDeserializationSchema.
Then you can implement whatever logic you need to handle incorrectly
formatted messages.

Best, Fabian
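
A minimal sketch of that suggestion, assuming the Flink 1.5 API used in the
question (the class and package names below should be verified against your
connector version):

import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;
import org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema;

public class LenientJsonDeserializationSchema extends JSONKeyValueDeserializationSchema {

    public LenientJsonDeserializationSchema(boolean includeMetadata) {
        super(includeMetadata);
    }

    @Override
    public ObjectNode deserialize(byte[] messageKey, byte[] message,
                                  String topic, int partition, long offset) {
        try {
            return super.deserialize(messageKey, message, topic, partition, offset);
        } catch (Exception e) {
            // returning null lets the Flink Kafka consumer skip the
            // corrupted record instead of failing the job
            return null;
        }
    }
}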

On Wed., July 10, 2019 at 04:29, Zhechao Ma <
mazhechaomaill...@gmail.com> wrote:

> I'm trying to catch exceptions thrown by the Kafka source, and I've got the
> answer that exceptions in a source or sink cannot be caught.
>
> Thanks
>
> Haibo Sun  于2019年7月8日周一 下午3:54写道:
>
>> Hi Zhechao,
>>
>> Usually, if you can, share your full exception stack and where you are
>> trying to capture exceptions in your code (preferably posting your
>> relevant code directly). That will help us understand and locate the
>> issue you encounter.
>>
>> Best,
>> Haibo
>>
>> At 2019-07-08 14:11:22, "Zhechao Ma"  wrote:
>>
>> Hello,
>>
>> I'm using FlinkKafkaConsumer to read messages from a Kafka topic with
>> JSONKeyValueDeserializationSchema. When the message is json formatted,
>> everything works fine, but it throws a NullPointerException when processing
>> a message that is not json formatted. I try to catch the exception but
>> cannot do that.
>>
>> Can anyone give out some tips?
>>
>> flink: 1.5
>> flink-kafka: 1.5
>> kafka-clients: 0.10.1.2_2.11
>> flink-json:
>>
>> --
>> Thanks
>> Zhechao Ma
>>
>>
>
> --
> Thanks
> Zhechao Ma
>


Re: How are kafka consumer offsets handled if sink fails?

2019-07-11 Thread Fabian Hueske
Hi John,

let's say Flink performed a checkpoint after the 2nd record (by injecting a
checkpoint marker into the data flow) and the sink fails on the 5th record.
When Flink restarts the application, it resets the offset to just after the 2nd
record (it will read the 3rd record first). Hence, the 3rd and 4th record
will be emitted again.

Best, Fabian
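
A minimal sketch tying these answers together (the interval and placeholder
values are illustrative; the consumer setup mirrors John's snippet quoted
further below):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000L); // take a checkpoint every 60 s

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
props.setProperty("group.id", "my-group");                // placeholder

FlinkKafkaConsumer<String> consumer =
        new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);
// on recovery, Flink restores the offsets stored in the last completed
// checkpoint; the group offsets committed to Kafka are only the fallback
// used for a fresh start without any checkpoint
consumer.setStartFromGroupOffsets();

env.addSource(consumer); // transformations and the JDBC sink follow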


Am Di., 9. Juli 2019 um 21:11 Uhr schrieb John Smith :

> Ok so when the sink fails on the 5th record then there's no chance that
> the checkpoint can be at 6th event right?
>
> On Tue, 9 Jul 2019 at 13:51, Konstantin Knauf 
> wrote:
>
>> Hi John,
>>
>> this depends on your checkpoint interval. When enabled checkpoints are
>> triggered periodically [1].
>>
>> Cheers,
>>
>> Konstantin
>>
>> [1]
>> https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/checkpointing.html
>>
>>
>>
>> On Tue, Jul 9, 2019 at 7:30 PM John Smith  wrote:
>>
>>> Ok so just to be clear. Let's say we started at day 0...
>>>
>>> 1- Producer inserted 10 records into Kafka.
>>> 2- Kafka Flink Consumer consumed 5 records.
>>> 3- Some transformations applied to those records.
>>> 4- 4 records written to the sink, 1 failed.
>>> 5- Flink Job restarts because of above failure.
>>>
>>> When does the checkpoint happen above?
>>> And does it mean in the above case that it will start back at 0 or will
>>> it start at the 4th record and continue, or wherever the checkpoint
>>> happened. Example: 3rd record?
>>> My stored proc will be idempotent and I understand what to do if
>>> messages get replayed.
>>> Just want to try to understand when and where the checkpointing will
>>> happen.
>>>
>>> On Mon, 8 Jul 2019 at 22:23, Rong Rong  wrote:
>>>
 Hi John,

 I think what Konstantin is trying to say is: Flink's Kafka consumer
 does not start consuming from the Kafka commit offset when starting the
 consumer, it would actually start with the offset that's last checkpointed
 to external DFS. (e.g. the starting point of the consumer has no relevance
 with Kafka committed offset whatsoever - if checkpoint is enabled.)

 This is to quote:
 "*the Flink Kafka Consumer does only commit offsets back to Kafka on a
 best-effort basis after every checkpoint. Internally Flink "commits" the
 [checkpoints]->[current Kafka offset] as part of its periodic checkpoints.*
 "

 However if you do not enable checkpointing, I think your consumer will
 by default restart from the default Kafka offset (which I think is your
 committed group offset).

 --
 Rong


 On Mon, Jul 8, 2019 at 6:39 AM John Smith 
 wrote:

> So when we say a sink is at least once. It's because internally it's
> not checking any kind of state and it sends what it has regardless,
> correct? Cause I will build a sink that calls stored procedures.
>
> On Sun., Jul. 7, 2019, 4:03 p.m. Konstantin Knauf, <
> konstan...@ververica.com> wrote:
>
>> Hi John,
>>
>> in case of a failure (e.g. in the SQL Sink) the Flink Job will be
>> restarted from the last checkpoint. This means the offset of all Kafka
>> partitions will be reset to that point in the stream along with state of
>> all operators. To enable checkpointing you need to call
>> StreamExecutionEnvironment#enableCheckpointing(). If you are using the
>> JDBCSinkFunction (which is an at-least-once sink), the output will be
>> duplicated in the case of failures.
>>
>> To answer your questions:
>>
>> * For this the FlinkKafkaConsumer handles the offsets manually (no
>> auto-commit).
>> * No, the Flink Kafka Consumer does only commit offsets back to Kafka
>> on a best-effort basis after every checkpoint. Internally Flink "commits"
>> the checkpoints as part of its periodic checkpoints.
>> * Yes, along with all other events between the last checkpoint and
>> the failure.
>> * It will continue from the last checkpoint.
>>
>> Hope this helps.
>>
>> Cheers,
>>
>> Konstantin
>>
>> On Fri, Jul 5, 2019 at 8:37 PM John Smith 
>> wrote:
>>
>>> Hi using Apache Flink 1.8.0
>>>
>>> I'm consuming events from Kafka using nothing fancy...
>>>
>>> Properties props = new Properties();
>>> props.setProperty("bootstrap.servers", kafkaAddress);
>>> props.setProperty("group.id", kafkaGroup);
>>>
>>> FlinkKafkaConsumer<String> consumer =
>>>         new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), props);
>>>
>>>
>>> Do some JSON transforms and then push to my SQL database using JDBC
>>> and stored procedure. Let's assume the SQL sink fails.
>>>
>>> We know that Kafka can either periodically commit offsets or it can
>>> be done manually based on consumers logic.
>>>
>>> - How are the source Kafka consumer offsets handled?
>>> - Does the Flink Kafka consumer commit the offset per
>>> event/record?
>>> - Will that 

Re: Table API and ProcessWindowFunction

2019-07-11 Thread Flavio Pompermaier
Only one proposal here: many times it happens that when working with
streaming sources you need to define which field is the processing/rowtime
attribute. Right now you can define the processing or event time field by
implementing DefinedProctimeAttribute or DefinedRowtimeAttribute at the
source. But this is only helpful if you use the SQL API..with TableFunctions,
for example, you don't have a way to get the proctime/rowtime field easily.
Also in the Flink exercises [1] you use a Pojo where you have to implement a
method getEventTime() to retrieve the row time field.

So, why not declare 2 general interfaces like EventTimeObject and
ProcessingTimeObject so I can declare my objects implementing those
interfaces and get the fields I need easily?

Best,
Flavio

[1]
https://github.com/ververica/flink-training-exercises/blob/master/src/main/java/com/dataartisans/flinktraining/exercises/datastream_java/datatypes/TaxiRide.java
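
A sketch of the two proposed marker interfaces (hypothetical - they are part
of the proposal above, not of Flink):

public interface EventTimeObject {
    long getEventTime();
}

public interface ProcessingTimeObject {
    long getProcessingTime();
}

A POJO implementing EventTimeObject would then expose its rowtime field to
generic code (e.g. a TableFunction) without reflection or per-class
extractors.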

On Thu, Jul 11, 2019 at 10:01 AM Flavio Pompermaier 
wrote:

> Thanks Hequn, I'll give it a try!
>
> Best, Flavio
>
> On Thu, Jul 11, 2019 at 3:38 AM Hequn Cheng  wrote:
>
>> Hi,
>>
>> > Can you provide a pseudo-code example of how to implement this?
>> Processing time
>> If you use a TumblingProcessingTimeWindows.of(Time.seconds(1)), for each
>> record, you get the timestamp from System.currentTimeMillis(), say t, and
>> w_start = TimeWindow.getWindowStartWithOffset(t, 0, 1000), and w_end =
>> w_start + 1000.
>>
>> Event time
>> If you use a TumblingEventTimeWindows.of(Time.seconds(1)), for each
>> record, get the timestamp from the corresponding timestamp field, say t,
>> and get w_start and w_end same as above.
>>
>> More examples can be found in TimeWindowTest[1].
>>
>> Best, Hequn
>>
>> [1]
>> https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/TimeWindowTest.java
>>
>>
>> On Wed, Jul 10, 2019 at 8:46 PM Flavio Pompermaier 
>> wrote:
>>
>>> The problem with the LATERAL JOIN (via
>>> a LookupableTableSource+TableFunction because I need to call that function
>>> using the userId as a parameter) is that I cannot know the window
>>> start/end..to me it's not clear how to get that from
>>> TimeWindow.getWindowStartWithOffset(timestamp, offset, windowSize)...
>>> Can you provide a pseudo-code example of how to implement this?
>>>
>>> On Tue, Jul 9, 2019 at 4:15 AM Hequn Cheng  wrote:
>>>
 Hi Flavio,

 Thanks for your information.

 From your description, it seems that you only use the window to get the
 start and end time. No aggregations happen. If this is the case,
 you can get the start and end time by yourself (the
 `TimeWindow.getWindowStartWithOffset()` shows how to get the window start
 according to the timestamp). To be more specific, if you use processing
 time, you can get your timestamp with System.currentTimeMillis(), and then
 use it to get the window start and end
 with `TimeWindow.getWindowStartWithOffset()`. For event time, you can get
 the timestamp from the rowtime field.

 With the start and end time, you can then perform LATERAL JOIN to
 enrich the information. You can add a cache in your table function to avoid
 frequent contacting with the REST endpoint.

 Best, Hequn


 On Mon, Jul 8, 2019 at 10:46 PM Flavio Pompermaier <
 pomperma...@okkam.it> wrote:

> Hi Hequn, thanks for your answer.
> What I'm trying to do is to read a stream of events that basically
> contains a UserId field and, every X minutes (i.e. using a Time Window)
> and for each different UserId key, query 3 different REST services to
> enrich my POJOs*.
> For the moment what I do is to use a ProcessWindowFunction after the
> .keyBy().window() as shown in the previous mail example to contact those
> 3 services and enrich my object.
>
> However I don't like this solution because I'd like to use Flink to
> its full potential, so I'd like to enrich my object using LATERAL TABLEs
> or ASYNC IO..
> The main problem I'm facing right now is that I can't find a way to
> pass the tumbling window start/end to the LATERAL JOIN table functions
> (because this is a parameter of the REST query).
> Moreover I don't know whether this use case is something that Table
> API aims to solve..
>
> * Of course this could kill the REST endpoint if the number of users
> is very big ..because of this I'd like to keep the external state of
> source tables as an internal Flink state and then do a JOIN on the
> UserId. The problem here is that I need to "materialize" them using
> Debezium (or similar) via Kafka and dynamic tables..is there any example
> of keeping multiple tables synched with Flink state through Debezium
> (without the need of rewriting all the logic for managing
> UPDATE/INSERT/DELETE)?
>
> On Mon, Jul 8, 2019 

Re: Table API and ProcessWindowFunction

2019-07-11 Thread Flavio Pompermaier
Thanks Hequn, I'll give it a try!

Best, Flavio

On Thu, Jul 11, 2019 at 3:38 AM Hequn Cheng  wrote:

> Hi,
>
> > Can you provide a pseudo-code example of how to implement this?
> Processing time
> If you use a TumblingProcessingTimeWindows.of(Time.seconds(1)), for each
> record, you get the timestamp from System.currentTimeMillis(), say t, and
> w_start = TimeWindow.getWindowStartWithOffset(t, 0, 1000), and w_end =
> w_start + 1000.
>
> Event time
> If you use a TumblingEventTimeWindows.of(Time.seconds(1)), for each
> record, get the timestamp from the corresponding timestamp field, say t,
> and get w_start and w_end same as above.
>
> More examples can be found in TimeWindowTest[1].
>
> Best, Hequn
>
> [1]
> https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/TimeWindowTest.java
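
Spelled out as a snippet (illustrative; getWindowStartWithOffset is a public
static helper on org.apache.flink.streaming.api.windowing.windows.TimeWindow):

long windowSize = 1000L;                // 1-second tumbling window
long t = System.currentTimeMillis();    // or the rowtime field for event time
long wStart = TimeWindow.getWindowStartWithOffset(t, 0L, windowSize);
long wEnd = wStart + windowSize;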
>
>
> On Wed, Jul 10, 2019 at 8:46 PM Flavio Pompermaier 
> wrote:
>
>> The problem with the LATERAL JOIN (via
>> a LookupableTableSource+TableFunction because I need to call that function
>> using the userId as a parameter) is that I cannot know the window
>> start/end..to me it's not clear how to get that from
>> TimeWindow.getWindowStartWithOffset(timestamp, offset, windowSize)...
>> Can you provide a pseudo-code example of how to implement this?
>>
>> On Tue, Jul 9, 2019 at 4:15 AM Hequn Cheng  wrote:
>>
>>> Hi Flavio,
>>>
>>> Thanks for your information.
>>>
>>> From your description, it seems that you only use the window to get the
>>> start and end time. No aggregations happen. If this is the case,
>>> you can get the start and end time by yourself (the
>>> `TimeWindow.getWindowStartWithOffset()` shows how to get the window start
>>> according to the timestamp). To be more specific, if you use processing
>>> time, you can get your timestamp with System.currentTimeMillis(), and then
>>> use it to get the window start and end
>>> with `TimeWindow.getWindowStartWithOffset()`. For event time, you can get
>>> the timestamp from the rowtime field.
>>>
>>> With the start and end time, you can then perform LATERAL JOIN to enrich
>>> the information. You can add a cache in your table function to avoid
>>> frequent contacting with the REST endpoint.
>>>
>>> Best, Hequn
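
An illustrative sketch of the cached table function idea (all names and the
REST call are hypothetical; only TableFunction, FunctionContext and Row are
Flink API):

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

public class EnrichUser extends TableFunction<Row> {

    private transient Map<String, Row> cache;

    @Override
    public void open(FunctionContext context) {
        // simple unbounded per-task cache; consider a size/TTL bound in practice
        cache = new HashMap<>();
    }

    public void eval(String userId, long wStart, long wEnd) {
        // cache per user *and* window, since the REST result depends on both
        String key = userId + "/" + wStart + "/" + wEnd;
        Row enriched = cache.computeIfAbsent(
                key, k -> callRestService(userId, wStart, wEnd));
        collect(enriched);
    }

    private Row callRestService(String userId, long wStart, long wEnd) {
        // hypothetical REST lookup goes here
        return Row.of(userId, wStart, wEnd);
    }
}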
>>>
>>>
>>> On Mon, Jul 8, 2019 at 10:46 PM Flavio Pompermaier 
>>> wrote:
>>>
 Hi Hequn, thanks for your answer.
 What I'm trying to do is to read a stream of events that basically
 contains a UserId field and, every X minutes (i.e. using a Time Window) and
 for each different UserId key, query 3 different REST services to enrich my
 POJOs*.
 For the moment what I do is to use a ProcessWindowFunction after the
 .keyBy().window() as shown in the  previous mail example to contact those 3
 services and enrich my object.

 However I don't like this solution because I'd like to use Flink to
 its full potential, so I'd like to enrich my object using LATERAL TABLEs or
 ASYNC IO..
 The main problem I'm facing right now is that I can't find a way to
 pass the tumbling window start/end to the LATERAL JOIN table functions
 (because this is a parameter of the REST query).
 Moreover I don't know whether this use case is something that Table API
 aims to solve..

 * Of course this could kill the REST endpoint if the number of users is
 very big ..because of this I'd like to keep the external state of source
 tables as an internal Flink state and then do a JOIN on the UserId. The
 problem here is that I need to "materialize" them using Debezium (or
 similar) via Kafka and dynamic tables..is there any example of keeping
 multiple tables synched with Flink state through Debezium (without the need
 of rewriting all the logic for managing UPDATE/INSERT/DELETE)?

 On Mon, Jul 8, 2019 at 3:55 PM Hequn Cheng 
 wrote:

> Hi Flavio,
>
> Nice to hear your ideas on Table API!
>
> Could you be more specific about your requirements? A detailed
> scenario would be quite helpful. For example, do you want to emit multi
> records through the collector or do you want to use the timer?
>
> BTW, Table API introduces flatAggregate recently(both non-window
> flatAggregate and window flatAggregate) and will be included in the near
> coming release-1.9. The flatAggregate can emit multi records for a single
> group. More details here[1][2].
> Hope this can solve your problem.
>
> Best, Hequn
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tableApi.html#row-based-operations
> [2]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/udfs.html#table-aggregation-functions
>
> On Mon, Jul 8, 2019 at 6:27 PM Flavio Pompermaier <
> pomperma...@okkam.it> wrote:
>
>> Hi to all,
>> from what I understood a ProcessWindowFunction can only be used in
>> the Streaming API.