Re: Question about barriers on the Flink website

2019-08-07 Thread Biao Liu
Hi 范瑞, Barrier alignment here does not involve the output/input queues; what is pending is only the small portion of data buffered for alignment. If you want to understand how checkpointing works, I suggest reading the two papers referenced in the documentation. [1] [2] If you want to understand Flink's concrete implementation, the documentation here covers internals, so you may need to read the related code. [3] [4] 1. https://arxiv.org/abs/1506.08603 2.

Re: Capping RocksDb memory usage

2019-08-07 Thread Cam Mach
Yes, that is correct. Cam Mach Software Engineer E-mail: cammac...@gmail.com Tel: 206 972 2768 On Wed, Aug 7, 2019 at 8:33 PM Biao Liu wrote: > Hi Cam, > > Do you mean you want to limit the memory usage of RocksDB state backend? > > Thanks, > Biao /'bɪ.aʊ/ > > > > On Thu, Aug 8, 2019 at 2:23

Re: Does RocksDBStateBackend need a separate RocksDB service?

2019-08-07 Thread Biao Liu
Hi wanglei, > Is there an embeded RocksDB service in the flink task? Yes, and "RocksDB is an embeddable persistent key-value store for fast storage". [1] 1. http://rocksdb.org/ Thanks, Biao /'bɪ.aʊ/ On Wed, Aug 7, 2019 at 7:27 PM miki haiat wrote: > There is no need to add an external

Re: Capping RocksDb memory usage

2019-08-07 Thread Biao Liu
Hi Cam, Do you mean you want to limit the memory usage of RocksDB state backend? Thanks, Biao /'bɪ.aʊ/ On Thu, Aug 8, 2019 at 2:23 AM miki haiat wrote: > I think using metrics exporter is the easiest way > > [1] >

Re:Re: FlatMap returning Row<> based on ArrayList elements()

2019-08-07 Thread Haibo Sun
Hi AU, > The problem with this approach is that I'm looking for a standard FlatMap > anonymous function that could return every time: 1. different number of > elements within the Array and 2. the data type can be random likewise. I mean > is not fixed the whole time then my TypeInformation

Re: Can Flink help us solve the following use case

2019-08-07 Thread Biao Liu
Hi Yoandy, Could you explain more of your requirements? Why do you want to split data into "time slices"? Do you want to do some aggregations or just give each record a tag or tags? Thanks, Biao /'bɪ.aʊ/ On Thu, Aug 8, 2019 at 4:52 AM Sameer Wadkar wrote: > You could do this using custom

Re: [EXTERNAL] Re: Delayed processing and Rate limiting

2019-08-07 Thread Biao Liu
Hi Shakir, I'm not sure I have fully understood your requirements. I'll try to answer your questions. From my understanding, there is no built-in feature of Flink to support "rate limit" directly. I guess you need to implement one yourself. Either MapFunction or AsyncFunction could satisfy

Re: Passing jvm options to flink

2019-08-07 Thread Zhu Zhu
Hi Vishwas, For question #1, you can use env.java.opts to customize Java opts for all Flink JVMs, including JM and TM. Or you can use env.java.opts.jobmanager or env.java.opts.taskmanager accordingly for JM or TM. For question #2, you can use -C to specify user classpaths when invoking
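As a quick sketch, the JVM-option settings could look like this in flink-conf.yaml (the option keys are real Flink configuration keys; the values and paths are placeholders):

```yaml
# flink-conf.yaml
env.java.opts: "-Djava.library.path=/opt/native/libs"    # applied to JM and TM
env.java.opts.taskmanager: "-XX:+PrintGCDetails"         # TaskManagers only
```

For question #2, a submission would then look roughly like `./bin/flink run -C file:///opt/libs/dep.jar ...`, keeping in mind that the -C URL has to be reachable from all nodes of the cluster.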

Re:Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Haibo Sun
Congratulations! Best, Haibo At 2019-08-08 02:08:21, "Yun Tang" wrote: >Congratulations Hequn. > >Best >Yun Tang > >From: Rong Rong >Sent: Thursday, August 8, 2019 0:41 >Cc: dev ; user >Subject: Re: [ANNOUNCE] Hequn becomes a Flink committer > >Congratulations

Re: Flink Table API schema doesn't support nested object in ObjectArray

2019-08-07 Thread Jacky Du
Thanks Fabian, I created a Jira ticket with a code sample: https://issues.apache.org/jira/projects/FLINK/issues/FLINK-13603?filter=allopenissues I think if the root cause I found is correct, fixing this issue could be pretty simple. Thanks, Jacky Du Fabian Hueske wrote on Fri, Aug 2, 2019 at 12:07 PM: >

Re: Can Flink help us solve the following use case

2019-08-07 Thread Sameer Wadkar
You could do this using custom triggers and evictors in Flink. That way you can control when the windows fire and what elements are fired with it. And lastly the custom evictors know when to remove elements from the window. Yes Flink can support it. Sent from my iPhone > On Aug 7, 2019, at

Can Flink help us solve the following use case

2019-08-07 Thread Yoandy Rodríguez
Hello everybody, We have the following situation: 1) A data stream which collects all system events (near 1/2 a mil per day). 2) A database storing some aggregation of the data. We want to split the data into different "time slices" and be able to "tag it" accordingly. Example: the events in

FlinkDynamoDBStreamsConsumer At-Least-Once

2019-08-07 Thread sri hari kali charan Tummala
Hi Flink Experts, How to achieve at-least-once semantics with FlinkDynamoDBStreamsConsumer + DynamoDB Streams? Do Flink checkpointing or savepoints do the job? My scenario: a Flink application uses FlinkDynamoDBStreamsConsumer, which reads the latest changes from DynamoDB streams, but if my software

Re: [EXTERNAL] Re: Delayed processing and Rate limiting

2019-08-07 Thread PoolakkalMukkath, Shakir
Hi Victor, Thanks for the reply, it helps. For the delayed processing, this is exactly what I was looking for. But for the rate limit, the approach you suggested can only control the number of parallel requests. What I am looking for is to limit the number of requests per second or minute, etc.
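Since Flink (as noted in this thread) has no built-in rate limiting, one common workaround is a token bucket inside a user function. Below is a minimal, hypothetical sketch in plain Java with no Flink dependencies; the class name and rates are illustrative, not a Flink API:

```java
import java.util.concurrent.TimeUnit;

// Hypothetical token-bucket limiter: caps throughput at `permitsPerSecond`.
// In a Flink job it could be created per subtask in open() and acquire()
// called once per record before the external request is issued.
class TokenBucket {
    private final double capacity;       // max burst size, in permits
    private final double refillPerNano;  // permits added per nanosecond
    private double tokens;               // currently available permits
    private long lastRefill;             // nanoTime of the last refill

    TokenBucket(double permitsPerSecond) {
        this.capacity = permitsPerSecond;
        this.refillPerNano = permitsPerSecond / 1_000_000_000.0;
        this.tokens = permitsPerSecond;  // start with a full bucket
        this.lastRefill = System.nanoTime();
    }

    /** Blocks until one permit is available (returns early if interrupted). */
    synchronized void acquire() {
        refill();
        while (tokens < 1.0) {
            try {
                TimeUnit.MILLISECONDS.sleep(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return; // give up on interrupt; sketch only
            }
            refill();
        }
        tokens -= 1.0;
    }

    private void refill() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
    }
}
```

Blocking inside map() back-pressures the whole subtask, so the effective job-wide rate would be roughly permitsPerSecond times the operator parallelism.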

Re: Capping RocksDb memory usage

2019-08-07 Thread miki haiat
I think using metrics exporter is the easiest way [1] https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/metrics.html#rocksdb On Wed, Aug 7, 2019, 20:28 Cam Mach wrote: > Hello everyone, > > What is the most easy and efficiently way to cap RocksDb's memory usage? > > Thanks, >

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Yun Tang
Congratulations Hequn. Best Yun Tang From: Rong Rong Sent: Thursday, August 8, 2019 0:41 Cc: dev ; user Subject: Re: [ANNOUNCE] Hequn becomes a Flink committer Congratulations Hequn, well deserved! -- Rong On Wed, Aug 7, 2019 at 8:30 AM

Capping RocksDb memory usage

2019-08-07 Thread Cam Mach
Hello everyone, What is the easiest and most efficient way to cap RocksDb's memory usage? Thanks, Cam

Re: how to get the code produced by Flink Code Generator

2019-08-07 Thread Rong Rong
+1. I think this would be a very nice way to provide more verbose feedback for debugging. -- Rong On Wed, Aug 7, 2019 at 9:28 AM Fabian Hueske wrote: > Hi Vincent, > > I don't think there is such a flag in Flink. > However, this sounds like a really good idea. > > Would you mind creating a

Passing jvm options to flink

2019-08-07 Thread Vishwas Siravara
Hi, I am running Flink on a standalone cluster without any resource manager like YARN or K8s. I am submitting my job using the command line "flink run ...". I have a couple of questions: 1. How can I pass JVM parameters to this job? I want to pass a parameter for a dylib like this

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Rong Rong
Congratulations Hequn, well deserved! -- Rong On Wed, Aug 7, 2019 at 8:30 AM wrote: > Congratulations, Hequn! > > > > *From:* Xintong Song > *Sent:* Wednesday, August 07, 2019 10:41 AM > *To:* d...@flink.apache.org > *Cc:* user > *Subject:* Re: [ANNOUNCE] Hequn becomes a Flink committer > >

Re: how to get the code produced by Flink Code Generator

2019-08-07 Thread Fabian Hueske
Hi Vincent, I don't think there is such a flag in Flink. However, this sounds like a really good idea. Would you mind creating a Jira ticket for this? Thank you, Fabian On Tue, Aug 6, 2019 at 5:53 PM, Vincent Cai < caidezhi...@foxmail.com> wrote: > Hi Users, > In Spark, we can invoke

RE: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread xingcanc
Congratulations, Hequn! From: Xintong Song Sent: Wednesday, August 07, 2019 10:41 AM To: d...@flink.apache.org Cc: user Subject: Re: [ANNOUNCE] Hequn becomes a Flink committer Congratulations~! Thank you~ Xintong Song On Wed, Aug 7, 2019 at 4:00 PM vino yang

Re: FlatMap returning Row<> based on ArrayList elements()

2019-08-07 Thread Andres Angel
Hello Victor, You are totally right, so now this turns into: is Flink capable of handling these cases, where it would be required to define the type info in the row so that the Table will infer the columns separated by comma or something similar? Thanks, AU On Wed, Aug 7, 2019 at 10:33 AM Victor Wong wrote:

Re: Delayed processing and Rate limiting

2019-08-07 Thread Victor Wong
Hi Shakir, > Delayed Processing Maybe you can make use of the function ‘org.apache.flink.streaming.api.TimerService#registerProcessingTimeTimer’, check this doc for more details: https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/process_function.html#example > Rate
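For readers who want to see the delay semantics outside of Flink, here is a minimal plain-Java sketch using `java.util.concurrent.DelayQueue`. It only illustrates the idea of holding an event until its release time; in an actual Flink job you would use the timer service Victor mentions, not this class:

```java
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hypothetical wrapper: an event that becomes visible only after its delay.
class DelayedEvent implements Delayed {
    final String payload;
    final long fireAtMillis; // wall-clock time at which the event may be processed

    DelayedEvent(String payload, long delayMillis) {
        this.payload = payload;
        this.fireAtMillis = System.currentTimeMillis() + delayMillis;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(fireAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
}
```

Calling `take()` on a `DelayQueue<DelayedEvent>` then blocks until the head's delay has elapsed, which is essentially the behavior a registered processing-time timer gives you inside a ProcessFunction.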

Greedy operator in match_recognize doesn't work with defined time interval as I expect it to work

2019-08-07 Thread Theo Diefenthal
Hi there, I built myself a small example in order to test MATCH_RECOGNIZE in flink. I have a pattern of format "A B* C", where B is a `1=1` event matching anything. Additionally, I have a time interval of '2' DAY. Now I create testdata matching event A, send some random events (B), followed
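For reference, a pattern of that shape could look roughly like the following Flink SQL; the table, column, and tag names are invented for illustration, and only the PATTERN/WITHIN clauses mirror the setup described above:

```sql
SELECT *
FROM events
MATCH_RECOGNIZE (
  ORDER BY ts                         -- ts must be a time attribute
  MEASURES A.ts AS start_ts, C.ts AS end_ts
  AFTER MATCH SKIP PAST LAST ROW
  PATTERN (A B* C) WITHIN INTERVAL '2' DAY
  DEFINE
    A AS A.tag = 'start',
    B AS 1 = 1,                       -- matches anything
    C AS C.tag = 'end'
)
```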

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Xintong Song
Congratulations~! Thank you~ Xintong Song On Wed, Aug 7, 2019 at 4:00 PM vino yang wrote: > Congratulations! > > highfei2...@126.com wrote on Wed, Aug 7, 2019 at 7:09 PM: > > > Congrats Hequn! > > > > Best, > > Jeff Yang > > > > > > Original Message > > Subject: Re: [ANNOUNCE] Hequn

Re: FlatMap returning Row<> based on ArrayList elements()

2019-08-07 Thread Victor Wong
Hi Andres, I'd like to share my thoughts: when you register a "Table", you need to specify its "schema", so how can you register the table when the number of elements/columns and the data types are both nondeterministic? Correct me if I misunderstood your meaning. Best, Victor From: Andres

Delayed processing and Rate limiting

2019-08-07 Thread PoolakkalMukkath, Shakir
Hi Flink Team, I am looking for some direction/recommendation for the tasks below. 1. Delayed Processing: We have a use case where we need to process events after a time delay from event time. Let's say the event happened at time t1 and reached Flink immediately, but I have to wait

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread vino yang
Congratulations! highfei2...@126.com wrote on Wed, Aug 7, 2019 at 7:09 PM: > Congrats Hequn! > > Best, > Jeff Yang > > > Original Message > Subject: Re: [ANNOUNCE] Hequn becomes a Flink committer > From: Piotr Nowojski > To: JingsongLee > CC: Biao Liu ,Zhu Zhu ,Zili Chen ,Jeff Zhang ,Paul

Re: FlatMap returning Row<> based on ArrayList elements()

2019-08-07 Thread Andres Angel
Hello everyone, let me be more precise about what I'm looking for, because your example is right and very accurate about how to turn an array into a Row() object. I have done it seamlessly: out.collect(Row.of(pelements.toArray())); Then I printed it and the outcome is as expected:

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Terry Wang
Congratulations Hequn, well deserved! Best, Terry Wang > On Aug 7, 2019, at 9:16 PM, Oytun Tez wrote: > > Congratulations Hequn! > > --- > Oytun Tez > > M O T A W O R D > The World's Fastest Human Translation Platform. > oy...@motaword.com — www.motaword.com >

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Oytun Tez
Congratulations Hequn! --- Oytun Tez *M O T A W O R D* The World's Fastest Human Translation Platform. oy...@motaword.com — www.motaword.com On Wed, Aug 7, 2019 at 9:03 AM Jark Wu wrote: > Congratulations Hequn! It's great to have you in the community! > > > > On Wed, 7 Aug 2019 at 21:00,

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Jark Wu
Congratulations Hequn! It's great to have you in the community! On Wed, 7 Aug 2019 at 21:00, Fabian Hueske wrote: > Congratulations Hequn! > > On Wed, Aug 7, 2019 at 2:50 PM Robert Metzger < > rmetz...@apache.org> wrote: > >> Congratulations! >> >> On Wed, Aug 7, 2019 at 1:09 PM

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Fabian Hueske
Congratulations Hequn! On Wed, Aug 7, 2019 at 2:50 PM Robert Metzger < rmetz...@apache.org> wrote: > Congratulations! > > On Wed, Aug 7, 2019 at 1:09 PM highfei2...@126.com > wrote: > > > Congrats Hequn! > > > > Best, > > Jeff Yang > > > > > > Original Message > >

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Robert Metzger
Congratulations! On Wed, Aug 7, 2019 at 1:09 PM highfei2...@126.com wrote: > Congrats Hequn! > > Best, > Jeff Yang > > > Original Message > Subject: Re: [ANNOUNCE] Hequn becomes a Flink committer > From: Piotr Nowojski > To: JingsongLee > CC: Biao Liu ,Zhu Zhu ,Zili Chen ,Jeff

Re: Configure Prometheus Exporter

2019-08-07 Thread Chaoran Yu
Got it. I’ll look into this option. Thanks! > On Aug 7, 2019, at 08:46, Chesnay Schepler wrote: > > The only thing you can do at the moment, to limit which metrics are exposed, > is to implement your own MetricsReporter. You could extend the prometheus > one, and introduce any rules you want

Re: Configure Prometheus Exporter

2019-08-07 Thread Chesnay Schepler
The only thing you can do at the moment, to limit which metrics are exposed, is to implement your own MetricsReporter. You could extend the prometheus one, and introduce any rules you want into notifyOfAddedMetric(). On 07/08/2019 14:42, Chaoran Yu wrote: Thanks for the reply. Yes. That’s what

Re: Configure Prometheus Exporter

2019-08-07 Thread Chaoran Yu
Thanks for the reply. Yes. That’s what I’m trying to do. I think Flink by default exports all metrics. Is there anything else I can do to achieve this goal? > On Aug 7, 2019, at 03:58, Chesnay Schepler wrote: > > This is not possible. Are you trying to limit which metrics are exposed? > > On

XML equivalent to JsonRowDeserializationSchema

2019-08-07 Thread françois lacombe
Hi everyone, I've had a good experience with JsonRowDeserializationSchema to deserialise nested json records according to Flink's TypeInformation. As I look to process XML records now, does anyone use an equivalent for this format please? Nested records may be a pain to process sometimes and

Re: Using flink-1.8.1 in YARN per-job mode

2019-08-07 Thread Yuhuan Li
Many thanks tison, that perfectly solved my problem; I'll pay more attention to community questions from now on. For my specific Hadoop version, it was enough to build the Flink project and put the jar from flink-1.8.1/flink-shaded-hadoop/flink-shaded-hadoop2-uber/target under lib. Zili Chen wrote on Wed, Aug 7, 2019 at 7:33 PM: > Someone asked this on the mailing list before... but user-zh has no archive right now, so it's hard to cite. > > Check whether lib is missing a file like flink-shaded-hadoop-2-uber--7.0.jar. > > After 1.8.1

Re: Using flink-1.8.1 in YARN per-job mode

2019-08-07 Thread Zili Chen
Someone asked this on the mailing list before... but user-zh has no archive right now, so it's hard to cite. Check whether lib is missing a file like flink-shaded-hadoop-2-uber--7.0.jar. After 1.8.1, Flink releases the Hadoop (YARN) libs separately; you need to specify your own HADOOP_CLASSPATH or download the pre-bundled Hadoop from the Flink website. See the first paragraph of this page (https://flink.apache.org/downloads.html) for details. Best, tison. 李玉环
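The fix described here can be sketched as follows (paths are illustrative; the HADOOP_CLASSPATH approach is the one documented on the Flink downloads page):

```shell
# Option 1: point Flink at an existing Hadoop installation
export HADOOP_CLASSPATH=$(hadoop classpath)
./bin/flink run -m yarn-cluster ./examples/batch/WordCount.jar

# Option 2: drop the pre-bundled shaded Hadoop jar from
# https://flink.apache.org/downloads.html into Flink's lib/ directory
cp flink-shaded-hadoop2-uber-*.jar /path/to/flink-1.8.1/lib/
```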

Re: Does RocksDBStateBackend need a separate RocksDB service?

2019-08-07 Thread miki haiat
There is no need to add an external RocksDB instance. "The RocksDBStateBackend holds in-flight data in a RocksDB database that is (per default) stored in the TaskManager data directories." [1]

Using flink-1.8.1 in YARN per-job mode

2019-08-07 Thread 李玉环
Hi everyone, While using Flink, running the command given on the website https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/yarn_setup.html#run-a-single-flink-job-on-yarn fails with the following error: ➜ flink-1.8.1 ./bin/flink run -m yarn-cluster ./examples/batch/WordCount.jar The

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Piotr Nowojski
Congratulations :) > On 7 Aug 2019, at 12:09, JingsongLee wrote: > > Congrats Hequn! > > Best, > Jingsong Lee > > -- > From: Biao Liu > Send Time: Wed, Aug 7, 2019, 12:05 > To: Zhu Zhu > Cc: Zili Chen ; Jeff Zhang ; Paul Lam > ;

Does RocksDBStateBackend need a separate RocksDB service?

2019-08-07 Thread wangl...@geekplus.com.cn
In my code, I just set the state backend with an HDFS directory. env.setStateBackend(new RocksDBStateBackend("hdfs://user/test/job")); Is there an embedded RocksDB service in the Flink task? wangl...@geekplus.com.cn

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread JingsongLee
Congrats Hequn! Best, Jingsong Lee -- From: Biao Liu Send Time: Wed, Aug 7, 2019, 12:05 To: Zhu Zhu Cc: Zili Chen ; Jeff Zhang ; Paul Lam ; jincheng sun ; dev ; user Subject: Re: [ANNOUNCE] Hequn becomes a Flink committer Congrats

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Biao Liu
Congrats Hequn! Thanks, Biao /'bɪ.aʊ/ On Wed, Aug 7, 2019 at 6:00 PM Zhu Zhu wrote: > Congratulations to Hequn! > > Thanks, > Zhu Zhu > > Zili Chen wrote on Wed, Aug 7, 2019 at 5:16 PM: > >> Congrats Hequn! >> >> Best, >> tison. >> >> >> Jeff Zhang wrote on Wed, Aug 7, 2019 at 5:14 PM: >> >>> Congrats Hequn! >>> >>>

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Zhu Zhu
Congratulations to Hequn! Thanks, Zhu Zhu Zili Chen wrote on Wed, Aug 7, 2019 at 5:16 PM: > Congrats Hequn! > > Best, > tison. > > > Jeff Zhang wrote on Wed, Aug 7, 2019 at 5:14 PM: > >> Congrats Hequn! >> >> Paul Lam wrote on Wed, Aug 7, 2019 at 5:08 PM: >> >>> Congrats Hequn! Well deserved! >>> >>> Best, >>> Paul Lam >>> >>> On

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Dawid Wysakowicz
Congratulations Hequn! Glad to have you in the community! On 07/08/2019 10:28, jincheng sun wrote: > Hi everyone, > > I'm very happy to announce that Hequn accepted the offer of the Flink > PMC to become a committer of the Flink project. > > Hequn has been contributing to Flink for many years,

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Till Rohrmann
Congrats Hequn and welcome onboard as a committer :-) Cheers, Till On Wed, Aug 7, 2019 at 11:30 AM Becket Qin wrote: > Congrats, Hequn! Well deserved! > > On Wed, Aug 7, 2019 at 11:16 AM Zili Chen wrote: > >> Congrats Hequn! >> >> Best, >> tison. >> >> >> Jeff Zhang wrote on Wed, Aug 7, 2019 at 5:14 PM: >>

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Becket Qin
Congrats, Hequn! Well deserved! On Wed, Aug 7, 2019 at 11:16 AM Zili Chen wrote: > Congrats Hequn! > > Best, > tison. > > > Jeff Zhang wrote on Wed, Aug 7, 2019 at 5:14 PM: > >> Congrats Hequn! >> >> Paul Lam wrote on Wed, Aug 7, 2019 at 5:08 PM: >> >>> Congrats Hequn! Well deserved! >>> >>> Best, >>> Paul Lam >>> >>> On

Re: Question about the definition of event time and where timestamps are generated

2019-08-07 Thread 邵志鹏
Hi, You can take a look at how event timestamps are generated: https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/event_timestamps_watermarks.html In the examples there, the timestamps all come from a time field inside the element. There is also an AscendingTimestampExtractor. /** * This generator generates watermarks assuming that elements arrive out of order, * but only to a certain degree. The

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Zili Chen
Congrats Hequn! Best, tison. Jeff Zhang wrote on Wed, Aug 7, 2019 at 5:14 PM: > Congrats Hequn! > > Paul Lam wrote on Wed, Aug 7, 2019 at 5:08 PM: > >> Congrats Hequn! Well deserved! >> >> Best, >> Paul Lam >> >> On Aug 7, 2019, at 16:28, jincheng sun wrote: >> >> Hi everyone, >> >> I'm very happy to announce that Hequn accepted

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Dian Fu
Congratulations, Hequn! Well deserved! > On Aug 7, 2019, at 5:13 PM, Jeff Zhang wrote: > > Congrats Hequn! > > Paul Lam mailto:paullin3...@gmail.com>> wrote on Wed, Aug 7, 2019 > at 5:08 PM: > Congrats Hequn! Well deserved! > > Best, > Paul Lam > >> On Aug 7, 2019, at 16:28, jincheng sun > >

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Jeff Zhang
Congrats Hequn! Paul Lam wrote on Wed, Aug 7, 2019 at 5:08 PM: > Congrats Hequn! Well deserved! > > Best, > Paul Lam > > On Aug 7, 2019, at 16:28, jincheng sun wrote: > > Hi everyone, > > I'm very happy to announce that Hequn accepted the offer of the Flink PMC > to become a committer of the Flink project. > > Hequn

Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Paul Lam
Congrats Hequn! Well deserved! Best, Paul Lam > On Aug 7, 2019, at 16:28, jincheng sun wrote: > > Hi everyone, > > I'm very happy to announce that Hequn accepted the offer of the Flink PMC to > become a committer of the Flink project. > > Hequn has been contributing to Flink for many years, mainly

Re: From Kafka Stream to Flink

2019-08-07 Thread Fabian Hueske
Hi, LAST_VAL is not a built-in function, so you'd need to implement it as a user-defined aggregate function (UDAGG) and register it. The problem with joining an append only table with an updating table is the following. Consider two tables: users (uid, name, zip) and orders (oid, uid, product),

[ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread jincheng sun
Hi everyone, I'm very happy to announce that Hequn accepted the offer of the Flink PMC to become a committer of the Flink project. Hequn has been contributing to Flink for many years, mainly working on SQL/Table API features. He's also frequently helping out on the user mailing lists and helping

Re: Consuming data from dynamoDB streams to flink

2019-08-07 Thread Vinay Patil
Hi Andrey, Thank you for your reply. I understand that the checkpoints are gone when the job is cancelled or killed; maybe configuring externalized checkpoints will help here so that we can resume from them. My point was that if the job is terminated, and the stream position is set to TRIM_HORIZON ,

Re: Configure Prometheus Exporter

2019-08-07 Thread Chesnay Schepler
This is not possible. Are you trying to limit which metrics are exposed? On 07/08/2019 06:52, Chaoran Yu wrote: Hello guys,    Does anyone know if the Prometheus metrics exported via the JMX reporter or the Prometheus reporter can be configured using a YAML file similar to this one

Re:FlatMap returning Row<> based on ArrayList elements()

2019-08-07 Thread Haibo Sun
Hi Andres Angel, I guess people don't understand your problem (including me). I don't know if the following sample code is what you want, if not, can you describe the problem more clearly? StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

Question about the definition of event time and where timestamps are generated

2019-08-07 Thread xiaohei.info
Hi all, At what point is the event-time timestamp attached to the data? From the API it looks like it is assigned after the Flink source receives the data, not actually carried over from the real data source (e.g. a mobile device). With a Kafka source, according to the documentation, the timestamp Kafka carries is also just the time at which Kafka received the data. So here is the question: taking Kafka as an example, data is timestamped "in order" as it enters the queue, so even data that arrives "out of order" gets "increasing timestamps", and all subsequent event-time processing is based on these timestamps. Doesn't that lose the real-world meaning? Please point out where my understanding goes wrong! Best regards

Re: Re: submit jobGraph error on server side

2019-08-07 Thread Zili Chen
Judging from the error stack, your request should have already reached the jobmanager, so it is not a matter of an unreachable port. Rather, some action timed out while the jobmanager was handling the job submission. Does this problem reproduce reliably once you separate out the gateway? It could also be an occasional Akka timeout. Best, tison. 王智 wrote on Wed, Aug 7, 2019 at 2:33 PM: > Thanks for your reply and guidance~ > > > After a simple verification (the verification plan is at the end of this email), it is clearly a network problem. > > > My current guess is that flink run opens ports other than these four when submitting the job graph. May I ask further: does flink

Re: Flink 1.8.1: Seeing {"errors":["Not found."]} when trying to access the Jobmanagers web interface

2019-08-07 Thread Kaymak, Tobias
Yes, I did, but I am using a session cluster https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/deployment/docker.html#flink-session-cluster to run many jobs On Tue, Aug 6, 2019 at 2:05 PM Ufuk Celebi wrote: > Hey Tobias, > > out of curiosity: were you using the job/application

Reply: Re: submit jobGraph error on server side

2019-08-07 Thread 王智
Thanks for your reply and guidance~ After a simple verification (the verification plan is at the end of this email), it is clearly a network problem. My current guess is that flink run opens ports other than these four when submitting the job graph. May I ask further: does the flink jobmanager open new ports for communication (or are there other port settings I have overlooked)? ports: - containerPort: 6123 protocol: TCP - containerPort: 6124 protocol: TCP - containerPort: 6125 protocol: TCP -

Question about barriers on the Flink website

2019-08-07 Thread
Hi, On this page of the Flink website (https://ci.apache.org/projects/flink/flink-docs-release-1.8/internals/stream_checkpointing.htm), in the third step of the barrier-alignment description: • Once the last stream has received barrier n, the operator emits all pending outgoing records, and then emits snapshot n barriers itself.