Re: Re: Re: Re: Re: How to express online/offline monitoring of devices in SQL

2019-12-05 Thread Shuo Cheng
Using a UDAGG should solve your problem quite nicely ^.^ On 12/6/19, Djeng Lee wrote: > Exists query > > On 2019/12/5 4:06 PM, "Yuan,Youjun" wrote: > > How can windows with Count=0 be obtained? With no data there is no output. > However, with a ROWS OVER window you can subtract the current count from the sum over the two adjacent windows to indirectly tell whether the counts of the two windows are equal, and additionally use the time difference between the adjacent windows to help with the check. > >
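
For reference, a UDAGG in the Table API is a user-defined AggregateFunction. A minimal Java sketch follows; the class name, accumulator fields, and the simplified ONLINE/OFFLINE rule are illustrative assumptions, not the solution worked out in this thread:

    import org.apache.flink.table.functions.AggregateFunction;

    // Hypothetical UDAGG: tracks the last event time and event count per group
    // and maps them to an ONLINE/OFFLINE status string.
    public class DeviceStatusAgg extends AggregateFunction<String, DeviceStatusAgg.StatusAcc> {

        public static class StatusAcc {
            public long lastEventTime = 0L;
            public long count = 0L;
        }

        @Override
        public StatusAcc createAccumulator() {
            return new StatusAcc();
        }

        // Called once per input row; the signature is resolved by reflection.
        public void accumulate(StatusAcc acc, long eventTime) {
            acc.count += 1;
            acc.lastEventTime = Math.max(acc.lastEventTime, eventTime);
        }

        @Override
        public String getValue(StatusAcc acc) {
            // Simplified rule: any event seen in the aggregated range means ONLINE.
            return acc.count > 0 ? "ONLINE" : "OFFLINE";
        }
    }

Such a function would be registered with something like tableEnv.registerFunction("device_status", new DeviceStatusAgg()) and then used from the SQL query; how to combine it with the windowing discussed above is up to the actual query.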

Re: Re: Re: Re: Re: How to express online/offline monitoring of devices in SQL

2019-12-05 Thread Djeng Lee
Exists query. On 2019/12/5 4:06 PM, "Yuan,Youjun" wrote: How can windows with Count=0 be obtained? With no data there is no output. However, with a ROWS OVER window you can subtract the current count from the sum over the two adjacent windows to indirectly tell whether the counts of the two windows are equal, and additionally use the time difference between the adjacent windows to help with the check. In the end, with the help of the custom functions last_value_str/first_value_str, I barely managed to implement it (still not perfect; consecutive ONLINE outputs may appear). Below is my SQL, for reference only: INSERT INTO mysink SELECT

Error when consuming Kafka data via the Python API in yarn-session mode

2019-12-05 Thread 改改
[root@hdp02 bin]# ./flink run -yid application_1575352295616_0014 -py /opt/tumble_window.py
2019-12-06 14:15:48,262 INFO org.apache.flink.yarn.cli.FlinkYarnSessionCli - Found Yarn properties file under /tmp/.yarn-properties-root.
2019-12-06 14:15:48,262 INFO

Re: A question about Flink consuming Kafka

2019-12-05 Thread Charoes
Hi, according to the documentation, two Flink jobs can consume different partitions:
Map<KafkaTopicPartition, Long> specificStartOffsets = new HashMap<>();
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 0), 23L);
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 1), 31L);
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 2), 43L);
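
For completeness, a sketch of how such an offsets map is usually handed to the Kafka consumer; it assumes the universal flink-connector-kafka FlinkKafkaConsumer, and the broker address, group id, and schema below are placeholders:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    // Reuses the specificStartOffsets map built above.
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
    props.setProperty("group.id", "my-consumer-group");       // placeholder group id

    FlinkKafkaConsumer<String> consumer =
            new FlinkKafkaConsumer<>("myTopic", new SimpleStringSchema(), props);
    consumer.setStartFromSpecificOffsets(specificStartOffsets);

With setStartFromSpecificOffsets, each listed partition starts from the given offset; partitions not listed in the map fall back to the consumer's default start position.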

A question about Flink consuming Kafka

2019-12-05 Thread cs
Hi all, [the rest of this message was garbled by an encoding problem; the surviving fragments mention Flink, a Kafka topic, and the consumer group id]

Re: Need help using AggregateFunction instead of FoldFunction

2019-12-05 Thread devinbost
They released Pulsar 2.4.2, and I was able to pull its dependencies and successfully submit the Flink job. It's able to receive messages from the Pulsar topic successfully. However, I still don't think I'm using the AggregateFunction correctly. I added logging statements everywhere in my code,
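
For reference, a minimal sketch of the DataStream-level AggregateFunction contract that windows expect; the types and the simple counting logic are placeholders rather than the code discussed here:

    import org.apache.flink.api.common.functions.AggregateFunction;

    // Counts elements per window: IN = String, ACC = Long, OUT = Long.
    public class CountAggregate implements AggregateFunction<String, Long, Long> {

        @Override
        public Long createAccumulator() {
            return 0L;
        }

        @Override
        public Long add(String value, Long accumulator) {
            return accumulator + 1;
        }

        @Override
        public Long getResult(Long accumulator) {
            return accumulator;
        }

        @Override
        public Long merge(Long a, Long b) {
            return a + b;
        }
    }

It is applied as stream.keyBy(...).window(...).aggregate(new CountAggregate()); add() is called incrementally per element and getResult() once when the window fires. Unlike the deprecated FoldFunction, the accumulator type can differ from the output type and accumulators can be merged.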

Re: User program failures cause JobManager to be shutdown

2019-12-05 Thread Dongwon Kim
FYI, we've launched a session cluster where multiple jobs are managed by a single job manager. If that shutdown happens, all the other jobs also fail because the job manager is shut down and all the task managers descend into chaos (failing to connect to the job manager). I just searched for a way to prevent

Re: User program failures cause JobManager to be shutdown

2019-12-05 Thread Dongwon Kim
Hi Robert and Roman, Thank you for taking a look at this. what is your main() method / client doing when it's receiving wrong program > parameters? Does it call System.exit(), or something like that? > I just found that our HTTP client is programmed to call System.exit(1). I should guide not to
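
As a sketch of the workaround being described, throwing instead of exiting so the client or JobManager can handle the failure; the argument check and message are placeholders:

    // Inside the job's main(): validate arguments without calling System.exit(),
    // which in a session cluster can take down the JVM that runs the JobManager,
    // as observed in this thread.
    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            // Throwing lets the submission fail cleanly instead of killing the process.
            throw new IllegalArgumentException("Usage: <jobName> [options]");
        }
        // ... build and execute the pipeline ...
    }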

Re: What S3 Permissions does StreamingFileSink need?

2019-12-05 Thread Li Peng
Hey Roman, my permissions are as listed above, and here's the error message I get: java.lang.Exception: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator at

Row arity of from does not match serializers.

2019-12-05 Thread srikanth flink
My Flink job reads from a Kafka stream and does some processing. Code snippet: > DataStream flatternedDnsStream = filteredNonNullStream.rebalance() > .map(node -> { > JsonNode source = node.path("source"); > JsonNode destination = node.path("destination"); > JsonNode dns = node.path("dns");
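
The "Row arity ... does not match serializers" error often appears when a lambda builds a Row and Flink cannot infer the row's field types. A sketch of one common fix, declaring the row type explicitly with returns(); the field layout and the jsonStream variable below are assumptions, not the poster's exact code:

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.types.Row;

    // Assuming jsonStream is a DataStream of Jackson ObjectNode records, map each
    // record to a 3-field Row and declare the exact row type so the serializer
    // arity matches what the map function actually produces.
    DataStream<Row> rows = jsonStream
        .map(node -> {
            Row row = new Row(3);
            row.setField(0, node.path("source").asText());
            row.setField(1, node.path("destination").asText());
            row.setField(2, node.path("dns").asText());
            return row;
        })
        .returns(Types.ROW(Types.STRING, Types.STRING, Types.STRING));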

Re: How does Flink handle backpressure in EMR

2019-12-05 Thread Nguyen, Michael
Great! I’ll try it out – thank you Piotrek. Michael From: Piotr Nowojski on behalf of Piotr Nowojski Date: Thursday, December 5, 2019 at 11:03 AM To: Michael Nguyen Cc: Khachatryan Roman , "user@flink.apache.org" Subject: Re: How does Flink handle backpressure in EMR [External] Hi, You

Re: How does Flink handle backpressure in EMR

2019-12-05 Thread Piotr Nowojski
Hi, You can find information how to use metrics here [1]. I don’t think there is a straightforward way to access them from within a job. You could access them via JMX when using JMXReporter or you can implement some custom reporter, that could expose the metrics via localhost connections or
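
A bare-bones custom reporter, as a sketch of the MetricReporter interface mentioned above; how the metrics are then exposed (JMX, a local socket, a shared map) is left open, and the println calls are only placeholders:

    import org.apache.flink.metrics.Metric;
    import org.apache.flink.metrics.MetricConfig;
    import org.apache.flink.metrics.MetricGroup;
    import org.apache.flink.metrics.reporter.MetricReporter;

    // Minimal reporter: it is notified whenever a metric (e.g. outPoolUsage) is
    // registered or removed in the task manager it runs on.
    public class SimpleReporter implements MetricReporter {

        @Override
        public void open(MetricConfig config) {
            // reporter-specific options from flink-conf.yaml arrive here
        }

        @Override
        public void close() {
        }

        @Override
        public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {
            System.out.println("added metric: " + group.getMetricIdentifier(metricName));
        }

        @Override
        public void notifyOfRemovedMetric(Metric metric, String metricName, MetricGroup group) {
            System.out.println("removed metric: " + group.getMetricIdentifier(metricName));
        }
    }

The reporter is then enabled through the metrics.reporter.* options in flink-conf.yaml, as described in the metrics documentation [1].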

Re: User program failures cause JobManager to be shutdown

2019-12-05 Thread Robert Metzger
Hi Dongwon, what is your main() method / client doing when it's receiving wrong program parameters? Does it call System.exit(), or something like that? By the way, the http address from the error message is publicly available. Not sure if this is internal data or not. On Thu, Dec 5, 2019 at

Re: How does Flink handle backpressure in EMR

2019-12-05 Thread Nguyen, Michael
Hi Piotrek, For the second article, I understand I can monitor the backpressure status via the Flink Web UI. Can I refer to the same metrics in my Flink jobs itself? For example, can I put in an if statement to check for when outPoolUsage reaches 100%? Thank you, Michael From: Piotr Nowojski

Re: How does Flink handle backpressure in EMR

2019-12-05 Thread Piotr Nowojski
Hi, If you are using event time and watermarks, you can monitor the delays using `currentInputWatermark` metric [1]. If not (or alternatively), this blog post [2] describes how to check back pressure status [2] for Flink up to 1.9. In Flink 1.10 there will be an additional new metric for that

Re: How does Flink handle backpressure in EMR

2019-12-05 Thread Nguyen, Michael
Hi Roman, So right now we have a couple Flink jobs that consumes data from one Kinesis data stream. These jobs vary from a simple dump into a PostgreSQL table to calculating anomalies in a 30 minute window. One large scenario we were worried about was what if one of our jobs was taking a long

Re: How does Flink handle backpressure in EMR

2019-12-05 Thread Khachatryan Roman
@Michael, Could you please describe your topology, including which operators are slow or back-pressured, and any probable skew in the sources? Regards, Roman On Thu, Dec 5, 2019 at 6:20 PM Nguyen, Michael < michael.nguye...@t-mobile.com> wrote: > Thank you for the response Roman and Piotrek! > > @Roman -

Re: User program failures cause JobManager to be shutdown

2019-12-05 Thread Khachatryan Roman
Hi Dongwon, I wasn't able to reproduce your problem with Flink JobManager 1.9.1 with various kinds of errors in the job. I suggest you try it on a fresh Flink installation without any other jobs submitted. Regards, Roman On Thu, Dec 5, 2019 at 3:48 PM Dongwon Kim wrote: > Hi Roman, > > We're

Re: How does Flink handle backpressure in EMR

2019-12-05 Thread Nguyen, Michael
Thank you for the response Roman and Piotrek! @Roman - can you clarify on what you mean when you mentioned Flink propagating it back to the sources? Also, if one of my Flink operators is processing records too slowly and is getting further away from the latest record of my source data stream,

Re: How does Flink handle backpressure in EMR

2019-12-05 Thread Piotr Nowojski
Hi Michael, As Roman pointed out, Flink currently doesn't support auto-scaling. It's on our roadmap, but it requires quite a bit of preliminary work before it can happen. Piotrek > On 5 Dec 2019, at 15:32, r_khachatryan wrote: > > Hi Michael > > Flink *does* detect backpressure but currently,

Joining multiple temporal tables

2019-12-05 Thread Chris Miller
I want to decorate/enrich a stream by joining it with "lookups" to the most recent data available in other streams. For example, suppose I have a stream of product orders. For each order, I want to add price and FX rate information based on the order's product ID and order currency. Is it
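
One way to model this kind of "latest value" lookup in the Table API is a temporal table function. A sketch in Java, where the table and field names, the proctime attributes, and the already-registered Orders and RatesHistory tables are all assumptions:

    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.table.functions.TemporalTableFunction;

    // Assuming tEnv is a StreamTableEnvironment and RatesHistory is a registered
    // append-only table of FX rates with a proctime attribute r_proctime.
    Table ratesHistory = tEnv.sqlQuery("SELECT r_currency, r_rate, r_proctime FROM RatesHistory");

    // Versioned view over the rates: for a given time, returns the latest rate per currency.
    TemporalTableFunction rates =
            ratesHistory.createTemporalTableFunction("r_proctime", "r_currency");
    tEnv.registerFunction("Rates", rates);

    // Enrich each order with the rate that was valid at the order's processing time.
    Table enriched = tEnv.sqlQuery(
        "SELECT o.orderId, o.amount * r.r_rate AS amountInBase " +
        "FROM Orders AS o, LATERAL TABLE (Rates(o.o_proctime)) AS r " +
        "WHERE o.o_currency = r.r_currency");

Joining against several such functions (e.g. one for prices and one for FX rates) would follow the same pattern, one LATERAL TABLE per lookup.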

Re: Flink 'Job Cluster' mode Ui Access

2019-12-05 Thread Chesnay Schepler
Ok, it's good to know that the WebUI files are there. Please enable DEBUG logging and try again, searching for messages from the StaticFileServerHandler. This handler logs every file that is requested (which effectively happens when the WebUI is being served); let's see what is actually

Re: User program failures cause JobManager to be shutdown

2019-12-05 Thread Dongwon Kim
Hi Roman, We're using the latest version 1.9.1 and those two lines are all I've seen after executing the job on the web ui. Best, Dongwon On Thu, Dec 5, 2019 at 11:36 PM r_khachatryan wrote: > Hi Dongwon, > > Could you please provide Flink version you are running and the job manager > logs?

Re: User program failures cause JobManager to be shutdown

2019-12-05 Thread r_khachatryan
Hi Dongwon, Could you please provide the Flink version you are running and the job manager logs? Regards, Roman eastcirclek wrote > Hi, > > I tried to run a program by uploading a jar on Flink UI. When I > intentionally enter a wrong parameter to my program, JobManager dies. > Below > is all log

Re: How does Flink handle backpressure in EMR

2019-12-05 Thread r_khachatryan
Hi Michael Flink *does* detect backpressure but currently, it only propagates it back to sources. And so it doesn't support auto-scaling. Regards, Roman Nguyen, Michael wrote > How does Flink handle backpressure (caused by an increase in traffic) in a > Flink job when it’s being hosted in an

Re: What S3 Permissions does StreamingFileSink need?

2019-12-05 Thread r_khachatryan
Hi Li, Could you please list the permissions you see and the error message you receive from AWS? Li Peng-2 wrote > Hey folks, I'm trying to use StreamingFileSink with s3, using IAM roles > for > auth. Does anyone know what permissions the role should have for the > specified s3 bucket to work

Re: Using MapState clear, put methods in snapshotState within KeyedCoProcessFunction, valid or not?

2019-12-05 Thread Salva Alcántara
Thanks a lot Yun! -- Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

Re: Emit intermediate accumulator state of a session window

2019-12-05 Thread Andrey Zagrebin
Hi Chandu, I am not sure whether using the windowing API is helpful in this case at all. Instead of windowing alone, you could try consuming the data with a custom stateful function. Have a look at the AggregatingState [1]. Then you could do whatever you want with the current
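
A sketch of that idea: an AggregatingState inside a keyed function, so the running accumulator can be read and emitted at any point instead of only when a window closes. The element type, the counting aggregate, and emitting on every element are placeholders:

    import org.apache.flink.api.common.functions.AggregateFunction;
    import org.apache.flink.api.common.state.AggregatingState;
    import org.apache.flink.api.common.state.AggregatingStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // Keeps a per-key running count and emits the current value for every element,
    // i.e. intermediate results instead of waiting for a session window to close.
    public class RunningCount extends KeyedProcessFunction<String, String, Long> {

        private transient AggregatingState<String, Long> countState;

        @Override
        public void open(Configuration parameters) {
            AggregatingStateDescriptor<String, Long, Long> descriptor =
                new AggregatingStateDescriptor<>(
                    "running-count",
                    new AggregateFunction<String, Long, Long>() {
                        @Override public Long createAccumulator() { return 0L; }
                        @Override public Long add(String value, Long acc) { return acc + 1; }
                        @Override public Long getResult(Long acc) { return acc; }
                        @Override public Long merge(Long a, Long b) { return a + b; }
                    },
                    Types.LONG);
            countState = getRuntimeContext().getAggregatingState(descriptor);
        }

        @Override
        public void processElement(String value, Context ctx, Collector<Long> out) throws Exception {
            countState.add(value);
            out.collect(countState.get()); // emit the intermediate accumulator state
        }
    }

Session-gap semantics (clearing the state after a period of inactivity) would have to be added manually, e.g. with timers registered in processElement.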

RE: Access to CheckpointStatsCounts

2019-12-05 Thread min.tan
Thanks. From: Robert Metzger [mailto:rmetz...@apache.org] Sent: Thursday, 5 December 2019 10:55 To: Tan, Min Cc: vino yang; user Subject: [External] Re: Access to CheckpointStatsCounts Hey Min, If checking for empty map states works for you, this could be an option, yes. Alternatively, check

Re: Access to CheckpointStatsCounts

2019-12-05 Thread Robert Metzger
Hey Min, If checking for empty map states works for you, this could be an option, yes. Alternatively, check this out: CheckpointedFunction.initializeState() will pass you a FunctionInitializationContext, which has a method called ".isRestored()" Best, Robert On Thu, Dec 5, 2019 at 10:18 AM
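
A sketch of that second option in Java; the state name and types are placeholders:

    import org.apache.flink.api.common.state.ListState;
    import org.apache.flink.api.common.state.ListStateDescriptor;
    import org.apache.flink.runtime.state.FunctionInitializationContext;
    import org.apache.flink.runtime.state.FunctionSnapshotContext;
    import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

    public class RestoreAwareFunction implements CheckpointedFunction {

        private transient ListState<String> configState;

        @Override
        public void initializeState(FunctionInitializationContext context) throws Exception {
            configState = context.getOperatorStateStore()
                    .getListState(new ListStateDescriptor<>("config", String.class));

            if (context.isRestored()) {
                // Started from a checkpoint or savepoint: restored state is available here.
            } else {
                // Fresh start: build the initial configuration from scratch.
            }
        }

        @Override
        public void snapshotState(FunctionSnapshotContext context) throws Exception {
            // write the current values into configState if needed
        }
    }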

Re: Need help using AggregateFunction instead of FoldFunction

2019-12-05 Thread Chris Miller
I hit the same problem, as far as I can tell it should be fixed in Pulsar 2.4.2. The release of this has already passed voting so I hope it should be available in a day or two. https://github.com/apache/pulsar/pull/5068 -- Original Message -- From: "devinbost" To:

RE: Access to CheckpointStatsCounts

2019-12-05 Thread min.tan
Dear Robert, Thank you very much for your reply. What we are trying to achieve: 1) In a normal situation, checkpoints or savepoints are preserved and the application restarts from one of these paths (with configurations kept in map states). 2) Sometimes, e.g. during a

Re: Access to CheckpointStatsCounts

2019-12-05 Thread Robert Metzger
Hey Min, when a Flink job recovers after a failure, the main method is not re-executed. The main method is only executed for the submission of the job. The JobManager executing the job is receiving the final job graph. On failure, the JobManager will restore the job based on the job graph. If you

Re: Idiomatic way to split pipeline

2019-12-05 Thread Robert Metzger
Hi Avi, can you post the exception with the stack trace here as well? On Sun, Dec 1, 2019 at 10:03 AM Avi Levi wrote: > Thanks Arvid, > The problem is that I will get an exception on non unique uid on the > *stream* . > > On Thu, Nov 28, 2019 at 2:45 PM Arvid Heise wrote: > >> *This Message

Re: [DISCUSS] Drop Kafka 0.8/0.9

2019-12-05 Thread Zhenghua Gao
+1 for dropping. *Best Regards,* *Zhenghua Gao* On Thu, Dec 5, 2019 at 11:08 AM Dian Fu wrote: > +1 for dropping them. > > Just FYI: there was a similar discussion few months ago [1]. > > [1] >