Re: Problem starting taskexecutor daemons in 3 node cluster

2019-09-13 Thread Till Rohrmann
Hi Komal, could you check that every node can reach the other nodes? It looks a little bit as if the TaskManager cannot talk to the JobManager running on 150.82.218.218:6123. Cheers, Till On Thu, Sep 12, 2019 at 9:30 AM Komal Mariam wrote: > I managed to fix it however ran into another
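
A quick way to act on the reachability check Till suggests is to probe the JobManager's RPC port from each of the other nodes. The snippet below is only a minimal sketch, assuming the 150.82.218.218:6123 address from the thread and using a plain TCP connect as the test (it says nothing about Akka or Flink configuration):

    import java.net.{InetSocketAddress, Socket}

    // Minimal reachability probe: can this host open a TCP connection to the JobManager RPC port?
    object ProbeJobManager {
      def main(args: Array[String]): Unit = {
        val socket = new Socket()
        try {
          socket.connect(new InetSocketAddress("150.82.218.218", 6123), 5000) // 5 second timeout
          println("JobManager RPC port is reachable")
        } finally {
          socket.close()
        }
      }
    }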

Re: Job submission fails in Flink 1.7.1 High Availability mode

2019-09-13 Thread Till Rohrmann
Hi Abhinav, I think the problem is the following: Flink has been designed so that the cluster's rest endpoint does not need to run in the same process as the JobManager. However, currently the rest endpoint is started in the same process as the JobManagers. Because of the design one needs to

Re: SIGSEGV error

2019-09-13 Thread Till Rohrmann
Hi Marek, could you share the logs statements which happened before the SIGSEGV with us? They might be helpful to understand what happened before. Moreover, it would be helpful to get access to your custom serializer implementations. I'm also pulling in Gordon who worked on the

Re: Flink web ui authentication using nginx

2019-09-13 Thread Till Rohrmann
Hi Harshith, I'm not an expert on how to set up nginx with authentication for Flink, but I could shed some light on the redirection problem. I assume that Flink's redirection response might not be properly understood by nginx. The good news is that with Flink 1.8, we no longer rely on client side

Re: Uncertain result when using group by in stream sql

2019-09-13 Thread Fabian Hueske
Hi, A GROUP BY query on a streaming table requires that the result is continuously updated. Updates are propagated as a retraction stream (see tEnv.toRetractStream(table, Row.class).print(); in your code). A retraction stream encodes the type of the update as a boolean flag, the "true" and
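
To make the retraction-stream behaviour concrete, here is a minimal sketch, assuming the Flink 1.9 Scala Table API and the [(bj, 1), (bj, 3), (bj, 5)] records from the original question; each emitted element is a (Boolean, Row) pair, where true adds a new result and false retracts the previous one:

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.table.api.scala._
    import org.apache.flink.types.Row

    object GroupByRetractionDemo {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        val tEnv = StreamTableEnvironment.create(env)

        // the records from the question: key "bj" with values 1, 3, 5
        val input = env.fromElements(("bj", 1), ("bj", 3), ("bj", 5))
        tEnv.registerDataStream("T", input, 'city, 'v)

        val result = tEnv.sqlQuery("SELECT city, SUM(v) FROM T GROUP BY city")

        // every incoming record updates the aggregate: the old result is retracted (false)
        // and the new result is added (true), e.g. (true,bj,1), (false,bj,1), (true,bj,4), ...
        tEnv.toRetractStream[Row](result).print()

        env.execute("group-by-retraction-demo")
      }
    }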

Re: SIGSEGV error

2019-09-13 Thread Stephan Ewen
Given that the segfault happens in the JVM's ZIP stream code, I am curious whether this is a bug in Flink or in the JVM core libs that happens to be triggered now by newer versions of Flink. I found this on StackOverflow, which looks like it could be related:

Compound Keys Using Temporal Tables

2019-09-13 Thread Yuval Itzchakov
Hi, Given table X with an event time, A, B and C columns, is there a way to pass a compound key, i.e. A and B, as the primaryKey argument of Table.createTemporalTableFunction? My attempts so far yield a runtime exception where the String doesn't match a given regex.
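
For reference, the method in the 1.8/1.9 Table API is Table.createTemporalTableFunction(timeAttribute, primaryKey), which, as far as I know, accepts only a single primary-key column (which would explain the regex error when two names are passed). One workaround, sketched below under that assumption and not a confirmed fix, is to derive a single combined key column from A and B and use that as the primary key:

    import org.apache.flink.table.api.Table
    import org.apache.flink.table.api.scala._

    object CompoundKeyTemporalTable {
      // Hypothetical helper: x is assumed to be a Table with columns eventTime (event-time attribute), A, B, C.
      def registerRates(tEnv: StreamTableEnvironment, x: Table): Unit = {
        // combine A and B into one key column, since only a single primary-key expression is accepted
        val keyed = x.select('eventTime, concat('A, "_", 'B) as 'ab, 'C)
        val rates = keyed.createTemporalTableFunction('eventTime, 'ab)
        tEnv.registerFunction("Rates", rates)
      }
    }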

[ANNOUNCE] Apache Flink 1.8.2 released

2019-09-13 Thread Jark Wu
Hi, The Apache Flink community is very happy to announce the release of Apache Flink 1.8.2, which is the second bugfix release for the Apache Flink 1.8 series. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data

Flink kafka producer partitioning scheme

2019-09-13 Thread Vishwas Siravara
Hi guys, from the Flink doc: *By default, if a custom partitioner is not specified for the Flink Kafka Producer, the producer will use a FlinkFixedPartitioner that maps each Flink Kafka Producer parallel subtask to a single Kafka partition (i.e., all records received by a sink subtask will end up
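
A common way to opt out of the FlinkFixedPartitioner is to pass an empty Optional as the custom partitioner, so Kafka's own default partitioner decides the target partition instead of pinning each sink subtask to one partition. A minimal sketch, assuming the universal Kafka connector from around Flink 1.8/1.9 and placeholder broker/topic names:

    import java.util.{Optional, Properties}

    import org.apache.flink.api.common.serialization.SimpleStringSchema
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
    import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner

    object KafkaSinkPartitioning {
      def buildProducer(): FlinkKafkaProducer[String] = {
        val props = new Properties()
        props.setProperty("bootstrap.servers", "broker-1:9092") // placeholder broker address

        // An empty Optional disables the FlinkFixedPartitioner, so records are spread across
        // partitions by Kafka's default partitioner instead of one partition per sink subtask.
        new FlinkKafkaProducer[String](
          "my-topic", // placeholder topic name
          new SimpleStringSchema(),
          props,
          Optional.empty[FlinkKafkaPartitioner[String]]())
      }
    }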

Re: [ANNOUNCE] Apache Flink 1.8.2 released

2019-09-13 Thread jincheng sun
Thanks for being the release manager and for the great work, Jark :) Also thanks to the community for making this release possible! Best, Jincheng On Fri, Sep 13, 2019 at 10:07 PM Jark Wu wrote: > Hi, > > The Apache Flink community is very happy to announce the release of Apache > Flink 1.8.2, which is the second

Re: [ANNOUNCE] Apache Flink 1.8.2 released

2019-09-13 Thread Till Rohrmann
Thanks Jark for being our release manager and thanks to everyone who has contributed. Cheers, Till On Fri, Sep 13, 2019 at 4:12 PM jincheng sun wrote: > Thanks for being the release manager and for the great work, Jark :) > Also thanks to the community for making this release possible! > > Best, >

Re: externalizing config files for flink class loader

2019-09-13 Thread Vijay Bhaskar
Sorry, there is a typo; corrected it: val pmtool = ParameterTool.fromArgs(args) val defaultConfig = ConfigFactory.load() //Default config in reference.conf/application.conf/system properties/env of typesafe val overrideConfigFromArgs = ConfigFactory.parseMap(pmtool.toMap) val finalConfig =
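
Put together, the approach reads roughly as below. This is only a sketch assuming the Typesafe Config and Flink ParameterTool APIs; note that ConfigFactory.parseMap is the call that accepts the java.util.Map returned by ParameterTool.toMap, and withFallback makes the command-line values win over the defaults:

    import com.typesafe.config.{Config, ConfigFactory}
    import org.apache.flink.api.java.utils.ParameterTool

    object JobConfig {
      def load(args: Array[String]): Config = {
        val pmtool = ParameterTool.fromArgs(args)
        // defaults from reference.conf / application.conf / system properties
        val defaultConfig = ConfigFactory.load()
        // parseMap accepts the java.util.Map returned by ParameterTool.toMap
        val overrideConfigFromArgs = ConfigFactory.parseMap(pmtool.toMap)
        // values supplied on the command line take precedence over the defaults
        overrideConfigFromArgs.withFallback(defaultConfig).resolve()
      }
    }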

Re: externalizing config files for flink class loader

2019-09-13 Thread Vijay Bhaskar
Hi, you can do it this way: use Typesafe Config, which provides excellent configuration methodologies. You supply a default configuration, which is read by your application through Typesafe's reference.conf file. If you want to override any of the defaults, you can supply them via the command line

Uncertain result when using group by in stream sql

2019-09-13 Thread 刘建刚
I use Flink stream SQL to write a demo about "group by". The records are [(bj, 1), (bj, 3), (bj, 5)]. I group by the first element and sum the second element. Every time I run the program, the result is different. It seems that the records are out of order. Even sometimes a record is

Re: How to handle avro BYTES type in flink

2019-09-13 Thread Fabian Hueske
Thanks for reporting back, Catlyn! On Thu, Sep 12, 2019 at 7:40 PM Catlyn Kong wrote: > Turns out there was some other deserialization problem unrelated to this. > > On Mon, Sep 9, 2019 at 11:15 AM Catlyn Kong wrote: > >> Hi fellow streamers, >> >> I'm trying to support avro BYTES type in
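
Independent of the unrelated deserialization problem the thread ended up finding, a small note on the BYTES type itself: Avro decodes BYTES fields as java.nio.ByteBuffer, and a common pattern is to copy that into an Array[Byte] before handing records to downstream Flink operators. A hedged sketch of such a conversion helper (not taken from the thread):

    import java.nio.ByteBuffer

    object AvroBytesUtil {
      // Avro's BYTES type is decoded as java.nio.ByteBuffer; copy it into a plain byte array.
      def toByteArray(buffer: ByteBuffer): Array[Byte] = {
        val copy = new Array[Byte](buffer.remaining())
        buffer.duplicate().get(copy) // duplicate() keeps the original buffer's position untouched
        copy
      }
    }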
