Re: AskTimeoutException

2019-04-12 Thread Abdul Qadeer
Hi Alex, The timeout shown in the exception is due to AkkaOptions.LOOKUP_TIMEOUT On Fri, 12 Apr 2019 at 09:45, Alex Soto wrote: > Hello, > > I am using Flink version 1.7.1. In a unit test, I create a local > environment: > > Configuration cfg = new Configuration(); >
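Picking up Abdul's pointer, a minimal sketch of raising the lookup timeout alongside the two timeouts Alex already sets (assuming Flink 1.7.x, where `AkkaOptions.LOOKUP_TIMEOUT` corresponds to `akka.lookup.timeout`; the pipeline itself is omitted):

```java
import org.apache.flink.configuration.AkkaOptions;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalEnvWithTimeouts {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration();
        // Per the reply above, the timeout reported in AskTimeoutException
        // comes from the lookup timeout, so raise it as well.
        cfg.setString(AkkaOptions.LOOKUP_TIMEOUT, "2 min");
        cfg.setString(AkkaOptions.ASK_TIMEOUT, "2 min");
        cfg.setString(AkkaOptions.CLIENT_TIMEOUT, "2 min");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(1, cfg);
        // ... build and execute the test pipeline ...
    }
}
```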

Apache Flink - CEP vs SQL detecting patterns

2019-04-12 Thread M Singh
Hi: I am looking at the documentation of the CEP library and there is a way to access patterns which have timed out.  But I could not find a similar capability for detecting patterns in the Table and SQL interface.  I am assuming that the CEP interface is more comprehensive and complete than the SQL/Table

Re: Apache Flink - Question about broadcast state pattern usage

2019-04-12 Thread M Singh
Hi Fabian:  Thanks for your answer. From my understanding (please correct me), in the example above, we are passing map descriptors to the same broadcast stream.  So, the elements/items in that stream will be the same.  The only difference would be that in the processBroadcastElement

Re: Apache Flink - How to destroy global window and release it's resources

2019-04-12 Thread M Singh
Hi Fabian/Guowei:  Thanks for your pointers.   Fabian, as you pointed out, a global window is never completely removed since its end time is Long.MAX_VALUE, and that is my concern.  So, is there any other way of cleaning up the now-purged global windows? Thanks again. On Thursday, April
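One common way to keep per-key state bounded in this situation is to wrap the trigger in a `PurgingTrigger`, which clears the window contents on every fire even though the `GlobalWindow` itself nominally ends at Long.MAX_VALUE. A sketch under that assumption (the count threshold and element types are illustrative):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.triggers.PurgingTrigger;

public class PurgedGlobalWindowSketch {
    // PurgingTrigger clears the window state each time the wrapped trigger
    // fires, so per-key contents do not accumulate forever even though the
    // GlobalWindow metadata is never removed.
    static DataStream<Long> sumInBatchesOf100(KeyedStream<Long, Long> keyed) {
        return keyed
                .window(GlobalWindows.create())
                .trigger(PurgingTrigger.of(CountTrigger.of(100)))
                .reduce((a, b) -> a + b);
    }
}
```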

Netty channel closed at AKKA gated status

2019-04-12 Thread Wenrui Meng
We encountered the netty channel inactive issue while AKKA gated that task manager. I'm wondering whether the channel was closed because of the AKKA gated status, since all messages to the taskManager will be dropped at that moment, which might cause a netty channel exception. If so, shall we have

AskTimeoutException

2019-04-12 Thread Alex Soto
Hello, I am using Flink version 1.7.1. In a unit test, I create a local environment: Configuration cfg = new Configuration(); cfg.setString(AkkaOptions.ASK_TIMEOUT, "2 min"); cfg.setString(AkkaOptions.CLIENT_TIMEOUT, "2 min");

Re: Flink JDBC: Disable auto-commit mode

2019-04-12 Thread Rong Rong
Hi Konstantinos, Seems like setting auto-commit is not directly possible in the current JDBCInputFormatBuilder. However, there's a way to specify the fetch size [1] for your DB round-trips; doesn't that resolve your issue? Similarly in JDBCOutputFormat, a batching mode was also used to stash
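A sketch of the fetch-size suggestion on the 1.7.x `JDBCInputFormat` builder (connection details are hypothetical). One caveat worth noting for this thread: the PostgreSQL JDBC driver only honors the fetch size when auto-commit is disabled on the connection, which is exactly the setting Konstantinos is asking about.

```java
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

public class PgInputSketch {
    static JDBCInputFormat build() {
        return JDBCInputFormat.buildJDBCInputFormat()
                .setDrivername("org.postgresql.Driver")
                .setDBUrl("jdbc:postgresql://localhost:5432/mydb") // hypothetical
                .setQuery("SELECT id, name FROM my_table")          // hypothetical
                .setRowTypeInfo(new RowTypeInfo(
                        BasicTypeInfo.INT_TYPE_INFO,
                        BasicTypeInfo.STRING_TYPE_INFO))
                // Hint to stream rows in chunks instead of materializing the
                // whole result set; on PostgreSQL this needs auto-commit off.
                .setFetchSize(1000)
                .finish();
    }
}
```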

Flink and CVE

2019-04-12 Thread TechnoMage
In preparation for putting a flink project in production we wanted to find the CVE entries for flink applicable to the last few releases, but they appear to have all Apache projects in one big list. Any help in narrowing it down to just Flink entries would be appreciated. Michael

IP resolution for metrics on k8s when the JM ( job cluster ) is rolled but TMs are not

2019-04-12 Thread Vishal Santoshi
Scenario * savepoint with Cancel followed by a restore on the Job. It brings down the JM and relaunches on a different IP, thus DNS now resolves to a new IP. * The TMs deployment is not rolled ( recreated ) * Note that `flink-conf.yaml:metrics.internal.query-service.port` is hardcoded.
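For reference, the setting in question is pinned in `flink-conf.yaml` like this (the port value is illustrative):

```yaml
# flink-conf.yaml
# Pinning the metrics query service to a fixed port makes it exposable in k8s;
# whether the TMs pick up the JM's new IP after a roll is the open question
# in this thread, and this setting alone does not answer it.
metrics.internal.query-service.port: 50100
```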

Re: Scalaj vs akka as http client for Asyncio Flink

2019-04-12 Thread Till Rohrmann
Hi Andy, you can do some micro benchmarks where you instantiate your AsyncHttpClient and call the invoke method. But better would be to benchmark it end-to-end by running it on a cluster with a realistic workload which you also expect to occur in production. Cheers, Till On Fri, Apr 12, 2019 at

Re: [ANNOUNCE] Apache Flink 1.8.0 released

2019-04-12 Thread Yu Li
Thanks Aljoscha and all for making this happen, I believe 1.8.0 will be a great and successful release. Best Regards, Yu On Fri, 12 Apr 2019 at 21:23, Patrick Lucas wrote: > The Docker images for 1.8.0 are now available. > > Note that like Flink itself, starting with Flink 1.8 we are no

Flink JDBC: Disable auto-commit mode

2019-04-12 Thread Papadopoulos, Konstantinos
Hi all, We are facing an issue when trying to integrate PostgreSQL with Flink JDBC. When you establish a connection to the PostgreSQL database, it is in auto-commit mode. It means that each SQL statement is treated as a transaction and is automatically committed, but this functionality results

Re: [ANNOUNCE] Apache Flink 1.8.0 released

2019-04-12 Thread Patrick Lucas
The Docker images for 1.8.0 are now available. Note that like Flink itself, starting with Flink 1.8 we are no longer providing images with bundled Hadoop. Users who need an image with bundled Hadoop should create their own, starting from the images published on Docker Hub and fetching the
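For users who need bundled Hadoop, a minimal sketch of the approach Patrick describes: start from the published image and add a shaded Hadoop uber jar to `lib/`. The base tag is real; the jar URL and name below are placeholders to be replaced with the artifact matching your Hadoop version.

```dockerfile
# Sketch: extend the official 1.8.0 image with a shaded Hadoop uber jar.
# The ADD source below is a placeholder, not an authoritative artifact URL.
FROM flink:1.8.0
ADD https://example.org/flink-shaded-hadoop2-uber-<version>.jar /opt/flink/lib/
```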

How would I use OneInputStreamOperator to deal with data skew?

2019-04-12 Thread Felipe Gutierrez
Hi, I was trying to implement a better way to handle data skew using Flink and I found this talk from #FlinkForward SF 2017: "Cliff Resnick & Seth Wiesman - From Zero to Streaming" [1] which says that they used OneInputStreamOperator [2]. Through it, they
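For context, a custom operator is plugged in with `DataStream#transform`. Below is a toy sketch (Flink 1.7/1.8 API shapes) of a local pre-aggregation operator, one common way to blunt key skew by combining values before the shuffle; the flush threshold and types are illustrative, not from the talk.

```java
import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

// Pre-aggregates longs locally and emits partial sums, reducing the volume
// that any single hot key sends downstream.
public class LocalPreAggOperator extends AbstractStreamOperator<Long>
        implements OneInputStreamOperator<Long, Long> {

    private long localSum = 0;

    @Override
    public void processElement(StreamRecord<Long> element) {
        localSum += element.getValue();
        if (localSum >= 1000) {   // arbitrary flush threshold
            output.collect(new StreamRecord<>(localSum));
            localSum = 0;
        }
    }
}
// Wired in with:
// stream.transform("local-pre-agg", BasicTypeInfo.LONG_TYPE_INFO, new LocalPreAggOperator());
```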

Re: FlinkKafkaProducer010: why is checkErroneous() at the beginning of the invoke() method

2019-04-12 Thread Chesnay Schepler
It is called at the top since exceptions can occur asynchronously when invoke() already exited. In this case the only place you can fail is if the next record is being processed. On 12/04/2019 11:00, Kumar Bolar, Harshith wrote: Hi all, I had a requirement to handle Kafka producer

Re: Event Trigger in Flink

2019-04-12 Thread Chesnay Schepler
Sounds like you're describing a source function that subscribes to CouchDB updates. You'd usually implement this as a Co(Flat)MapFunction that has 2 inputs, 1 from kafka and one from couch db, which stores the processing parameters in state. There's no built-in way to subscribe to couch db
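A minimal sketch of that suggestion: connect the Kafka stream with a stream of CouchDB parameter updates and keep the latest parameters in the function. Both inputs are modeled as `String` here for brevity, and the processing logic is illustrative; in a real job the parameters would live in Flink state for fault tolerance.

```java
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

public class ParamAwareFlatMap implements CoFlatMapFunction<String, String, String> {

    private String currentParams;  // would be Flink managed state in practice

    @Override
    public void flatMap1(String kafkaEvent, Collector<String> out) {
        // Only emit once parameters have arrived from the CouchDB side.
        if (currentParams != null) {
            out.collect(kafkaEvent + " processed with " + currentParams);
        }
    }

    @Override
    public void flatMap2(String paramUpdate, Collector<String> out) {
        currentParams = paramUpdate;  // latest parameters from CouchDB
    }
}
// kafkaStream.connect(couchDbUpdateStream).flatMap(new ParamAwareFlatMap());
```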

Event Trigger in Flink

2019-04-12 Thread Soheil Pourbafrani
Hi, In my problem I should process Kafka messages using Apache Flink, while some processing parameters should be read from CouchDB, so I have two questions: 1- What is the Flink way to read data from CouchDB? 2- I want to trigger Flink to load data from CouchDB if a new document was

Re: Scalaj vs akka as http client for Asyncio Flink

2019-04-12 Thread Andy Hoang
Hi Till, Unfortunately I have to wait for the cluster to upgrade to 1.8 to use that feature: https://issues.apache.org/jira/browse/FLINK-6756 Meanwhile I can reimplement it in the copy-paste manner but I’m still curious if my AsyncHttpClient

FlinkKafkaProducer010: why is checkErroneous() at the beginning of the invoke() method

2019-04-12 Thread Kumar Bolar, Harshith
Hi all, I had a requirement to handle Kafka producer exceptions so that they don’t bring down the job. I extended FlinkKafkaProducer010 and handled the exceptions there. public void invoke(T value, Context context) throws Exception { try { this.checkErroneous();
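A hedged sketch of the pattern Harshith describes: subclass the 0.10 producer and catch send failures in `invoke()` so they don't fail the job. The error-handling policy (log and drop) is illustrative only, and as Chesnay's reply notes, async failures only surface when a later record is processed, so some failures may still escape this catch.

```java
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;

public class TolerantKafkaProducer extends FlinkKafkaProducer010<String> {

    public TolerantKafkaProducer(String topic, Properties props) {
        super(topic, new SimpleStringSchema(), props);
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        try {
            // super.invoke() calls checkErroneous() first, which is where a
            // previous record's async send failure is re-thrown.
            super.invoke(value, context);
        } catch (Exception e) {
            // Illustrative policy: log and drop instead of failing the job.
            System.err.println("Dropping record after Kafka failure: " + e);
        }
    }
}
```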

Re: Flink add source with Scala

2019-04-12 Thread Chesnay Schepler
There is no separate Scala SourceFunction interface or similar convenience interfaces, so you'll have to work against the Java version. On 12/04/2019 09:07, hai wrote: Hello: Is there a example or best practise code of flink’s source of Scala language, I found one example on official

Re: Version "Unknown" - Flink 1.7.0

2019-04-12 Thread Chesnay Schepler
have you compiled Flink yourself? Could you check whether the flink-dist jar contains a ".version.properties" file in the root directory? On 12/04/2019 03:42, Vishal Santoshi wrote: Hello ZILI, I run flink from the distribution as from

Re: Scalaj vs akka as http client for Asyncio Flink

2019-04-12 Thread Till Rohrmann
Hi Andy, there is also a Scala version of the `RichAsyncFunction`. In Scala you have to specify a value for class members. This is different from Java. User code is first instantiated on the client where you create the job topology (basically where you call new RichAsyncHttpClient). The code is

Flink add source with Scala

2019-04-12 Thread hai
Hello: Is there an example or best-practice code for a Flink source in the Scala language? I found one example in the official code’s HBaseWriteStreamExample: DataStream<String> dataStream = env.addSource(new SourceFunction<String>() { private static final long serialVersionUID = 1L; private volatile