Far too few watermarks getting generated with Kafka source

2018-01-17 Thread William Saar
Hi, I have a job where we read data from either Kafka or a file (for testing), decode the entries and flat map them into events, and then add a timestamp and watermark assigner to the events in a later operation. This seems to generate periodic watermarks when running from a file, but when Kafka
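A minimal sketch of the usual workaround, assuming Flink 1.4 with the Kafka 0.11 connector: attach the assigner directly to the consumer, so that timestamps and watermarks are tracked per Kafka partition instead of per downstream operator instance. Topic name, properties, and the timestamp parsing are illustrative:

    import java.util.Properties
    import org.apache.flink.api.common.serialization.SimpleStringSchema
    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
    import org.apache.flink.streaming.api.windowing.time.Time
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011

    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092") // illustrative

    val consumer = new FlinkKafkaConsumer011[String](
      "events", new SimpleStringSchema(), props)

    // Attach the assigner to the consumer itself rather than to a later
    // operation, so each Kafka partition advances its own watermark.
    consumer.assignTimestampsAndWatermarks(
      new BoundedOutOfOrdernessTimestampExtractor[String](Time.seconds(10)) {
        override def extractTimestamp(element: String): Long =
          element.split(",")(0).toLong // hypothetical: first field is event time
      })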

Re: Which collection to use in Scala case class

2018-01-17 Thread Chesnay Schepler
Can you show us the exception you get when restoring? Looping in Timo, who knows more about how types are analyzed and serializers are constructed. On 17.01.2018 10:25, shashank agarwal wrote: Hello, a quick question: which Scala collection should I use in my Scala case class which won't go

Re: Which collection to use in Scala case class

2018-01-17 Thread Timo Walther
The inability to restore Scala case classes is a known issue in Flink 1.4. We need to investigate whether it can be fixed easily. @Shashank: Could you give us a little reproducible example? Just a case class with a java.util.List in it? @Gordon: Is there a Jira issue for this? I
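For reference, a reproducer of the kind Timo is asking about might be as small as this (field names are hypothetical):

    // A case class holding a Java collection, which falls back to a
    // generic (Kryo) serializer instead of Flink's case-class serializer.
    case class Event(id: String, tags: java.util.List[String])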

Submitting jobs via Java code

2018-01-17 Thread Luigi Sgaglione
Hi, I am a beginner with Flink and I'm trying to deploy a simple example using a Java client to a remote Flink server (1.4.0). I'm using org.apache.flink.client.program.Client; this is the code used: Configuration config = new Configuration(); config.setString("jobmanager.rpc.address",
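Note that the Client class is an internal API; a sketch of the more common route is to create a remote environment that ships the job jar to the cluster (host, port, and jar path are illustrative):

    import org.apache.flink.api.scala._

    // Connects to a remote JobManager and submits the program
    // packaged in the given jar when an action is triggered.
    val env = ExecutionEnvironment.createRemoteEnvironment(
      "jobmanager-host", 6123, "/path/to/my-job.jar")

    env.fromElements(1, 2, 3).map(_ * 2).print()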

Re: Flink CEP exception during RocksDB update

2018-01-17 Thread Varun Dhore
Thank you, Kostas. Since this error is not easily reproducible on my end, I'll continue testing and confirm the resolution once I am able to do so. Thanks, Varun > On Jan 15, 2018, at 10:21 AM, Kostas Kloudas wrote: > Hi Varun,

Re: Failed to serialize remote message [class org.apache.flink.runtime.messages.JobManagerMessages$JobFound

2018-01-17 Thread Till Rohrmann
Hi, yes, you're right. The different standby JobManagers should have different web addresses. Cheers, Till On Tue, Jan 16, 2018 at 6:32 PM, jelmer wrote: > I think I found the issue. I'd just like to verify that my reasoning is correct. > We had the following keys in our
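A sketch of what the per-instance overrides might look like, assuming Flink 1.4-era key names and illustrative hosts — each standby JobManager's flink-conf.yaml pointing at its own web frontend:

    # flink-conf.yaml on the second JobManager (illustrative values)
    jobmanager.web.address: jobmanager-2.example.com
    jobmanager.web.port: 8081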

Re: Problem while debugging a python job

2018-01-17 Thread Mathias Peters
Thanks, that did it. Best, Mathias On 17.01.2018 13:11, Chesnay Schepler wrote: > All dependencies of flink-python are set to /provided/ so that they aren't included in the flink-python jar (which would duplicate all classes already contained in flink-dist). > You can either temporarily

using a Yarn cluster for both Spark and Flink

2018-01-17 Thread Soheil Pourbafrani
Is it a standard approach to set up a Yarn cluster for running both Spark and Flink applications?

Re: using a Yarn cluster for both Spark and Flink

2018-01-17 Thread Georg Heiler
Why not? Isn't a resource manager meant for this? You should, however, clearly define service level agreements, as a Flink streaming job might require a certain maximum latency, as opposed to a Spark batch job. Soheil Pourbafrani wrote on Thu, Jan 18, 2018 at 08:30: > Is it

Flink standalone scheduler

2018-01-17 Thread Soheil Pourbafrani
Does the Flink standalone scheduler support dynamic resource allocation, something like the Yarn scheduler?

Re: State backend questions

2018-01-17 Thread Chesnay Schepler
According to this thread it is not yet possible to switch to or from the RocksDBStateBackend, so I would suggest going with RocksDB from the start. For tuning RocksDB, see here
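A minimal sketch of enabling RocksDB from the start, assuming the Flink 1.4 API; the checkpoint URI is illustrative:

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
    import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Point RocksDB at a durable filesystem from day one, since
    // switching state backends later is not supported.
    env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"))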

Which collection to use in Scala case class

2018-01-17 Thread shashank agarwal
Hello, a quick question: which Scala collection should I use in my Scala case class so that it won't go through the generic serializer? I was using java.util.List in my Scala case class before; will this create a problem with savepoint and restore? Because my restore is not working, so I am trying to
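For comparison, a sketch of the same case class with a native Scala collection, which Flink's Scala type extraction can typically handle without the Kryo fallback (field names are hypothetical):

    // Scala's immutable List is serialized by Flink's Scala collection
    // support, avoiding the generic serializer a java.util.List triggers.
    case class Event(id: String, tags: List[String])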

Re: Problem while debugging a python job

2018-01-17 Thread Chesnay Schepler
All dependencies of flink-python are set to /provided/ so that they aren't included in the flink-python jar (which would duplicate all classes already contained in flink-dist). You can either temporarily modify the dependencies and remove the /provided/ scope, or create a simple test class
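A sketch of what the temporary change looks like in the module's pom.xml (the coordinates are illustrative). Provided dependencies are on the compile and test classpaths but excluded from the packaged jar, which is why a simple test class also works without touching the pom:

    <!-- illustrative dependency entry: comment out the scope while debugging -->
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-streaming-java_2.11</artifactId>
      <version>1.4.0</version>
      <!-- <scope>provided</scope> -->
    </dependency>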