Re: Data Transfer between TM should be encrypted

2016-10-13 Thread vinay patil
Thank you Stephan and Robert for your positive response Regards, Vinay Patil On Thu, Oct 13, 2016 at 11:22 AM, Stephan Ewen [via Apache Flink User Mailing List archive.] wrote: > I agree with Robert. We should probably aim for the release to be end of >

Re: Data Transfer between TM should be encrypted

2016-10-13 Thread Stephan Ewen
I agree with Robert. We should probably aim for the release to be end of November, so that it will come this year even in the presence of some delay. Best, Stephan On Thu, Oct 13, 2016 at 11:36 AM, Robert Metzger wrote: > Hi, > the release dates depend on the community,

Re: JsonMappingException: No content to map due to end-of-input

2016-10-13 Thread Stephan Ewen
This sounds like it is not Flink related - it seems more like a Jackson/Json question than a Flink question. On Thu, Oct 13, 2016 at 5:56 PM, PedroMrChaves wrote: > Hello, > > I recently started programming with Apache Flink API. I am trying to get > input directly >

JsonMappingException: No content to map due to end-of-input

2016-10-13 Thread PedroMrChaves
Hello, I recently started programming with the Apache Flink API. I am trying to get input directly from Kafka in JSON format with the following code: private void kafkaConsumer(String server, String topic) { Properties properties = new Properties();
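
For reference, a minimal sketch of such a consumer against the Kafka 0.8 connector. The topic name, the connection properties, and the step of skipping empty messages (a frequent cause of Jackson's "No content to map due to end-of-input") are illustrative assumptions, not the poster's actual code:

    import java.util.Properties;

    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
    import org.apache.flink.util.Collector;

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class KafkaJsonJob {

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties properties = new Properties();
            properties.setProperty("zookeeper.connect", "localhost:2181"); // placeholder
            properties.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
            properties.setProperty("group.id", "my-consumer-group");       // placeholder

            // Consume raw strings; parsing is done in a separate flatMap so that
            // malformed or empty messages can be skipped instead of failing the job.
            DataStream<String> raw = env.addSource(
                    new FlinkKafkaConsumer08<>("my-topic", new SimpleStringSchema(), properties));

            DataStream<JsonNode> events = raw.flatMap(new FlatMapFunction<String, JsonNode>() {
                private transient ObjectMapper mapper;

                @Override
                public void flatMap(String value, Collector<JsonNode> out) throws Exception {
                    if (mapper == null) {
                        mapper = new ObjectMapper();
                    }
                    // An empty payload is what typically triggers
                    // "No content to map due to end-of-input" in Jackson.
                    if (value != null && !value.trim().isEmpty()) {
                        out.collect(mapper.readTree(value));
                    }
                }
            });

            events.print();
            env.execute("Kafka JSON consumer");
        }
    }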

Re: Allowed Lateness and Window State

2016-10-13 Thread swiesman
That makes perfect sense, thank you so much for your quick response Kostas! Seth Wiesman

Re: [DISCUSS] Drop Hadoop 1 support with Flink 1.2

2016-10-13 Thread Neelesh Salian
+1 to dropping Hadoop 1.x. I am fairly certain there are very few legacy Hadoop users; 2.x is heavily used at the moment. Spark actually changed not just Hadoop but Python versions as well. Hadoop 3 would take a while to mature, so I would suggest holding off on that until it is well baked in and

Re: Allowed Lateness and Window State

2016-10-13 Thread Kostas Kloudas
Hi Seth, FIRE and FIRE_AND_PURGE still have the same meaning. So on eventTime, when your trigger says FIRE_AND_PURGE, your window will be evaluated (4) and its state will be purged. Now, when the red circle arrives, the state will be empty, so if the onElement says FIRE, the result will be 1.

[ANNOUNCE] Flink 1.1.3 Released

2016-10-13 Thread Ufuk Celebi
The Flink PMC is pleased to announce the availability of Flink 1.1.3. The official release announcement: https://flink.apache.org/news/2016/10/12/release-1.1.3.html Release binaries: http://apache.lauf-forum.at/flink/flink-1.1.3/ Please update your Maven dependencies to the new 1.1.3 version

RE: [DISCUSS] Drop Hadoop 1 support with Flink 1.2

2016-10-13 Thread ruben.casado.tejedor
I totally agree with Robert. From an industry point of view, we are not using Hadoop 1.x with any client. Even in legacy systems, we have already upgraded the software. From: Robert Metzger [mailto:rmetz...@apache.org] Sent: Thursday, October 13, 2016 16:48 To: d...@flink.apache.org;

Allowed Lateness and Window State

2016-10-13 Thread swiesman
Hi all, I have a question about allowed lateness semantics and window state. I have a custom event time trigger that does an early FIRE onElement for a certain set of conditions and then a FIRE_AND_PURGE onEventTime. And this has worked great for me. However I now want to add a certain
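
To make the FIRE / FIRE_AND_PURGE discussion concrete, a rough sketch of a trigger along these lines against the Flink 1.1 Trigger API; the early-fire condition (shouldFireEarly) is a hypothetical placeholder for the "certain set of conditions" mentioned above:

    import org.apache.flink.streaming.api.windowing.triggers.Trigger;
    import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

    public class EarlyFiringEventTimeTrigger extends Trigger<Object, TimeWindow> {

        @Override
        public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) throws Exception {
            // Make sure onEventTime() is eventually called at the end of the window.
            ctx.registerEventTimeTimer(window.maxTimestamp());

            if (shouldFireEarly(element)) {          // hypothetical early-fire condition
                return TriggerResult.FIRE;           // evaluate the window, keep its state
            }
            return TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) throws Exception {
            return time == window.maxTimestamp()
                    ? TriggerResult.FIRE_AND_PURGE   // evaluate, then purge the window state
                    : TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) throws Exception {
            return TriggerResult.CONTINUE;
        }

        @Override
        public void clear(TimeWindow window, TriggerContext ctx) throws Exception {
            ctx.deleteEventTimeTimer(window.maxTimestamp());
        }

        private boolean shouldFireEarly(Object element) {
            return false; // placeholder for the actual condition
        }
    }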

Re: bucketing in RollingSink

2016-10-13 Thread robert.lancaster
Hi Robert, Thanks for the info on 1.2. I copied over all 4 classes in fs.bucketing. I had to make some changes in my version of BucketingSink. BucketingSink relies on an updated StreamingRuntimeContext that provides a TimeServiceProvider, which the version from 1.1.2 doesn’t. It was easy

Re: Error with table sql query - No match found for function signature TUMBLE(, )

2016-10-13 Thread Fabian Hueske
apply() accepts a WindowFunction which is essentially the same as a GroupReduceFunction, i.e., you have an iterator over all events in the window. If you only want to count, you should have a look at incremental window aggregation with a ReduceFunction or FoldFunction [1]. Best, Fabian [1]
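
As an illustration of the incremental aggregation Fabian mentions, a per-key hourly count can be kept with a FoldFunction so that only the running count is stored as window state. This is a sketch that assumes an Event POJO with user and ip fields and an existing DataStream<Event> named events (imports elided):

    // Incremental per-window count via fold(); only the running Long is kept as window state.
    DataStream<Long> hourlyCounts = events
        .keyBy("user", "ip")                  // assumes Event is a POJO with these fields
        .timeWindow(Time.hours(1))
        .fold(0L, new FoldFunction<Event, Long>() {
            @Override
            public Long fold(Long count, Event value) {
                return count + 1;
            }
        });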

Re: Error with table sql query - No match found for function signature TUMBLE(, )

2016-10-13 Thread PedroMrChaves
I have this so far: result = eventData .filter(new FilterFunction(){ public boolean filter(Event event){ return event.action.equals("denied");

Re: Current alternatives for async I/O

2016-10-13 Thread Fabian Hueske
Hi Ken, FYI: we just received a pull request for FLIP-12 [1]. Best, Fabian [1] https://github.com/apache/flink/pull/2629 2016-10-11 9:35 GMT+02:00 Fabian Hueske : > Hi Ken, > > I think your solution should work. > You need to make sure though, that you properly manage the

Re: Error with table sql query - No match found for function signature TUMBLE(, )

2016-10-13 Thread Fabian Hueske
Hi Pedro, the DataStream program would look like this: val eventData: DataStream[?] = ??? val result = eventData .filter("action = denied") .keyBy("user", "ip") .timeWindow(Time.hours(1)) .apply("window.end, user, ip, count(*)") .filter("count > 5") .map("windowEnd, user, ip") Note,
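
One possible Java rendering of that sketch, assuming an Event POJO with action, user, and ip fields and a DataStream<Event> named events; the window end timestamp is taken from the TimeWindow handed to the WindowFunction (imports elided):

    // (windowEnd, user, ip, count) for user/ip pairs with more than 5 "denied" events per hour
    DataStream<Tuple4<Long, String, String, Long>> alerts = events
        .filter(new FilterFunction<Event>() {
            @Override
            public boolean filter(Event e) {
                return "denied".equals(e.action);
            }
        })
        .keyBy("user", "ip")
        .timeWindow(Time.hours(1))
        .apply(new WindowFunction<Event, Tuple4<Long, String, String, Long>, Tuple, TimeWindow>() {
            @Override
            public void apply(Tuple key, TimeWindow window, Iterable<Event> input,
                              Collector<Tuple4<Long, String, String, Long>> out) {
                long count = 0;
                String user = null;
                String ip = null;
                for (Event e : input) {
                    user = e.user;
                    ip = e.ip;
                    count++;
                }
                if (count > 5) {
                    out.collect(new Tuple4<>(window.getEnd(), user, ip, count));
                }
            }
        });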

Re: Error with table sql query - No match found for function signature TUMBLE(, )

2016-10-13 Thread PedroMrChaves
Hi, Thanks for the response. What would be the easiest way to do this query using the DataStream API? Thank you.

Re: Flink Kafka Consumer Behaviour

2016-10-13 Thread Robert Metzger
Thank you for investigating the issue. I've filed a JIRA: https://issues.apache.org/jira/browse/FLINK-4822 On Wed, Oct 12, 2016 at 8:12 PM, Anchit Jatana wrote: > Hi Janardhan/Stephan, > > I just figured out what the issue is (Talking about Flink KafkaConnector08,

Re: Flink Kafka connector08 not updating the offsets with the zookeeper

2016-10-13 Thread Robert Metzger
Okay, I see. According to this document, we need to set a consumer id for each groupid and topic: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper I created a JIRA for fixing this issue: https://issues.apache.org/jira/browse/FLINK-4822 On Wed, Oct 12, 2016

Re: question about making a temporal Graph with Gelly

2016-10-13 Thread otherwise777
Hello Greg, So far I've added a Tuple3 in the value field of an edge and that seems to work. However, in the end I want to make a library on top of Gelly that supports temporal graphs altogether. For that I want to add a temporal edge class to use in the graph, but I didn't succeed in doing that,
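
For illustration, keeping the temporal information in the edge value (rather than subclassing Edge) could look roughly like the following; the Tuple3 layout of (weight, validFrom, validTo) is an assumed example, not necessarily what was used here:

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple3;
    import org.apache.flink.graph.Edge;
    import org.apache.flink.graph.Graph;
    import org.apache.flink.types.NullValue;

    public class TemporalEdgeValueExample {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // Edge value = (weight, validFrom, validTo); the temporal layout is an assumption.
            DataSet<Edge<Long, Tuple3<Double, Long, Long>>> edges = env.fromElements(
                    new Edge<Long, Tuple3<Double, Long, Long>>(1L, 2L, new Tuple3<Double, Long, Long>(1.0, 0L, 100L)),
                    new Edge<Long, Tuple3<Double, Long, Long>>(2L, 3L, new Tuple3<Double, Long, Long>(0.5, 50L, 150L)));

            // No vertex values are needed, so the graph uses NullValue for them.
            Graph<Long, NullValue, Tuple3<Double, Long, Long>> graph = Graph.fromDataSet(edges, env);

            System.out.println("edges: " + graph.numberOfEdges());
        }
    }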

Re: Keyed join Flink Streaming

2016-10-13 Thread Adrienne Kole
Hi Ufuk, Thanks for the reply. The example is at [1]. I have a few questions: If there is no difference between a KeyedStream-KeyedStream join by key and a DataStream-DataStream join, then a DataStream becomes a KeyedStream with the `where` and `equal` clauses. Please correct me if I am wrong. Is the
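
For reference, a windowed join in the DataStream API is expressed with where/equalTo plus a window assigner, roughly as sketched below. The element types, key selectors, and window size are assumptions, since the example from the website is truncated above (imports elided):

    // firstInput and secondInput are assumed to be event-time streams of
    // Tuple2<String, Integer> and Tuple2<String, Double> keyed by their first field.
    DataStream<Tuple3<String, Integer, Double>> joined = firstInput
        .join(secondInput)
        .where(new KeySelector<Tuple2<String, Integer>, String>() {
            @Override
            public String getKey(Tuple2<String, Integer> value) {
                return value.f0;
            }
        })
        .equalTo(new KeySelector<Tuple2<String, Double>, String>() {
            @Override
            public String getKey(Tuple2<String, Double> value) {
                return value.f0;
            }
        })
        .window(TumblingEventTimeWindows.of(Time.seconds(10)))
        .apply(new JoinFunction<Tuple2<String, Integer>, Tuple2<String, Double>, Tuple3<String, Integer, Double>>() {
            @Override
            public Tuple3<String, Integer, Double> join(Tuple2<String, Integer> first, Tuple2<String, Double> second) {
                return new Tuple3<>(first.f0, first.f1, second.f1);
            }
        });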

Re: Data Transfer between TM should be encrypted

2016-10-13 Thread Robert Metzger
Hi, the release dates depend on the community, when features are ready, and so on. There has been no discussion yet about when we plan to do the release, because most of the features we want to have in are not done yet. I think it's likely that we'll have a 1.2 release by the end of this year. Regards, Robert

Re: bucketing in RollingSink

2016-10-13 Thread Robert Metzger
Hi, Let me know if you come across any issues with approach #2. Predicting release dates is always hard. I would say we are in the middle of the development cycle. I can imagine that we have 1.2 released by the end of this year. There is no tool that shows the progress, but I've started a Wiki

Re: About Sliding window

2016-10-13 Thread Kostas Kloudas
Hi Zhangrucong, The relevant document is Flip-4: https://cwiki.apache.org/confluence/display/FLINK/FLIP-4+%3A+Enhance+Window+Evictor There you can also find a link to the discussion. Cheers, Kostas > On Oct

Re: Evolution algorithm on flink

2016-10-13 Thread Paris Carbone
Very interesting! Thank you for sharing Andrew! Paris > On Oct 13, 2016, at 11:00 AM, Andrew Ge Wu wrote: > > Hi guys > > I just published my code to maven central, open source ofc. > I try to make this as generic as possible. > If you are interested, please try it

Re: Clean up history of job manager

2016-10-13 Thread Sendoh
Thank you for your reply. Does it mean that when calling the monitoring REST API - /joboverview/completed - we actually query the execution graphs? Because I thought the REST API reads a log file from somewhere? Cheers, Sendoh

Re: Evolution algorithm on flink

2016-10-13 Thread Ufuk Celebi
Thanks for sharing this! :-) On Thu, Oct 13, 2016 at 11:00 AM, Andrew Ge Wu wrote: > Hi guys > > I just published my code to maven central, open source ofc. > I try to make this as generic as possible. > If you are interested, please try it out, and help me to improve

Re: Clean up history of job manager

2016-10-13 Thread Ufuk Celebi
Hey Sendoh, unfortunately this is currently not possible. This would need some refactoring of the way we expose archived execution graphs. If you like, you can go ahead and open a JIRA issue for this. – Ufuk On Thu, Oct 13, 2016 at 10:49 AM, Sendoh wrote: > Hi Flink

Evolution algorithm on flink

2016-10-13 Thread Andrew Ge Wu
Hi guys I just published my code to maven central, open source ofc. I try to make this as generic as possible. If you are interested, please try it out, and help me to improve this! https://github.com/CircuitWall/machine-learning Thanks!

Re: Keyed join Flink Streaming

2016-10-13 Thread Ufuk Celebi
Hey Adrienne! On Wed, Oct 12, 2016 at 4:10 PM, Adrienne Kole wrote: > Hi, > > I have 2 streams which are partitioned based on key field. I want to join > those streams based on key fields on windows. This is an example I saw in > the flink website: > > val firstInput:

Clean up history of job manager

2016-10-13 Thread Sendoh
Hi Flink users, Does anyone know how to clean up the job history, especially for failed Flink jobs? Our use case is to clean up the history of failed jobs after we have seen them. Best, Sendoh