Re: CEP condition expression and its event consuming strategy

2017-08-03 Thread Chao Wang
in the CEP library. We will be happy to hear any comments and suggestions for future improvements. On 28 Jul 2017, at 21:54, Chao Wang <chaow...@wustl.edu> wrote: Hi Dawid, Thank you. Ad. 1 I noticed that the method getEventsForPattern() returns an Iterable and we need to further invoke .op

Flink multithreading, CoFlatMapFunction, CoProcessFunction, internal state

2017-08-14 Thread Chao Wang
Hi, I'd like to know if CoFlatMapFunction/CoProcessFunction is thread-safe, and to what extent? What's the difference between the two Functions? And in general, how does Flink prevent race conditions? Here's my case: I tried to condition on two input streams and produce the third stream if

Re: Flink multithreading, CoFlatMapFunction, CoProcessFunction, internal state

2017-08-16 Thread Chao Wang
at I see in the code (StreamTwoInputProcessor), the same should apply to CoFlatMapFunction and CoProcessFunction so that calls to flatMap1/2 and processElement1/2 are not called in parallel! 3) why would you want to store the CoProcessFunction.Context? Nico On Monday, 14 August 2017 18:36:38 CEST
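
The guarantee described above can be illustrated with a minimal CoFlatMapFunction sketch (the class name, stream roles, and types here are made up for illustration, not taken from the thread). Because the runtime never invokes flatMap1 and flatMap2 concurrently on the same operator instance, a plain instance field can serve as state shared between the two inputs without locking:

```java
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

// Sketch: joins a data stream with a control stream. Flink calls flatMap1
// and flatMap2 on the same operator instance sequentially (never in
// parallel), so the plain `threshold` field needs no synchronization.
public class ThresholdFilter implements CoFlatMapFunction<Integer, Integer, Integer> {

    private int threshold = 0;  // operator-local state, safe without locks

    @Override
    public void flatMap1(Integer value, Collector<Integer> out) {
        // data stream: forward values above the current threshold
        if (value > threshold) {
            out.collect(value);
        }
    }

    @Override
    public void flatMap2(Integer newThreshold, Collector<Integer> out) {
        // control stream: update the threshold in place
        threshold = newThreshold;
    }
}
```

Note this only covers races between the two inputs of one operator instance; state shared across parallel instances or across operators is a different matter.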

CEP condition expression and its event consuming strategy

2017-07-26 Thread Chao Wang
Hi, I have two questions regarding the use of the Flink CEP library (flink-cep_2.11:1.3.1), as follows: 1. I'd like to know how to use the API to express "emit event C in the presence of events A and B, with no restriction on the arriving order of A and B"? I've tried by creating two
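
One way to express the order-independent conjunction asked about here, sketched against the 1.3 CEP API (the Event type and its getType() accessor are assumptions for illustration, not from the thread): let the first pattern state accept either A or B, and use an IterativeCondition on the second state to accept only the type not yet matched, via getEventsForPattern:

```java
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.IterativeCondition;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;

// Sketch: match A followed by B, or B followed by A, with one pattern.
Pattern<Event, ?> aAndB = Pattern.<Event>begin("first")
    .where(new SimpleCondition<Event>() {
        @Override
        public boolean filter(Event e) {
            // accept either A or B as the opening event
            return e.getType().equals("A") || e.getType().equals("B");
        }
    })
    .followedBy("second")
    .where(new IterativeCondition<Event>() {
        @Override
        public boolean filter(Event e, Context<Event> ctx) throws Exception {
            // reject whichever type was already matched as "first"
            for (Event first : ctx.getEventsForPattern("first")) {
                if (first.getType().equals(e.getType())) {
                    return false;
                }
            }
            return e.getType().equals("A") || e.getType().equals("B");
        }
    });
```

Emitting C would then happen in the select/flatSelect function applied to the matched pattern stream.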

Re: CEP condition expression and its event consuming strategy

2017-07-28 Thread Chao Wang
k to introduce AfterMatchSkipStrategy[1], but at best it will be merged in 1.4.0. I did not give it much thought, but I would try to implement some discarding logic. Regards, Dawid [1] https://issues.apache.org/jira/browse/FLINK-7169 On 26 Jul 2017, at 22:45, Chao Wang <chaow...@wustl.ed

Re: Operations dependencies between values with different key in a ConnectedStreams

2017-07-28 Thread Chao Wang
Hi Gabriele, I think CEP may be able to deal with this kind of expression, in general, although I am not sure about how to deal with different time windows (5s and 3s, in your case). Take a look at the available patterns in the CEP library doc:

Re: Experiencing long latency while using sockets

2017-08-09 Thread Chao Wang
latency is similar to that of using raw sockets (off by less than 1 ms): Send the first message to Flink and then wait for 110 ms before sending the second message. And for the subsequent sends we can remove the 110 ms wait. Chao On 08/09/2017 10:57 AM, Chao Wang wrote: Thank you, Fabian

Re: Experiencing long latency while using sockets

2017-08-09 Thread Chao Wang
program does not process many records, these records might "get stuck" in the buffers and be emitted after the timeout flushes the buffer. The default timeout is 100ms. Try to reduce it. Best, Fabian 2017-08-08 1:06 GMT+02:00 Chao Wang <chaow...@wustl.edu <mailto:ch
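
The suggestion above maps to StreamExecutionEnvironment.setBufferTimeout(). A minimal sketch (the 5 ms value is an arbitrary illustration, not a recommendation from the thread):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// By default a record can sit in a partially filled network buffer for up
// to 100 ms before the buffer is flushed downstream, which shows up as
// end-to-end latency when the load is light.
env.setBufferTimeout(5);  // flush at least every 5 ms: lower latency, more overhead

// env.setBufferTimeout(0) would flush after every record
// (lowest latency, highest per-record overhead).
```

The timeout trades throughput for latency, so under heavy load a larger value is usually preferable.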

Re: Experiencing long latency while using sockets

2017-08-07 Thread Chao Wang
no::system_clock::now(); jbyte *inCArray = env->GetByteArrayElements(inArray, NULL); std::chrono::system_clock::time_point start; std::memcpy (, inCArray, ::timePointLength); std::cout << std::chrono::duration_cast(end - start).count() << std::endl; } Thank you, Chao

Experiencing long latency while using sockets

2017-08-07 Thread Chao Wang
Hi, I have been trying to benchmark the end-to-end latency of a Flink 1.3.1 application, but got confused regarding the amount of time spent in Flink. In my setting, data source and data sink dwell in separated machines, like the following topology: Machine 1

Re: schema to just read as "byte[] array" from kafka

2017-08-07 Thread Chao Wang
n arg0; } } Chao On 08/07/2017 12:23 PM, Chao Wang wrote: Hi Raja, I just happened to work on a similar thing, and here is how to do it in general, I think (In my case, I did a bit more, to deserialize a tuple of <byte[],byte[]>) : FlinkKafkaConsumer010<byte[]> consumer = new Fli

Re: schema to just read as "byte[] array" from kafka

2017-08-07 Thread Chao Wang
Hi Raja, I just happened to work on a similar thing, and here is how to do it in general, I think (In my case, I did a bit more, to deserialize a tuple of <byte[],byte[]>) : FlinkKafkaConsumer010<byte[]> consumer = new FlinkKafkaConsumer010<>("topic_name", new MyDe(), properties);
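
The approach described in this thread can be sketched as follows. The schema class name RawByteSchema is invented here (the name in the snippet is truncated), and the topic name and connection properties are placeholders:

```java
import java.util.Properties;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.AbstractDeserializationSchema;

// Pass-through schema: hand each Kafka record's value to Flink as a raw byte[].
public class RawByteSchema extends AbstractDeserializationSchema<byte[]> {
    @Override
    public byte[] deserialize(byte[] message) {
        return message;  // no decoding; downstream operators interpret the bytes
    }
}

// Usage (topic name and broker address are placeholders):
// Properties properties = new Properties();
// properties.setProperty("bootstrap.servers", "localhost:9092");
// properties.setProperty("group.id", "my-group");
// FlinkKafkaConsumer010<byte[]> consumer =
//     new FlinkKafkaConsumer010<>("topic_name", new RawByteSchema(), properties);
```

AbstractDeserializationSchema supplies isEndOfStream and the produced TypeInformation, so only deserialize needs to be overridden.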

Re: How to run a flink wordcount program

2017-08-17 Thread Chao Wang
The following quickstart offers end-to-end instructions, I think: https://ci.apache.org/projects/flink/flink-docs-release-1.3/quickstart/setup_quickstart.html Chao On 08/17/2017 08:25 AM, P. Ramanjaneya Reddy wrote: On Thu, Aug 17, 2017 at 6:42 PM, P. Ramanjaneya Reddy