Hi,
Is the tentative release date for 1.6.3 decided?
Thanks,
Shailesh
Thank you, Stefan. Any idea when we can expect the 1.6.3 release?
On Thu, Nov 8, 2018 at 4:28 PM Stefan Richter
wrote:
> Sure, it is already merged as FLINK-10816.
>
> Best,
> Stefan
>
> On 8. Nov 2018, at 11:53, Shailesh Jain
> wrote:
>
> Thanks a lot for loo
>> On Tue, Oct 30, 2018 at 9:10 AM Dawid Wysakowicz
>> wrote:
>>
>>> This is a problem with serializing your events using Kryo. I'm adding
>>> Gordon to cc, as he was recently working with serializers. He might give
>>> you more insight into what is go
Bump.
On Thu, Oct 25, 2018 at 9:11 AM Shailesh Jain
wrote:
> Hi Dawid,
>
> I've upgraded to Flink 1.6.1 and rebased my changes against the tag 1.6.1,
> the only commit on top of 1.6 is this:
> https://github.com/jainshailesh/flink/commit/797e3c4af5b28263fd98fb79daaba97cab
te.java:85)
at
org.apache.flink.runtime.state.UserFacingMapState.get(UserFacingMapState.java:47)
at
org.apache.flink.cep.nfa.sharedbuffer.SharedBuffer.lambda$materializeMatch$1(SharedBuffer.java:303)
... 18 more
On Fri, Sep 28, 2018 at 11:00 AM Shailesh Jain
wrote:
've provided still
> does not correspond to the lines in the exception you've posted previously.
> Could you check if the problem occurs on vanilla flink as well?
>
> Best,
>
> Dawid
>
> On 27/09/18 08:22, Shailesh Jain wrote:
>
> Hi Dawid,
>
> Yes, it is version 1.
inked,
> so if it is a problem then it is definitely a different one. Lastly, I
> would recommend upgrading to the newest version, as we rewrote the
> SharedBuffer implementation in 1.6.0.
>
> Best,
>
> Dawid
>
> On 26/09/18 13:50, Shailesh Jain wrote:
>
> Hi,
>
>
Hi,
I think I've hit this same issue on a 3-node standalone cluster (1.4.2)
using HDFS (2.8.4) as the state backend.
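For reference, a filesystem/HDFS state backend of the kind described above is configured in flink-conf.yaml. A minimal sketch using Flink 1.4-era keys; the HDFS host and path are placeholders, not taken from this thread:

```
# Hypothetical flink-conf.yaml fragment (Flink 1.4-era keys)
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://namenode:8020/flink/checkpoints
```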
2018-09-26 17:07:39,370 INFO
org.apache.flink.runtime.taskmanager.Task - Attempting
to fail task externally SelectCepOperator (1/1)
(3bec4aa1ef2226c4e0c5ff7b3860d34
start
> getting OOM errors from JVM (don’t confuse those with OS/kernel's OOM
> errors - those two are on a different level).
>
> Piotrek
>
>
> On 14 Aug 2018, at 07:36, Shailesh Jain
> wrote:
>
> Hi Piotrek,
>
> Thanks for your reply. I checked throug
oblems/crashes/restarts?
>
> Piotrek
>
> On 10 Aug 2018, at 06:59, Shailesh Jain
> wrote:
>
> Hi,
>
> I hit a similar issue yesterday, the task manager died suspiciously, no
> error logs in the task manager logs, but I see the following
Hi,
I hit a similar issue yesterday, the task manager died suspiciously, no
error logs in the task manager logs, but I see the following exceptions in
the job manager logs:
2018-08-05 18:03:28,322 ERROR
akka.remote.Remoting - Association
to [akka.tcp://fli
. Try removing this event and you will
> see it matches the other one.
> If you want to try to construct match with any subsequent start you can
> use "followedByAny", but then remember to add the within clause, as
> otherwise partial matches won't be cleared.
>
>
Hi,
I'm trying to detect a sequence like A followed by B, C, D,
i.e. there is no strict contiguity between A and B, but there is strict
contiguity between B, C and D.
Sample test case:
https://gist.github.com/jainshailesh/57832683fb5137bd306e4844abd9ef86
testStrictFollowedByRelaxedContiguity passes, but
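As a language-neutral illustration of the contiguity semantics being discussed (relaxed between A and B, strict among B, C, D), here is a small pure-Python stand-in matcher. This is not the Flink CEP API, just a sketch of the semantics; the function name and event encoding are invented for illustration:

```python
def match_relaxed_then_strict(events):
    """Find A followed (relaxed contiguity) by B, with strict contiguity
    among B, C, D. Returns the first match as a list, or None.
    Pure-Python stand-in for the CEP semantics discussed above."""
    for i, e in enumerate(events):
        if e != "A":
            continue
        # Relaxed contiguity: any number of events may sit between A and B.
        for j in range(i + 1, len(events) - 2):
            if events[j] == "B":
                # Strict contiguity: C and D must immediately follow B.
                if events[j + 1] == "C" and events[j + 2] == "D":
                    return [events[i], events[j], events[j + 1], events[j + 2]]
                # Only the first B after A is tried, mirroring followedBy;
                # followedByAny-like semantics would keep scanning instead.
                break
    return None
```

The `break` models the `followedBy` behavior; removing it would approximate `followedByAny`, which (as noted earlier in this digest) then needs a `within` clause so partial matches get cleared.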
Thanks Dawid. I'll rebase against your branch and test it. I'll report
back if I hit the issue again.
Regards,
Shailesh
On Sun, May 27, 2018 at 5:54 PM, Dawid Wysakowicz
wrote:
> The logic for SharedBuffer, and as a result for pruning, will be changed in
> FLINK-9418 [1]. We plan to make it backw
Hi guys,
Were you able to RCA this? I think I'm hitting the same issue on 1.4.0, but
not really able to reproduce it through a test case.
In an IterativeCondition (using AfterMatchSkipStrategy.skipPastLastEvent),
while looking up previously matched events, it is hitting a
java.util.NoSuchElementEx
I have a question related to KeyedStream, asking it here instead of
starting a new thread.
If I assign timestamps on a keyed stream, the resulting stream is not
keyed, so essentially I would need to apply the keyBy operator again after
the assignTimestamps operator.
Why should assigning timestam
Hi,
We've been facing issues* w.r.t. watermarks not being supported per key,
which led us to:
Either (a) run the job in Processing time for a KeyedStream -> compromising
on use cases which revolve around catching time-based patterns
or (b) run the job in Event time for multiple data streams (one data st
In addition to making the addition of patterns dynamic, are there any
updates on FLIP-20?
https://cwiki.apache.org/confluence/display/FLINK/FLIP-20%3A+Integration+of+SQL+and+CEP
On Thu, Mar 8, 2018 at 12:23 AM, Vishal Santoshi
wrote:
> I see https://github.com/dawidwys/flink/tree/cep-dynamic-nfa is almos
Hi,
We're working on problems in the IoT domain and using Flink to address
certain use cases (dominantly CEP). There are multiple devices (of the same
type, for eg. a temperature sensor) which are continuously pushing events.
These (N) devices are distinct and independent data sources, mostly
residi
ny concurrent operators which all run the same
> CEP operator. Best would be to generate watermarks which work for all keys.
> 3. I think your assumption should be correct. I think monitoring the JM
> process via VisualVM should be quite good to see the memory requirements.
>
> Cheers
e. Especially if you
> only have a single TM with a limited number of slots, I think that you
> effectively queue up jobs. That should reduce the required amount of
> resources for each individual job.
>
> Cheers,
> Till
>
> On Mon, Feb 19, 2018 at 11:35 AM, Shailesh Jain <
> s
ormula for heap size, but isn't it
> easier just to try out different memory settings and see which works best
> for you?
>
> Thanks,
> Pawel
>
> 17 Feb 2018 12:26 "Shailesh Jain"
> wrote:
>
> Oops, hit send by mistake.
>
> In the confi
lly helpful.
Thanks,
Shailesh
On Sat, Feb 17, 2018 at 5:53 PM, Shailesh Jain
wrote:
> Hi,
>
> I have a Flink job with almost 300 operators, and every time I try to
> submit the job, the cluster crashes with an OutOfMemory exception.
>
> I have 1 job manager and 1 tas
Hi,
I have a Flink job with almost 300 operators, and every time I try to
submit the job, the cluster crashes with an OutOfMemory exception.
I have 1 job manager and 1 task manager with 2 GB heap space allocated to
both.
In the configuration section of the documentation
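The memory-related knobs being discussed live in flink-conf.yaml. A hypothetical sketch using Flink 1.4-era keys; the values are illustrative, not recommendations from this thread:

```
# Hypothetical flink-conf.yaml fragment (Flink 1.4-era keys; values illustrative)
jobmanager.heap.mb: 2048
taskmanager.heap.mb: 4096
taskmanager.numberOfTaskSlots: 4
```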
true, but there
> are significant differences that prevent using Flink windowing for CEP.
>
> The above also implies that using triggers for early firing is not
> supported and is far from trivial to implement.
>
> Thanks,
> Kostas
>
> > On Dec 19, 2017, at 5:27 PM,
Hi,
Similar to the way it is exposed in Windows operator, is it possible to use
Triggers inside the Pattern Operator to fire partially matched patterns
(when certain events are very late and we want some level of controlled
early evaluation)?
I assume that Windows are used internally to implement
If you have python available, a simple script can help you there.
uploadJar.py:
> import requests  # you might need to 'pip install requests' from the command line
>
> uploadUrl = 'http://localhost:8081/jars/upload'  # replace localhost with your JobManager URL
> jarName = '/path/to/jar/file.jar'
>
> # One way to finish the upload: POST the jar as multipart form data under
> # the 'jarfile' field, which is what Flink's REST upload endpoint expects.
> with open(jarName, 'rb') as f:
>     response = requests.post(uploadUrl, files={'jarfile': f})
> print(response.json())
s/flink/flink-docs-release-1.3/
> dev/stream/side_output.html
>
>
> Am 12/5/17 um 11:58 AM schrieb Shailesh Jain:
>
> Hi,
>>
>> Is it possible to share state across operators in Flink?
>>
> I have a CoFlatMap operator which maintains a ListState and returns
Missed one point - I'm using Managed Operator state (and not Keyed state -
as my data streams are not keyed).
On Tue, Dec 5, 2017 at 4:28 PM, Shailesh Jain
wrote:
> Hi,
>
> Is it possible to share state across operators in Flink?
>
> I have CoFlatMap operator which maint
Hi,
Is it possible to share state across operators in Flink?
I have a CoFlatMap operator which maintains a ListState and returns a
DataStream. And downstream there is a KafkaSink operator for the same
DataStream which needs to access the ListState.
Thanks,
Shailesh
there always would be a potential
> bottleneck of single source/filtering operations. With keyBy you could have
> multiple source operators and keyBy would ensure that events from the same
> device are processed always by one task/machine.
>
> Piotrek
>
> On 21 Nov 2017, at 07:
Jain
> >>> wrote:
> >>>
> >>> 3. Have attached the logs and exception raised (15min - configured akka
> >>> timeout) after submitting the job.
> >>>
> >>> Thanks,
> >>> Shailesh
> >>>
> >>>
Bump.
On Wed, Nov 15, 2017 at 12:34 AM, Shailesh Jain wrote:
> 1. Single data source because I have one kafka topic where all events get
> published. But I am creating multiple data streams by applying a series of
> filter operations on the single input stream, to generate device
Isn’t this a blob server issue?
>
> Piotrek
>
> On 14 Nov 2017, at 11:35, Shailesh Jain
> wrote:
>
> 1. Okay, I understand. My code is similar to what you demonstrated. I have
> attached a snap of my job plan visualization.
>
> 3. Have attached the logs and exce
rce-groups
> Just set it on the sources.
>
> 3. Can you show the logs from job manager and task manager?
>
> 4. As long as you have enough heap memory to run your application/tasks
> there is no upper limit for number of task slots.
>
> Piotrek
>
> On 14 Nov 2017, at 07:26,
bs which can be deployed
on the same task manager across slots.
Thanks,
Shailesh
On Mon, Nov 13, 2017 at 7:21 PM, Piotr Nowojski
wrote:
> Sure, let us know if you have other questions or encounter some issues.
>
> Thanks, Piotrek
>
>
> On 13 Nov 2017, at 14:49, Shailesh Jain
>
te and modify them to suite your
> needs.
>
> I would start and try out from a). If it work for your cluster/scale then
> that’s fine. If not try b) (would share most of the code with a), and as a
> last resort try c).
>
> Kostas, would you like to add something?
>
>
tions 1, 7, …
…
- source 5, could get partitions 5, 11, ...
Piotrek
On 9 Nov 2017, at 10:18, Shailesh Jain wrote:
Hi,
I'm trying to understand the runtime aspect of Flink when dealing with
multiple data streams and multiple operators per data stream.
Use case: N data streams in a single flink
Hope that helps.
>
> Best,
> Xingcan
>
> On Wed, Nov 8, 2017 at 11:48 PM, Shailesh Jain <
> shailesh.j...@stellapps.com> wrote:
>
>> Hi,
>>
>> I'm working on implementing a use case wherein different physical devices
>> are sending events, and due
Hi,
I'm trying to understand the runtime aspect of Flink when dealing with
multiple data streams and multiple operators per data stream.
Use case: N data streams in a single flink job (each data stream
representing 1 device - with different time latencies), and each of these
data streams gets spl
Hi,
I'm working on implementing a use case wherein different physical devices
are sending events, and due to network/power issues, there can be a delay
in receiving events at Flink source. One of the operators within the flink
job is the Pattern operator, and there are certain patterns which are t
Hi,
Apart from the Java/Scala API for the CEP library, is there any other way
to express patterns/rules which can be run on the Flink engine?
Are there any plans on adding a DSL/Rule expression language for CEP
anytime soon? If not, any pointers on how it can be achieved now would be
really helpful.