Hi All,
Voting on RC 5 for Flink 1.8.0 has started:
https://lists.apache.org/thread.html/36cd645bac32b4f73972f08118768b03121bb6254217202c11bc6fd5@%3Cdev.flink.apache.org%3E
Please check this out if you want to verify your applications against this new
Flink release.
Best,
Aljoscha
Hi
What is the situation of FlinkCEP and SQL?
Is it already possible to use SQL in CEP?
Are there any example cases where SQL is used in CEP?
BR Esa
Hi Esa,
CEP is available in Flink SQL. Please see the details here:
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/table/sql.html#pattern-recognition
Best,
Jincheng
Esa Heikkinen (TAU) wrote on Thu, Apr 4, 2019 at 4:44 PM:
> Hi
>
>
>
> What is the situation of FlinkCEP and SQL?
>
>
>
> Is it
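To make the linked docs concrete, here is a sketch of what the pattern-recognition syntax looks like. The table `UserActions` and its columns are hypothetical; the clause shapes follow the MATCH_RECOGNIZE documentation linked above:

```sql
-- Hypothetical: find a login followed by any number of browse events and
-- then a purchase, per user, ordered by the time attribute action_time
SELECT T.login_time, T.purchase_time
FROM UserActions
MATCH_RECOGNIZE (
  PARTITION BY user_id
  ORDER BY action_time
  MEASURES
    A.action_time AS login_time,
    C.action_time AS purchase_time
  ONE ROW PER MATCH
  AFTER MATCH SKIP PAST LAST ROW
  PATTERN (A B* C)
  DEFINE
    A AS A.action = 'login',
    B AS B.action = 'browse',
    C AS C.action = 'purchase'
) AS T
```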
Hi Experts,
When submitting a Flink program to Yarn, the application jar (a fat jar of
about 200 MB including the Flink dependencies) is uploaded to Yarn, which takes
a lot of time. I checked the code in CliFrontend and found that there is a
config item named “yarn.per-job-cluster.include-user-jar”,
> Should all the sources be combined into one big table before operations with
> SQL CEP?
Yes, you should combine them into one table/stream.
Regards,
Dian
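A sketch of that combination, with hypothetical table and column names: map each source to a common schema and UNION ALL them into a single table, which a pattern query can then consume:

```sql
-- Hypothetical sources Clicks(user_id, ts, url) and Payments(user_id, ts, amount)
SELECT user_id, ts, 'click' AS action FROM Clicks
UNION ALL
SELECT user_id, ts, 'payment' AS action FROM Payments
```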
> On Apr 4, 2019, at 7:11 PM, Esa Heikkinen (TAU) wrote:
>
> Hi
>
> Thank you for the information. How is this SQL CEP applicable to a situation
>
Hi
Thank you for the information. How is this SQL CEP applicable to a situation
where there are many sources with different types of events? Should all the
sources be combined into one big table before operations with SQL CEP?
BR Esa
From: jincheng sun
Sent: Thursday, April 4, 2019 1:05 PM
Hi All,
As you might have already seen there is an effort tracked in FLINK-12005
[1] to support event time scale for state with time-to-live (TTL) [2].
While thinking about design, we realised that there can be multiple options
for semantics of this feature, depending on use case. There is also
Hello,
I am studying the parallelism of tasks on DataStream. So, I have configured
Flink to execute on my machine (master node) and one virtual machine
(worker node). The master has 4 cores (taskmanager.numberOfTaskSlots: 4)
and the worker only 2 cores (taskmanager.numberOfTaskSlots: 2). I don't
Hi all,
I partition DataStream (say dsA) with parallelism 2 and get KeyedStream
(say ksA) with parallelism 2.
Depending on my keys in dsA, one partition remains empty in ksA.
For example, when my keys are 10 and 20 in dsA, both partitions in ksA
are full.
However, with keys 1000 and 1001,
Hi Yan,
We have met this problem too when using aliyun-pangu and have commented on
FLINK-8801, but there has been no response yet.
I think most file systems, including s3/s3a/s3n/azure/aliyun-oss etc., can
encounter this problem since they don't implement FileSystem#setTimes, but the
PR in FLINK-8801 think
Hi,
I am running into issues when trying to move from HDFS to S3 using Flink 1.6.
I am getting an exception from Hadoop code:
IOException("Resource " + sCopy +
" changed on src filesystem (expected " + resource.getTimestamp() +
", was " + sStat.getModificationTime());
Digging into this, I
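A minimal model of the check that produces this exception (a sketch, not the actual Hadoop code; names are illustrative): the YARN localizer compares the timestamp recorded at job submission with the modification time the source filesystem reports later, so a filesystem whose setTimes is a no-op can fail it.

```java
import java.io.IOException;

// Sketch of the YARN resource-localization timestamp check (illustrative,
// not Hadoop's actual implementation).
public class TimestampCheckSketch {
    // expected: timestamp recorded when the resource was submitted
    // actual:   modification time reported by the source filesystem later
    static void verifyResource(long expected, long actual) throws IOException {
        if (actual != expected) {
            throw new IOException("Resource changed on src filesystem (expected "
                    + expected + ", was " + actual + ")");
        }
    }

    public static void main(String[] args) throws Exception {
        verifyResource(1000L, 1000L); // a filesystem that honors setTimes passes
        try {
            // a filesystem that ignores setTimes reports its own time
            verifyResource(1000L, 2000L);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```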
My 2c:
Timestamp stored with the state value: Event timestamp
Timestamp used to check expiration: Last emitted watermark
That follows the event-time processing model used elsewhere in Flink. E.g.
events are segregated into windows based on their event time, but the
windows do not fire until the
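Written down as a predicate, that proposal is simply the following (a sketch of the semantics under discussion, not Flink API; names are illustrative):

```java
// Sketch of the proposed event-time TTL semantics: the event timestamp is
// stored with the value, and the entry counts as expired once the last
// emitted watermark has passed timestamp + ttl. Illustrative, not Flink API.
public class EventTimeTtlSketch {
    static boolean isExpired(long storedEventTimestamp, long ttlMillis, long lastWatermark) {
        return storedEventTimestamp + ttlMillis <= lastWatermark;
    }

    public static void main(String[] args) {
        long ts = 1_000L, ttl = 500L;
        System.out.println(isExpired(ts, ttl, 1_400L)); // watermark not yet past ts + ttl
        System.out.println(isExpired(ts, ttl, 1_500L)); // watermark reached ts + ttl
    }
}
```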
Thanks a lot for the replies.
Below I paste my code:
DataStreamSource source = env.addSource(new MySource());
KeyedStream keyedStream =
    DataStreamUtils.reinterpretAsKeyedStream(source, new DummyKeySelector(),
        TypeInformation.of(Integer.class));
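One likely explanation is how Flink assigns keys to subtasks: a key goes to key group `murmurHash(key.hashCode()) % maxParallelism`, and that key group to subtask `keyGroup * parallelism / maxParallelism`. The sketch below substitutes plain `hashCode()` for Flink's murmur hash, so the concrete indices differ from a real job, but it shows how two distinct keys can land on the same subtask and leave the other partition empty:

```java
// Sketch of Flink's key-to-subtask assignment, with key.hashCode() standing
// in for the murmur hash Flink actually applies (so real indices differ).
public class KeyGroupSketch {
    static int operatorIndex(Object key, int maxParallelism, int parallelism) {
        int keyGroup = Math.floorMod(key.hashCode(), maxParallelism);
        return keyGroup * parallelism / maxParallelism;
    }

    public static void main(String[] args) {
        int maxPar = 128; // Flink's default max parallelism for small jobs
        int par = 2;
        // Neighboring keys fall into adjacent key groups, which can both map
        // to the same subtask, so the other subtask gets no data.
        System.out.println(operatorIndex(1000, maxPar, par));
        System.out.println(operatorIndex(1001, maxPar, par));
    }
}
```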
flink on yarn ha
Flink 1.7.2 with Hadoop 2.8.5
a 5-machine Flink cluster, CPU … 48G
Flink configuration:
jobmanager.heap.size: 2048m
taskmanager.heap.size: 2048m
taskmanager.numberOfTaskSlots: 4
run a job on flink
Hi all,
First, my understanding: with keyed(..).flatmap(mapFunc()), the data processed
by each concrete mapFunc instance should all have the same key. I am not sure
whether this understanding is correct.
My concrete situation is this:
I run validation over the data. First I group by device id (uuid), and then
perform the data validation separately for each group.
Part of the code:
rowData.filter(legalData _)
  .map(data => BehaviorComVO(getText(data, "id"), getText(data, "uuid"),
    getText(data, "session_id"), getText(data,
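For what it's worth, that understanding matches keyBy's contract: all records with the same key (here the uuid) are routed to the same parallel instance of the downstream function, so each instance's validation only sees its own keys. A rough model of the hash partitioning (illustrative names, not Flink API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Rough model of keyBy: records are routed to subtasks by key hash, so all
// records sharing a key end up in the same partition. Illustrative only.
public class KeyBySketch {
    static Map<Integer, List<String>> partitionByKey(List<String> uuids, int parallelism) {
        Map<Integer, List<String>> partitions = new HashMap<>();
        for (String uuid : uuids) {
            int subtask = Math.floorMod(uuid.hashCode(), parallelism);
            partitions.computeIfAbsent(subtask, k -> new ArrayList<>()).add(uuid);
        }
        return partitions;
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> parts =
                partitionByKey(Arrays.asList("dev-1", "dev-2", "dev-1"), 2);
        // Both "dev-1" records are guaranteed to sit in the same partition.
        System.out.println(parts);
    }
}
```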