Hi LakeShen,
Sorry for the late response.
For the first question, literally, the stop command should be used if one
means to stop the job instead of canceling it.
For the second one, since FLIP-45 is still under discussion [1] [2]
(although a little bit stalled due to priority), we still don't s
Hi,
How can I evaluate one alarm strategy per device, or one alarm strategy per
type of IoT device?
My approach is:
1. Use ListState to store the device state data for the calculation, as sketched below.
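For illustration, a minimal sketch of that approach (SensorReading, Alarm and
the threshold are made-up placeholders): key the stream by device id and buffer
the readings in ListState inside a KeyedProcessFunction.

    import org.apache.flink.api.common.state.ListState;
    import org.apache.flink.api.common.state.ListStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // SensorReading and Alarm are hypothetical POJOs; the rule is only an example.
    public class PerDeviceAlarmFunction
            extends KeyedProcessFunction<String, SensorReading, Alarm> {

        private transient ListState<SensorReading> readings;

        @Override
        public void open(Configuration parameters) {
            readings = getRuntimeContext().getListState(
                    new ListStateDescriptor<>("readings", SensorReading.class));
        }

        @Override
        public void processElement(SensorReading value, Context ctx, Collector<Alarm> out)
                throws Exception {
            // Buffer the reading for this device (= the key) and re-evaluate the rule.
            readings.add(value);
            double sum = 0;
            int count = 0;
            for (SensorReading r : readings.get()) {
                sum += r.getValue();
                count++;
            }
            // Example rule: alarm when the running average exceeds a threshold.
            if (count > 0 && sum / count > 100.0) {
                out.collect(new Alarm(value.getDeviceId(), sum / count));
                readings.clear();
            }
        }
    }

Keying the stream by device id gives one ListState instance per device; keying by
device type instead gives one alarm evaluation per type of device.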
Hi Vitaliy,
Do you mean you are modifying the code of ClusterSpecification? I believe
this is an internal class and is not meant to be modified by users.
Changing the internal code directly might lead to internal inconsistency
and unpredictable problems. If you want to modify JM/TM memory and slot
Hi Vitaliy,
After FLIP-49, ClusterSpecification.taskManagerMemoryMB is no longer
necessary. It can be completely replaced by
`taskmanager.memory.process.size`. It is kept merely for legacy reasons.
I'm actually thinking about removing ClusterSpecification, maybe after
finishing FLIP-116 [1], whi
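As an illustration (the values below are made-up examples, not recommendations),
when submitting programmatically the memory can be put directly into the Flink
Configuration instead of going through taskManagerMemoryMB:

    import org.apache.flink.configuration.Configuration;

    Configuration flinkConfig = new Configuration();
    // Total TM process memory; replaces ClusterSpecification.taskManagerMemoryMB.
    flinkConfig.setString("taskmanager.memory.process.size", "1728m");
    flinkConfig.setString("taskmanager.numberOfTaskSlots", "2");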
Hi,
in my experience, there are several possible reasons for a checkpoint to fail:
1. If you use RocksDB as the state backend, insufficient disk space can cause it,
because files are saved on the local disk, and you may see an exception.
2. The sink can't be written to, so not all parallel subtasks can complete,
Hi,
I create a job with the following parameters:
org.apache.flink.configuration.Configuration{
yarn.containers.vcores=2
yarn.appmaster.vcores=1
}
ClusterSpecification{
taskManagerMemoryMB=1024
slotsPerTaskManager=1
}
After I launch the job programmatically, I have:
yarn node -list -showDetails
Configure
Hi,
What is ClusterSpecificationBuilder.taskManagerMemoryMB for in Flink 1.10?
Its only usage I see is in YarnClusterDescriptor.validateClusterResources,
and I do not get the meaning of it.
How is it different from taskmanager.memory.process.size?
And what's the point of having it, if it's not used
Hi Forideal,
which Flink version are you using? If you are using 1.9 or older, have a look
at the memory setup [1] and config docs [2]. If you are using 1.10, it
should be enough to increase taskmanager.network.memory.fraction and
taskmanager.network.memory.max. You shouldn't use taskmanager.
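For example, in flink-conf.yaml (the values are placeholders only, tune them to
your workload):

    taskmanager.network.memory.fraction: 0.2
    taskmanager.network.memory.max: 2gb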
Hi Manas,
both are valid options.
I'd probably add a processing-time timeout event in a process function,
which will only trigger after no event has been received for 1 minute. In
this way, you don't need to know which devices there are and just enqueue
one timer per key (= device id).
After th
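A minimal sketch of that idea (Event and its offlineMarker(...) helper are
hypothetical; the 1-minute timeout matches the example above):

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    public class DeviceTimeoutFunction extends KeyedProcessFunction<String, Event, Event> {

        private static final long TIMEOUT_MS = 60_000L; // 1 minute of inactivity

        private transient ValueState<Long> lastTimer;

        @Override
        public void open(Configuration parameters) {
            lastTimer = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("lastTimer", Long.class));
        }

        @Override
        public void processElement(Event value, Context ctx, Collector<Event> out)
                throws Exception {
            out.collect(value); // forward the event itself

            // Replace the previous timeout timer with one that fires 1 minute from now.
            Long previous = lastTimer.value();
            if (previous != null) {
                ctx.timerService().deleteProcessingTimeTimer(previous);
            }
            long timeout = ctx.timerService().currentProcessingTime() + TIMEOUT_MS;
            ctx.timerService().registerProcessingTimeTimer(timeout);
            lastTimer.update(timeout);
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out)
                throws Exception {
            // No event for this device for 1 minute: emit a "device offline" marker.
            out.collect(Event.offlineMarker(ctx.getCurrentKey()));
        }
    }

It would be applied with stream.keyBy(Event::getDeviceId).process(new DeviceTimeoutFunction()).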
Dear Flink community!
In our company we have implemented a system that realizes the dynamic
business rules pattern. We spoke about it at Flink Forward 2019:
https://www.youtube.com/watch?v=CyrQ5B0exqU.
The system is a great success and we would like to improve it. Let me
shortly mention what the
Hey all,
I have another question about the State Processor API. I can't seem to find
a way to create a KeyedBroadcastStateBootstrapFunction operator. The two
options currently available to bootstrap a savepoint with state are
KeyedStateBootstrapFunction and BroadcastStateBootstrapFunction. Because
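For reference, a sketch of how the existing keyed option is wired up in 1.9/1.10
(Account, AccountBootstrapper, the uid and the paths are all made up; see the
State Processor API docs for the full picture):

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.runtime.state.memory.MemoryStateBackend;
    import org.apache.flink.state.api.BootstrapTransformation;
    import org.apache.flink.state.api.OperatorTransformation;
    import org.apache.flink.state.api.Savepoint;
    import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;

    public class AccountBootstrapper extends KeyedStateBootstrapFunction<Integer, Account> {

        private transient ValueState<Double> total;

        @Override
        public void open(Configuration parameters) {
            total = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("total", Double.class));
        }

        @Override
        public void processElement(Account value, Context ctx) throws Exception {
            total.update(value.getAmount());
        }
    }

    // Wiring it into a new savepoint, e.g. inside a main method:
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<Account> accounts = env.fromCollection(existingAccounts);

    BootstrapTransformation<Account> transformation = OperatorTransformation
        .bootstrapWith(accounts)
        .keyBy(acc -> acc.getId())
        .transform(new AccountBootstrapper());

    Savepoint.create(new MemoryStateBackend(), 128)
        .withOperator("account-uid", transformation)
        .write("file:///tmp/new-savepoint");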
When YARN kills a job because of memory, it usually means that the job has
used more memory than it requested. Since Flink's memory model consists not
only of Java on-heap memory but also of some RocksDB off-heap memory,
it's usually harder to stay within the boundaries. The general shortcoming
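On 1.10, for example, the usual knobs are the total process size and, when RocksDB
is the main consumer, the managed-memory fraction that RocksDB draws from (the
values below are placeholders only):

    taskmanager.memory.process.size: 4g
    taskmanager.memory.managed.fraction: 0.5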
We have a topology and the checkpoints fail to complete a *lot* of the time.
Typically it is just one subtask that fails.
We have a parallelism of 2 on this topology at present, and the other
subtask will complete in 3ms, though the end-to-end duration on the rare
times when the checkpointing compl
Hi Steve,
for some reason, it seems as if the Java compiler is not generating the
bridge method [1].
Could you double-check that the Java version of your build process and your
cluster match?
Could you run javap on your generated class file and report back?
[1] https://docs.oracle.com/javase/tu
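For example (path and class name are placeholders):

    javap -p -v target/classes/com/example/MyFunction.class

In the verbose output, a compiler-generated bridge method shows up with the
ACC_BRIDGE and ACC_SYNTHETIC flags.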
Hi Reo,
if you want to reduce downtime, the usual approach is the following:
- Let your job run in the 1.9 cluster for a while.
- Start a job in 1.10 where you migrate the state, but dump the output to /dev/null.
- As soon as the 1.10 job catches up, stop the old job and start writing output
into the actual storage.
I
Thank you Roman!
That is very helpful! Thank you!
BR,
Reo
Khachatryan Roman wrote on Friday, March 20, 2020 at 11:13 PM:
> Hi Reo,
>
> Please find the answers to your questions below.
>
> > 1, what is the usage of this tmp files?
> These files are used by Flink internally for things like caching state
> locally,
Hi Jingsong, Dawid,
I created https://issues.apache.org/jira/browse/FLINK-16725 to track this
issue. We can continue discussion there.
Best,
Jark
On Thu, 27 Feb 2020 at 10:32, Jingsong Li wrote:
> Hi Jark,
>
> The matrix I see is SQL cast. If we need to bring another conversion matrix
> that is d
Hi Xintong,
Thank you for your reply.
Do you mean you have 700 slots per TM or in total? How many TMs do you have?
And how many slots do you have per TM?
I have a Flink cluster with 35 TMs, and each TM has 16 slots.
Cluster info: total TMs = 35, total slots = 560
Job info: requested slots = 400
It is after
Hi,
I have a scenario where I have an input event stream from various IoT
devices. Every message on this stream can be of some eventType and has an
eventTimestamp. Downstream, some business logic is implemented on this
based on event time.
In case a device goes offline, what's the best way to indic
Hi, 吴志勇.
Please use the *user-zh* mailing list (in CC) to get support in Chinese.
Thanks!
Marta
On Mon, Mar 23, 2020 at 8:35 AM 吴志勇 <1154365...@qq.com> wrote:
> As the subject says:
> I output JSON-formatted data to Kafka
> {"id":5,"price":40,"timestamp":1584942626828,"type":"math"}
> {"id":2,"price":70,"timestamp":15849426296
Hello,
I output JSON-formatted data to Kafka:
{"id":5,"price":40,"timestamp":1584942626828,"type":"math"}
{"id":2,"price":70,"timestamp":1584942629638,"type":"math"}
{"id":2,"price":70,"timestamp":1584942634951,"type":"math"}
The timestamp is a 13-digit millisecond value; how should it be handled in SQL?