>>> it.
>>>>
>>>>
>>>>
>>>> On 15/12/2021 12:08, V N, Suchithra (Nokia - IN/Bangalore) wrote:
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> Could you please tell when we can expect Fli
using it already there, then redeploy
>>> the platform with Helm:
>>> env:
>>>   - name: JAVA_TOOL_OPTIONS
>>>     value: -Dlog4j2.formatMsgNoLookups=true
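For setups not managed via Helm, the same flag can also be passed through flink-conf.yaml; a sketch using the standard `env.java.opts` option (verify against your Flink version's docs):

```yaml
# Passes the log4j2 mitigation flag to both JobManager and TaskManager JVMs
env.java.opts: "-Dlog4j2.formatMsgNoLookups=true"
```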
>>>
>>>
>>> For any questions, please contact us via our support porta
Folks, what about the Ververica platform? Is there any mitigation for it?
On Fri, Dec 10, 2021 at 3:32 PM Chesnay Schepler wrote:
> I would recommend modifying your log4j configurations to set
> log4j2.formatMsgNoLookups to true.
>
> As far as I can tell this is equivalent to upgrading
> All types of state also have a method clear() that clears the state for
> the currently active key, i.e. the key of the input element.
> Could we call the `clear()` method directly to remove the state under the
> specified key?
>
> Best,
> JING ZHANG
>
>
> narasimha
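To illustrate the semantics being asked about: clear() removes state only for the currently active key, not the whole state. A plain-Java sketch of that behavior, using a stand-in class (KeyedValueState below is illustrative, not a Flink API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for Flink's keyed ValueState, to show what clear() does:
// it removes only the entry for the currently active key.
class KeyedValueState<K, V> {
    private final Map<K, V> store = new HashMap<>();
    private K currentKey;

    void setCurrentKey(K key) { currentKey = key; }
    V value() { return store.get(currentKey); }
    void update(V v) { store.put(currentKey, v); }
    void clear() { store.remove(currentKey); }  // only the active key is removed
}

public class ClearDemo {
    public static void main(String[] args) {
        KeyedValueState<String, Integer> state = new KeyedValueState<>();
        state.setCurrentKey("txn-1");
        state.update(10);
        state.setCurrentKey("txn-2");
        state.update(20);

        state.setCurrentKey("txn-1");
        state.clear();                        // removes state for txn-1 only
        System.out.println(state.value());    // prints null
        state.setCurrentKey("txn-2");
        System.out.println(state.value());    // prints 20
    }
}
```

In a real KeyedProcessFunction, calling state.clear() inside processElement has the same effect for the key of the current input element.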
Hi,
I have a use case where the keyed state is managed (created, reset) by
dynamically changing rules. A new action "delete" has to be added.
Delete should completely remove the keyed state, the same way StateTTL does
after the expiration time.
Use StateTTL?
Initially used StateTTL, but it ended up in
Use the metrics below, respectively:
- flink_taskmanager_job_task_operator_KafkaConsumer_bytes_consumed_rate -
consumer rate
- flink_taskmanager_job_task_operator_KafkaConsumer_records_lag_max -
consumer lag
- flink_taskmanager_job_task_operator_KafkaConsumer_commit_latency_max -
commit latency
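As an aside, these metrics can be exposed to Prometheus with Flink's standard reporter; a flink-conf.yaml sketch (the port is illustrative):

```yaml
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.port: 9249
```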
unsure if reactive
Hi,
Trying to understand how the JobManager kills a TaskManager that didn't
respond to heartbeats after a certain time.
For example:
If the network connection between the JobManager and a TaskManager is lost
for some reason, the JobManager will bring up another TaskManager after the
heartbeat timeout.
In such a
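For reference, the timeout in question is configurable in flink-conf.yaml; a sketch showing the two standard heartbeat options with their default values (milliseconds):

```yaml
heartbeat.interval: 10000   # how often heartbeats are sent
heartbeat.timeout: 50000    # after this, the peer is considered dead
```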
savepoint and analyzing it using the State Processor API [1]?
>
> Best,
> Matthias
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/state_processor_api.html#state-processor-api
>
> On Wed, Feb 10, 2021 at 6:08 PM narasimha wrote:
>
>> It is
It is not solving the problem.
I could see the memory keep increasing, resulting in a lot of long GC pauses.
There could be a memory leak; I just want to know how to tell whether older
keys are still alive, even after the pattern has been satisfied or the time
range of the pattern has expired.
Can someone
e will be included in the next upcoming release 2.4 of the
> ververica platform. We plan to release it in the next few months.
>
> Best,
> Fabian
>
>
> On 5. Feb 2021, at 06:23, narasimha wrote:
>
> Thanks Yang for confirming.
>
> I did try putting in the config, al
tion[1],
> I am afraid that setting liveness check could not be supported in VVP.
>
> [1].
> https://docs.ververica.com/user_guide/application_operations/deployments/configure_kubernetes.html
>
> Best,
> Yang
>
> narasimha wrote on Fri, Feb 5, 2021 at 11:29 AM:
>
>
that the liveness and readiness
> could help is the long GC.
> During the GC period, the rpc port could not be accessed successfully.
> Network issues could also benefit from the liveness check.
>
>
> Best,
> Yang
>
> narasimha wrote on Fri, Feb 5, 2021 at 10:26 AM:
>
>> I
veness and the readiness probe is not very necessary
> for the Flink job. Since
> in most cases, the JobManager and TaskManager will exit before the rpc
> port becomes inaccessible.
>
> Best,
> Yang
>
>
> narasimha wrote on Fri, Feb 5, 2021 at 2:08 AM:
>
>>
>> Hi, I'm using
Hi, I'm using the ververica platform to host flink jobs.
Need help in setting up readiness, liveness probes to the taskmanager,
jobmanager pods.
I tried it locally by adding the probe details in the deployment.yml file,
but it didn't work.
Can someone help me with setting up the probes?
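For reference, on a plain Kubernetes Deployment (outside VVP, which as noted above may not support this) such probes typically look like the sketch below; port 6123 is Flink's default RPC port, and the timing values are illustrative:

```yaml
livenessProbe:
  tcpSocket:
    port: 6123          # JobManager RPC port
  initialDelaySeconds: 30
  periodSeconds: 60
readinessProbe:
  tcpSocket:
    port: 6123
  periodSeconds: 10
```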
emissions order is guaranteed.
Would popping them out and emitting them using a sliding window of 1 sec
solve this?
Thanks,
Narasimha
--
A.Narasimha Swamy
Hi,
I'm using Flink CEP, but couldn't find any examples of writing test cases
for streams with CEP.
Can someone help with how to write test cases for streams with CEP applied
to them?
--
A.Narasimha Swamy
Interesting use case.
Can you please elaborate on it?
On what criteria do you want to batch: time, count, or size?
On Thu, 14 Jan 2021 at 12:15 PM, sagar wrote:
> Hi Team,
>
> I am getting the following error while running the DataStream API in
> batch mode with a Kafka source.
> I am
Hi,
Context:
Built a fraud detection kind of app.
Business logic is all fine, but when put into production, the Kafka cluster
becomes unstable.
The topic to which it writes has approx. 80 events/sec; after running for a
few hours, the Kafka broker indexes get corrupted.
Topic config: single
Hi,
Facing issues with Kafka while running a job built with
1.11.2-scala-2.11 on Flink version 1.11.2-scala-2.12.
The Kafka connector for 1.11.2-scala-2.11 is getting packaged with the job.
The Kafka cluster was all good when writing to topics, but when someone reads
intermittently the
k-docs-stable/dev/stream/state/state.html#state-time-to-live-ttl
>
> [2]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/config.html#table-exec-state-ttl
>
>
>
> On Wed, Dec 23, 2020 at 11:57 PM narasimha wrote:
>
>> Hi,
>>
>> Below is
Hi,
Below is the use case.
Have a stream of transaction events; the success/failure of a transaction
can be determined from those events.
Partitioning the stream by transaction id and applying CEP to determine the
success/failure of a transaction.
Each transaction keyed stream is valid only until the
Hi,
How do I configure a Flink job to follow a certain time zone, instead of
the default (UTC)?
Is it possible in the first place?
Solutions present are for Table/SQL API.
--
A.Narasimha Swamy
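One JVM-level option (separate from the Table/SQL setting) is to change the default time zone of the process before the job runs, e.g. via -Duser.timezone in the JVM options or programmatically; a plain-Java sketch (Asia/Kolkata is an illustrative zone, and whether this is appropriate depends on the job):

```java
import java.util.TimeZone;

public class TimeZoneDemo {
    public static void main(String[] args) {
        // Sets the JVM-wide default time zone; affects any code that relies
        // on TimeZone.getDefault(), e.g. java.util.Date formatting.
        TimeZone.setDefault(TimeZone.getTimeZone("Asia/Kolkata"));
        System.out.println(TimeZone.getDefault().getID()); // prints Asia/Kolkata
    }
}
```

The equivalent for a cluster would be passing `-Duser.timezone=Asia/Kolkata` to the JobManager/TaskManager JVMs, with the caveat that event-time semantics in Flink are epoch-based and unaffected by this.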
Thanks for the information.
Are there any plans to implement this? It is supported in other Docker
images...
On Tue, 8 Dec 2020 at 9:36 PM, Fabian Paul
wrote:
> Hi Narasimha,
>
> I investigated your problem and it is caused by multiple issues. First vvp
> in
> general canno
Thanks Fabian for responding.
Flink image: registry.ververica.com/v2.2/flink:1.11.1-stream1-scala_2.12
There are no errors as such, but it is just considering the first job.
On Thu, Dec 3, 2020 at 5:34 PM Fabian Paul
wrote:
> Hi Narasimha,
>
> Nothing comes to my mind immedi
e/flink/types/parser/FieldParser.java#L287
>
>
> ------Original Mail --
> *Sender:*narasimha
> *Send Date:*Fri Dec 4 00:45:53 2020
> *Recipients:*user
> *Subject:*How to parse list values in csv file
>
>> Hi,
>>
>> Getting below erro
Hi,
Getting the below error when trying to read a csv file; one of the fields
is a list type.
Can someone help with fixing the issue?
jobmanager_1 | Caused by: java.lang.IllegalArgumentException: The type
'java.util.List' is not supported for the CSV input format.
jobmanager_1 | at
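Since the CSV input format has no parser for java.util.List, a common workaround is to declare that column as a plain String in the CSV schema and split it afterwards in a map step. A plain-Java sketch of the split (the `;` delimiter and the helper name are illustrative assumptions):

```java
import java.util.Arrays;
import java.util.List;

public class CsvListFieldDemo {
    // Hypothetical helper: turn a list-valued CSV column, read as a plain
    // String, into a List. The delimiter depends on how the file was written.
    static List<String> parseListField(String raw, String delimiter) {
        return Arrays.asList(raw.split(delimiter));
    }

    public static void main(String[] args) {
        // e.g. the column "a;b;c" read as a String, split in a map step
        System.out.println(parseListField("a;b;c", ";")); // prints [a, b, c]
    }
}
```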
Hi,
Using the ververica platform to deploy flink jobs, I found that it does not
support application deployment mode.
Just want to check if this is expected.
Below is a brief of how the main method has been composed.
class Job1 {
  public void execute() {
    StreamExecutionEnvironment env = ...
ically k8s), which could incur lower infra
cost IMO.
Flink will as well do the job, but it has its own merits for appropriate
use cases.
These are all my views, let's wait for what experts have to say.
*Thanks,*
*Narasimha*
On Tue, Nov 3, 2020 at 6:38 PM Thamidu Muthukumarana
wrote:
&
act, making it a fat jar and deploy it.
Steps:
1. Open main class run/debug configurations
2. Click on Include dependencies with Provided scope.
3. Apply
Thanks,
Narasimha
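For building the same fat jar outside the IDE, a minimal maven-shade-plugin sketch for the pom.xml (the version number is illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
    </execution>
  </executions>
</plugin>
```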
On Sun, Aug 30, 2020 at 11:40 PM Piper Piper wrote:
> Hi,
>
> Till now, I have only been using Flink binaries.
I was looking for testability and debugging practices for the Flink Table
API/SQL.
They are really difficult to find compared to the Streaming API.
Can someone please share their experiences with debugging and testability?
--
A.Narasimha Swamy
>
> https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/runtime/stream/sql/FunctionITCase.java#L700
>
> On 19.08.20 12:43, narasimha wrote:
> > Hi,
> >
> > I'm checking on how to do UT/IT of stream
Hi Till,
Yes, I have gone through the Flink testing documentation.
In the Table API/SQL, connectors can be abstracted in the query itself; I'm
trying to understand how such pipelines can be tested.
Looking for resources around it.
On Wed, Aug 19, 2020 at 5:15 PM Till Rohrmann wrote:
> Hi Narasi
Hi,
I'm checking on how to do UT/IT of a streaming job written using the Table
API/SQL.
I found
https://stackoverflow.com/questions/54900843/add-a-unit-test-for-flink-sql
to be useful.
Are there any other recommended libs/ways to do this?
TIA
--
A.Narasimha Swamy
Hi all,
Checking if anyone has deployed flink using a k8s operator.
If so, what has been the experience, and has it eased job updates?
Also, was there any comparison among other available operators, like
• Lyft
• Google Cloud
Thanks in advance; some insights into the above will save a lot
Thanks, Till.
Currently, the instance is getting a timeout error and terminating the
TaskManager.
Sure, will try native K8s.
On Thu, Aug 13, 2020 at 3:12 PM Till Rohrmann wrote:
> Hi Narasimha,
>
> if you are deploying the Flink cluster manually on K8s then there is
> no a
]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/monitoring/metrics.html
> [3]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/monitoring/application_profiling.html
>
> On Mon, Aug 10, 2020 at 1:06 PM narasimha wrote:
>
>> Hi,
>>
>>
Hi,
I'm new to the streaming world and checking on performance testing tools.
Are there any recommended performance testing tools for Flink?
--
A.Narasimha Swamy
own.
> Moreover, idle task managers would also be released after 30 seconds by default
> [1].
>
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#resourcemanager-taskmanager-timeout
>
> Best
> Yun Tang
>
>
> --
I'm trying out Flink Per-Job deployment using docker-compose.
Configurations:
version: "2.2"
services:
  jobmanager:
    build: ./
    image: flink_local:1.1
    ports:
      - "8081:8081"
    command: standalone-job --job-classname com.organization.BatchJob
    environment:
      - |