Hi Biao,
I tried, it doesn't work. The cmd is:
flink run -m yarn-cluster -ys 4 -ynm EventCleaning_wl -yjm 2G -ytm 16G -yqu default -p 8
-Dstate.backend.latency-track.keyed-state-enabled=true
-c com.zkj.task.EventCleaningTask SourceDataCleaning-wl_0410.jar --sourceTopic dwd_audio_record
On Mon, Apr 8, 2024 at 9:48 AM Biao Geng wrote:
> Hi Lei,
> You can use the "-D" option in the command line to set configs for a
> specific job. E.g., `flink run-application -t
> yarn-application -Djobmanager.memory.process.size=1024m `.
> See
>
Hi, Perez.
Flink uses SPI to find the jdbc connector in the classloader, and at startup
the dir '${FLINK_ROOT}/lib' is added to the classpath. That is why the
exception is thrown in AWS. IMO there are two ways to solve this problem.
1. upload the connector jar to AWS to let the
Hi Biao,
I will check out running with flink run, but should this be run in the Flink
JobManager? Would that mean that the container for the Flink JobManager would
require both Python installed and a copy of the flink_client.py module? Are
there some examples of running flink run in a
Hi Sachin,
Assignment for tumbling windows is exclusive on the endTime; see
description here:
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/datastream/operators/windows/#tumbling-windows
So in your example it would be assigned to window (60, 120) as in reality
the windows
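For concreteness, here is a small Python sketch of that assignment arithmetic
(the helper function is illustrative, not a Flink API; Flink's
TimeWindow.getWindowStartWithOffset does the equivalent computation, with an
optional offset):

```python
# Tumbling-window assignment, assuming a 60-unit window size and zero offset;
# the window start is inclusive and the end is exclusive.
def tumbling_window_for(timestamp: int, size: int = 60) -> tuple:
    start = timestamp - (timestamp % size)
    return (start, start + size)  # element belongs to [start, start + size)

assert tumbling_window_for(59) == (0, 60)    # last instant of the first window
assert tumbling_window_for(60) == (60, 120)  # t = 60 rolls into the second window
```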
Hi Phil,
It should be totally ok to use `python -m flink_client.job`. It just seems
to me that the flink cli is being used more often.
And yes, you also need to add the sql connector jar to the flink_client
container. After putting the jar in your client container, add code
like
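A minimal sketch of what that code might look like, assuming a recent PyFlink
(1.16+); the jar name and path inside the container are placeholders for your
actual connector jar:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
# Ship the Kafka SQL connector with the job so the cluster-side classloader
# can discover the 'kafka' connector at runtime.
t_env.get_config().set(
    "pipeline.jars",
    "file:///opt/flink_client/flink-sql-connector-kafka.jar",
)
```

Jars listed in pipeline.jars are uploaded together with the job when it is
submitted from the client container.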
Hi,
Let's say I have defined 1 minute TumblingEventTimeWindows.
So it will create windows as:
(0, 60), (60, 120),
Now let's say I have an event at time t = 60.
In which window would this get aggregated?
The first, the second, or both?
Say I want this to get aggregated only in the second window, how
Hi Biao,
1. I have a Flink client container like this:
# Flink client
flink_client:
  container_name: flink_client
  image: flink-client:local
  build:
    context: .
    dockerfile: flink_client/Dockerfile
  networks:
    - standard
  depends_on:
    - jobmanager
    - Kafka
The flink_client/Dockerfile has this bash file
Hi Phil,
Your code looks good. I mean: how do you run the Python script? Maybe you
are using the flink CLI, i.e. running commands like ` flink run -t .. -py job.py -j
/path/to/flink-sql-kafka-connector.jar`. If that's the case, the `-j
/path/to/flink-sql-kafka-connector.jar` is necessary so that in client
Hi Biao,
For submitting the job, I run t_env.execute_sql.
Shouldn’t that be sufficient for submitting the job using the Table API with
PyFlink? Isn’t that the recommended way for submitting and running PyFlink jobs
on a running Flink cluster? The Flink cluster runs without issues, but there is
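In case it helps to compare, a minimal, self-contained sketch of an
execute_sql-based submission (using datagen/print connectors instead of your
real tables; the table names and schema are illustrative only):

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
t_env.execute_sql(
    "CREATE TABLE src (x BIGINT) WITH ('connector' = 'datagen', 'rows-per-second' = '1')"
)
t_env.execute_sql(
    "CREATE TABLE snk (x BIGINT) WITH ('connector' = 'print')"
)
# The INSERT statement is what actually submits a job; the CREATE TABLE
# statements only register catalog metadata.
result = t_env.execute_sql("INSERT INTO snk SELECT x FROM src")
# result.wait()  # optionally block the client until the job terminates
```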
Hi all,
I am using AWS Managed Flink 1.18, where I am getting this error when trying to
submit my job:
```
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a
connector using option: 'connector'='jdbc'
at
```
Hi, Wang.
Could you provide more details for this bug, such as a minimal reproducible test
case, pom dependencies, etc.?
Furthermore, could you try packaging the dependency "commons-text" with
version "1.10.0" manually to check
if it works? If you can work around this bug that way, I