Thank you! That fixed the problem.
On Thu, Dec 22, 2022, 3:40 AM yuxia wrote:
> Yes, your understanding is correct. To handle this, you can define a
> watermark strategy that will detect idleness and mark an input as idle.
> Please refer to these two documents[1][2] for more details.
>
> [1]
> h
-
In Flink native Kubernetes mode, the JobManager and TaskManager pods disappear
if the streaming job fails. Is there any way to get the logs of a failed
streaming job?
The only solution I can think of is to mount the job logs onto NFS for
persistence, through a PV/PVC defined in a pod template (a sketch follows
below).
ENV:
Flink version: 1.15.0
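Roughly the pod template I have in mind. The PVC name (flink-logs-pvc) is a
placeholder for an NFS-backed claim, /opt/flink/log is the default log
directory of the official Flink image, and Flink requires the main container
to be named flink-main-container:

# pod-template.yaml, passed with -Dkubernetes.pod-template-file=pod-template.yaml
apiVersion: v1
kind: Pod
metadata:
  name: flink-pod-template
spec:
  containers:
    - name: flink-main-container    # Flink merges its own settings into this container
      volumeMounts:
        - name: flink-logs
          mountPath: /opt/flink/log # default log dir in the official image
  volumes:
    - name: flink-logs
      persistentVolumeClaim:
        claimName: flink-logs-pvc   # hypothetical NFS-backed PVC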
The Apache Flink community is very happy to announce the release of Apache
flink-connector-opensearch 1.0.0 for Flink 1.16.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available.
Hi Tim,
> Our job happens to be stateless, so we're okay this time, but if we had
used state (like joining two streams or something) we would end up losing
data to fix this bug. Is the only solution to just use the DataStream API?
In case you have a change in your SQL statement, then yes, you would lose the
existing state: the planner can generate a different operator graph for the
changed query, so the old state can no longer be mapped onto it.
Hi Marjan,
That's rather weird, because PyFlink uses the same implementation. Could
you file a Jira ticket? If not, let me know and I'll create one for you.
Best regards,
Martijn
On Thu, Dec 22, 2022 at 11:37 AM Marjan Jordanovski wrote:
> Hello,
>
> I am using a custom-made connector to create
Yes, your understanding is correct. To handle this, you can define a watermark
strategy that will detect idleness and mark an input as idle.
Please refer to these two documents[1][2] for more details.
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/kaf
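A minimal DataStream sketch of such a strategy; the record type, the
five-second out-of-orderness bound, and the one-minute idle timeout are
placeholders:

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class IdlenessExample {
    // String stands in for the source's real record type.
    static WatermarkStrategy<String> strategy() {
        return WatermarkStrategy
                .<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                // Declare this input idle after one minute without records,
                // so downstream watermarks can advance without waiting for it.
                .withIdleness(Duration.ofMinutes(1));
    }
    // Attached to a source, e.g.:
    // env.fromSource(source, IdlenessExample.strategy(), "source-with-idleness");
}

On the SQL side, the table.exec.source.idle-timeout option achieves the same
effect.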
Hello,
I am using a custom-made connector to create a Source table in this way:
create table Source (
    ts TIMESTAMP(3),
    instance STRING,
    sservice STRING,
    logdatetime STRING,
    threadid STRING,
    level STRING,
    log_line STRING
) with (
> How did you find that "90% of S3 paths of objects are missing"?
I downloaded the _metadata file of the last checkpoint, extracted all the
paths to objects in the shared directory with a regular expression, and then,
for each such object, tried to fetch its bucket information (creation date
and size).
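A rough sketch of how I do this, assuming the AWS SDK v2 for Java; the regex
and the shared-directory match are simplifications, and _metadata is read
byte-for-byte (ISO-8859-1 maps every byte 1:1) so the pattern can run over
its binary content:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.model.NoSuchKeyException;

public class CheckpointPathCheck {
    public static void main(String[] args) throws Exception {
        // Read the checkpoint's _metadata file and grep it for s3:// paths
        // that point into the checkpoint's shared/ directory.
        String metadata = new String(Files.readAllBytes(Path.of("_metadata")),
                StandardCharsets.ISO_8859_1);
        Pattern p = Pattern.compile("s3://([^/\\s\"]+)/([^\\s\"']*shared/[^\\s\"']+)");
        Matcher m = p.matcher(metadata);
        try (S3Client s3 = S3Client.create()) {
            while (m.find()) {
                String bucket = m.group(1);
                String key = m.group(2);
                try {
                    // HEAD each object to confirm it still exists and to read
                    // its size and last-modified timestamp.
                    HeadObjectResponse head = s3.headObject(HeadObjectRequest.builder()
                            .bucket(bucket).key(key).build());
                    System.out.printf("OK      %s (%d bytes, %s)%n",
                            key, head.contentLength(), head.lastModified());
                } catch (NoSuchKeyException e) {
                    System.out.printf("MISSING %s%n", key);
                }
            }
        }
    }
}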