Hi Xiaolong,

In newer versions such as Flink 1.17, the SQL Gateway supports job
management, and users can stop/start jobs with savepoints. You can start a
job from a given savepoint path as in [1] and stop a job with or without a
savepoint as in [2].

[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclient/#start-a-sql-job-from-a-savepoint
[2]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclient/#terminating-a-job
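
As a rough sketch of the flow described in [1] and [2] (the savepoint path
and job id below are placeholders you would fill in yourself):

    -- start a job from an existing savepoint
    SET 'execution.savepoint.path' = '/path/to/savepoint';

    -- then submit the streaming INSERT statement as usual; later:
    SHOW JOBS;
    STOP JOB '<job-id>' WITH SAVEPOINT;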

Best,
Shammon FY


On Tue, Jul 18, 2023 at 9:56 AM Xiaolong Wang <xiaolong.w...@smartnews.com>
wrote:

> Hi, Shammon,
>
> I know that the job manager can auto-recover via HA configurations, but
> what if I want to upgrade the running Flink SQL submitted by the Flink SQL
> gateway ?
>
> In normal cases, I can use the
>
>> ./bin/flink run-application -s ${SAVEPOINT_PATH} local://${FLINK_JOB_JAR}
>
> to resume a Flink job from a savepoint/checkpoint. The question is, how to
> do so with Flink sql gateway ?  What should I fill in the ${FLINK_JOB_JAR}
> field ?
>
> Thanks in advance.
>
> On Mon, Jul 17, 2023 at 9:14 AM Shammon FY <zjur...@gmail.com> wrote:
>
>> Hi Xiaolong,
>>
>> When a streaming job is submitted via the SQL Gateway, its lifecycle is
>> no longer tied to the SQL Gateway.
>>
>> Returning to the issue of job recovery: if your job cluster is
>> configured with HA, the JobManager will recover running streaming jobs
>> from their checkpoints after a failover occurs.
>>
>> Best,
>> Shammon FY
>>
>>
>> On Thu, Jul 13, 2023 at 10:22 AM Xiaolong Wang <
>> xiaolong.w...@smartnews.com> wrote:
>>
>>> Hi,
>>>
>>> I'm currently working on providing a SQL gateway to submit both
>>> streaming and batch queries.
>>>
>>> My question is, if a streaming SQL is submitted and then the jobmanager
>>> crashes, is it possible to resume the streaming SQL from the latest
>>> checkpoint with the SQL gateway ?
>>>
>>>
>>>
>>