Roman, thank you for your attention.
It looks like you are absolutely right. Thank you very much for helping.
Before submitting a job I do the following steps:
1. ./bin/start-cluster.sh
2. ./bin/taskmanager.sh start
And in my code there is this line:
env.setStateBackend(new RocksDBStateBackend("file:///
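(As an aside, the same backend can also be chosen cluster-wide in the Flink configuration file instead of in code; a minimal sketch, where the checkpoint directory path is an assumption to be replaced with your own:)

```yaml
# Equivalent cluster-wide configuration (flink-conf.yaml).
# The directory below is an example path, not from the thread.
state.backend: rocksdb
state.checkpoints.dir: file:///tmp/flink-checkpoints
```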
Hey, Roman
I use the same key every time.
And I get the correct value from the state every time the processElement()
method executes.
But then I stop the job and submit it again.
And on the first execution, processElement() gets null from the state store.
The key hasn't changed.
So, I'm confused.
Thanks
Are you starting the job from a savepoint [1] when submitting it again?
If not, it is considered a new job and will not pick up the old state.
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/deployment/cli.html#starting-a-job-from-a-savepoint
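(For illustration, a sketch of the stop/resume cycle with the Flink CLI; the savepoint directory and the placeholders in angle brackets are assumptions, not values from this thread:)

```shell
# Stop the job and take a savepoint (job ID comes from `./bin/flink list`;
# the savepoint directory here is an example)
./bin/flink stop --savepointPath /tmp/flink-savepoints <jobId>

# Resubmit the job, restoring state from that savepoint
./bin/flink run -s /tmp/flink-savepoints/<savepoint-dir> <jarFile>
```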
Regards,
Roman
On Fri, Mar 12, 2021 at 1
I have the following piece of configuration in flink.yaml:

Key                           Value
high-availability             zookeeper
high-availability.storageDir  file:///home/flink/flink-ha-data
high-avai
Hi Yuri,
The state that you access with getRuntimeContext().getState(...) is
scoped to the key (so for every new key this state will be null).
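(To make that scoping concrete, here is a toy plain-Java model of keyed state, with no Flink dependency; the class and method names are invented for illustration. Each key owns an independent slot, and a key that has never been written reads back null, which is what ValueState.value() returns for a key seen for the first time:)

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of Flink's keyed ValueState: one independent slot per key.
// Reading a key that was never written returns null, mirroring what
// ValueState.value() does for a previously unseen key.
class KeyedValueState<K, V> {
    private final Map<K, V> slots = new HashMap<>();
    private K currentKey;

    // Flink sets the current key automatically for each incoming record
    void setCurrentKey(K key) { currentKey = key; }

    V value() { return slots.get(currentKey); }

    void update(V v) { slots.put(currentKey, v); }
}
```

So after restarting without a savepoint, the backing store is empty and every key behaves like a brand-new key.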
What key do you use?
Regards,
Roman
On Fri, Mar 12, 2021 at 7:22 AM Maminspapin wrote: