[
https://issues.apache.org/jira/browse/FLINK-18214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129387#comment-17129387
]
Chesnay Schepler commented on FLINK-18214:
------------------------------------------
Then the option is rather misleading; the description explicitly says {{"The
job store cache size in bytes which is used to keep completed jobs in
memory."}} and the current error message just reinforces this wrong
interpretation.
It also doesn't mention what the consequences of exceeding the limit are; will
this just lead to some jobs being evicted earlier? (No.) Can this crash the
cluster? (Answer: *yes*.)
The documentation is then also wrong, on both counts to boot:
{{The Job cache resides in the JVM Heap. It can be configured by
jobstore.cache-size which must be less than the configured or derived JVM Heap
size.}}
How good of an estimate is it for actual jobs? For what size of a job is the
estimate correct?
As it stands, a user who sets this option will assume that it provides some
form of assurance that the cache will not impact job execution. But this is
simply not the case.
I appreciate the difficulty of nailing down the in-memory size of cached
objects, but if we can't do that reliably, we have to document it.
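For illustration, a minimal Java sketch of the swapped-argument bug behind the
warning quoted in the issue description below. This is hypothetical code, not
Flink's actual implementation (the real message is built from {{MemorySize}}
values and different surrounding code); it only shows how passing the format
arguments in the wrong order pairs each value with the wrong option name, and
what the corrected pairing looks like:

```java
public class JobStoreWarningSketch {

    // Buggy variant: the heap size is passed first, so it lands next to the
    // jobstore.cache-size label, and the cache size lands next to the
    // jobmanager.memory.heap.size label -- as seen in the reported warning.
    static String buggyMessage(long heapBytes, long cacheBytes) {
        return String.format(
                "The configured or derived JVM heap memory size (jobstore.cache-size: %d bytes) "
                        + "is less than the configured or default size of the job store cache "
                        + "(jobmanager.memory.heap.size: %d bytes)",
                heapBytes, cacheBytes);
    }

    // Fixed variant: each option name is paired with its own value.
    static String fixedMessage(long heapBytes, long cacheBytes) {
        return String.format(
                "The configured or derived JVM heap memory size (jobmanager.memory.heap.size: %d bytes) "
                        + "is less than the configured or default size of the job store cache "
                        + "(jobstore.cache-size: %d bytes)",
                heapBytes, cacheBytes);
    }

    public static void main(String[] args) {
        long heap = 134_217_728L;     // 128 MB JVM heap
        long cache = 1_073_741_824L;  // 1 GB job store cache
        System.out.println(buggyMessage(heap, cache)); // values attached to wrong labels
        System.out.println(fixedMessage(heap, cache)); // values attached to their own labels
    }
}
```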
> Incorrect warning if jobstore.cache-size exceeds heap size
> ----------------------------------------------------------
>
> Key: FLINK-18214
> URL: https://issues.apache.org/jira/browse/FLINK-18214
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Configuration
> Reporter: Chesnay Schepler
> Priority: Major
> Fix For: 1.11.0
>
>
> The logging parameters are mixed up.
> {code}
> The configured or derived JVM heap memory size (jobstore.cache-size:
> 128.000mb (134217728 bytes)) is less than the configured or default size of
> the job store cache (jobmanager.memory.heap.size: 1.000gb (1073741825 bytes))
> {code}