davidradl commented on code in PR #26107:
URL: https://github.com/apache/flink/pull/26107#discussion_r1944936024
##########
docs/layouts/shortcodes/generated/expert_forst_section.html:
##########
@@ -0,0 +1,54 @@
+<table class="configuration table table-bordered">
+    <thead>
+        <tr>
+            <th class="text-left" style="width: 20%">Key</th>
+            <th class="text-left" style="width: 15%">Default</th>
+            <th class="text-left" style="width: 10%">Type</th>
+            <th class="text-left" style="width: 55%">Description</th>
+        </tr>
+    </thead>
+    <tbody>
+        <tr>
+            <td><h5>state.backend.forst.executor.inline-coordinator</h5></td>
+            <td style="word-wrap: break-word;">false</td>
+            <td>Boolean</td>
+            <td>Whether to let the task thread be the coordinator thread responsible for distributing requests.</td>
+        </tr>
+        <tr>
+            <td><h5>state.backend.forst.executor.inline-write</h5></td>
+            <td style="word-wrap: break-word;">true</td>
+            <td>Boolean</td>
+            <td>Whether to let write requests executed within the coordinator thread.</td>
+        </tr>
+        <tr>
+            <td><h5>state.backend.forst.local-dir</h5></td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>String</td>
+            <td>The local directory (on the TaskManager) where ForSt puts some meta files. Per default, it will be <WORKING_DIR>/tmp. See <code class="highlighter-rouge">process.taskmanager.working-dir</code> for more details.</td>
+        </tr>
+        <tr>
+            <td><h5>state.backend.forst.memory.fixed-per-slot</h5></td>
+            <td style="word-wrap: break-word;">(none)</td>
+            <td>MemorySize</td>
+            <td>The fixed total amount of memory, shared among all ForSt instances per slot. This option overrides the 'state.backend.forst.memory.managed' option when configured.</td>

Review Comment:
   nit: can we change to: The fixed total amount of memory per slot, shared among all ForSt instances. I suggest we remove "when configured"
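For context on how these generated options would typically be used: below is a minimal sketch of setting them programmatically via Flink's standard `Configuration` API. The option keys are taken from the table above; the values (paths, memory size) are purely illustrative, and the surrounding class is hypothetical, not part of this PR.

```java
import org.apache.flink.configuration.Configuration;

public class ForStConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Illustrative value: local directory on the TaskManager for ForSt meta files.
        conf.setString("state.backend.forst.local-dir", "/tmp/forst");

        // Illustrative value: fixed total memory per slot, shared among all ForSt
        // instances in that slot. Per the description under review, setting this
        // overrides 'state.backend.forst.memory.managed'.
        conf.setString("state.backend.forst.memory.fixed-per-slot", "256 mb");

        // Defaults from the table above, set explicitly here for illustration.
        conf.setBoolean("state.backend.forst.executor.inline-coordinator", false);
        conf.setBoolean("state.backend.forst.executor.inline-write", true);

        System.out.println(conf);
    }
}
```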
