This is an automated email from the ASF dual-hosted git repository.

zhijiang pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b79331b3ea7a90b23ce408ac2c1bcad1e9c10f08
Author: Zhijiang <wangzhijiang...@aliyun.com>
AuthorDate: Thu Dec 12 12:44:44 2019 +0100

    [hotfix][documentation] Generate the respective HTML files because of previous configuration changes
---
 .../netty_shuffle_environment_configuration.html   | 12 ++++++------
 .../rocks_db_configurable_configuration.html       | 22 +++++++++++-----------
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/docs/_includes/generated/netty_shuffle_environment_configuration.html b/docs/_includes/generated/netty_shuffle_environment_configuration.html
index 8e10e44..7e591e6 100644
--- a/docs/_includes/generated/netty_shuffle_environment_configuration.html
+++ b/docs/_includes/generated/netty_shuffle_environment_configuration.html
@@ -27,12 +27,6 @@
            <td>Boolean flag indicating whether the shuffle data will be compressed for blocking shuffle mode. Note that data is compressed per buffer and compression can incur extra CPU overhead, so it is more effective for IO-bound scenarios where the data compression ratio is high.</td>
         </tr>
         <tr>
-            <td><h5>taskmanager.network.pipelined-shuffle.compression.enabled</h5></td>
-            <td style="word-wrap: break-word;">false</td>
-            <td>Boolean</td>
-            <td>Boolean flag indicating whether the shuffle data will be compressed for pipelined shuffle mode. Note that data is compressed per sliced buffer and compression can incur extra CPU overhead, so it is not recommended to enable compression if the network is not the bottleneck or the compression ratio is low.</td>
-        </tr>
-        <tr>
             <td><h5>taskmanager.network.detailed-metrics</h5></td>
             <td style="word-wrap: break-word;">false</td>
             <td>Boolean</td>
@@ -51,6 +45,12 @@
            <td>Number of extra network buffers to use for each outgoing/incoming gate (result partition/input gate). In credit-based flow control mode, this indicates how many floating credits are shared among all the input channels. The floating buffers are distributed based on backlog (real-time output buffers in the subpartition) feedback, and can help relieve back-pressure caused by unbalanced data distribution among the subpartitions. This value should be increased in case of highe [...]
         </tr>
         <tr>
+            <td><h5>taskmanager.network.pipelined-shuffle.compression.enabled</h5></td>
+            <td style="word-wrap: break-word;">false</td>
+            <td>Boolean</td>
+            <td>Boolean flag indicating whether the shuffle data will be compressed for pipelined shuffle mode. Note that data is compressed per sliced buffer and compression can incur extra CPU overhead, so it is not recommended to enable compression if the network is not the bottleneck or the compression ratio is low.</td>
+        </tr>
+        <tr>
             <td><h5>taskmanager.network.request-backoff.initial</h5></td>
             <td style="word-wrap: break-word;">100</td>
             <td>Integer</td>
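
Both shuffle compression flags above are plain booleans that default to false. A minimal sketch of setting them programmatically, assuming a local-environment setup chosen purely for illustration (in a real deployment these keys would normally go into flink-conf.yaml):

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    Configuration config = new Configuration();
    // Compression for blocking (batch) shuffles; off by default.
    config.setBoolean("taskmanager.network.blocking-shuffle.compression.enabled", true);
    // Compression for pipelined shuffles; also off by default, and per the
    // docs above not recommended unless the network is the bottleneck.
    config.setBoolean("taskmanager.network.pipelined-shuffle.compression.enabled", true);

    // Illustrative only: a local environment that picks up the configuration.
    StreamExecutionEnvironment env =
            StreamExecutionEnvironment.createLocalEnvironment(1, config);
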
diff --git a/docs/_includes/generated/rocks_db_configurable_configuration.html b/docs/_includes/generated/rocks_db_configurable_configuration.html
index eb35fb7..47e3bef 100644
--- a/docs/_includes/generated/rocks_db_configurable_configuration.html
+++ b/docs/_includes/generated/rocks_db_configurable_configuration.html
@@ -11,67 +11,67 @@
         <tr>
             <td><h5>state.backend.rocksdb.block.blocksize</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>MemorySize</td>
            <td>The approximate size (in bytes) of user data packed per block. RocksDB has default blocksize as '4KB'.</td>
         </tr>
         <tr>
             <td><h5>state.backend.rocksdb.block.cache-size</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>MemorySize</td>
            <td>The amount of the cache for data blocks in RocksDB. RocksDB has default block-cache size as '8MB'.</td>
         </tr>
        <tr>
            <td><h5>state.backend.rocksdb.compaction.level.max-size-level-base</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>MemorySize</td>
            <td>The upper-bound of the total size of level base files in bytes. RocksDB has default configuration as '10MB'.</td>
         </tr>
        <tr>
            <td><h5>state.backend.rocksdb.compaction.level.target-file-size-base</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>MemorySize</td>
            <td>The target file size for compaction, which determines a level-1 file size. RocksDB has default configuration as '2MB'.</td>
         </tr>
        <tr>
            <td><h5>state.backend.rocksdb.compaction.level.use-dynamic-size</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>Boolean</td>
            <td>If true, RocksDB will pick target size of each level dynamically. From an empty DB, RocksDB would make last level the base level, which means merging L0 data into the last level, until it exceeds max_bytes_for_level_base. And then repeat this process for second last level and so on. RocksDB has default configuration as 'false'. For more information, please refer to <a href="https://github.com/facebook/rocksdb/wiki/Leveled-Compaction#level_compaction_dynamic_level_bytes-is [...]
         </tr>
         <tr>
             <td><h5>state.backend.rocksdb.compaction.style</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td><p>Enum</p>Possible values: [LEVEL, UNIVERSAL, FIFO]</td>
            <td>The specified compaction style for DB. Candidate compaction style is LEVEL, FIFO or UNIVERSAL, and RocksDB chooses 'LEVEL' as the default style.</td>
         </tr>
         <tr>
             <td><h5>state.backend.rocksdb.files.open</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>Integer</td>
            <td>The maximum number of open files (per TaskManager) that can be used by the DB; '-1' means no limit. RocksDB has default configuration as '5000'.</td>
         </tr>
         <tr>
             <td><h5>state.backend.rocksdb.thread.num</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>Integer</td>
            <td>The maximum number of concurrent background flush and compaction jobs (per TaskManager). RocksDB has default configuration as '1'.</td>
         </tr>
         <tr>
             <td><h5>state.backend.rocksdb.writebuffer.count</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>Integer</td>
            <td>The maximum number of write buffers that are built up in memory. RocksDB has default configuration as '2'.</td>
         </tr>
         <tr>
             <td><h5>state.backend.rocksdb.writebuffer.number-to-merge</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>Integer</td>
            <td>The minimum number of write buffers that will be merged together before writing to storage. RocksDB has default configuration as '1'.</td>
         </tr>
         <tr>
             <td><h5>state.backend.rocksdb.writebuffer.size</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
-            <td>String</td>
+            <td>MemorySize</td>
            <td>The amount of data built up in memory (backed by an unsorted log on disk) before converting to sorted on-disk files. RocksDB has default writebuffer size as '64MB'.</td>
         </tr>
     </tbody>
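
The type changes in this file affect how values are parsed: MemorySize options accept human-readable sizes such as '64mb', Integer options take plain numbers, and the compaction style is now a proper enum. A minimal sketch of setting a few of these keys, assuming they are applied via the cluster configuration (in practice they would usually live in flink-conf.yaml); the chosen values are illustrative only:

    import org.apache.flink.configuration.Configuration;

    Configuration config = new Configuration();
    // MemorySize-typed options accept human-readable sizes.
    config.setString("state.backend.rocksdb.writebuffer.size", "64mb");
    config.setString("state.backend.rocksdb.block.cache-size", "256mb");
    // Integer-typed options take plain numbers.
    config.setString("state.backend.rocksdb.thread.num", "4");
    // Enum-typed option: one of LEVEL, UNIVERSAL, FIFO.
    config.setString("state.backend.rocksdb.compaction.style", "LEVEL");
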
