swuferhong commented on code in PR #2179: URL: https://github.com/apache/fluss/pull/2179#discussion_r2702397633
##########
website/docs/maintenance/configuration.md:
##########
@@ -157,7 +158,7 @@ during the Fluss cluster working.
| kv.rocksdb.use-bloom-filter | Boolean | true | If true, every newly created SST file will contain a Bloom filter. It is enabled by default. |
| kv.rocksdb.bloom-filter.bits-per-key | Double | 10.0 | Bits per key that bloom filter will use, this only take effect when bloom filter is used. The default value is 10.0. |
| kv.rocksdb.bloom-filter.block-based-mode | Boolean | false | If true, RocksDB will use block-based filter instead of full filter, this only take effect when bloom filter is used. The default value is `false`. |
-| kv.rocksdb.shared-rate-limiter-bytes-per-sec | MemorySize | Long.MAX_VALUE | The bytes per second rate limit for RocksDB flush and compaction operations shared across all RocksDB instances on the TabletServer. The rate limiter is always enabled. The default value is Long.MAX_VALUE (effectively unlimited). Set to a lower value (e.g., 100MB) to limit the rate. This configuration can be updated dynamically without server restart. See [Updating Configs](operations/updating-configs.md) for more details. |
+| kv.rocksdb.shared-rate-limiter-bytes-per-sec | MemorySize | Long.MAX_VALUE | The bytes per second rate limit for RocksDB flush and compaction operations shared across all RocksDB instances on the TabletServer. The rate limiter is always enabled. The default value is Long.MAX_VALUE (effectively unlimited). Set to a lower value (e.g., 100MB) to limit the rate. This configuration can be updated dynamically without server restart. See [Updating Configs](operations/updating-configs.md) for more details. |
Review Comment:
This is formatted by the IDEA check style.
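
For context on the keys discussed in this table, a minimal sketch of how they might appear in a Fluss server configuration file. The key names and default values come from the table above; the YAML file layout and the `100mb` memory-size spelling are assumptions for illustration, not a verified Fluss configuration file:

```yaml
# Hypothetical server configuration fragment -- key names are taken from the
# docs table in this PR; the surrounding file format is an assumption.

# Bloom filter settings for RocksDB SST files (defaults shown).
kv.rocksdb.use-bloom-filter: true
kv.rocksdb.bloom-filter.bits-per-key: 10.0
kv.rocksdb.bloom-filter.block-based-mode: false

# Cap combined flush/compaction throughput across all RocksDB instances on
# this TabletServer. Default is Long.MAX_VALUE (effectively unlimited);
# per the docs this can also be updated dynamically without a restart.
kv.rocksdb.shared-rate-limiter-bytes-per-sec: 100mb
```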
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
