This is an automated email from the ASF dual-hosted git repository.

zhaijia pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new d3dd31f  Update reference-configuration.md (#7924)
d3dd31f is described below

commit d3dd31f8633dc5ff72870c9a8a591b398444ee04
Author: sijia-w <[email protected]>
AuthorDate: Sat Aug 29 09:54:33 2020 +0200

    Update reference-configuration.md (#7924)
    
    Update AWS deployment in reference-configuration.
---
 site2/docs/reference-configuration.md | 169 +++++++++++++++++++++++++++-------
 1 file changed, 135 insertions(+), 34 deletions(-)

diff --git a/site2/docs/reference-configuration.md 
b/site2/docs/reference-configuration.md
index 5664db7..80f9393 100644
--- a/site2/docs/reference-configuration.md
+++ b/site2/docs/reference-configuration.md
@@ -34,7 +34,11 @@ BookKeeper is a replicated log storage system that Pulsar 
uses for durable stora
 |bookiePort|The port on which the bookie server listens.|3181|
 |allowLoopback|Whether the bookie is allowed to use a loopback interface as 
its primary interface (i.e. the interface used to establish its identity). By 
default, loopback interfaces are not allowed as the primary interface. Using a 
loopback interface as the primary interface usually indicates a configuration 
error. For example, it’s fairly common in some VPS setups to not configure a 
hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, 
then all bookies in the cl [...]
 |listeningInterface|The network interface on which the bookie listens. If not 
set, the bookie will listen on all interfaces.|eth0|
+|advertisedAddress|Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. If not set, the bookie advertises its own IP address or hostname, depending on the `listeningInterface` and `useHostNameAsBookieID` settings.|N/A|
+|allowMultipleDirsUnderSameDiskPartition|Configure the bookie to allow/disallow multiple ledger/index/journal directories in the same filesystem disk partition.|false|
+|minUsableSizeForIndexFileCreation|The minimum safe usable size, in bytes, that must be available in the index directory for the bookie to create index files while replaying the journal when the bookie starts in read-only mode.|1073741824|
 |journalDirectory|The directory where Bookkeeper outputs its write-ahead log 
(WAL)|data/bookkeeper/journal|
+|journalDirectories|The directories to which BookKeeper outputs its write-ahead log. Multiple directories can be specified, separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, bookies skip `journalDirectory` and use this setting instead.|/tmp/bk-journal|
 |ledgerDirectories|The directory where Bookkeeper outputs ledger snapshots. 
This could define multiple directories to store snapshots separated by comma, 
for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger 
dirs and the journal dir are each in a different device, which reduces the 
contention between random I/O and sequential write. It is possible to run with 
a single disk, but performance will be significantly 
lower.|data/bookkeeper/ledgers|
 |ledgerManagerType|The type of ledger manager used to manage how ledgers are 
stored, managed, and garbage collected. See [BookKeeper 
Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) 
for more info.|hierarchical|
 |zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This 
parameter is used by the ZooKeeper-based ledger manager as a root znode to 
store all ledgers.|/ledgers|
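The address and journal settings above interact: `advertisedAddress` overrides whatever `listeningInterface`/`useHostNameAsBookieID` would otherwise derive, and `journalDirectories` supersedes `journalDirectory` when both are set. A minimal `bookkeeper.conf` sketch (hostnames and paths are illustrative, not defaults):

```properties
# Advertise a fixed address instead of the resolved interface address
advertisedAddress=bookie1.example.com

# Multiple journal directories; journalDirectory is ignored when this is set
journalDirectories=/mnt/journal1/bk-journal,/mnt/journal2/bk-journal

# Keep journal and ledger storage on separate devices to reduce contention
# between sequential journal writes and random ledger I/O
ledgerDirectories=/mnt/data1/bk-ledgers,/mnt/data2/bk-ledgers
```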
@@ -42,9 +46,12 @@ BookKeeper is a replicated log storage system that Pulsar 
uses for durable stora
 |entryLogFilePreallocationEnabled|Enable or disable entry logger 
preallocation|true|
 |logSizeLimit|Max file size of the entry logger, in bytes. A new entry log 
file will be created when the old one reaches the file size 
limitation.|2147483648|
 |minorCompactionThreshold|Threshold of minor compaction. Entry log files whose 
remaining size percentage reaches below this threshold will be compacted in a 
minor compaction. If set to less than zero, the minor compaction is 
disabled.|0.2|
-|minorCompactionInterval|Time interval to run minor compaction, in seconds. If 
set to less than zero, the minor compaction is disabled.|3600|
+|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled. Note: it should be greater than `gcWaitTime`.|3600|
 |majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
-|majorCompactionInterval|The time interval to run major compaction, in 
seconds. If set to less than zero, the major compaction is disabled.|86400|
+|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled. Note: it should be greater than `gcWaitTime`.|86400|
+|readOnlyModeEnabled|If `readOnlyModeEnabled=true`, then when all ledger disks are full, the bookie is converted to read-only mode and serves only read requests. Otherwise the bookie is shut down.|true|
+|forceReadOnlyBookie|Whether the bookie is force started in read only 
mode.|false|
+|persistBookieStatusEnabled|Persist the bookie status locally on the disks. So 
the bookies can keep their status upon restarts.|false|
 |compactionMaxOutstandingRequests|Sets the maximum number of entries that can 
be compacted without flushing. When compacting, the entries are written to the 
entrylog and the new offsets are cached in memory. Once the entrylog is flushed 
the index is updated with the new offsets. This parameter controls the number 
of entries added to the entrylog before a flush is forced. A higher value for 
this parameter means more memory will be used for offsets. Each offset consists 
of 3 longs. This pa [...]
 |compactionRate|The rate at which compaction will read entries, in adds per 
second.|1000|
 |isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
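Note that `gcWaitTime` is expressed in milliseconds while the compaction intervals are in seconds, and both compaction intervals should exceed `gcWaitTime`. A mutually consistent `bookkeeper.conf` sketch (values are illustrative):

```properties
# Garbage collection every 15 minutes (milliseconds)
gcWaitTime=900000

# Minor compaction hourly, major compaction daily (seconds);
# both intervals are longer than gcWaitTime as required
minorCompactionInterval=3600
minorCompactionThreshold=0.2
majorCompactionInterval=86400
majorCompactionThreshold=0.5
```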
@@ -61,34 +68,60 @@ BookKeeper is a replicated log storage system that Pulsar 
uses for durable stora
 |journalBufferedWritesThreshold|Maximum writes to buffer to achieve 
grouping|524288|
 |journalFlushWhenQueueEmpty|If we should flush the journal when journal queue 
is empty|false|
 |numJournalCallbackThreads|The number of threads that should handle journal 
callbacks|8|
-|rereplicationEntryBatchSize|The number of max entries to keep in fragment for 
re-replication|5000|
+|rereplicationEntryBatchSize|The number of max entries to keep in fragment for 
re-replication|100|
+|openLedgerRereplicationGracePeriod|The grace period, in seconds, that the 
replication worker waits before fencing and replicating a ledger fragment 
that's still being written to upon the bookie failure.|30|
+|autoRecoveryDaemonEnabled|Whether the bookie itself can start auto-recovery 
service.|true|
+|lostBookieRecoveryDelay|How long to wait, in seconds, before starting auto 
recovery of a lost bookie.|0|
 |gcWaitTime|The interval to trigger the next garbage collection, in milliseconds. Since garbage collection runs in the background, too frequent gc hurts performance. It is better to use a longer gc interval if there is enough disk capacity.|900000|
 |gcOverreplicatedLedgerWaitTime|The interval to trigger the next garbage collection of overreplicated ledgers, in milliseconds. This should not be run very frequently since we read the metadata for all the ledgers on the bookie from zk.|86400000|
 |flushInterval|How long the interval to flush ledger index pages to disk, in 
milliseconds. Flushing index files will introduce much random disk I/O. If 
separating journal dir and ledger dirs each on different devices, flushing 
would not affect performance. But if putting journal dir and ledger dirs on 
same device, performance degrade significantly on too frequent flushing. You 
can consider increment flush interval to get better performance, but you need 
to pay more time on bookie server  [...]
 |bookieDeathWatchInterval|Interval to watch whether bookie is dead or not, in 
milliseconds|1000|
+|allowStorageExpansion|Allow the bookie storage to expand. Newly added ledger 
and index dirs must be empty.|false|
 |zkServers|A list of one or more servers on which ZooKeeper is running. The server list can be comma separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
 |zkTimeout|ZooKeeper client session timeout in milliseconds. The bookie server exits if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for longer than the session timeout. JVM garbage collection and disk I/O can cause SESSION_EXPIRED. Increasing this value can help avoid this issue.|30000|
+|zkRetryBackoffStartMs|The initial backoff time, in milliseconds, for ZooKeeper client retries.|1000|
+|zkRetryBackoffMaxMs|The maximum backoff time, in milliseconds, for ZooKeeper client retries.|10000|
+|zkEnableSecurity|Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order to make ACLs work you need to set up ZooKeeper JAAS authentication. All the bookies and clients need to share the same user, and this is usually done using Kerberos authentication. See the ZooKeeper documentation.|false|
+|httpServerEnabled|The flag enables/disables starting the admin http 
server.|false|
+|httpServerPort|The HTTP server port to listen on. By default, the value is 8080. Use `8000` as the port to keep it consistent with the Prometheus stats provider.|8000|
+|httpServerClass|The http server 
class.|org.apache.bookkeeper.http.vertx.VertxHttpServer|
 |serverTcpNoDelay|This setting is used to enable/disable Nagle’s algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable the Nagle algorithm can provide better performance.|true|
+|serverSockKeepalive|This setting is used to send keep-alive messages on 
connection-oriented sockets.|true|
+|serverTcpLinger|The socket linger timeout on close. When enabled, a close or 
shutdown will not return until all queued messages for the socket have been 
successfully sent or the linger timeout has been reached. Otherwise, the call 
returns immediately and the closing is done in the background.|0|
+|byteBufAllocatorSizeMax|The maximum buf size of the received ByteBuf 
allocator.|1048576|
+|nettyMaxFrameSizeBytes|The maximum netty frame size in bytes. Any message received larger than this is rejected.|5253120|
 |openFileLimit|Max number of ledger index files that can be opened in the bookie server. If the number of ledger index files reaches this limit, the bookie server starts to swap some ledgers from memory to disk. Too frequent swapping affects performance. You can tune this number to gain performance according to your requirements.|0|
-|pageSize|Size of a index page in ledger cache, in bytes A larger index page 
can improve performance writing page to disk, which is efficent when you have 
small number of ledgers and these ledgers have similar number of entries. If 
you have large number of ledgers and each ledger has fewer entries, smaller 
index page would improve memory usage.|8192|
-|pageLimit|How many index pages provided in ledger cache If number of index 
pages reaches this limitation, bookie server starts to swap some ledgers from 
memory to disk. You can increment this value when you found swap became more 
frequent. But make sure pageLimit*pageSize should not more than JVM max memory 
limitation, otherwise you would got OutOfMemoryException. In general, 
incrementing pageLimit, using smaller index page would gain bettern performance 
in lager number of ledgers with  [...]
+|pageSize|Size of an index page in ledger cache, in bytes. A larger index page can improve performance when writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have similar numbers of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page improves memory usage.|8192|
+|pageLimit|How many index pages are provided in the ledger cache. If the number of index pages reaches this limit, the bookie server starts to swap some ledgers from memory to disk. You can increase this value when you find swapping becomes more frequent. But make sure pageLimit*pageSize is not more than the JVM max memory limit, otherwise you would get OutOfMemoryException. In general, increasing pageLimit and using a smaller index page gains better performance in larger number of ledgers with f [...]
 |readOnlyModeEnabled|If all ledger directories configured are full, then 
support only read requests for clients. If “readOnlyModeEnabled=true” then on 
all ledger disks full, bookie will be converted to read-only mode and serve 
only read requests. Otherwise the bookie will be shutdown. By default this will 
be disabled.|true|
-|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. 
Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will 
be written to that partition. If all ledger dir partions are full, then bookie 
will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it will 
shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95|
+|diskUsageThreshold|For each ledger dir, maximum disk space which can be used. 
Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will 
be written to that partition. If all ledger dir partitions are full, then 
bookie will turn to readonly mode if ‘readOnlyModeEnabled=true’ is set, else it 
will shutdown. Valid values should be in between 0 and 1 (exclusive).|0.95|
 |diskCheckInterval|Disk check interval in milli seconds, interval to check the 
ledger dirs usage.|10000|
 |auditorPeriodicCheckInterval|Interval at which the auditor will do a check of 
all ledgers in the cluster. By default this runs once a week. The interval is 
set in seconds. To disable the periodic check completely, set this to 0. Note 
that periodic checking will put extra load on the cluster, so it should not be 
run more frequently than once a day.|604800|
+|sortedLedgerStorageEnabled|Whether sorted-ledger storage is enabled.|true|
 |auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie which should contain entries is unavailable, then the ledger containing that entry is marked for recovery. Setting this to 0 disables the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400|
-|numAddWorkerThreads|number of threads that should handle write requests. if 
zero, the writes would be handled by netty threads directly.|0|
-|numReadWorkerThreads|number of threads that should handle read requests. if 
zero, the reads would be handled by netty threads directly.|8|
+|numAddWorkerThreads|The number of threads that should handle write requests. If zero, the writes are handled by the netty threads directly.|0|
+|numReadWorkerThreads|The number of threads that should handle read requests. If zero, the reads are handled by the netty threads directly.|8|
+|numHighPriorityWorkerThreads|The number of threads that should be used for high priority requests (i.e. recovery reads and adds, and fencing).|8|
 |maxPendingReadRequestsPerThread|If read worker threads are enabled, limit the number of pending requests to avoid the executor queue growing indefinitely.|2500|
+|maxPendingAddRequestsPerThread|If add worker threads are enabled, limit the number of pending requests to avoid the executor queue growing indefinitely.|10000|
+|isForceGCAllowWhenNoSpace|Whether force compaction is allowed when the disk 
is full or almost full. Forcing GC could get some space back, but could also 
fill up the disk space more quickly. This is because new log files are created 
before GC, while old garbage log files are deleted after GC.|false|
+|verifyMetadataOnGC|True if the bookie should double check `readMetadata` 
prior to GC.|false|
+|flushEntrylogBytes|Entry log flush interval in bytes. Flushing in smaller 
chunks but more frequently reduces spikes in disk I/O. Flushing too frequently 
may also affect performance negatively.|268435456|
 |readBufferSizeBytes|The number of bytes we should use as capacity for 
BufferedReadChannel.|4096|
 |writeBufferSizeBytes|The number of bytes used as capacity for the write 
buffer|65536|
-|useHostNameAsBookieID|Whether the bookie should use its hostname to register 
with the coordination service (e.g.: zookeeper service). When false, bookie 
will use its ipaddress for the registration.|false|
+|useHostNameAsBookieID|Whether the bookie should use its hostname to register 
with the coordination service (e.g.: zookeeper service). When false, bookie 
will use its ip address for the registration.|false|
+|allowEphemeralPorts|Whether the bookie is allowed to use an ephemeral port 
(port 0) as its server port. By default, an ephemeral port is not allowed. 
Using an ephemeral port as the service port usually indicates a configuration 
error. However, in unit tests, using an ephemeral port will address port 
conflict problems and allow running tests in parallel.|false|
+|enableLocalTransport|Whether the bookie is allowed to listen for the 
BookKeeper clients executed on the local JVM.|false|
+|disableServerSocketBind|Whether the bookie is allowed to disable bind on 
network interfaces. This bookie will be available only to BookKeeper clients 
executed on the local JVM.|false|
+|skipListArenaChunkSize|The number of bytes that we should use as chunk 
allocation for `org.apache.bookkeeper.bookie.SkipListArena`.|4194304|
+|skipListArenaMaxAllocSize|The maximum size that we should allocate from the 
skiplist arena. Allocations larger than this should be allocated directly by 
the VM to avoid fragmentation.|131072|
+|bookieAuthProviderFactoryClass|The factory class name of the bookie 
authentication provider. If this is null, then there is no authentication.|null|
 
|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider|
 |prometheusStatsHttpPort||8000|
-|dbStorage_writeCacheMaxSizeMb|Size of Write Cache. Memory is allocated from 
JVM direct memory. Write cache is used to buffer entries before flushing into 
the entry log For good performance, it should be big enough to hold a sub|25% 
of direct memory|
-|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated 
from JVM direct memory. This read cache is pre-filled doing read-ahead whenever 
a cache miss happens|25% of direct memory|
+|dbStorage_writeCacheMaxSizeMb|Size of the write cache. Memory is allocated from JVM direct memory. The write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval. By default, it is allocated to 25% of the available direct memory.|N/A|
+|dbStorage_readAheadCacheMaxSizeMb|Size of Read cache. Memory is allocated 
from JVM direct memory. This read cache is pre-filled doing read-ahead whenever 
a cache miss happens. By default, it is allocated to 25% of the available 
direct memory.|N/A|
 |dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in cache after 
a read cache miss|1000|
-|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best 
performance, this cache should be big enough to hold a significant portion of 
the index database which can reach ~2GB in some cases|10% of direct memory|
+|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best 
performance, this cache should be big enough to hold a significant portion of 
the index database which can reach ~2GB in some cases. By default, it uses 10% 
of direct memory.|N/A|
 |dbStorage_rocksDB_writeBufferSizeMB||64|
 |dbStorage_rocksDB_sstSizeInMB||64|
 |dbStorage_rocksDB_blockSize||65536|
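The `dbStorage_*` cache sizes are absolute values; when unset they fall back to fractions of JVM direct memory (25% for the write cache, 25% for the read-ahead cache, 10% for the RocksDB block cache). A sketch with explicit sizing for a bookie with plenty of direct memory (values are illustrative, not recommendations):

```properties
# Explicit cache sizes (MB) instead of the direct-memory percentages
dbStorage_writeCacheMaxSizeMb=2048
dbStorage_readAheadCacheMaxSizeMb=2048

# Entries pre-filled into the read cache after a cache miss
dbStorage_readAheadCacheBatchSize=1000
```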
@@ -111,6 +144,9 @@ internalListenerName|Specify the internal listener name for 
the broker.<br><br>*
 |enablePersistentTopics|  Whether persistent topics are enabled on the broker 
|true|
 |enableNonPersistentTopics| Whether non-persistent topics are enabled on the 
broker |true|
 |functionsWorkerEnabled|  Whether the Pulsar Functions worker service is 
enabled in the broker  |false|
+|exposePublisherStats|Whether to enable topic level metrics.|true|
+|statsUpdateFrequencyInSecs||60|
+|statsUpdateInitialDelayInSecs||60|
 |zookeeperServers|  Zookeeper quorum connection string  ||
 |zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds|300|
 |configurationStoreServers| Configuration store connection string (as a 
comma-separated list) ||
@@ -119,6 +155,13 @@ internalListenerName|Specify the internal listener name 
for the broker.<br><br>*
 |webServicePort|  Port to use to server HTTP request  |8080|
 |webServicePortTls| Port to use to server HTTPS request |8443|
 |webSocketServiceEnabled| Enable the WebSocket API service in broker  |false|
+|webSocketNumIoThreads|The number of IO threads in Pulsar Client used in 
WebSocket proxy.|8|
+|webSocketConnectionsPerBroker|The number of connections per Broker in Pulsar 
Client used in WebSocket proxy.|8|
+|webSocketSessionIdleTimeoutMillis|The time, in milliseconds, after which an idle WebSocket session times out.|300000|
+|webSocketMaxTextFrameSize|The maximum size of a text message during parsing 
in WebSocket proxy.|1048576|
+|exposeTopicLevelMetricsInPrometheus|Whether to enable topic level 
metrics.|true|
+|exposeConsumerLevelMetricsInPrometheus|Whether to enable consumer level 
metrics.|false|
+|jvmGCMetricsLoggerClassName|Classname of Pluggable JVM GC metrics logger that 
can log GC specific metrics.|N/A|
 |bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0. 
 |0.0.0.0|
 |advertisedAddress| Hostname or IP address the service advertises to the 
outside world. If not set, the value of 
`InetAddress.getLocalHost().getHostName()` is used.  ||
 |clusterName| Name of the cluster to which this broker belongs to ||
@@ -133,9 +176,10 @@ internalListenerName|Specify the internal listener name 
for the broker.<br><br>*
 |skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out 
of memory error. |false|
 |backlogQuotaCheckEnabled|  Enable backlog quota check. Enforces action on 
topic when the quota is reached  |true|
 |backlogQuotaCheckIntervalInSeconds|  How often to check for topics that have 
reached the quota |60|
-|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit | -1 |
+|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. A value less than 0 means no limit. By default, it is -1. | -1 |
+|backlogQuotaDefaultRetentionPolicy|The default backlog quota retention policy. By default, it is `producer_request_hold`. <li>'producer_request_hold' Policy which holds producer's send request until the resource becomes available (or holding times out)</li> <li>'producer_exception' Policy which throws `javax.jms.ResourceAllocationException` to the producer </li><li>'consumer_backlog_eviction' Policy which evicts the oldest message from the slowest consumer's backlog</li>|producer_requ [...]
 |allowAutoTopicCreation| Enable topic auto creation if a new producer or 
consumer connected |true|
-|allowAutoTopicCreationType| The topic type (partitioned or non-partitioned) 
that is allowed to be automatically created. |Partitioned|
+|allowAutoTopicCreationType| The type of topic (partitioned or non-partitioned) that is allowed to be automatically created. |non-partitioned|
 |allowAutoSubscriptionCreation| Enable subscription auto creation if a new 
consumer connected |true|
 |defaultNumPartitions| The number of partitioned topics that is allowed to be 
automatically created if `allowAutoTopicCreationType` is partitioned |1|
 |brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics  
|true|
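The auto-creation settings above work together: `allowAutoTopicCreationType` picks the topic flavor, and `defaultNumPartitions` applies only when that type is partitioned. A `broker.conf` sketch (values are illustrative):

```properties
# Let new producers/consumers trigger topic creation
allowAutoTopicCreation=true

# Auto-created topics are partitioned...
allowAutoTopicCreationType=partitioned

# ...with this many partitions each (only read for the partitioned type)
defaultNumPartitions=4
```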
@@ -144,17 +188,20 @@ internalListenerName|Specify the internal listener name 
for the broker.<br><br>*
 | brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum 
duration for inactive topics. If it is not specified, the 
`brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
 |messageExpiryCheckIntervalInMinutes| How frequently to proactively check and 
purge expired messages  |5|
 |brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to 
see if topics with compaction policies need to be compacted  |60|
+|delayedDeliveryEnabled|Whether to enable the delayed delivery for messages. 
If disabled, messages will be immediately delivered and there will be no 
tracking overhead.|true|
+|delayedDeliveryTickTimeMillis|Control the tick time for retrying on delayed delivery, which affects the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second.|1000|
 |activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and 
dispatching messages when active consumer is changed.  |1000|
 |clientLibraryVersionCheckEnabled|  Enable check for minimum allowed client 
library version |false|
 |clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no 
version information  |true|
 |statusFilePath|  Path for the file used to determine the rotation status for 
the broker when responding to service discovery health checks ||
 |preferLaterVersions| If true, (and ModularLoadManagerImpl is being used), the 
load manager will attempt to use only brokers running the latest software 
version (to minimize impact to bundles)  |false|
 |maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned 
topic. Use 0 or negative number to disable the check|0|
-|tlsEnabled|  Enable TLS  |false|
+|tlsEnabled|Deprecated - Use `webServicePortTls` and `brokerServicePortTls` 
instead. |false|
 |tlsCertificateFilePath|  Path for the TLS certificate file ||
 |tlsKeyFilePath|  Path for the TLS private key file ||
-|tlsTrustCertsFilePath| Path for the trusted TLS certificate file ||
-|tlsAllowInsecureConnection|  Accept untrusted TLS certificate from client  
|false|
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate file. This cert 
is used to verify that any certs presented by connecting clients are signed by 
a certificate authority. If this verification fails, then the certs are 
untrusted and the connections are dropped. ||
+|tlsAllowInsecureConnection| Accept untrusted TLS certificate from client. If 
it is set to `true`, a client with a cert which cannot be verified with the
+'tlsTrustCertsFilePath' cert will be allowed to connect to the server, though 
the cert will not be used for client authentication. |false|
 |tlsProtocols|Specify the tls protocols the broker will use to negotiate 
during TLS Handshake. Multiple values can be specified, separated by commas. 
Example:- ```TLSv1.2```, ```TLSv1.1```, ```TLSv1``` ||
 |tlsCiphers|Specify the tls cipher the broker will use to negotiate during TLS 
Handshake. Multiple values can be specified, separated by commas. Example:- 
```TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256```||
 |tlsEnabledWithKeyStore| Enable TLS with KeyStore type configuration in broker 
|false|
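Since `tlsEnabled` is deprecated, TLS is switched on by binding the TLS ports directly; the certificate parameters above then apply to those listeners. A `broker.conf` sketch (ports, paths, and protocol choices are illustrative):

```properties
# Enabling the TLS ports replaces the deprecated tlsEnabled flag
brokerServicePortTls=6651
webServicePortTls=8443

# Broker certificate, private key, and the CA used to verify client certs
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem

# Drop clients whose certs cannot be verified against the trust store
tlsAllowInsecureConnection=false
tlsProtocols=TLSv1.2,TLSv1.1
```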
@@ -184,13 +231,16 @@ subscriptionExpirationTimeMinutes | How long to delete 
inactive subscriptions fr
 |maxConcurrentTopicLoadRequest| Max number of concurrent topic loading request 
broker allows to control number of zk-operations |5000|
 |authenticationEnabled| Enable authentication |false|
 |authenticationProviders| Authentication provider name list, which is a comma-separated list of class names  ||
-| authenticationRefreshCheckSeconds | Interval of time for checking for 
expired authentication credentials | 60s |
+| authenticationRefreshCheckSeconds | Interval of time for checking for 
expired authentication credentials | 60 |
 |authorizationEnabled|  Enforce authorization |false|
 |superUserRoles|  Role names that are treated as “super-user”, meaning they 
will be able to do all admin operations and publish/consume from all topics ||
 |brokerClientAuthenticationPlugin|  Authentication settings of the broker 
itself. Used when the broker connects to other brokers, either in same or other 
clusters  ||
 |brokerClientAuthenticationParameters|||
 |athenzDomainNames| Supported Athenz provider domain names(comma separated) 
for authentication  ||
 |exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats; set to false to calculate from the published counter and consumed counter, which is more efficient but may be inaccurate. |false|
+|schemaRegistryStorageClassName|The schema storage implementation used by this 
broker.|org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory|
+|isSchemaValidationEnforced|Enforce schema validation in the following case: if a producer without a schema attempts to produce to a topic with a schema, the producer fails to connect. PLEASE be careful when using this, since non-java clients don't support schema. If this setting is enabled, then non-java clients fail to produce.|false|
+|offloadersDirectory|The directory for all the offloader 
implementations.|./offloaders|
 |bookkeeperMetadataServiceUri| Metadata service uri that bookkeeper is used 
for loading corresponding metadata driver and resolving its metadata service 
location. This value can be fetched using `bookkeeper shell whatisinstanceid` 
command in BookKeeper cluster. For example: 
zk+hierarchical://localhost:2181/ledgers. The metadata service uri list can 
also be semicolon separated values like below: 
zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers ||
 |bookkeeperClientAuthenticationPlugin|  Authentication plugin to use when 
connecting to bookies ||
 |bookkeeperClientAuthenticationParametersName|  BookKeeper auth plugin implementation specifics parameters name and values  ||
@@ -250,14 +300,20 @@ subscriptionExpirationTimeMinutes | How long to delete 
inactive subscriptions fr
 |replicationProducerQueueSize|  Replicator producer queue size  |1000|
 |replicatorPrefix|  Replicator prefix used for replicator producer name and 
cursor name pulsar.repl||
 |replicationTlsEnabled| Enable TLS when talking with other clusters to 
replicate messages |false|
+|brokerServicePurgeInactiveFrequencyInSeconds|Deprecated. Use 
`brokerDeleteInactiveTopicsFrequencySeconds`.|60|
+|transactionCoordinatorEnabled|Whether to enable transaction coordinator in 
broker.|true|
+|transactionMetadataStoreProviderClassName| 
|org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider|
 |defaultRetentionTimeInMinutes| Default message retention time  ||
 |defaultRetentionSizeInMB|  Default retention size  |0|
 |keepAliveIntervalSeconds|  How often to check whether the connections are 
still alive  |30|
+|bootstrapNamespaces| The bootstrap namespaces. | N/A |
 |loadManagerClassName|  Name of load manager to use 
|org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl|
 |supportedNamespaceBundleSplitAlgorithms| Supported algorithms name for 
namespace bundle split |[range_equally_divide,topic_count_equally_divide]|
 |defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace 
bundle split |range_equally_divide|
-|managedLedgerOffloadDriver|  Driver to use to offload old data to long term 
storage (Possible values: S3)  ||
+|managedLedgerOffloadDriver| Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage). Offloader implementations are loaded from the directory configured by `offloadersDirectory` (`./offloaders` by default). When using google-cloud-storage, make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). ||
 |managedLedgerOffloadMaxThreads|  Maximum number of thread pool threads for 
ledger offloading |2|
+|managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger 
reading for offloading.|1|
 |managedLedgerUnackedRangesOpenCacheSetEnabled|  Use Open Range-Set to cache 
unacknowledged messages |true|
 |managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully 
offloaded to long term storage and the ledger being deleted from bookkeeper | 
14400000|
 |managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before 
triggering automatic offload to long term storage |-1 (disabled)|
@@ -266,10 +322,25 @@ subscriptionExpirationTimeMinutes | How long to delete 
inactive subscriptions fr
 |s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, 
Alternative endpoint to connect to (useful for testing) ||
 |s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max 
block size in bytes. (64MB by default, 5MB minimum) |67108864|
 |s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, 
Read buffer size in bytes (1MB by default)  |1048576|
+|gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, the region where the offload bucket is located. For more details, see https://cloud.google.com/storage/docs/bucket-locations .|N/A|
+|gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, the bucket to place offloaded ledgers into.|N/A|
+|gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger 
offload, the maximum block size in bytes. (64MB by default, 5MB 
minimum)|67108864|
+|gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger 
offload, Read buffer size in bytes. (1MB by default)|1048576|
+|gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path 
to json file containing service account credentials. For more details, see the 
"Service Accounts" section of 
https://support.google.com/googleapi/answer/6158849 .|N/A|
+|fileSystemProfilePath|For File System Storage, file system profile 
path.|../conf/filesystem_offload_core_site.xml|
+|fileSystemURI|For File System Storage, file system uri.|N/A|
 |s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to 
assume before writing to s3 ||
 |s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide 
a role session name when using a role |pulsar-s3-offload|
 | acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index 
acknowledgement. | false |
-| maxMessageSize | Set the maximum size of a message. | 5 MB |
+|enableReplicatedSubscriptions|Whether to enable tracking of replicated 
subscriptions state across clusters.|true|
+|replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for 
replicated subscriptions tracking.|1000|
+|replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a 
consistent snapshot for tracking replicated subscriptions state.|30|
+|replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of 
snapshot to be cached per subscription.|10|
+|maxMessagePublishBufferSizeInMB|The maximum memory size for the broker to handle messages sent from producers. If the processing message size exceeds this value, the broker stops reading data from the connection. Processing messages are messages that have been sent to the broker but for which the broker has not yet sent a response to the client; usually they are waiting to be written to bookies. This limit is shared across all topics running in the same broker. The value `-1` disables the memory limitation. By default, i [...]
+|messagePublishBufferCheckIntervalInMillis|Interval between checks to see if 
message publish buffer size exceeds the maximum. Use `0` or negative number to 
disable the max publish buffer limiting.|100|
+|retentionCheckIntervalInSeconds|Interval between checks to see if consumed ledgers need to be trimmed. Use 0 or a negative number to disable the check.|120|
+| maxMessageSize | Set the maximum size of a message. | 5242880 |
 | preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate 
limiting. | false |
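As a minimal sketch, the tiered-storage settings above might be combined in `broker.conf` like this (the region, bucket, key file path, and threshold are illustrative values, not defaults):

```conf
# Offload old data to Google Cloud Storage
managedLedgerOffloadDriver=google-cloud-storage
gcsManagedLedgerOffloadRegion=europe-west3
gcsManagedLedgerOffloadBucket=pulsar-topic-offload
gcsManagedLedgerOffloadServiceAccountKeyFile=/path/to/gcs-key.json
# Trigger automatic offload once a topic exceeds ~10 GB
managedLedgerOffloadAutoTriggerSizeThresholdBytes=10737418240
```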
 
 
@@ -286,7 +357,6 @@ The [`pulsar-client`](reference-cli-tools.md#pulsar-client) 
CLI tool can be used
 |authPlugin|  The authentication plugin.  ||
 |authParams|  The authentication parameters for the cluster, as a 
comma-separated string. ||
 |useTls|  Whether or not TLS authentication will be enforced in the cluster.  
|false|
-|tlsAllowInsecureConnection|||
 | tlsAllowInsecureConnection | Allow TLS connections to servers whose 
certificate cannot be verified to have been signed by a trusted certificate 
authority. | false |
 | tlsEnableHostnameVerification | Whether the server hostname must match the 
common name of the certificate that is used by the server. | false |
 |tlsTrustCertsFilePath|||
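The client TLS settings above could be combined in `client.conf` as a minimal sketch (the certificate path is a placeholder):

```conf
# client.conf sketch: TLS with hostname verification
useTls=true
tlsAllowInsecureConnection=false
tlsEnableHostnameVerification=true
tlsTrustCertsFilePath=/path/to/ca.cert.pem
```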
@@ -375,14 +445,16 @@ The 
[`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool can be used
 |advertisedAddress| The hostname or IP address that the standalone service 
advertises to the outside world. If not set, the value of 
`InetAddress.getLocalHost().getHostName()` is used.  ||
 | numIOThreads | Number of threads to use for Netty IO | 2 * 
Runtime.getRuntime().availableProcessors() |
 | numHttpServerThreads | Number of threads to use for HTTP requests processing 
| 2 * Runtime.getRuntime().availableProcessors()|
+|isRunningStandalone|This flag controls features that are meant to be used 
when running in standalone mode.|N/A|
 |clusterName| The name of the cluster that this broker belongs to. |standalone|
 | failureDomainsEnabled | Enable cluster's failure-domain which can distribute 
brokers into logical region. | false |
 |zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in 
milliseconds. |30000|
+|zooKeeperOperationTimeoutSeconds|ZooKeeper operation timeout in seconds.|30|
 |brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After 
this time elapses, the process will be killed. |60000|
 |skipBrokerShutdownOnOOM| Flag to skip broker shutdown when broker handles Out 
of memory error. |false|
 |backlogQuotaCheckEnabled|  Enable the backlog quota check, which enforces a 
specified action when the quota is reached.  |true|
 |backlogQuotaCheckIntervalInSeconds|  How often to check for topics that have 
reached the backlog quota.  |60|
-|backlogQuotaDefaultLimitGB|  The default per-topic backlog quota limit.  |10|
+|backlogQuotaDefaultLimitGB| The default per-topic backlog quota limit. A value less than 0 means no limitation. By default, it is -1. |-1|
 |ttlDurationDefaultInSeconds|  The default ttl for namespaces if ttl is not 
configured at namespace policies.  |0|
 |brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. 
|true|
 |brokerDeleteInactiveTopicsFrequencySeconds|  How often to check for inactive 
topics, in seconds. |60|
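The backlog-quota and inactive-topic settings above can be sketched together in `broker.conf`; the 50 GB quota is an illustrative value, not a default:

```conf
# Enforce a 50 GB per-topic backlog quota, checked every 60 seconds
backlogQuotaCheckEnabled=true
backlogQuotaCheckIntervalInSeconds=60
backlogQuotaDefaultLimitGB=50
# Garbage-collect inactive topics every 60 seconds
brokerDeleteInactiveTopicsEnabled=true
brokerDeleteInactiveTopicsFrequencySeconds=60
```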
@@ -391,6 +463,7 @@ The [`pulsar-client`](reference-cli-tools.md#pulsar-client) 
CLI tool can be used
 |activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and 
dispatching messages when active consumer is changed.  |1000|
 | subscriptionExpirationTimeMinutes | How long to delete inactive 
subscriptions from last consumption. When it is set to 0, inactive 
subscriptions are not deleted automatically | 0 |
 | subscriptionRedeliveryTrackerEnabled | Enable subscription message 
redelivery tracker to send redelivery count to consumer. | true |
+|subscriptionKeySharedEnable|Whether to enable the Key_Shared 
subscription.|true|
 | subscriptionKeySharedUseConsistentHashing | In the Key_Shared subscription 
mode, with default AUTO_SPLIT mode, use splitting ranges or consistent hashing 
to reassign keys to new consumers. | false |
 | subscriptionKeySharedConsistentHashingReplicaPoints | In the Key_Shared 
subscription mode, the number of points in the consistent-hashing ring. The 
greater the number, the more equal the assignment of keys to consumers. | 100 |
 | subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively 
check and purge expired subscription |5 |
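The Key_Shared settings above fit together as a `broker.conf` sketch; the values shown mirror the documented defaults except for enabling consistent hashing:

```conf
# Key_Shared subscriptions with consistent hashing for key-to-consumer assignment
subscriptionKeySharedEnable=true
subscriptionKeySharedUseConsistentHashing=true
subscriptionKeySharedConsistentHashingReplicaPoints=100
```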
@@ -407,14 +480,24 @@ The 
[`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool can be used
 | maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit is reached, the broker stops dispatching messages to all shared subscriptions which have a higher number of unacknowledged messages, until subscriptions start acknowledging messages back and the unacknowledged message count reaches limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. | 0 |
 | maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches the maxUnackedMessagesPerBroker limit, it blocks subscriptions which have more unacknowledged messages than this percentage limit, and those subscriptions do not receive any new messages until they acknowledge messages back. | 0.16 |
 |maxNumPartitionsPerPartitionedTopic|Max number of partitions per partitioned 
topic. Use 0 or negative number to disable the check|0|
-| topicPublisherThrottlingTickTimeMillis | Tick time to schedule task that 
checks topic publish rate limiting across all topics. A lower value can give 
more accuracy while throttling publish but it uses more CPU to perform frequent 
check. When the value is set to 0, publish throttling is disabled. | 2|
-| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule task that 
checks broker publish rate limiting across all topics. A lower value can give 
more accuracy while throttling publish but it uses more CPU to perform frequent 
check. When the value is set to 0, publish throttling is disabled. |50 |
+|zookeeperSessionExpiredPolicy|There are two policies when a ZooKeeper session expires: "shutdown" and "reconnect". With the "shutdown" policy, the broker shuts down when the ZooKeeper session expires. With the "reconnect" policy, the broker tries to reconnect to the ZooKeeper server and re-register metadata with ZooKeeper. Note: the "reconnect" policy is an experimental feature.|shutdown|
+| topicPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks topic publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. | 10|
+| brokerPublisherThrottlingTickTimeMillis | Tick time to schedule task that checks broker publish rate limiting across all topics. A lower value can improve accuracy while throttling publish but it uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. |50 |
 | brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of 
messages allowed to publish for a broker if the message rate limiting is 
enabled. When the value is set to 0, message rate limiting is disabled. | 0|
 | brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes 
allowed to publish for a broker if the  byte rate limiting is enabled. When the 
value is set to 0, the byte rate limiting is disabled. | 0 |
+|subscribeThrottlingRatePerConsumer|Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, resulting in high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests for each consumer. Otherwise, throttling is disabled. By default, throttling is disabled.|0|
+|subscribeRatePeriodPerConsumerInSecond|Rate period for 
{subscribeThrottlingRatePerConsumer}. By default, it is 30s.|30|
 | dispatchThrottlingRatePerTopicInMsg | Default messages (per second) dispatch 
throttling-limit for every topic. When the value is set to 0, default message 
dispatch throttling-limit is disabled. |0 |
 | dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch 
throttling-limit for every topic. When the value is set to 0, default byte 
dispatch throttling-limit is disabled. | 0|
 | dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting 
relative to publish rate. | false |
+|dispatchThrottlingRatePerSubscriptionInMsg|The default message dispatch throttling-limit (messages per second) for a subscription. The value 0 disables message dispatch-throttling.|0|
+|dispatchThrottlingRatePerSubscriptionInByte|The default number of message-bytes dispatching throttling-limit for a subscription. The value of 0 disables message-byte dispatch-throttling.|0|
 | dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch-throttling 
for both caught up consumers as well as consumers who have backlogs. | true |
+|dispatcherMaxReadBatchSize|The maximum number of entries to read from 
BookKeeper. By default, it is 100 entries.|100|
+|dispatcherMaxReadSizeBytes|The maximum size in bytes of entries to read from 
BookKeeper. By default, it is 5MB.|5242880|
+|dispatcherMinReadBatchSize|The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker backs the batch size off to this minimum number.|1|
+|dispatcherMaxRoundRobinBatchSize|The maximum number of entries to dispatch 
for a shared subscription. By default, it is 20 entries.|20|
 | preciseDispatcherFlowControl | Precise dispatcher flow control according to the history message number of each entry. | false |
 | maxConcurrentLookupRequest | Maximum number of concurrent lookup requests that the broker allows, to throttle heavy incoming lookup traffic. | 50000 |
 | maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading requests that the broker allows, to control the number of zk-operations. | 5000 |
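The dispatch-throttling and dispatcher read settings above can be sketched in `broker.conf`; the per-subscription rates are illustrative values, the batch sizes mirror the documented defaults:

```conf
# Throttle each subscription to 1000 msg/s and 1 MB/s on dispatch
dispatchThrottlingRatePerSubscriptionInMsg=1000
dispatchThrottlingRatePerSubscriptionInByte=1048576
# Read up to 100 entries (max 5 MB) per BookKeeper read, backing off to 1 on errors
dispatcherMaxReadBatchSize=100
dispatcherMaxReadSizeBytes=5242880
dispatcherMinReadBatchSize=1
```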
@@ -466,16 +549,23 @@ The 
[`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool can be used
 |tokenAuthClaim| Specify the token claim that will be used as the 
authentication "principal" or "role". The "subject" field will be used if this 
is left blank ||
 |tokenAudienceClaim| The token audience "claim" name, e.g. "aud". It is used 
to get the audience from token. If it is not set, the audience is not verified. 
||
 | tokenAudience | The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this parameter.| |
+|saslJaasClientAllowedIds|This is a regexp, which limits the range of possible 
ids which can connect to the Broker using SASL. By default, it is set to 
`SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", so only 
clients whose id contains 'pulsar' are allowed to connect.|N/A|
+|saslJaasBrokerSectionName|Service Principal, for login context name. By 
default, it is set to `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which 
is "Broker".|N/A|
+|httpMaxRequestSize|If the value is larger than 0, the broker rejects all HTTP requests with bodies larger than the configured limit.|-1|
 |exposePreciseBacklogInPrometheus| Enable exposing the precise backlog stats. Set to false to calculate from the published counter and consumed counter, which is more efficient but may be inaccurate. |false|
+|bookkeeperMetadataServiceUri|The metadata service uri that BookKeeper uses for loading the corresponding metadata driver and resolving its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in the BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service uri list can also be semicolon-separated values, for example: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers`.|N/A|
 |bookkeeperClientAuthenticationPlugin|  Authentication plugin to be used when 
connecting to bookies (BookKeeper servers). ||
 |bookkeeperClientAuthenticationParametersName|  BookKeeper authentication 
plugin implementation parameters and values.  ||
 |bookkeeperClientAuthenticationParameters|  Parameters associated with the 
bookkeeperClientAuthenticationParametersName ||
 |bookkeeperClientTimeoutInSeconds|  Timeout for BookKeeper add and read 
operations. |30|
 |bookkeeperClientSpeculativeReadTimeoutInMillis|  Speculative reads are 
initiated if a read request doesn’t complete within a certain time. A value of 
0 disables speculative reads.  |0|
+|bookkeeperUseV2WireProtocol|Use older Bookkeeper wire protocol with 
bookie.|true|
 |bookkeeperClientHealthCheckEnabled|  Enable bookie health checks.  |true|
 |bookkeeperClientHealthCheckIntervalSeconds|  The time interval, in seconds, 
at which health checks are performed. New ledgers are not created during health 
checks.  |60|
 |bookkeeperClientHealthCheckErrorThresholdPerInterval|  Error threshold for 
health checks.  |5|
 |bookkeeperClientHealthCheckQuarantineTimeInSeconds|  If bookies have more 
than the allowed number of failures within the time interval specified by 
bookkeeperClientHealthCheckIntervalSeconds |1800|
+|bookkeeperGetBookieInfoIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps keep the list of bookies up to date on the brokers.|86400|
+|bookkeeperGetBookieInfoRetryIntervalSeconds|Specify options for the GetBookieInfo check. This setting helps keep the list of bookies up to date on the brokers.|60|
 |bookkeeperClientRackawarePolicyEnabled|    |true|
 |bookkeeperClientRegionawarePolicyEnabled|    |false|
 |bookkeeperClientReorderReadSequenceEnabled|    |false|
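The bookie health-check settings above read together as a `broker.conf` sketch; the values shown mirror the documented defaults:

```conf
# Bookie health checks: quarantine a bookie for 30 minutes after
# more than 5 errors within a 60-second interval
bookkeeperClientHealthCheckEnabled=true
bookkeeperClientHealthCheckIntervalSeconds=60
bookkeeperClientHealthCheckErrorThresholdPerInterval=5
bookkeeperClientHealthCheckQuarantineTimeInSeconds=1800
```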
@@ -497,10 +587,10 @@ The 
[`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool can be used
 |managedLedgerDefaultWriteQuorum|   |1|
 |managedLedgerDefaultAckQuorum|   |1|
 | managedLedgerDigestType | Default type of checksum to use when writing to 
BookKeeper. | CRC32C |
-| managedLedgerNumWorkerThreads | Number of threads to be used for managed 
ledger tasks dispatching. | 4 |
-| managedLedgerNumSchedulerThreads | Number of threads to be used for managed 
ledger scheduled tasks. | 4 |
-|managedLedgerCacheSizeMB|    |1024|
-|managedLedgerCacheCopyEntries| Whether we should make a copy of the entry 
payloads when inserting in cache| false|
+| managedLedgerNumWorkerThreads | Number of threads to be used for managed 
ledger tasks dispatching. | 8 |
+| managedLedgerNumSchedulerThreads | Number of threads to be used for managed 
ledger scheduled tasks. | 8 |
+|managedLedgerCacheSizeMB|    |N/A|
+|managedLedgerCacheCopyEntries| Whether to copy the entry payloads when 
inserting in cache.| false|
 |managedLedgerCacheEvictionWatermark|   |0.9|
 |managedLedgerCacheEvictionFrequency| Configure the cache eviction frequency 
for the managed ledger cache (evictions/sec) | 100.0 |
 |managedLedgerCacheEvictionTimeThresholdMillis| All entries that have stayed 
in cache for more than the configured time, will be evicted | 1000 |
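The managed-ledger cache settings above fit together as a `broker.conf` sketch; the 1 GB cache size is an illustrative value:

```conf
# 1 GB managed ledger cache, evicted from a 90% watermark,
# up to 100 evictions/sec, entries older than 1s evicted
managedLedgerCacheSizeMB=1024
managedLedgerCacheEvictionWatermark=0.9
managedLedgerCacheEvictionFrequency=100.0
managedLedgerCacheEvictionTimeThresholdMillis=1000
```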
@@ -512,7 +602,7 @@ The [`pulsar-client`](reference-cli-tools.md#pulsar-client) 
CLI tool can be used
 |managedLedgerMaxLedgerRolloverTimeMinutes|   |240|
 |managedLedgerCursorMaxEntriesPerLedger|    |50000|
 |managedLedgerCursorRolloverTimeInSeconds|    |14400|
-| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering 
a rollover for a topic. | 2048 MB|
+| managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering 
a rollover for a topic. | 2048 |
 | managedLedgerMaxUnackedRangesToPersist | Maximum number of "acknowledgment 
holes" that are going to be persistently stored. When acknowledging out of 
order, a consumer leaves holes that are supposed to be quickly filled by 
acknowledging all the messages. The information of which messages are 
acknowledged is persisted by compressing in "ranges" of messages that were 
acknowledged. After the max number of ranges is reached, the information is 
only tracked in memory and messages are redeli [...]
 | managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of 
"acknowledgment holes" that can be stored in Zookeeper. If the number of 
unacknowledged message range is higher than this limit, the broker persists 
unacknowledged ranges into bookkeeper to avoid additional data overhead into 
Zookeeper. | 1000 |
 |autoSkipNonRecoverableData|    |false|
@@ -520,8 +610,9 @@ The [`pulsar-client`](reference-cli-tools.md#pulsar-client) 
CLI tool can be used
 | managedLedgerReadEntryTimeoutSeconds | Read entries timeout when the broker 
tries to read messages from BookKeeper. | 0 |
 | managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker 
tries to publish message to BookKeeper. | 0 |
 | managedLedgerNewEntriesCheckDelayInMillis | New entries check delay for the 
cursor under the managed ledger. If no new messages in the topic, the cursor 
tries to check again after the delay time. For consumption latency sensitive 
scenario, you can set the value to a smaller value or 0. Of course, a smaller 
value may degrade consumption throughput.|10 ms|
-| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger 
prometheus stats latency rollover seconds.  | 60s |
+| managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger 
prometheus stats latency rollover seconds.  | 60 |
 | managedLedgerTraceTaskExecution | Whether to trace managed ledger task 
execution time. | true |
 |loadBalancerEnabled|   |false|
 |loadBalancerPlacementStrategy|   |weightedRandomSelection|
 |loadBalancerReportUpdateThresholdPercentage|   |10|
@@ -548,7 +639,7 @@ The [`pulsar-client`](reference-cli-tools.md#pulsar-client) 
CLI tool can be used
 | loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
 | loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
 | loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
-| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold. Avoid bundle unload frequently. It only takes effect in the ThresholdSheddler strategy. | 10 MB |
+| loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold (MB), to avoid unloading bundles too frequently. It only takes effect in the ThresholdShedder strategy. | 10 |
 |replicationMetricsEnabled|   |true|
 |replicationConnectionsPerBroker|   |16|
 |replicationProducerQueueSize|    |1000|
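As a minimal sketch, the shedding-related load balancer weights above might be tuned in `broker.conf` like this (weighting CPU twice as heavily is an illustrative choice, not a default):

```conf
# Resource weights for the threshold-based shedding strategy
loadBalancerCPUResourceWeight=2.0
loadBalancerMemoryResourceWeight=1.0
loadBalancerDirectMemoryResourceWeight=1.0
loadBalancerBundleUnloadMinThroughputThreshold=10
```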
@@ -605,8 +696,15 @@ The [Pulsar 
proxy](concepts-architecture-overview.md#pulsar-proxy) can be config
 | brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker 
cluster | |
 | functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | |
 | functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | |
+|brokerServiceURL|If service discovery is disabled, this url should point to 
the discovery service provider.|N/A|
+|brokerServiceURLTLS|If service discovery is disabled, this url should point 
to the discovery service provider.|N/A|
+|brokerWebServiceURL|This setting is unnecessary if `zookeeperServers` is specified.|N/A|
+|brokerWebServiceURLTLS|This setting is unnecessary if `zookeeperServers` is specified.|N/A|
+|functionWorkerWebServiceURL|If function workers are set up in a separate cluster, configure this setting to point to the function worker cluster.|N/A|
+|functionWorkerWebServiceURLTLS|If function workers are set up in a separate cluster, configure this setting to point to the function worker cluster.|N/A|
 |zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
 |zooKeeperCacheExpirySeconds|ZooKeeper cache expiry time in seconds.|300|
+|advertisedAddress|Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.|N/A|
 |servicePort| The port to use for server binary Protobuf requests |6650|
 |servicePortTls|  The port to use to serve binary Protobuf TLS requests  |6651|
 |statusFilePath|  Path for the file used to determine the rotation status for 
the proxy instance when responding to service discovery health checks ||
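A minimal `proxy.conf` sketch of the discovery settings above, pointing the proxy directly at a broker when ZooKeeper-based discovery is not used (hostnames are placeholders):

```conf
brokerServiceURL=pulsar://broker1.example.com:6650
brokerWebServiceURL=http://broker1.example.com:8080
advertisedAddress=proxy.example.com
servicePort=6650
```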
@@ -625,9 +723,9 @@ The [Pulsar 
proxy](concepts-architecture-overview.md#pulsar-proxy) can be config
 |forwardAuthorizationCredentials| Whether client authorization credentials are forwarded to the broker for re-authorization. Authentication must be enabled via authenticationEnabled=true for this to take effect.  |false|
 |maxConcurrentInboundConnections| Max concurrent inbound connections. The 
proxy will reject requests beyond that. |10000|
 |maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy 
will error out requests beyond that. |50000|
-|tlsEnabledInProxy| Whether TLS is enabled for the proxy  |false|
-|tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar 
brokers |false|
-| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in 
seconds. If the value is set 0, check TLS certificate every new connection. | 
300s |
+|tlsEnabledInProxy| Deprecated - use `servicePortTls` and `webServicePortTls` 
instead. |false|
+|tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar 
brokers. |false|
+| tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in 
seconds. If the value is set 0, check TLS certificate every new connection. | 
300 |
 |tlsCertificateFilePath|  Path for the TLS certificate file ||
 |tlsKeyFilePath|  Path for the TLS private key file ||
 |tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file ||
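The proxy TLS settings above read together as a `proxy.conf` sketch (all file paths are placeholders):

```conf
# TLS towards clients (servicePortTls) and towards brokers
servicePortTls=6651
tlsEnabledWithBroker=true
tlsCertificateFilePath=/path/to/proxy.cert.pem
tlsKeyFilePath=/path/to/proxy.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem
```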
@@ -657,8 +755,11 @@ ZooKeeper handles a broad range of essential 
configuration- and coordination-rel
 |syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is 
allowed to sync with other ZooKeeper servers. The tick time is set in 
milliseconds using the tickTime parameter.  |5|
 |dataDir| The location where ZooKeeper will store in-memory database snapshots 
as well as the transaction log of updates to the database. |data/zookeeper|
 |clientPort|  The port on which the ZooKeeper server will listen for 
connections. |2181|
+|admin.enableServer|Whether the ZooKeeper admin server is enabled.|true|
+|admin.serverPort|The port at which the admin listens.|9990|
 |autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many 
recent snapshots of the database stored in dataDir to retain within the time 
interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
 |autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper 
database purge task is triggered. Setting to a non-zero number will enable auto 
purge; setting to 0 will disable. Read this guide before enabling auto purge. 
|1|
+|forceSync|Requires updates to be synced to media of the transaction log 
before finishing processing the update. If this option is set to 'no', 
ZooKeeper will not require updates to be synced to the media. WARNING: it's not 
recommended to run a production ZK cluster with `forceSync` disabled.|yes|
 |maxClientCnxns|  The maximum number of client connections. Increase this if 
you need to handle more ZooKeeper clients. |60|
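The ZooKeeper settings above can be sketched as a `zoo.cfg` fragment; the values shown mirror the documented defaults:

```conf
# zoo.cfg sketch: client port, admin server, auto purge, durable sync
clientPort=2181
admin.enableServer=true
admin.serverPort=9990
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
forceSync=yes
```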
 
 

Reply via email to