cmccabe commented on code in PR #12642:
URL: https://github.com/apache/kafka/pull/12642#discussion_r971203656


##########
docs/ops.html:
##########
@@ -3180,6 +3180,119 @@ <h4 class="anchor-heading"><a id="zkops" 
class="anchor-link"></a><a href="#zkops
     <li>Don't overbuild the cluster: large clusters, especially in a write 
heavy usage pattern, means a lot of intracluster communication (quorums on the 
writes and subsequent cluster member updates), but don't underbuild it (and 
risk swamping the cluster). Having more servers adds to your read capacity.</li>
   </ul>
   Overall, we try to keep the ZooKeeper system as small as will handle the 
load (plus standard growth capacity planning) and as simple as possible. We try 
not to do anything fancy with the configuration or application layout as 
compared to the official release as well as keep it as self contained as 
possible. For these reasons, we tend to skip the OS packaged versions, since it 
has a tendency to try to put things in the OS standard hierarchy, which can be 
'messy', for want of a better way to word it.
+
+  <h3 class="anchor-heading"><a id="kraft" class="anchor-link"></a><a 
href="#kraft">6.10 KRaft</a></h3>
+
+  <h4 class="anchor-heading"><a id="kraft_config" class="anchor-link"></a><a 
href="#kraft_config">Configuration</a></h4>
+
+  <h5 class="anchor-heading"><a id="kraft_role" class="anchor-link"></a><a 
href="#kraft_role">Process Roles</a></h5>
+
+  <p>In KRaft mode, each Kafka server can be configured as a controller, as a 
broker, or as both using the <code>process.roles</code> property. This property 
can have the following values (an example configuration follows the list):</p>
+
+  <ul>
+    <li>If <code>process.roles</code> is set to <code>broker</code>, the 
server acts as a broker.</li>
+    <li>If <code>process.roles</code> is set to <code>controller</code>, the 
server acts as a controller.</li>
+    <li>If <code>process.roles</code> is set to 
<code>broker,controller</code>, the server acts as a broker and a 
controller.</li>
+    <li>If <code>process.roles</code> is not set at all, it is assumed to be 
in ZooKeeper mode.</li>
+  </ul>
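+
+  <p>As an illustration only (the node id, listener names, and ports below
+  are placeholders), a combined-mode server might carry settings along these
+  lines in its <code>server.properties</code>:</p>
+
+  <pre><code class="language-text"># placeholder values; adjust for your environment
+process.roles=broker,controller
+node.id=1
+# one listener for client traffic and one for the controller quorum
+listeners=PLAINTEXT://:9092,CONTROLLER://:9093
+controller.listener.names=CONTROLLER</code></pre>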
+
+  <p>Nodes that act as both brokers and controllers are referred to as 
"combined" nodes. Combined nodes are simpler to operate for small use cases 
like a development environment. The key disadvantage is that the controller 
will be less isolated from the rest of the system. Combined mode is not 
recommended in critical deployment environments.</p>
+
+  <h5 class="anchor-heading"><a id="kraft_voter" class="anchor-link"></a><a 
href="#kraft_voter">Controllers</a></h5>
+
+  <p>In KRaft mode, only a small group of specially selected servers can act 
as controllers (unlike the ZooKeeper-based mode, where any server can become 
the Controller). These controller servers participate in the metadata quorum. 
Each controller server is either active or a hot standby for the current 
active controller server.</p>
+
+  <p>A Kafka cluster will typically select 3 or 5 servers for this role, 
depending on factors like cost and the number of concurrent failures your 
system should withstand without availability impact. A majority of the 
controllers must be alive in order to maintain availability. With 3 
controllers, the cluster can tolerate 1 controller failure; with 5 controllers, 
the cluster can tolerate 2 controller failures.</p>
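+
+  <p>As a sketch only (the ids, hostnames, and ports are placeholders), a
+  three-controller quorum could be described on every server in the cluster
+  with the <code>controller.quorum.voters</code> property. Each id in the
+  list must match the <code>node.id</code> of the corresponding controller
+  server:</p>
+
+  <pre><code class="language-text"># placeholder hosts; each entry is {id}@{host}:{port}
+controller.quorum.voters=1@controller1:9093,2@controller2:9093,3@controller3:9093</code></pre>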

Review Comment:
   How about
   > A Kafka admin will typically select...
   the Kafka cluster hasn't achieved sentience .... YET. :)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
