apoorvmittal10 commented on code in PR #751:
URL: https://github.com/apache/kafka-site/pull/751#discussion_r2554236253


##########
content/en/0102/implementation/api-design.md:
##########
@@ -36,22 +36,22 @@ The goal is to expose all the producer functionality 
through a single API to the
 `kafka.producer.Producer` provides the ability to batch multiple produce 
requests (`producer.type=async`), before serializing and dispatching them to 
the appropriate kafka broker partition. The size of the batch can be controlled 
by a few config parameters. As events enter a queue, they are buffered in a 
queue, until either `queue.time` or `batch.size` is reached. A background 
thread (`kafka.producer.async.ProducerSendThread`) dequeues the batch of data 
and lets the `kafka.producer.EventHandler` serialize and send the data to the 
appropriate kafka broker partition. A custom event handler can be plugged in 
through the `event.handler` config parameter. At various stages of this 
producer queue pipeline, it is helpful to be able to inject callbacks, either 
for plugging in custom logging/tracing code or custom monitoring logic. This is 
possible by implementing the `kafka.producer.async.CallbackHandler` interface 
and setting `callback.handler` config parameter to that class. 
 
   * handles the serialization of data through a user-specified `Encoder`: 
-    
-            interface Encoder<T> {
-        public Message toMessage(T data);
-        }
         
+        interface Encoder<T> {
+            public Message toMessage(T data);
+            }

Review Comment:
   This is how it looks in the `md` file; do we need to fix the indentation?
   
   <img width="1021" height="91" alt="Image" src="https://github.com/user-attachments/assets/b86ffb2f-ca6c-4ec8-a961-f6d792d4b3a6" />
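   A consistently indented version (assuming the 4-space code-block indent used under bullets elsewhere in this file) would look something like:

```markdown
  * handles the serialization of data through a user-specified `Encoder`:

        interface Encoder<T> {
            public Message toMessage(T data);
        }
```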



##########
content/en/0102/implementation/api-design.md:
##########
@@ -36,22 +36,22 @@ The goal is to expose all the producer functionality 
through a single API to the
 `kafka.producer.Producer` provides the ability to batch multiple produce 
requests (`producer.type=async`), before serializing and dispatching them to 
the appropriate kafka broker partition. The size of the batch can be controlled 
by a few config parameters. As events enter a queue, they are buffered in a 
queue, until either `queue.time` or `batch.size` is reached. A background 
thread (`kafka.producer.async.ProducerSendThread`) dequeues the batch of data 
and lets the `kafka.producer.EventHandler` serialize and send the data to the 
appropriate kafka broker partition. A custom event handler can be plugged in 
through the `event.handler` config parameter. At various stages of this 
producer queue pipeline, it is helpful to be able to inject callbacks, either 
for plugging in custom logging/tracing code or custom monitoring logic. This is 
possible by implementing the `kafka.producer.async.CallbackHandler` interface 
and setting `callback.handler` config parameter to that class. 
 
   * handles the serialization of data through a user-specified `Encoder`: 
-    
-            interface Encoder<T> {
-        public Message toMessage(T data);
-        }
         
+        interface Encoder<T> {
+            public Message toMessage(T data);
+            }

Review Comment:
   And this is how it renders on the server.
   
   <img width="1410" height="812" alt="Image" src="https://github.com/user-attachments/assets/ef17f5fb-58d0-4a5b-be76-4cbaaddcb83b" />
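   For context, a minimal, self-contained sketch of what implementing the interface from this snippet looks like. `Message`, `Encoder`, and `StringEncoder` here are simplified stand-ins for illustration, not the real `kafka.message`/`kafka.producer` classes:

```java
import java.nio.charset.StandardCharsets;

// Stand-in for kafka.message.Message: here it just wraps a byte payload.
class Message {
    final byte[] payload;
    Message(byte[] payload) { this.payload = payload; }
}

// Mirrors the interface shown in the doc: turn a T into a Message.
interface Encoder<T> {
    Message toMessage(T data);
}

// Hypothetical custom encoder of the kind a user would plug in.
class StringEncoder implements Encoder<String> {
    public Message toMessage(String data) {
        return new Message(data.getBytes(StandardCharsets.UTF_8));
    }
}

public class EncoderSketch {
    public static void main(String[] args) {
        Encoder<String> enc = new StringEncoder();
        // "hello" is 5 bytes in UTF-8.
        System.out.println(enc.toMessage("hello").payload.length);
    }
}
```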



##########
content/en/0102/operations/monitoring.md:
##########
@@ -12,148 +12,1502 @@ Kafka uses Yammer Metrics for metrics reporting in both 
the server and the clien
 
 The easiest way to see the available metrics is to fire up jconsole and point 
it at a running kafka client or server; this will allow browsing all metrics 
with JMX. 
 
-We do graphing and alerting on the following metrics:  Description | Mbean 
name | Normal value  
----|---|---  
-Message in rate | kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec | 
  
-Byte in rate | kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec |   
-Request rate | 
kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower}
 |   
-Byte out rate | kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec |   
-Log flush rate and time | 
kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs |   
-# of under replicated partitions (|ISR| < |all replicas|) | 
kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions | 0  
-Is controller active on broker | 
kafka.controller:type=KafkaController,name=ActiveControllerCount | only one 
broker in the cluster should have 1  
-Leader election rate | 
kafka.controller:type=ControllerStats,name=LeaderElectionRateAndTimeMs | 
non-zero when there are broker failures  
-Unclean leader election rate | 
kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec | 0  
-Partition counts | kafka.server:type=ReplicaManager,name=PartitionCount | 
mostly even across brokers  
-Leader replica counts | kafka.server:type=ReplicaManager,name=LeaderCount | 
mostly even across brokers  
-ISR shrink rate | kafka.server:type=ReplicaManager,name=IsrShrinksPerSec | If 
a broker goes down, ISR for some of the partitions will shrink. When that 
broker is up again, ISR will be expanded once the replicas are fully caught up. 
Other than that, the expected value for both ISR shrink rate and expansion rate 
is 0.   
-ISR expansion rate | kafka.server:type=ReplicaManager,name=IsrExpandsPerSec | 
See above  
-Max lag in messages btw follower and leader replicas | 
kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica | lag 
should be proportional to the maximum batch size of a produce request.  
-Lag in messages per follower replica | 
kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)
 | lag should be proportional to the maximum batch size of a produce request.  
-Requests waiting in the producer purgatory | 
kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce
 | non-zero if ack=-1 is used  
-Requests waiting in the fetch purgatory | 
kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Fetch
 | size depends on fetch.wait.max.ms in the consumer  
-Request total time | 
kafka.network:type=RequestMetrics,name=TotalTimeMs,request={Produce|FetchConsumer|FetchFollower}
 | broken into queue, local, remote and response send time  
-Time the request waits in the request queue | 
kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request={Produce|FetchConsumer|FetchFollower}
 |   
-Time the request is processed at the leader | 
kafka.network:type=RequestMetrics,name=LocalTimeMs,request={Produce|FetchConsumer|FetchFollower}
 |   
-Time the request waits for the follower | 
kafka.network:type=RequestMetrics,name=RemoteTimeMs,request={Produce|FetchConsumer|FetchFollower}
 | non-zero for produce requests when ack=-1  
-Time the request waits in the response queue | 
kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request={Produce|FetchConsumer|FetchFollower}
 |   
-Time to send the response | 
kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request={Produce|FetchConsumer|FetchFollower}
 |   
-Number of messages the consumer lags behind the producer by. Published by the 
consumer, not broker. |  _Old consumer:_ 
kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=([-.\w]+) _New 
consumer:_ 
kafka.consumer:type=consumer-fetch-manager-metrics,client-id={client-id} 
Attribute: records-lag-max |   
-The average fraction of time the network processors are idle | 
kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent | between 0 
and 1, ideally > 0.3  
-The average fraction of time the request handler threads are idle | 
kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent | 
between 0 and 1, ideally > 0.3  
-Quota metrics per (user, client-id), user or client-id | 
kafka.server:type={Produce|Fetch},user=([-.\w]+),client-id=([-.\w]+) | Two 
attributes. throttle-time indicates the amount of time in ms the client was 
throttled. Ideally = 0. byte-rate indicates the data produce/consume rate of 
the client in bytes/sec. For (user, client-id) quotas, both user and client-id 
are specified. If per-client-id quota is applied to the client, user is not 
specified. If per-user quota is applied, client-id is not specified.  
-  
+We do graphing and alerting on the following metrics:   
+<table>  
+<tr>  
+<th>
+

Review Comment:
   I have reviewed the file and it looks good when compared to https://kafka.apache.org/0102/documentation.html#monitoring. However, the table header looks a bit off to me in the new markdown changes, as it renders as another row rather than as the header itself.
   
   <img width="1093" height="632" alt="Image" src="https://github.com/user-attachments/assets/6ec309d9-4f32-4699-b641-a6fe77ad732b" />
   
   <img width="1068" height="700" alt="Image" src="https://github.com/user-attachments/assets/92de9856-7c2a-4d06-b813-a944d3066037" />
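   One guess (I have not tried it against this renderer): if the `<th>` row is not wrapped in `<thead>`, some renderers style it like an ordinary body row. A sketch of the structure, using the column names from the old pipe table:

```html
<table>
<thead>
<tr>
<th>Description</th>
<th>Mbean name</th>
<th>Normal value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Message in rate</td>
<td>kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec</td>
<td></td>
</tr>
</tbody>
</table>
```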



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
