abhishekrb19 commented on code in PR #14281:
URL: https://github.com/apache/druid/pull/14281#discussion_r1195803064


##########
docs/development/extensions-contrib/kafka-emitter.md:
##########
@@ -36,20 +36,28 @@ to monitor the status of your Druid cluster with this extension.
 
 All the configuration parameters for the Kafka emitter are under `druid.emitter.kafka`.
 
-|property|description|required?|default|
-|--------|-----------|---------|-------|
-|`druid.emitter.kafka.bootstrap.servers`|Comma-separated Kafka broker. (`[hostname:port],[hostname:port]...`)|yes|none|
-|`druid.emitter.kafka.metric.topic`|Kafka topic name for emitter's target to emit service metric.|yes|none|
-|`druid.emitter.kafka.alert.topic`|Kafka topic name for emitter's target to emit alert.|yes|none|
-|`druid.emitter.kafka.request.topic`|Kafka topic name for emitter's target to emit request logs. If left empty then request logs will not be sent to the Kafka topic.|no|none|
-|`druid.emitter.kafka.producer.config`|JSON formatted configuration which user want to set additional properties to Kafka producer.|no|none|
-|`druid.emitter.kafka.clusterName`|Optional value to specify name of your druid cluster. It can help make groups in your monitoring environment. |no|none|
+|property|description|required?|default|
+|--------|-----------|---------|-------|
+|`druid.emitter.kafka.bootstrap.servers`|Comma-separated Kafka brokers (`[hostname:port],[hostname:port]...`).|yes|none|
+|`druid.emitter.kafka.event.types`|Comma-separated event types.<br/>Choices: alerts, metrics, requests, segmentMetadata.|no|["metrics", "alerts"]|
+|`druid.emitter.kafka.metric.topic`|Kafka topic name for emitting service metrics. This field cannot be left empty if `event.types` contains `metrics`.|no|none|
+|`druid.emitter.kafka.alert.topic`|Kafka topic name for emitting alerts. This field cannot be left empty if `event.types` contains `alerts`.|no|none|
+|`druid.emitter.kafka.request.topic`|Kafka topic name for emitting request logs. This field cannot be left empty if `event.types` contains `requests`.|no|none|
+|`druid.emitter.kafka.segmentMetadata.topic`|Kafka topic name for emitting segment-related metadata. This field cannot be left empty if `event.types` contains `segmentMetadata`.|no|none|
+|`druid.emitter.kafka.segmentMetadata.topic.format`|Format in which segment-related metadata is emitted.<br/>Choices: json, protobuf.<br/>If set to `protobuf`, segment metadata is emitted in `DruidSegmentEvent.proto` format.|no|json|
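For context, the properties in the table above might be combined in a Druid `runtime.properties` along these lines. This is only an illustrative sketch: the broker addresses and topic names are placeholders, and the JSON-array syntax for `event.types` is assumed from the table's default value, not confirmed by the PR.

```properties
# Hypothetical example configuration for the Kafka emitter (values are placeholders)
druid.emitter=kafka
druid.emitter.kafka.bootstrap.servers=broker1:9092,broker2:9092
druid.emitter.kafka.event.types=["metrics", "alerts", "segmentMetadata"]
druid.emitter.kafka.metric.topic=druid-metrics
druid.emitter.kafka.alert.topic=druid-alerts
druid.emitter.kafka.segmentMetadata.topic=druid-segment-metadata
druid.emitter.kafka.segmentMetadata.topic.format=json
```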

Review Comment:
   I'm curious about the need to add protobuf encoding (and thereby a dependency) here when we can get away with `json` or a byte format. The output format, including json, can always be made backwards compatible. And if downstream consumers want to consume the topics as proto-encoded messages, that should still be possible by unmarshaling the json/bytes into a proto struct as needed?
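The reviewer's point can be sketched as follows: even if the emitter only writes JSON, a downstream consumer can still decode each record into a typed structure of its own choosing. This is a minimal illustration, not the Druid or protobuf API; the `SegmentEvent` fields here are hypothetical stand-ins, since the actual `DruidSegmentEvent.proto` schema is not shown in this thread.

```python
import json
from dataclasses import dataclass

# Hypothetical segment-metadata event shape. The real DruidSegmentEvent
# fields may differ; this only illustrates that a JSON-encoded Kafka
# record can be unmarshaled into a typed struct on the consumer side.
@dataclass
class SegmentEvent:
    datasource: str
    interval: str
    version: str

def from_kafka_json(payload: bytes) -> SegmentEvent:
    """Decode a JSON-encoded Kafka record value into a typed event."""
    doc = json.loads(payload)
    return SegmentEvent(
        datasource=doc["datasource"],
        interval=doc["interval"],
        version=doc["version"],
    )

# A record value as it might arrive from the segment-metadata topic:
record = b'{"datasource": "wiki", "interval": "2023-05-01/2023-05-02", "version": "v1"}'
event = from_kafka_json(record)
print(event.datasource)  # wiki
```

A consumer that truly needs protobuf objects could apply the same pattern with generated proto classes, which is why the JSON output alone may be sufficient.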



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

