priyankaku commented on code in PR #19449:
URL: https://github.com/apache/kafka/pull/19449#discussion_r2262072880


##########
connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java:
##########
@@ -340,13 +351,16 @@ public SchemaAndValue toConnectData(String topic, byte[] value) {
             throw new DataException("Converting byte[] to Kafka Connect data failed due to serialization error: ", e);
         }
 
-        if (config.schemasEnabled() && (!jsonValue.isObject() || jsonValue.size() != 2 || !jsonValue.has(JsonSchema.ENVELOPE_SCHEMA_FIELD_NAME) || !jsonValue.has(JsonSchema.ENVELOPE_PAYLOAD_FIELD_NAME)))
-            throw new DataException("JsonConverter with schemas.enable requires \"schema\" and \"payload\" fields and may not contain additional fields." +
+        if (config.schemasEnabled()) {
+            if (schema != null) {

Review Comment:
   > Users need to set `schemas.enable=false` in the source connector and then set `schemas.enable=true` along with `schema.content=xxx` in the sink connector to avoid embedding the schema in every message.
   
   Is this understanding correct? If so, the `schemas.enable` setting might be confusing, as the same flag has completely different meanings for the source and sink connectors.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
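The setup the reviewer is questioning could be sketched as follows. This is a hypothetical config fragment, not taken from the PR: it assumes the standard `value.converter.*` prefix convention, and the inline schema value is an invented placeholder for the `schema.content=xxx` the reviewer mentions.

```properties
# Source connector: emit plain JSON payloads without the schema/payload envelope
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

# Sink connector: schema handling stays on, but the schema is supplied
# out of band via schema.content instead of being embedded per message.
# The schema shown here is a made-up example.
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
value.converter.schema.content={"type":"struct","fields":[{"field":"name","type":"string"}]}
```

This illustrates the asymmetry the reviewer points out: on the source side `schemas.enable=false` suppresses the envelope, while on the sink side `schemas.enable=true` no longer implies the envelope is present in the data.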