Copilot commented on code in PR #780:
URL: https://github.com/apache/skywalking-java/pull/780#discussion_r2531754358


##########
CHANGES.md:
##########
@@ -20,6 +20,9 @@ Release Notes.
 * Eliminate repeated code with HttpServletRequestWrapper in mvc-annotation-commons.
 * Add the jdk httpclient plugin.
 * Fix Gateway 2.0.x plugin not activated for spring-cloud-starter-gateway 2.0.0.RELEASE.
+* Enhance spring-kafka plugin to support spring-kafka 3.1.0+ and rename spring-kafka-2.x-plugin to spring-kafka-2.x-3.x-plugin
+* Upgrade kafka-clients version in optional-reporter-plugins to 3.9.1

Review Comment:
   For consistency with the other changelog entries, this line should end with a period.
   ```suggestion
   * Upgrade kafka-clients version in optional-reporter-plugins to 3.9.1.
   ```



##########
test/plugin/scenarios/spring-kafka-3.3.x-scenario/config/expectedData.yaml:
##########
@@ -0,0 +1,123 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+segmentItems:
+  - serviceName: spring-kafka-3.3.x-scenario
+    segmentSize: nq 0
+    segments:
+      - segmentId: not null
+        spans:
+          - operationName: Kafka/spring_test/Producer
+            parentSpanId: 0
+            spanId: 1
+            spanLayer: MQ
+            startTime: not null
+            endTime: not null
+            componentId: 40
+            isError: false
+            spanType: Exit
+            peer: kafka-server:9092
+            skipAnalysis: false
+            tags:
+              - {key: mq.broker, value: 'kafka-server:9092'}
+              - {key: mq.topic, value: spring_test}
+          - operationName: Kafka/spring_test/Producer
+            parentSpanId: 0
+            spanId: 2
+            spanLayer: MQ
+            startTime: not null
+            endTime: not null
+            componentId: 40
+            isError: false
+            spanType: Exit
+            peer: kafka-server:9092
+            skipAnalysis: false
+            tags:
+              - { key: mq.broker, value: 'kafka-server:9092' }
+              - { key: mq.topic, value: spring_test }
+          - operationName: GET:/case/spring-kafka-case
+            parentSpanId: -1
+            spanId: 0
+            spanLayer: Http
+            startTime: not null
+            endTime: not null
+            componentId: 14
+            isError: false
+            spanType: Entry
+            peer: ''
+            skipAnalysis: false
+            tags:
+              - {key: url, value: 'http://localhost:8080/spring-kafka-3.3.x-scenario/case/spring-kafka-case'}
+              - {key: http.method, value: GET}
+              - {key: http.status_code, value: '200'}
+      - segmentId: not null
+        spans:
+          - operationName: GET:/case/spring-kafka-consumer-ping
+            parentSpanId: -1
+            spanId: 0
+            spanLayer: Http
+            startTime: not null
+            endTime: not null
+            componentId: 14
+            isError: false
+            spanType: Entry
+            peer: ''
+            skipAnalysis: false
+            tags:
+              - {key: url, value: 'http://localhost:8080/spring-kafka-3.3.x-scenario/case/spring-kafka-consumer-ping'}
+              - {key: http.method, value: GET}
+              - {key: http.status_code, value: '200'}
+            refs:
+              - {parentEndpoint: 'Kafka/spring_test/Consumer/grop:spring_test', networkAddress: 'localhost:8080',

Review Comment:
   Spelling error: "grop:spring_test" should be "group:spring_test" to match 
the correct consumer group ID format.



##########
CHANGES.md:
##########
@@ -20,6 +20,9 @@ Release Notes.
 * Eliminate repeated code with HttpServletRequestWrapper in mvc-annotation-commons.
 * Add the jdk httpclient plugin.
 * Fix Gateway 2.0.x plugin not activated for spring-cloud-starter-gateway 2.0.0.RELEASE.
+* Enhance spring-kafka plugin to support spring-kafka 3.1.0+ and rename spring-kafka-2.x-plugin to spring-kafka-2.x-3.x-plugin

Review Comment:
   For consistency with the other changelog entries, this line should end with a period.
   ```suggestion
   * Enhance spring-kafka plugin to support spring-kafka 3.1.0+ and rename spring-kafka-2.x-plugin to spring-kafka-2.x-3.x-plugin.
   ```



##########
test/plugin/scenarios/spring-kafka-3.3.x-scenario/src/main/java/test/apache/skywalking/apm/testcase/spring/kafka/controller/CaseController.java:
##########
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package test.apache.skywalking.apm.testcase.spring.kafka.controller;
+
+import jakarta.annotation.PostConstruct;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.Response;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.serialization.Deserializer;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.springframework.beans.factory.annotation.Value;
+import org.springframework.context.annotation.PropertySource;
+import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
+import org.springframework.kafka.core.DefaultKafkaProducerFactory;
+import org.springframework.kafka.core.KafkaTemplate;
+import org.springframework.kafka.listener.AcknowledgingMessageListener;
+import org.springframework.kafka.listener.ContainerProperties;
+import org.springframework.kafka.listener.KafkaMessageListenerContainer;
+import org.springframework.kafka.support.Acknowledgment;
+import org.springframework.stereotype.Controller;
+import org.springframework.web.bind.annotation.RequestMapping;
+import org.springframework.web.bind.annotation.ResponseBody;
+
+import java.util.Arrays;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.CountDownLatch;
+
+@Controller
+@RequestMapping("/case")
+@PropertySource("classpath:application.properties")
+public class CaseController {
+
+    private static final String SUCCESS = "Success";
+
+    @Value("${bootstrap.servers:127.0.0.1:9092}")
+    private String bootstrapServers;
+    private String topicName;
+    private KafkaTemplate<String, String> kafkaTemplate;
+    private KafkaTemplate<String, String> kafkaTemplate2;
+
+    private CountDownLatch latch;
+    private String helloWorld = "helloWorld";
+
+    @PostConstruct
+    private void setUp() {
+        topicName = "spring_test";
+        setUpProvider();
+        setUpAnotherProvider();
+        setUpConsumer();
+    }
+
+    private void setUpProvider() {
+        Map<String, Object> props = new HashMap<>();
+        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
+        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
+        kafkaTemplate = new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
+        try {
+            kafkaTemplate.send(topicName, "key", "ping").get();
+            kafkaTemplate.flush();
+        } catch (Exception e) {
+            e.printStackTrace();
+        }
+    }
+
+    private void setUpAnotherProvider() {
+        Map<String, Object> props = new HashMap<>();
+        // use list type here
+        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, Arrays.asList(bootstrapServers.split(",")));
+        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
+        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
+        kafkaTemplate2 = new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
+        try {
+            kafkaTemplate2.send(topicName, "key", "ping").get();
+            kafkaTemplate2.flush();
+        } catch (Exception e) {
+            e.printStackTrace();
+        }
+    }
+
+    private void setUpConsumer() {
+        Map<String, Object> configs = new HashMap<>();
+        configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        configs.put(ConsumerConfig.GROUP_ID_CONFIG, "grop:" + topicName);

Review Comment:
   Spelling error: "grop" should be "group". This typo appears in the consumer 
group ID configuration.
   ```suggestion
           configs.put(ConsumerConfig.GROUP_ID_CONFIG, "group:" + topicName);
   ```



##########
test/plugin/scenarios/spring-kafka-3.3.x-scenario/config/expectedData.yaml:
##########
@@ -0,0 +1,123 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+segmentItems:
+  - serviceName: spring-kafka-3.3.x-scenario
+    segmentSize: nq 0
+    segments:
+      - segmentId: not null
+        spans:
+          - operationName: Kafka/spring_test/Producer
+            parentSpanId: 0
+            spanId: 1
+            spanLayer: MQ
+            startTime: not null
+            endTime: not null
+            componentId: 40
+            isError: false
+            spanType: Exit
+            peer: kafka-server:9092
+            skipAnalysis: false
+            tags:
+              - {key: mq.broker, value: 'kafka-server:9092'}
+              - {key: mq.topic, value: spring_test}
+          - operationName: Kafka/spring_test/Producer
+            parentSpanId: 0
+            spanId: 2
+            spanLayer: MQ
+            startTime: not null
+            endTime: not null
+            componentId: 40
+            isError: false
+            spanType: Exit
+            peer: kafka-server:9092
+            skipAnalysis: false
+            tags:
+              - { key: mq.broker, value: 'kafka-server:9092' }
+              - { key: mq.topic, value: spring_test }
+          - operationName: GET:/case/spring-kafka-case
+            parentSpanId: -1
+            spanId: 0
+            spanLayer: Http
+            startTime: not null
+            endTime: not null
+            componentId: 14
+            isError: false
+            spanType: Entry
+            peer: ''
+            skipAnalysis: false
+            tags:
+              - {key: url, value: 'http://localhost:8080/spring-kafka-3.3.x-scenario/case/spring-kafka-case'}
+              - {key: http.method, value: GET}
+              - {key: http.status_code, value: '200'}
+      - segmentId: not null
+        spans:
+          - operationName: GET:/case/spring-kafka-consumer-ping
+            parentSpanId: -1
+            spanId: 0
+            spanLayer: Http
+            startTime: not null
+            endTime: not null
+            componentId: 14
+            isError: false
+            spanType: Entry
+            peer: ''
+            skipAnalysis: false
+            tags:
+              - {key: url, value: 'http://localhost:8080/spring-kafka-3.3.x-scenario/case/spring-kafka-consumer-ping'}
+              - {key: http.method, value: GET}
+              - {key: http.status_code, value: '200'}
+            refs:
+              - {parentEndpoint: 'Kafka/spring_test/Consumer/grop:spring_test', networkAddress: 'localhost:8080',
+                 refType: CrossProcess, parentSpanId: 1, parentTraceSegmentId: not null,
+                 parentServiceInstance: not null, parentService: spring-kafka-3.3.x-scenario,
+                 traceId: not null}
+      - segmentId: not null
+        spans:
+          - operationName: /spring-kafka-3.3.x-scenario/case/spring-kafka-consumer-ping
+            parentSpanId: 0
+            spanId: 1
+            spanLayer: Http
+            startTime: not null
+            endTime: not null
+            componentId: 12
+            isError: false
+            spanType: Exit
+            peer: localhost:8080
+            skipAnalysis: false
+            tags:
+              - {key: http.method, value: GET}
+              - {key: url, value: 'http://localhost:8080/spring-kafka-3.3.x-scenario/case/spring-kafka-consumer-ping'}
+              - {key: http.status_code, value: '200'}
+          - operationName: Kafka/spring_test/Consumer/grop:spring_test

Review Comment:
   Spelling error: "grop:spring_test" should be "group:spring_test" to match 
the correct consumer group ID format.



##########
apm-sniffer/apm-sdk-plugin/spring-plugins/spring-kafka-2.x-3.x-plugin/src/main/java/org/apache/skywalking/apm/plugin/spring/kafka/ExtendedKafkaConsumerInterceptor.java:
##########
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.skywalking.apm.plugin.spring.kafka;
+
+import java.lang.reflect.Method;
+import java.nio.charset.StandardCharsets;
+import java.util.Iterator;
+import java.util.Set;
+import java.util.stream.Collectors;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.header.Header;
+import org.apache.skywalking.apm.agent.core.context.CarrierItem;
+import org.apache.skywalking.apm.agent.core.context.ContextCarrier;
+import org.apache.skywalking.apm.agent.core.context.ContextManager;
+import org.apache.skywalking.apm.agent.core.context.tag.Tags;
+import org.apache.skywalking.apm.agent.core.context.trace.AbstractSpan;
+import org.apache.skywalking.apm.agent.core.context.trace.SpanLayer;
+import org.apache.skywalking.apm.agent.core.plugin.interceptor.enhance.EnhancedInstance;
+import org.apache.skywalking.apm.agent.core.plugin.interceptor.enhance.InstanceMethodsAroundInterceptor;
+import org.apache.skywalking.apm.agent.core.plugin.interceptor.enhance.MethodInterceptResult;
+import org.apache.skywalking.apm.network.trace.component.ComponentsDefine;
+import org.apache.skywalking.apm.plugin.kafka.define.Constants;
+import org.apache.skywalking.apm.plugin.kafka.define.KafkaContext;
+
+public class ExtendedKafkaConsumerInterceptor implements InstanceMethodsAroundInterceptor {
+
+    private static final String OPERATE_NAME_PREFIX = "Kafka/";
+    private static final String CONSUMER_OPERATE_NAME = "/Consumer/";
+    private static final String UNKNOWN = "Unknown";
+
+    @Override
+    public void beforeMethod(EnhancedInstance objInst, Method method, Object[] allArguments, Class<?>[] argumentsTypes,
+                             MethodInterceptResult result) throws Throwable {
+        ExtendedConsumerEnhanceRequiredInfo requiredInfo = (ExtendedConsumerEnhanceRequiredInfo) objInst.getSkyWalkingDynamicField();
+        if (requiredInfo != null) {
+            requiredInfo.setStartTime(System.currentTimeMillis());
+        }
+    }
+
+    @Override
+    public Object afterMethod(EnhancedInstance objInst, Method method, Object[] allArguments, Class<?>[] argumentsTypes,
+                              Object ret) throws Throwable {
+        if (ret == null) {
+            return ret;
+        }
+
+        ConsumerRecords<?, ?> records = (ConsumerRecords<?, ?>) ret;
+
+        // Only create entry span when consumer received at least one message
+        if (records.count() > 0) {
+            createEntrySpan(objInst, records);
+        }
+        return ret;
+    }
+
+    @Override
+    public void handleMethodException(EnhancedInstance objInst, Method method, Object[] allArguments,
+                                      Class<?>[] argumentsTypes, Throwable t) {
+        if (ContextManager.isActive()) {
+            ContextManager.activeSpan().log(t);
+        }
+    }
+
+    private void createEntrySpan(EnhancedInstance objInst, ConsumerRecords<?, ?> records) {
+        KafkaContext context = (KafkaContext) ContextManager.getRuntimeContext().get(Constants.KAFKA_FLAG);
+        if (context != null) {
+            ContextManager.createEntrySpan(context.getOperationName(), null);
+            context.setNeedStop(true);
+        }
+
+        ExtendedConsumerEnhanceRequiredInfo requiredInfo = (ExtendedConsumerEnhanceRequiredInfo) objInst.getSkyWalkingDynamicField();
+
+        SpanInfo spanInfo = buildSpanInfo(requiredInfo, records);
+
+        String operationName = OPERATE_NAME_PREFIX + spanInfo.topic + CONSUMER_OPERATE_NAME + spanInfo.groupId;
+        AbstractSpan activeSpan = ContextManager.createEntrySpan(operationName, null);
+
+        if (requiredInfo != null) {
+            activeSpan.start(requiredInfo.getStartTime());
+        }
+
+        activeSpan.setComponent(ComponentsDefine.KAFKA_CONSUMER);
+        SpanLayer.asMQ(activeSpan);
+        Tags.MQ_BROKER.set(activeSpan, spanInfo.brokerServers);
+        Tags.MQ_TOPIC.set(activeSpan, spanInfo.topic);
+        activeSpan.setPeer(spanInfo.brokerServers);
+
+        extractContextCarrier(records);
+        ContextManager.stopSpan();

Review Comment:
   Potential span leak: Two entry spans are created (lines 85 and 94) but only 
one `ContextManager.stopSpan()` is called (line 107). This will leave one span 
active in the context, potentially causing memory leaks and incorrect trace 
hierarchy. Either add another `stopSpan()` call after line 107, or reconsider 
the span creation logic if only one entry span should be created.
   ```suggestion
           ContextManager.stopSpan();
           // Stop the first entry span if it was created
           if (context != null && context.isNeedStop()) {
               ContextManager.stopSpan();
               context.setNeedStop(false);
           }
   ```
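
   The imbalance can be sketched with a minimal stack model. The `SpanStack` class below is hypothetical (it is not the real SkyWalking `ContextManager` API); it only illustrates the invariant the interceptor violates: every `createEntrySpan` pushes an active span onto the thread's segment, so the number of `stopSpan` calls must match before the method returns.

   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;

   // Hypothetical model of span bookkeeping, for illustration only.
   class SpanStack {
       private final Deque<String> active = new ArrayDeque<>();

       void createEntrySpan(String operationName) {
           active.push(operationName); // each create pushes one active span
       }

       void stopSpan() {
           active.pop(); // each stop must pop exactly one
       }

       int activeCount() {
           return active.size();
       }
   }

   public class SpanLeakDemo {
       public static void main(String[] args) {
           SpanStack ctx = new SpanStack();
           boolean contextPresent = true; // mirrors the `context != null` branch

           if (contextPresent) {
               ctx.createEntrySpan("entry span from KafkaContext"); // first create
           }
           ctx.createEntrySpan("Kafka/topic/Consumer/group");        // second create
           ctx.stopSpan();                                           // but only one stop

           // One span is still active when the method exits; on the real agent it
           // would leak into whatever runs next on this consumer thread.
           System.out.println("active spans after exit: " + ctx.activeCount());
       }
   }
   ```

   With balanced stops (one per create, as the suggestion above proposes), `activeCount()` returns to zero before the interceptor returns.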



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to