mjsax commented on code in PR #20910:
URL: https://github.com/apache/kafka/pull/20910#discussion_r2536842190


##########
streams/src/main/java/org/apache/kafka/streams/kstream/internals/KTableImpl.java:
##########
@@ -497,6 +522,218 @@ private <VR> KTable<K, VR> doTransformValues(final ValueTransformerWithKeySuppli
             builder);
     }
 
+    private <VR> FixedKeyProcessorSupplier<? super K, ? super V, ? extends VR> createValueTransformerWithKeySupplierToFixedProcessorSupplierAdaptor(
+        final ValueTransformerWithKeySupplier<? super K, ? super V, ? extends VR> transformerSupplier
+    ) {
+        return new FixedKeyProcessorSupplier<>() {
+
+            @Override
+            public Set<StoreBuilder<?>> stores() {
+                return transformerSupplier.stores();
+            }
+
+            @Override
+            public FixedKeyProcessor<K, V, VR> get() {
+                final ValueTransformerWithKey<? super K, ? super V, ? extends VR> valueTransformerWithKey = transformerSupplier.get();
+
+                return new FixedKeyProcessor<>() {
+                    private final AtomicReference<Long> timestamp = new AtomicReference<>();
+                    private final AtomicReference<Headers> headers = new AtomicReference<>();

Review Comment:
   This is a bit of a workaround -- in the new PAPI, we get the timestamp and headers as part of the input `Record`. To be backward compatible with the old "Transformers", we need to ensure that both the timestamp and headers are accessible via the "context" object.
   
   Not sure if there is a better way to do this?
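   As a side note, the stashing pattern under discussion can be illustrated with a minimal, self-contained sketch. All types below (`SimpleRecord`, `LegacyTransformer`, `Adaptor`) are simplified stand-ins invented for illustration, not the actual Kafka Streams API: the adaptor copies the record's metadata into `AtomicReference` fields before delegating, so a legacy transformer can read them through context-style accessors instead of from the record itself.
   
   ```java
   import java.util.concurrent.atomic.AtomicReference;
   
   public class AdaptorSketch {
       // Stand-in for the new PAPI Record, which carries timestamp and headers.
       record SimpleRecord(String key, String value, long timestamp, String headers) {}
   
       // Stand-in for an old-style transformer that reads timestamp/headers
       // from a "context" rather than from the record itself.
       interface LegacyTransformer {
           String transform(String key, String value);
       }
   
       // The adaptor stashes the current record's metadata before delegating,
       // so the legacy transformer can read it via context-style accessors.
       static class Adaptor {
           private final AtomicReference<Long> timestamp = new AtomicReference<>();
           private final AtomicReference<String> headers = new AtomicReference<>();
   
           long currentTimestamp() { return timestamp.get(); }
           String currentHeaders() { return headers.get(); }
   
           String process(SimpleRecord rec, LegacyTransformer t) {
               timestamp.set(rec.timestamp());  // stash before delegating
               headers.set(rec.headers());
               return t.transform(rec.key(), rec.value());
           }
       }
   
       public static void main(String[] args) {
           Adaptor adaptor = new Adaptor();
           // Legacy-style transformer pulling the timestamp from the "context".
           LegacyTransformer t = (k, v) -> v + "@" + adaptor.currentTimestamp();
           String out = adaptor.process(new SimpleRecord("k", "v", 42L, "h"), t);
           System.out.println(out); // prints "v@42"
       }
   }
   ```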



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
