tvalentyn commented on code in PR #29542:
URL: https://github.com/apache/beam/pull/29542#discussion_r1410011967


##########
sdks/python/apache_beam/ml/transforms/handlers.py:
##########
@@ -134,7 +135,9 @@ def process(self, element):
         hash_object.update(str(list(value)).encode())
       else:  # assume value is a primitive that can be turned into str
         hash_object.update(str(value).encode())
-    yield (hash_object.hexdigest(), element)
+    # add a unique suffix to the hash key to avoid collisions.
+    unique_suffix = uuid.uuid4().hex

Review Comment:
   > Thinking about this, we can just attach a unique id to each element.
   
   Not sure how accurate this is:
   https://stackoverflow.com/questions/72989272/python-generates-same-uuid-over-multiple-docker-containers
   but I was also thinking about how the random seed would be initialized in a Docker context.
   
   Also, uuid1 might not be thread-safe:
   https://github.com/ClickHouse/clickhouse-connect/issues/194
   
   I'd expect the combination of uuid1 + OS PID + uuid4 to be exceedingly unlikely to collide in a Beam/Dataflow context, even with the above considerations. We can and should detect collisions, though; we can fail the pipeline if one happens.
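   For illustration, here is a sketch of both ideas, assuming nothing about the PR's actual implementation: `make_unique_id` (a hypothetical helper name) combines the three entropy sources mentioned above, and `check_no_collisions` shows the fail-loudly behavior on plain Python data (in a real pipeline this check would sit after a grouping step):
   
   ```python
   import os
   import uuid
   
   
   def make_unique_id() -> str:
       # Hypothetical helper: combine uuid1 (time/MAC-based), the OS
       # process id, and uuid4 (random), so weak entropy in any single
       # source is still exceedingly unlikely to collide across workers.
       return f"{uuid.uuid1().hex}-{os.getpid()}-{uuid.uuid4().hex}"
   
   
   def check_no_collisions(keyed_elements):
       # Sketch of "detect collisions and fail the pipeline", outside
       # of Beam: raise if the same unique id was attached to more
       # than one element.
       seen = {}
       for key, element in keyed_elements:
           if key in seen:
               raise RuntimeError(f"Unique-id collision for key {key!r}")
           seen[key] = element
       return seen
   ```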
   
   A few other notes:
   
   1. Please add a known issue to CHANGES.md and file a GH issue that will be fixed by this PR.
   2. Let's rename ComputeAndAttachHashKey to ComputeAndAttachUniqueID.
   3. I wonder if performance and pipeline cost would improve if we could find a way to pass through columns that do not need to be processed to TFT, converting them to bytes if necessary, and avoid the shuffle step.
   
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
