Github user thenatog commented on the issue:

    https://github.com/apache/nifi/pull/2980
  
    Tested out the HashAttribute processor. This all worked fine:
    - MD5 and creating a new attribute
    - MD5 and overwriting the attribute with the hashed value
    - SHA-256 and creating a new attribute
    - MD5 of Chinese characters using UTF-8 (matched a web hashing tool and the command-line md5 utility)
    
    UTF-16 is where I came unstuck:
    - MD5 of a simple string using "UTF-16" encoding gives a different hash from what I expect.
    - MD5 of a simple string using "UTF-16BE" or "UTF-16LE" encoding DOES match what I expect.
    
    Test input string in all cases: “hehe”
    
    NiFi CalculateAttributeHash:
    UTF-8:MD5    = 529ca8050a00180790cf88b63468826a
    UTF-16BE:MD5 = b0ed26b524e0b0606551d78e42b5b7bc
    UTF-16LE:MD5 = 2db0ecc27f7abd29ba95412feb3b5e07
    UTF-16:MD5   = 9b6dcd3887ebdb43d66fb4b3ef9c259b
    
    CyberChef (https://gchq.github.io/CyberChef/#recipe=Encode_text('UTF16BE%20(1201)')MD5()&input=aGVoZQ):
    UTF-8:MD5    = 529ca8050a00180790cf88b63468826a
    UTF-16BE:MD5 = b0ed26b524e0b0606551d78e42b5b7bc
    UTF-16LE:MD5 = 2db0ecc27f7abd29ba95412feb3b5e07
    
    I found that “UTF-16” is different because, when encoding, Java prepends a big-endian BOM. From the `Charset` Javadoc: _“When decoding, the UTF-16 charset interprets the byte-order mark at the beginning of the input stream to indicate the byte-order of the stream but defaults to big-endian if there is no byte-order mark; when encoding, it uses big-endian byte order and writes a big-endian byte-order mark.”_ As expected, prepending the BOM changes the bytes that get hashed, so the result differs from the “UTF-16BE” hash. Is this a problem, or is it simply expected behaviour? That is, should the user be expected to realize that “UTF-16” and “UTF-16BE” encodings will produce different hashes?
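
    A minimal sketch of the behaviour using only the JDK (no NiFi classes), for the same “hehe” input: `UTF_16` yields two extra leading bytes (the 0xFE 0xFF BOM) compared to `UTF_16BE`, so any digest over the two byte arrays will differ:

    ```java
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class Utf16BomDemo {
        // Hex-encode a digest for printing.
        static String md5Hex(byte[] data) throws Exception {
            byte[] digest = MessageDigest.getInstance("MD5").digest(data);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            byte[] utf16   = "hehe".getBytes(StandardCharsets.UTF_16);
            byte[] utf16be = "hehe".getBytes(StandardCharsets.UTF_16BE);

            // UTF_16 writes a big-endian BOM first: 10 bytes vs 8 for UTF_16BE.
            System.out.printf("UTF-16: %d bytes, UTF-16BE: %d bytes%n",
                    utf16.length, utf16be.length);
            System.out.printf("UTF-16 leading bytes: %02X %02X (the BOM)%n",
                    utf16[0], utf16[1]);

            // Different input bytes, therefore different MD5 hashes.
            System.out.println("UTF-16BE MD5: " + md5Hex(utf16be));
            System.out.println("UTF-16   MD5: " + md5Hex(utf16));
        }
    }
    ```
    
    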

