ted-jenks commented on PR #45408:
URL: https://github.com/apache/spark/pull/45408#issuecomment-1982836579

   @dongjoon-hyun 
   > It sounds like you have other systems to read Spark's data.
   Correct. The issue was that between 3.2 and 3.3 there was a behavior change in the base64 encoding used by Spark: previously the output was not chunked, but now it is. Chunked base64 cannot be read by non-MIME-compatible base64 decoders, so data written by Spark looks corrupt to systems that follow the plain base64 standard.
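
   To make the difference concrete, here is a minimal sketch (Scala REPL style, using `java.util.Base64` purely for illustration, not necessarily the codec Spark uses internally). A MIME-style encoder inserts a line break every 76 characters, and a strict decoder then refuses the result:

```scala
import java.util.Base64
import java.nio.charset.StandardCharsets.UTF_8

// Payload long enough that a MIME-style encoder inserts a line separator (every 76 chars).
val payload = ("spark base64 chunking example " * 5).getBytes(UTF_8)

val unchunked = Base64.getEncoder.encodeToString(payload)      // single line, no separators
val chunked   = Base64.getMimeEncoder.encodeToString(payload)  // "\r\n" inserted every 76 chars

println(unchunked.contains("\r\n"))  // false
println(chunked.contains("\r\n"))    // true

// A strict decoder (no MIME tolerance) rejects the chunked form outright.
try {
  Base64.getDecoder.decode(chunked)
} catch {
  case e: IllegalArgumentException => println(s"strict decoder failed: ${e.getMessage}")
}
```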
   
   I think the best path forward is to use MIME encoding/decoding without chunking, as this is the most fault-tolerant option: existing use cases will not break, and the pre-3.3 base64 behavior is preserved.
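
   As a quick sanity check on the fault-tolerance claim (again with `java.util.Base64`, only for illustration), a MIME decoder reads both the chunked and the unchunked forms back to the same bytes:

```scala
import java.util.Base64
import java.nio.charset.StandardCharsets.UTF_8

val bytes = ("fault tolerant decode example " * 5).getBytes(UTF_8)

// Decode both the unchunked (pre-3.3 style) and chunked (3.3+ style) output with a MIME decoder.
val fromUnchunked = Base64.getMimeDecoder.decode(Base64.getEncoder.encodeToString(bytes))
val fromChunked   = Base64.getMimeDecoder.decode(Base64.getMimeEncoder.encodeToString(bytes))

println(java.util.Arrays.equals(fromUnchunked, bytes))  // true
println(java.util.Arrays.equals(fromChunked, bytes))    // true
```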

