viirya commented on pull request #31296:
URL: https://github.com/apache/spark/pull/31296#issuecomment-766554211


   > The problem is that other typed functions receive the Row simply as a
`Row` and can call `Row.getString` or the like, since they know which columns
the Row instance has. It doesn't need to be serialized into some other form.
Does that apply to the external process? No. Spark has to serialize the Row
instance to send it to the external process, and the serialized form of the
Row instance is "unknown" to end users unless they hand-craft a serializer
using `Row.getString` and so on. That's why I said the default serializer
doesn't work with an untyped Dataset.
   
   This is why we need a custom function, `printRDDElement`. For complex types
the default string representation isn't reliable, so a custom function is used
to produce exactly the output the external process needs.
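   To illustrate the role of such a hook outside of Spark: the sketch below
mimics the idea behind `printRDDElement` in plain Python. The `pipe` and `fmt`
names are illustrative (not Spark APIs), and `cat` stands in for the external
process; the point is that the caller, not a default serializer, decides the
exact line format each element is written in.

   ```python
   import subprocess

   def pipe(records, command, print_element):
       """Send records to an external process, letting print_element
       decide how each record is rendered as input lines (this mirrors
       the role of Spark's printRDDElement hook)."""
       lines = []
       for rec in records:
           # The custom serializer may emit zero or more lines per element.
           print_element(rec, lines.append)
       proc = subprocess.run(command, input="\n".join(lines) + "\n",
                             capture_output=True, text=True, check=True)
       return proc.stdout.splitlines()

   # Custom serializer for a complex element type: tab-separate the name
   # from a comma-joined list of scores, rather than relying on str().
   def fmt(rec, emit):
       name, scores = rec
       emit(f"{name}\t{','.join(map(str, scores))}")

   out = pipe([("alice", [1, 2]), ("bob", [3])], ["cat"], fmt)
   # out == ["alice\t1,2", "bob\t3"]
   ```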


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


