HeartSaVioR commented on pull request #31296:
URL: https://github.com/apache/spark/pull/31296#issuecomment-766562418


   OK, so you seem to agree that a default serializer doesn't work for the 
untyped case, and I think we also agree that a default serializer is 
problematic for a non-primitive type T in the typed case. These cases cover 
the majority; end users only benefit if their Dataset is `Dataset[String]`, 
`Dataset[Int]`, `Dataset[Long]`, etc. Even for those, the default serializer 
can be trivially implemented by end users via `_.toString`, and then they know 
exactly what they are doing when serializing the row.
   
   That said, is it still beneficial to provide a default serializer and leave 
end users confused when they don't know the details? A default must be 
reasonable, and I don't think the current default serializer is reasonable in 
the majority of cases. I think that is a non-trivial difference between 
RDD.pipe and Dataset.pipe.
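
   The point that a user-written serializer for the primitive cases is a 
one-liner can be sketched in plain Scala (this is an illustrative sketch, not 
the PR's actual API; `serialize` and the stand-in `Seq` are assumptions):

```scala
// Hedged sketch: what "end users can implement the default serializer
// themselves via `_.toString`" looks like. A Seq[Int] stands in for a
// Dataset[Int]; no Spark API is used here.
object ExplicitSerializer {
  // A user-chosen serializer is just a function T => String.
  def serialize[T](x: T): String = x.toString

  def main(args: Array[String]): Unit = {
    val rows  = Seq(1, 2, 3)              // stand-in for Dataset[Int]
    val lines = rows.map(serialize[Int])  // explicit, user-controlled serialization
    println(lines.mkString("\n"))
  }
}
```

   Because the user writes this line themselves, there is no surprise about 
what the external process receives, which is the argument against shipping an 
implicit default.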


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


