Github user holdenk commented on the issue:

    https://github.com/apache/spark/pull/20280
  
    I'm worried that people might have two rows whose fields have different
    meanings but the same types, and their application would silently start
    producing garbage numbers. I think a lot of people go from RDDs of Rows to
    DataFrames in PySpark these days, so I'm a little nervous about this change
    even though I think it's a step in the right direction.
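
    Roughly the kind of silent mix-up I mean (a minimal sketch, not this PR's
    code; it assumes the pre-3.0 behaviour where Row(**kwargs) sorts its fields
    alphabetically, and the column names are made up):

    ```python
    from pyspark.sql import Row, SparkSession
    from pyspark.sql.types import LongType, StructField, StructType

    spark = SparkSession.builder.master("local[1]").appName("row-mixup").getOrCreate()

    # Both fields are longs, so nothing distinguishes them by type alone.
    # In Spark 2.x, Row(**kwargs) reorders its fields alphabetically, so
    # "clicks" ends up before "users" no matter how the Row was written.
    rdd = spark.sparkContext.parallelize([
        Row(users=1000, clicks=37),
        Row(users=2500, clicks=99),
    ])

    # An explicit schema whose column order differs from the Rows' field order.
    schema = StructType([
        StructField("users", LongType()),
        StructField("clicks", LongType()),
    ])

    # If Row fields are matched to the schema by position, the "clicks" values
    # land in the "users" column and vice versa; if they are matched by name,
    # they stay put. Either way there is no error, which is why flipping the
    # behaviour silently is scary.
    spark.createDataFrame(rdd, schema).show()
    ```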
    
    How about a config flag defaulting to off, and we switch it on by default in 3.0?
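
    On the user side that would presumably just be one more SQL conf, something
    like the following (the key name below is made up purely for illustration,
    it is not an actual Spark setting):

    ```python
    from pyspark.sql import SparkSession

    # Hypothetical flag name; defaults to the old behaviour until 3.0.
    spark = (SparkSession.builder
             .config("spark.sql.pyspark.newRowConversion.enabled", "false")
             .getOrCreate())
    ```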


---
