Github user sun-rui commented on the pull request:
https://github.com/apache/spark/pull/8984#issuecomment-152465285
Per the Scala API doc for Column.cast(), the supported target types are: string,
boolean, byte, short, int, long, float, double, decimal, date, timestamp. That
is, complex types are not supported as target types. So coltypes<-() should
never cast a column of a complex type, regardless of the input (NA or not). If
the corresponding input is NA, coltypes<-() can silently skip the column; if it
is not NA, it should emit a warning.
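As a minimal Scala sketch of what the doc describes (assuming a local SparkSession; the data and column names are made up), Column.cast() is used with the primitive target types from that list:

```scala
// Sketch: Column.cast() targeting primitive types from the documented list.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(("1", "2015-10-30 12:00:00")).toDF("id", "ts")

// Casting to primitive target types such as int and timestamp:
val casted = df.select(
  col("id").cast("int"),
  col("ts").cast("timestamp")
)
casted.printSchema()
// Complex types (array, map, struct) are not in the documented list of cast
// targets, which is why coltypes<-() should leave such columns untouched.
```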
What I am concerned about is that coltypes<-() actually returns a new
DataFrame instead of changing the schema of the existing DataFrame in place
(which is not supported by Spark Core). Is this the desired behavior?
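For context on that point, a small sketch (again with hypothetical data) of how casting a column only ever produces a new DataFrame, while the original keeps its schema:

```scala
// Sketch: a cast yields a new DataFrame; the original schema is unchanged.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val original = Seq(("1", "a"), ("2", "b")).toDF("id", "label")
val recast   = original.withColumn("id", col("id").cast("int"))

original.printSchema()  // id: string -- unchanged
recast.printSchema()    // id: int    -- new DataFrame with the new schema
```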