Github user rayortigas commented on the pull request:
https://github.com/apache/spark/pull/5713#issuecomment-102888578
@JoshRosen: As you saw from my comment on #6222, I think it looks good. As
for this PR, yeah, it should be re-implemented on top of your patch.
I think the conversion would still use the type hints given by
`toTypedRDD[T]`, so I guess `getConverterForType` and
`CatalystTypeConverter.toScala` could be overloaded, e.g.:
```scala
def toScala(universeType: universe.Type, @Nullable catalystValue: CatalystType): ScalaType
```
or given a default param, e.g.:
```scala
def toScala(
    @Nullable catalystValue: CatalystType,
    universeType: Option[universe.Type] = None
): ScalaType
```
And `BigDecimalConverter` would use the type hint to figure out whether to
create a Java `BigDecimal` or a Scala one.
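To make the idea concrete, here is a minimal sketch of how such a converter could branch on the type hint. The names and signatures are illustrative only, not Spark's actual `CatalystTypeConverter` API; the sketch assumes the Catalyst-side value is a `java.math.BigDecimal`:
```scala
import scala.reflect.runtime.universe

// Hypothetical sketch, not Spark's real converter: use the optional
// runtime type hint to decide which BigDecimal flavor to produce.
object BigDecimalConverterSketch {
  def toScala(
      catalystValue: java.math.BigDecimal,
      universeType: Option[universe.Type] = None
  ): Any = universeType match {
    case Some(t) if t =:= universe.typeOf[scala.math.BigDecimal] =>
      scala.math.BigDecimal(catalystValue) // caller's T asked for a Scala BigDecimal
    case _ =>
      catalystValue // no hint (or a Java hint): keep the Java BigDecimal
  }
}
```
With the default param, existing call sites that don't pass a hint keep their current behavior.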
In any case, looks doable and should be cleaner. If you like, I can update
this PR after you merge your patch.