[
https://issues.apache.org/jira/browse/SPARK-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813946#comment-15813946
]
Hyukjin Kwon edited comment on SPARK-12264 at 2/23/17 3:09 PM:
---------------------------------------------------------------
(I just simply changed the title to
{quote}
add a typeTag or scalaTypeTag method to DataType.
{quote}
)
> Add a typeTag or scalaTypeTag method to DataType
> ------------------------------------------------
>
> Key: SPARK-12264
> URL: https://issues.apache.org/jira/browse/SPARK-12264
> Project: Spark
> Issue Type: New Feature
> Components: SQL
> Reporter: Andras Nemeth
> Priority: Minor
>
> We are writing code that takes generic DataFrames as inputs and further
> processes their contents with plain RDD operations (not SQL). We need some
> mechanism that tells us exactly which Scala types we will find inside a Row
> of a given DataFrame.
> The schema of the DataFrame contains this information in an abstract sense,
> but we need to map it to TypeTags, as that is what the rest of our system
> uses to identify what type of data each RDD contains - quite the natural
> choice in Scala.
> As far as I can tell, there is no good way to do this today. For now we have
> a hand-coded mapping, but that feels very fragile as Spark evolves. Is there
> a better way I'm missing? And if not, could we create one? Adding a typeTag
> or scalaTypeTag method to DataType, or at least to AtomicType, seems easy
> enough.
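The hand-coded mapping the reporter mentions could be sketched as below. This is only an illustration of the requested feature, not part of Spark's API: `scalaTypeTag` is a hypothetical helper name, the match arms cover just a handful of atomic types, and spark-sql is assumed to be on the classpath.

```scala
import scala.reflect.runtime.universe._
import org.apache.spark.sql.types._

object TypeTagMapping {
  // Hypothetical hand-coded mapping from a Spark SQL DataType to a
  // Scala TypeTag, as described in the issue. Only a few atomic types
  // are covered here; anything else falls through to an error.
  def scalaTypeTag(dt: DataType): TypeTag[_] = dt match {
    case BooleanType => typeTag[Boolean]
    case IntegerType => typeTag[Int]
    case LongType    => typeTag[Long]
    case FloatType   => typeTag[Float]
    case DoubleType  => typeTag[Double]
    case StringType  => typeTag[String]
    case BinaryType  => typeTag[Array[Byte]]
    case other =>
      throw new UnsupportedOperationException(
        s"No TypeTag mapping for DataType: $other")
  }
}
```

Given a DataFrame's schema, such a helper would let the caller recover a TypeTag per column, e.g. `df.schema.fields.map(f => TypeTagMapping.scalaTypeTag(f.dataType))` - which is exactly the kind of code that would be less fragile if DataType exposed it directly.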
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]