[ https://issues.apache.org/jira/browse/SPARK-18541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15866393#comment-15866393 ]
Shea Parkes commented on SPARK-18541:
-------------------------------------
Thank you very much!
> Add pyspark.sql.Column.aliasWithMetadata to allow dynamic metadata management
> in pyspark SQL API
> ------------------------------------------------------------------------------------------------
>
> Key: SPARK-18541
> URL: https://issues.apache.org/jira/browse/SPARK-18541
> Project: Spark
> Issue Type: Improvement
> Components: PySpark, SQL
> Affects Versions: 2.0.2
> Environment: all
> Reporter: Shea Parkes
> Assignee: Shea Parkes
> Priority: Minor
> Labels: newbie
> Fix For: 2.2.0
>
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> In the Scala SQL API, you can pass new metadata when you alias a field.
> That functionality is not available in the Python API. Right now you have
> to go through {{SparkSession.createDataFrame}} just to manipulate the
> metadata of even a single column (see the workaround sketch below). I
> propose adding the following method to {{pyspark.sql.Column}}:
> {code}
> def aliasWithMetadata(self, name, metadata):
>     """
>     Make a new Column that has the provided alias and metadata.
>     Metadata will be serialized with json.dumps().
>     """
>     _context = pyspark.SparkContext._active_spark_context
>     _metadata_str = json.dumps(metadata)
>     _metadata_jvm = _context._jvm.org.apache.spark.sql.types.Metadata.fromJson(_metadata_str)
>     # 'as' is a Python keyword, so the JVM method has to be fetched with getattr
>     _new_java_column = getattr(self._jc, 'as')(name, _metadata_jvm)
>     return Column(_new_java_column)
> {code}
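> Once added, usage might look like this (a sketch; the column name and
> metadata keys are hypothetical):
> {code}
> from pyspark.sql import SparkSession
>
> spark = SparkSession.builder.getOrCreate()
> df = spark.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'label'])
>
> # Attach metadata to a single column without rebuilding the DataFrame
> df2 = df.select(df.id, df.label.aliasWithMetadata('label', {'comment': 'categorical code'}))
>
> # The metadata round-trips through the schema
> print(df2.schema.fields[1].metadata)  # {'comment': 'categorical code'}
> {code}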
> I can likely implement this myself if there is any interest in it; I just
> have to dust off my knowledge of doctest and the location of the Python
> tests.
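> For contrast, a rough sketch of the current workaround (hypothetical
> column name and metadata keys; it rebuilds the whole DataFrame just to
> touch one column's metadata):
> {code}
> from pyspark.sql.types import StructField, StructType
>
> # Rewrite the schema field-by-field, swapping in the new metadata
> new_fields = []
> for field in df.schema.fields:
>     if field.name == 'label':
>         field = StructField(field.name, field.dataType, field.nullable,
>                             {'comment': 'categorical code'})
>     new_fields.append(field)
>
> # Round-trip through createDataFrame to apply the modified schema
> df2 = spark.createDataFrame(df.rdd, StructType(new_fields))
> {code}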