HyukjinKwon commented on code in PR #37233:
URL: https://github.com/apache/spark/pull/37233#discussion_r929810416


##########
python/pyspark/sql/dataframe.py:
##########
@@ -1422,6 +1422,61 @@ def colRegex(self, colName: str) -> Column:
         jc = self._jdf.colRegex(colName)
         return Column(jc)
 
+    def asSchema(self, schema: StructType) -> "DataFrame":
+        """
+        Returns a new :class:`DataFrame` where each row is reconciled to match
+        the specified schema.
+
+        Spark will:
+
+        1. Reorder columns and/or inner fields by name to match the specified
+        schema.
+
+        2. Project away columns and/or inner fields that are not needed by the
+        specified schema. Missing columns and/or inner fields (present in the
+        specified schema but not in the input DataFrame) lead to failures.
+
+        3. Cast the columns and/or inner fields to match the data types in the
+        specified schema, if the types are compatible, e.g., numeric to numeric
+        (error if overflows), but not string to int.
+
+        4. Carry over the metadata from the specified schema, while the columns
+        and/or inner fields still keep their own metadata if not overwritten by
+        the specified schema.
+
+        5. Fail if the nullability is not compatible. For example, the column
+        and/or inner field is nullable but the specified schema requires it to
+        be non-nullable.
+
+        .. versionadded:: 3.4.0
+
+        Parameters
+        ----------
+        schema : :class:`StructType`
+            Specified schema.
+
+        Examples
+        --------
+        >>> df = spark.createDataFrame([("a", 1)], ["i", "j"])
+        >>> df.schema
+        StructType([StructField('i', StringType(), True), StructField('j', LongType(), True)])
+        >>> schema = StructType([StructField("j", StringType()), StructField("i", StringType())])
+        >>> df2 = df.asSchema(schema)
+        >>> df2.schema
+        StructType([StructField('j', StringType(), True), StructField('i', StringType(), True)])
+        >>> df2.show()
+        +---+---+
+        |  j|  i|
+        +---+---+
+        |  1|  a|
+        +---+---+
+        """
+        assert schema is not None
+        sc = self.sparkSession._sc
+        assert sc is not None and sc._jvm is not None
+        _struct_type = getattr(
+            getattr(sc._jvm.org.apache.spark.sql.types, "StructType$"), "MODULE$"
+        )
+        jschema = _struct_type.fromString(schema.json())

Review Comment:
   Yeah, probably using `df._jdf.sparkSession().parseDataType` would be more
consistent and simpler.
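The reconciliation rules in the docstring (match by name, project away extras, fail on missing fields, cast only between compatible types) can be sketched in plain Python. This is only an illustrative model of the semantics, not Spark's implementation; the `reconcile` helper and the `(name, type)` schema representation are invented for the example, standing in for Rows and `StructType`:

```python
def reconcile(row, target_schema):
    """Reorder/project `row` (a dict) to match `target_schema`,
    a list of (field_name, python_type) pairs."""
    out = {}
    # Rule 1: fields are matched by name, so the order in the
    # target schema determines the output order.
    for name, typ in target_schema:
        if name not in row:
            # Rule 2: fields required by the schema but absent from
            # the input lead to failures; input-only fields are simply
            # never copied into `out` (projected away).
            raise ValueError(f"missing field: {name}")
        value = row[name]
        if isinstance(value, (int, float)) and typ in (int, float):
            # Rule 3: numeric-to-numeric casts are allowed.
            out[name] = typ(value)
        elif isinstance(value, typ):
            out[name] = value
        else:
            # Rule 3: incompatible casts (e.g., string to int) fail.
            raise TypeError(
                f"cannot cast {name}: {type(value).__name__} to {typ.__name__}"
            )
    return out

# Columns come back reordered to (j, i), with j cast to float.
print(reconcile({"i": "a", "j": 1}, [("j", float), ("i", str)]))
# → {'j': 1.0, 'i': 'a'}
```

The nullability and metadata rules (4 and 5) are omitted here, since plain dicts carry neither.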



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

