maropu commented on a change in pull request #17406:
URL: https://github.com/apache/spark/pull/17406#discussion_r514655019



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/functions.scala
##########
@@ -3055,13 +3056,21 @@ object functions {
    * with the specified schema. Returns `null`, in the case of an unparseable string.
    *
    * @param e a string column containing JSON data.
-   * @param schema the schema to use when parsing the json string as a json string
+   * @param schema the schema to use when parsing the json string as a json string. In Spark 2.1,
+   *               the user-provided schema has to be in JSON format. Since Spark 2.2, the DDL
+   *               format is also supported for the schema.
    *
    * @group collection_funcs
    * @since 2.1.0
    */
-  def from_json(e: Column, schema: String, options: java.util.Map[String, String]): Column =
-    from_json(e, DataType.fromJson(schema), options)
+  def from_json(e: Column, schema: String, options: java.util.Map[String, String]): Column = {
+    val dataType = try {
+      DataType.fromJson(schema)
+    } catch {
+      case NonFatal(_) => StructType.fromDDL(schema)
+    }

Review comment:
       Thanks, @HyukjinKwon. Ah, I see, I had totally forgotten about it...
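
   The change in the diff above tries `DataType.fromJson` first and, on any non-fatal parse error, retries the same string as a DDL schema via `StructType.fromDDL`. A minimal, Spark-free sketch of that fallback pattern (here `parseJsonSchema` and `parseDdlSchema` are hypothetical stand-ins for the real Spark parsers, not Spark APIs):

```scala
import scala.util.control.NonFatal

object SchemaFallbackSketch {
  // Hypothetical stand-in for DataType.fromJson: accepts only strings that
  // look like a JSON object (e.g. {"type":"struct",...}), otherwise throws.
  def parseJsonSchema(s: String): String =
    if (s.trim.startsWith("{")) s"json-schema:$s"
    else throw new IllegalArgumentException(s"not a JSON schema: $s")

  // Hypothetical stand-in for StructType.fromDDL (e.g. "a INT, b STRING").
  def parseDdlSchema(s: String): String = s"ddl-schema:$s"

  // The pattern from the diff: try the JSON format first; if that throws
  // any non-fatal exception, fall back to parsing the string as DDL.
  def parseSchema(s: String): String =
    try parseJsonSchema(s)
    catch { case NonFatal(_) => parseDdlSchema(s) }
}
```

   With this sketch, a JSON-looking input takes the first branch and anything else (such as a DDL string like `"a INT, b STRING"`) falls through to the DDL branch, which mirrors why the Scaladoc change documents both formats.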




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


