Github user maropu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16928#discussion_r102665176
  
    --- Diff: python/pyspark/sql/readwriter.py ---
    @@ -193,8 +193,9 @@ def json(self, path, schema=None, primitivesAsString=None, prefersDecimal=None,
     
                     *  ``PERMISSIVE`` : sets other fields to ``null`` when it meets a corrupted \
                       record and puts the malformed string into a new field configured by \
    -                 ``columnNameOfCorruptRecord``. When a schema is set by user, it sets \
    -                 ``null`` for extra fields.
    +                 ``columnNameOfCorruptRecord``. An user-defined schema can include \
    +                 a string type field named ``columnNameOfCorruptRecord`` for corrupt records. \
    +                 When a schema is set by user, it sets ``null`` for extra fields.
    --- End diff --
    
    Ah..., I think it's a bit different. As @HyukjinKwon said above (https://github.com/apache/spark/pull/16928#discussion_r102645047), CSV parsing depends on the length of the parsed tokens: if a row has fewer tokens than the schema, the missing fields are filled with `null`, and if it has more, the extra tokens are dropped in permissive mode. On the other hand, in the JSON format, the fields of the required schema are mapped by `key`. When keys are missing in a JSON record, the corresponding fields are simply set to `null` in all three modes. cc: @HyukjinKwon
    e.g.)
    ```
    scala> import org.apache.spark.sql.types._
    scala> Seq("""{"a": "a", "b" : 1}""", """{"a": "a"}""").toDF().write.text("/Users/maropu/Desktop/data")
    scala> val dataSchema = StructType(StructField("a", StringType, true) :: StructField("b", IntegerType, true) :: Nil)
    scala> spark.read.schema(dataSchema).option("mode", "PERMISSIVE").json("/Users/maropu/Desktop/data").show()
    +---+----+
    |  a|   b|
    +---+----+
    |  a|   1|
    |  a|null|
    +---+----+

    scala> spark.read.schema(dataSchema).option("mode", "FAILFAST").json("/Users/maropu/Desktop/data").show()
    +---+----+
    |  a|   b|
    +---+----+
    |  a|   1|
    |  a|null|
    +---+----+

    scala> spark.read.schema(dataSchema).option("mode", "DROPMALFORMED").json("/Users/maropu/Desktop/data").show()
    +---+----+
    |  a|   b|
    +---+----+
    |  a|   1|
    |  a|null|
    +---+----+
    ```
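    For comparison, the CSV token-length behavior described above can be sketched in plain Python. This is a hypothetical helper illustrating the idea, not Spark's actual implementation:

    ```python
    # Hypothetical sketch (not Spark's actual code) of how permissive-mode CSV
    # parsing reconciles a row's token count with the schema length.
    def parse_row_permissive(tokens, num_schema_fields):
        if len(tokens) < num_schema_fields:
            # Fewer tokens than schema fields: fill the missing ones with None.
            return tokens + [None] * (num_schema_fields - len(tokens))
        # More tokens than schema fields: drop the extras.
        return tokens[:num_schema_fields]

    print(parse_row_permissive(["a"], 2))            # ['a', None]
    print(parse_row_permissive(["a", "1", "x"], 2))  # ['a', '1']
    ```

    JSON has no equivalent step because each field is looked up by its key, so a missing key maps to `null` regardless of the mode.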

