viirya commented on a change in pull request #24894: [SPARK-28058][DOC] Add a note to DROPMALFORMED mode of CSV for column pruning
URL: https://github.com/apache/spark/pull/24894#discussion_r294351908
 
 

 ##########
 File path: python/pyspark/sql/readwriter.py
 ##########
 @@ -441,7 +441,12 @@ def csv(self, path, schema=None, sep=None, encoding=None, quote=None, escape=Non
                   When it meets a record having fewer tokens than the length of the schema, \
                   sets ``null`` to extra fields. When the record has more tokens than the \
                   length of the schema, it drops extra tokens.
-                * ``DROPMALFORMED`` : ignores the whole corrupted records.
 +                * ``DROPMALFORMED`` : ignores whole corrupted records. Note that when CSV \
 +                  parser column pruning (``spark.sql.csv.parser.columnPruning.enabled``) is \
 +                  enabled (it is enabled by default), malformed columns can be ignored during \
 +                  parsing if they are pruned, so the corrupted records are not dropped. \
 +                  Disabling column pruning allows corrupted records to be dropped even when \
 +                  the malformed columns are not read.
 
 Review comment:
   Not sure which one you think is possibly a bug. Let's discuss it on the JIRA?
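
   The interaction described in the note above can be illustrated with a plain-Python sketch (this is not Spark's implementation; `parse_csv` and its arguments are hypothetical): with pruning on, the malformed value sits in a column that is never parsed, so the record survives; with pruning off, parsing the full row exposes the bad value and the record is dropped.

   ```python
   # Toy model of DROPMALFORMED under column pruning. `schema` maps column
   # name -> Python type used as the parser for that column.
   def parse_csv(rows, schema, requested_cols, column_pruning=True):
       names = list(schema)
       # With pruning, only the requested columns are ever parsed.
       cols = requested_cols if column_pruning else names
       out = []
       for row in rows:
           tokens = row.split(",")
           try:
               record = {c: schema[c](tokens[names.index(c)]) for c in cols}
           except (ValueError, IndexError):
               continue  # DROPMALFORMED: skip records that fail to parse
           out.append(record)
       return out

   rows = ["1,a", "oops,b"]          # "oops" is malformed for the int column
   schema = {"id": int, "name": str}

   # Pruned: only `name` is parsed, the malformed `id` goes unnoticed,
   # so both records are kept.
   pruned = parse_csv(rows, schema, ["name"], column_pruning=True)

   # Unpruned: every column is parsed, int("oops") fails, the record is dropped.
   full = parse_csv(rows, schema, ["name"], column_pruning=False)
   ```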
