Github user viirya commented on the issue:

    https://github.com/apache/spark/pull/20648
  
    >  Yup, +1 for starting this by disallowing, but to my knowledge R's 
read.csv allows the case where the length of tokens is shorter than its schema, 
putting nulls (or NA) into the missing fields, as a valid case.
    
    @HyukjinKwon If the length of tokens is longer than its schema, R's 
read.csv doesn't seem to raise an error either. Is that behavior also what we 
want?
    
    Spark's CSV reader just drops extra tokens under permissive mode.
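
    A minimal sketch of the two behaviors being compared (a hypothetical 
`parse_row` helper for illustration, not Spark's or R's actual implementation): 
pad short rows with nulls, as R's read.csv fills NA, and drop extra tokens, as 
Spark's CSV reader does under permissive mode.

    ```python
    def parse_row(tokens, schema_len):
        """Fit a list of CSV tokens to a schema of schema_len columns."""
        if len(tokens) < schema_len:
            # Short row: pad missing fields with None (R fills NA here).
            return tokens + [None] * (schema_len - len(tokens))
        # Long row: silently drop tokens beyond the schema
        # (Spark's permissive-mode behavior).
        return tokens[:schema_len]

    print(parse_row(["a", "b"], 3))            # ['a', 'b', None]
    print(parse_row(["a", "b", "c", "d"], 3))  # ['a', 'b', 'c']
    ```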

