[ https://issues.apache.org/jira/browse/SPARK-11319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007220#comment-15007220 ]
Daniel Jalova commented on SPARK-11319:
---------------------------------------

Seems that this is possible in the Scala API too.

> PySpark silently Accepts null values in non-nullable DataFrame fields.
> ----------------------------------------------------------------------
>
>            Key: SPARK-11319
>            URL: https://issues.apache.org/jira/browse/SPARK-11319
>        Project: Spark
>     Issue Type: Bug
>     Components: PySpark, SQL
>       Reporter: Kevin Cox
>
> Running the following code with a null value in a non-nullable column
> silently works. This makes the code incredibly hard to trust.
>
> {code}
> In [2]: from pyspark.sql.types import *
>
> In [3]: sqlContext.createDataFrame([(None,)], StructType([StructField("a", TimestampType(), False)])).collect()
> Out[3]: [Row(a=None)]
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
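For context, the complaint is that `nullable=False` in the schema is not enforced when rows are created. The sketch below is a minimal plain-Python illustration of the kind of check the reporter expected; the `StructField` stand-in and `validate_rows` helper are simplified assumptions for this example, not Spark's actual classes or API.

```python
# Hypothetical sketch: the null check that, per this report, Spark does not
# perform at createDataFrame time. StructField here is a simplified stand-in.
from dataclasses import dataclass
from typing import Any, List, Tuple


@dataclass
class StructField:
    name: str
    nullable: bool


def validate_rows(rows: List[Tuple[Any, ...]], schema: List[StructField]) -> None:
    """Raise ValueError if a None appears in a field marked non-nullable."""
    for i, row in enumerate(rows):
        for value, field in zip(row, schema):
            if value is None and not field.nullable:
                raise ValueError(
                    f"row {i}: null value in non-nullable field {field.name!r}"
                )


schema = [StructField("a", nullable=False)]
try:
    validate_rows([(None,)], schema)
except ValueError as e:
    print(e)  # rejected instead of silently accepted
```

With a check like this, the `[(None,)]` input from the report would fail fast rather than round-tripping to `Row(a=None)`.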