[
https://issues.apache.org/jira/browse/SPARK-14932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723620#comment-15723620
]
Josh Rosen commented on SPARK-14932:
------------------------------------
I think that there's a similar issue impacting the Scala / Java equivalent of
this API. Running
{code}
df.na.replace("*", Map[String, String]("NULL" -> null))
{code}
will produce the exception
{code}
java.lang.IllegalArgumentException: Unsupported value type java.lang.String (NULL).
	at org.apache.spark.sql.DataFrameNaFunctions.org$apache$spark$sql$DataFrameNaFunctions$$convertToDouble(DataFrameNaFunctions.scala:436)
	at org.apache.spark.sql.DataFrameNaFunctions$$anonfun$4.apply(DataFrameNaFunctions.scala:348)
	at org.apache.spark.sql.DataFrameNaFunctions$$anonfun$4.apply(DataFrameNaFunctions.scala:348)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
	at org.apache.spark.sql.DataFrameNaFunctions.replace0(DataFrameNaFunctions.scala:348)
	at org.apache.spark.sql.DataFrameNaFunctions.replace(DataFrameNaFunctions.scala:313)
{code}
The {{convertToDouble}} appearing in the stack trace is there because the pattern match at
https://github.com/apache/spark/blob/v2.0.2/sql/core/src/main/scala/org/apache/spark/sql/DataFrameNaFunctions.scala#L345
doesn't have a case to handle nulls, so null values fall through to the
{{convertToDouble}} case.
I bet this will be easy to fix: just add a {{case null =>}} at the start of the
pattern match, then make a change similar to what [~nchammas] is suggesting here
to fix things for Python users.
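A rough sketch of what that fix might look like (illustrative only, not the actual Spark source; the method names mirror {{DataFrameNaFunctions}} but the bodies are simplified):
{code}
```scala
// Hedged sketch of the replacement-value conversion in replace0, with the
// proposed `case null =>` added so null replacement values no longer fall
// through to convertToDouble and blow up.
def convertToDouble(v: Any): Double = v match {
  case d: Double => d
  case f: Float  => f.toDouble
  case l: Long   => l.toDouble
  case i: Int    => i.toDouble
  case v => throw new IllegalArgumentException(
    s"Unsupported value type ${v.getClass.getName} ($v).")
}

// With the fix, nulls are handled explicitly before the numeric fallback.
def convertReplacement(v: Any): Any = v match {
  case null       => null          // proposed fix: keep nulls as-is
  case s: String  => s
  case b: Boolean => b
  case other      => convertToDouble(other)
}
```
{code}
With that extra case, {{Map[String, String]("NULL" -> null)}} would pass through the conversion untouched instead of hitting {{convertToDouble}}.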
> Allow DataFrame.replace() to replace values with None
> -----------------------------------------------------
>
> Key: SPARK-14932
> URL: https://issues.apache.org/jira/browse/SPARK-14932
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Reporter: Nicholas Chammas
> Priority: Minor
>
> Current doc:
> http://spark.apache.org/docs/1.6.1/api/python/pyspark.sql.html#pyspark.sql.DataFrame.replace
> I would like to specify {{None}} as the value to substitute in. This is
> currently
> [disallowed|https://github.com/apache/spark/blob/9797cc20c0b8fb34659df11af8eccb9ed293c52c/python/pyspark/sql/dataframe.py#L1144-L1145].
> My use case is for replacing bad values with {{None}} so I can then ignore
> them with {{dropna()}}.
> For example, I have a dataset that incorrectly includes empty strings where
> there should be {{None}} values. I would like to replace the empty strings
> with {{None}} and then drop all null data with {{dropna()}}.
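For concreteness, the workflow the reporter describes looks like this at the plain-Scala-collections level (illustrative only, no Spark involved; it is the collection analogue of {{df.na.replace(...)}} followed by {{dropna()}}):
{code}
```scala
// Treat empty strings as missing (None), then drop the missing entries.
val raw = Seq("alice", "", "bob", "", "carol")

val cleaned = raw
  .map(s => if (s.isEmpty) None else Some(s)) // replace "" with None
  .flatten                                    // dropna(): discard Nones
```
{code}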