Hi team,

I'm a software developer working with Apache Spark.

Last week I encountered a strange issue that might be a bug.

I see different precision for the same BigDecimal value when calling map():
once against a DataFrame created as val df = sc.parallelize(seq).toDF(),
and once against a DataFrame created as val df =
sc.parallelize(seq).toDF().limit(2).
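
To make the report self-contained, here is a rough sketch of the kind of
code I am running. It is not the exact notebook code: the values, the
column access via getDecimal(0), and the Spark 2.x shell setup (with
spark.implicits._ in scope) are illustrative assumptions.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().getOrCreate()
    val sc = spark.sparkContext
    import spark.implicits._

    // Illustrative values; the real data is in the linked notebook.
    val seq = Seq(BigDecimal("1.20"), BigDecimal("3.45"), BigDecimal("6.70"))

    // Case 1: map() over the DataFrame as created.
    val df = sc.parallelize(seq).toDF()
    df.map(row => row.getDecimal(0).toString).show()

    // Case 2: map() over the same DataFrame after limit(2).
    val dfLimited = sc.parallelize(seq).toDF().limit(2)
    dfLimited.map(row => row.getDecimal(0).toString).show()

    // I would expect both cases to print the decimals with the same
    // precision/scale, but in my runs the two cases differ.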

For more details, I have created a small example, which can be found at the
following link:

https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/7958346027016861/2296698945593142/5693253843748751/latest.html

I hope the example is clear enough.
I look forward to your response.

Thank you for your time,
Irina Stan
