Github user viirya commented on the issue:
https://github.com/apache/spark/pull/21952
@dbtsai This is what I see when testing on Spark 2.3. Compared with the numbers above, there doesn't seem to be any significant difference, which matches your findings.
```scala
> "com.databricks.spark.avro - Spark 2.3"
scala> spark.sparkContext.parallelize(writeTimes.slice(50, 150)).toDF("writeTimes").describe("writeTimes").show()
+-------+-------------------+
|summary| writeTimes|
+-------+-------------------+
| count| 100|
| mean|0.21722999999999998|
| stddev|0.04375479309963559|
| min| 0.176|
| max| 0.481|
+-------+-------------------+
scala> spark.sparkContext.parallelize(readTimes.slice(50, 150)).toDF("readTimes").describe("readTimes").show()
+-------+-------------------+
|summary| readTimes|
+-------+-------------------+
| count| 100|
| mean|0.12025999999999999|
| stddev|0.04034638406438311|
| min| 0.072|
| max| 0.26|
+-------+-------------------+
```
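For reference, the `slice(50, 150)` above suggests 150 runs were recorded and the first 50 discarded as JVM/Spark warm-up. A minimal sketch of how such a `writeTimes` sequence could be collected (the `time` helper and the commented-out workload are assumptions, not the benchmark harness actually used in this PR):

```scala
object BenchSketch {
  // Hypothetical timing helper: runs a block once and returns elapsed seconds.
  def time(block: => Unit): Double = {
    val start = System.nanoTime()
    block
    (System.nanoTime() - start) / 1e9
  }

  def main(args: Array[String]): Unit = {
    // Record 150 iterations of the workload under test, e.g.
    //   df.write.format("com.databricks.spark.avro").save(path)   // assumed workload
    val writeTimes = (1 to 150).map { _ =>
      time {
        // workload goes here
      }
    }
    // Drop the first 50 warm-up iterations, keeping the 100 steady-state samples
    // that feed the describe() summaries above.
    val steady = writeTimes.slice(50, 150)
    println(steady.size)
  }
}
```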