Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/7182#issuecomment-118451750
I ran a simple benchmark as follows:
import org.apache.hadoop.fs.Path

// basePath is the output directory (a String path) defined beforehand.
// Create 1002 Parquet files with a very simple schema; the last partition
// uses a different column name so that schema merging is actually exercised.
(0 to 1000).foreach { idx =>
  sqlContext.range(0, 10).toDF("a").write.parquet(new Path(basePath, s"foo=$idx").toString)
}
sqlContext.range(0, 10).toDF("b").write.parquet(new Path(basePath, "foo=1001").toString)

// Load the Parquet files and merge their schemas.
sqlContext.read.parquet(basePath)
before: 0.953615048s
after: 0.847720209s
So it shows roughly an 11% relative improvement.
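(For reference, that figure follows from the two timings above: (0.953615048 - 0.847720209) / 0.953615048 ≈ 0.111, i.e. about 11%.)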
Since the benchmark only uses a very simple schema and a limited number of files, this PR can be expected to bring a much larger improvement in real use cases.
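As a side note, schema merging can also be requested explicitly at read time, which may help when it is not enabled by default in a given Spark version. A minimal sketch, assuming the same basePath as above:

// Explicitly ask the Parquet reader to merge schemas across all part files.
sqlContext.read.option("mergeSchema", "true").parquet(basePath)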