GitHub user debugger87 opened a pull request:
https://github.com/apache/spark/pull/5089
[SPARK-5387] [SQL] parquet writer runs into OOM during writing when number
of rows is large
In some extreme cases, e.g. when one row has thousands of columns, the line
of code `val writer = format.getRecordWriter(hadoopContext)` causes Parquet
to throw an OutOfMemoryError and shuts down the Spark application. Wrapping
it in `try {...} catch {...}` can prevent that issue.
I'm developing a data processing system based on Spark SQL. There are some
very special tables derived from JSON files. Every row of those tables has
thousands of columns. If I save a DataFrame (or SchemaRDD) generated by
`SQLContext.jsonFile` as a Parquet file, an OutOfMemoryError is thrown by
parquet-mr and the Spark application is shut down. To handle that fatal
error, the code in the function `writeShard` needs to be wrapped in
`try {...} catch {...}`.
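The pattern the patch describes can be sketched as follows. This is a
simplified, self-contained illustration only: `getRecordWriter` here is a
stand-in that simulates the failing Parquet call, and `tryGetRecordWriter`
is a hypothetical helper, not part of the Spark or parquet-mr API.

```scala
// Sketch of guarding a writer allocation that may throw
// OutOfMemoryError for very wide rows, assuming a simulated
// factory in place of Parquet's real getRecordWriter.
object WriterGuard {
  // Stand-in for format.getRecordWriter(hadoopContext): fails
  // fatally when a row has too many columns.
  def getRecordWriter(columns: Int): String = {
    if (columns > 1000) throw new OutOfMemoryError("too many columns")
    "writer"
  }

  // Wrapping the call in try/catch turns the fatal error into a
  // recoverable failure for this task instead of killing the
  // whole application.
  def tryGetRecordWriter(columns: Int): Option[String] =
    try {
      Some(getRecordWriter(columns))
    } catch {
      case _: OutOfMemoryError => None
    }
}
```

Note that catching OutOfMemoryError is generally risky, since the JVM may
already be in an inconsistent state; the sketch only shows the shape of
the change the pull request proposes.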
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/debugger87/spark master
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/5089.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #5089
----
commit 41e56d871bca2e0559b4d8fec019024f617e6889
Author: debugger87 <[email protected]>
Date: 2015-03-19T12:01:51Z
[SPARK-5387] [SQL] parquet writer runs into OOM during writing when number
of rows is large
In some extreme cases, e.g. when one row has thousands of columns, the line
of code `val writer = format.getRecordWriter(hadoopContext)` causes Parquet
to throw an OutOfMemoryError and shuts down the Spark application. Wrapping
it in `try {...} catch {...}` can prevent that issue.
----