[ https://issues.apache.org/jira/browse/SPARK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yin Huai updated SPARK-10146:
-----------------------------
    Issue Type: Sub-task  (was: Improvement)
        Parent: SPARK-9932

> Have an easy way to set data source reader/writer specific confs
> ----------------------------------------------------------------
>
>                 Key: SPARK-10146
>                 URL: https://issues.apache.org/jira/browse/SPARK-10146
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Yin Huai
>            Priority: Critical
>
> Right now, it is hard to set data source reader/writer specific confs
> correctly (e.g. Parquet's row group size). Users need to set those confs in
> the Hadoop conf before starting the application, or through
> {{org.apache.spark.deploy.SparkHadoopUtil.get.conf}} at runtime. It would be
> great if we had an easy way to set those confs.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
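The runtime workaround mentioned in the description can be sketched as follows. This is a minimal illustration, assuming a Spark 1.x application with Spark on the classpath; `parquet.block.size` is Parquet's Hadoop key for the row group size, and the 128 MB value is an arbitrary example:

```scala
import org.apache.spark.deploy.SparkHadoopUtil

// Workaround described in the issue: mutate the shared Hadoop Configuration
// at runtime so that data source readers/writers pick up the setting.
// "parquet.block.size" controls Parquet's row group size, in bytes.
SparkHadoopUtil.get.conf.set("parquet.block.size", (128 * 1024 * 1024).toString)
```

The pre-launch alternative the description alludes to is passing the setting through Spark's `spark.hadoop.*` prefix when submitting the application, e.g. `--conf spark.hadoop.parquet.block.size=134217728`, so it lands in the Hadoop conf before the job starts.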