IIRC, the reason we did it is that `SQLConf` was in the SQL core module, so we
had to declare these methods in `CatalystConf` (in the catalyst module) and
have `SQLConf` implement `CatalystConf`.

That problem is gone now that `SQLConf` has moved to the catalyst module, so I
think we can remove these methods.
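
For example (a sketch; the call site below is hypothetical, but `getConf` and
the `CSV_PARSER_COLUMN_PRUNING` entry are the ones from the quoted example),
callers would read the config entry directly instead of going through the
shortcut method:

    import org.apache.spark.sql.internal.SQLConf

    // Hypothetical call site, e.g. inside the CSV data source:
    def shouldPruneColumns(conf: SQLConf): Boolean = {
      // before: conf.csvColumnPruning
      // after: read the ConfigEntry declared in the SQLConf object directly
      conf.getConf(SQLConf.CSV_PARSER_COLUMN_PRUNING)
    }

The config entries are already type-safe, so reading them through `getConf`
loses nothing; the shortcut methods only save a few characters at each call
site.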

On Fri, Dec 14, 2018 at 3:45 PM Reynold Xin <r...@databricks.com> wrote:

> In SQLConf, for each config option, we declare them in two places:
>
> First in the SQLConf object, e.g.:
>
> val CSV_PARSER_COLUMN_PRUNING = buildConf("spark.sql.csv.parser.columnPruning.enabled")
>   .internal()
>   .doc("If it is set to true, column names of the requested schema are passed to CSV parser. " +
>     "Other column values can be ignored during parsing even if they are malformed.")
>   .booleanConf
>   .createWithDefault(true)
>
>
> Second in SQLConf class:
>
> def csvColumnPruning: Boolean = getConf(SQLConf.CSV_PARSER_COLUMN_PRUNING)
>
>
>
> As the person who introduced both, I'm now thinking we should remove
> almost all of the latter, unless it is used more than 5 times. The vast
> majority of config options are read in only one place, so the functions are
> pretty redundant ...
