Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/993#discussion_r15444048
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala ---
@@ -39,6 +39,18 @@ trait SQLConf {
private[spark] def numShufflePartitions: Int = get(SHUFFLE_PARTITIONS,
"200").toInt
/**
+ * When set to true, Spark SQL will use the Scala compiler at runtime to
generate custom bytecode
+ * that evaluates expressions found in queries. In general this custom
code runs much faster
+ * than interpreted evaluation, but there are significant start-up costs
due to compilation.
+ * As a result, codegen is only beneficial when queries run for a long
time, or when the same
+ * expressions are used multiple times.
+ *
+ * Defaults to false as this feature is currently experimental.
+ */
+ private[spark] def codegenEnabled: Boolean =
+ get("spark.sql.codegen", "false").toBoolean
--- End diff --
I collected all Spark SQL configuration properties in [`object
SQLConf`](https://github.com/apache/spark/blob/81fcdd22c8ef52889ed51b3ec5c2747708505fc2/sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala#L94-L102)
in the JDBC Thrift server PR. We can put this one there too.
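For illustration, the pattern being suggested, centralizing configuration keys in a companion `object SQLConf` so the trait's accessors reference named constants rather than string literals, could be sketched roughly as below. This is a simplified, self-contained approximation, not the actual Spark source: the backing store, the helper names, and the omission of the `private[spark]` modifiers are all assumptions made so the snippet compiles on its own.

```scala
import scala.collection.mutable

// Hypothetical companion object collecting all configuration keys in one place,
// mirroring the structure proposed in the review comment.
object SQLConf {
  val SHUFFLE_PARTITIONS = "spark.sql.shuffle.partitions"
  val CODEGEN_ENABLED = "spark.sql.codegen"
}

trait SQLConf {
  import SQLConf._

  // Simplified in-memory settings map standing in for Spark's real config store.
  private val settings = mutable.HashMap.empty[String, String]

  def set(key: String, value: String): Unit = settings(key) = value
  def get(key: String, default: String): String = settings.getOrElse(key, default)

  // Accessors read through the named constants instead of inline string literals.
  // (Access modifiers like private[spark] are omitted here for brevity.)
  def numShufflePartitions: Int = get(SHUFFLE_PARTITIONS, "200").toInt
  def codegenEnabled: Boolean = get(CODEGEN_ENABLED, "false").toBoolean
}
```

Keeping every key in one object makes it easy to audit which properties exist and avoids typos drifting between duplicated string literals.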
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---