maropu commented on issue #27563: [SPARK-30812][SQL][CORE] Revise boolean config name to comply with new config naming policy
URL: https://github.com/apache/spark/pull/27563#issuecomment-586789734
 
 
   > Hi @dongjoon-hyun , I've checked again and attached all the related JIRAs.
   
   I've double-checked this. All the boolean SQL options listed below were generated by [running code](https://gist.github.com/maropu/3054150bf16aa10795d52b48c3eaa921) against v2.4.5 and master, and I've verified that every boolean config touched in this PR appears in the list:
   ```
   // All the boolean SQL options newly added in v3.0.0
   +--------------------------------------------------------------+
   |_c0                                                           |
   +--------------------------------------------------------------+
   |spark.sql.adaptive.forceApply                                 |
   |spark.sql.adaptive.optimizeSkewedJoin.enabled                 |
   |spark.sql.adaptive.shuffle.fetchShuffleBlocksInBatch.enabled  |
   |spark.sql.adaptive.shuffle.localShuffleReader.enabled         |
   |spark.sql.adaptive.shuffle.reducePostShufflePartitions.enabled|
   |spark.sql.analyzer.failAmbiguousSelfJoin.enabled              |
   |spark.sql.ansi.enabled                                        |
   |spark.sql.cbo.planStats.enabled                               |
   |spark.sql.codegen.aggregate.map.vectorized.enable             |
   |spark.sql.codegen.aggregate.splitAggregateFunc.enabled        |
   |spark.sql.csv.filterPushdown.enabled                          |
   |spark.sql.datetime.java8API.enabled                           |
   |spark.sql.defaultUrlStreamHandlerFactory.enabled              |
   |spark.sql.execution.arrow.sparkr.enabled                      |
   |spark.sql.execution.pandas.arrowSafeTypeConversion            |
   |spark.sql.execution.subquery.reuse.enabled                    |
   |spark.sql.inMemoryTableScanStatistics.enable                  |
   |spark.sql.jsonGenerator.ignoreNullFields                      |
   |spark.sql.legacy.addDirectory.recursive.enabled               |
   |spark.sql.legacy.allowNegativeScaleOfDecimal.enabled          |
   |spark.sql.legacy.arrayExistsFollowsThreeValuedLogic           |
   |spark.sql.legacy.bucketedTableScan.outputOrdering             |
   |spark.sql.legacy.createHiveTableByDefault.enabled             |
   |spark.sql.legacy.dataset.nameNonStructGroupingKeyAsValue      |
   |spark.sql.legacy.exponentLiteralAsDecimal.enabled             |
   |spark.sql.legacy.fromDayTimeString.enabled                    |
   |spark.sql.legacy.integralDivide.returnBigint                  |
   |spark.sql.legacy.json.allowEmptyString.enabled                |
   |spark.sql.legacy.looseUpcast                                  |
   |spark.sql.legacy.property.nonReserved                         |
   |spark.sql.legacy.sessionInitWithConfigDefaults                |
   |spark.sql.legacy.setCommandRejectsSparkCoreConfs              |
   |spark.sql.legacy.timeParser.enabled                           |
   |spark.sql.legacy.typeCoercion.datetimeToString.enabled        |
   |spark.sql.optimizer.dynamicPartitionPruning.enabled           |
   |spark.sql.optimizer.dynamicPartitionPruning.reuseBroadcast    |
   |spark.sql.optimizer.dynamicPartitionPruning.useStats          |
   |spark.sql.optimizer.expression.nestedPruning.enabled          |
   |spark.sql.optimizer.serializer.nestedSchemaPruning.enabled    |
   |spark.sql.orc.mergeSchema                                     |
   |spark.sql.sources.ignoreDataLocality.enabled                  |
   |spark.sql.sources.validatePartitionColumns                    |
   |spark.sql.streaming.checkpoint.escapedPathCheck.enabled       |
   |spark.sql.streaming.fileSource.schema.forceNullable           |
   |spark.sql.streaming.forceDeleteTempCheckpointLocation.enabled |
   |spark.sql.streaming.stopActiveRunOnRestart                    |
   |spark.sql.streaming.ui.enabled                                |
   +--------------------------------------------------------------+
   ```
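   The gist itself runs against a live Spark build, but the check it performs reduces to a set difference between the boolean config names registered in v2.4.5 and those in master. A minimal sketch of that idea, using a few illustrative config names from the table above (the actual gist collects the full lists from Spark's config registry):

   ```python
   # Hypothetical sample data: boolean SQL config names in each branch.
   # The real gist extracts these from Spark itself.
   v245_bool_configs = {
       "spark.sql.cbo.enabled",
       "spark.sql.adaptive.enabled",
   }
   master_bool_configs = v245_bool_configs | {
       "spark.sql.ansi.enabled",
       "spark.sql.adaptive.forceApply",
   }

   # Configs present in master but not in v2.4.5, i.e. newly added in v3.0.0.
   new_in_300 = sorted(master_bool_configs - v245_bool_configs)
   print(new_in_300)
   ```

   Any boolean config renamed by this PR should then be checked for membership in `new_in_300`.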
