Github user ajantha-bhat commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2966#discussion_r241646014
--- Diff: integration/spark-common/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchemaCommon.scala ---
@@ -848,6 +848,19 @@ class TableNewProcessor(cm: TableModel) {
tableSchema.getTableId,
cm.databaseNameOp.getOrElse("default"))
tablePropertiesMap.put("bad_record_path", badRecordsPath)
+    if (tablePropertiesMap.get("sort_columns") != null) {
+      val sortCol = tablePropertiesMap.get("sort_columns")
+      if ((!sortCol.trim.isEmpty) && tablePropertiesMap.get("sort_scope") == null) {
+        // If sort_scope is not specified, but sort_columns are present, set sort_scope as
+        // local_sort in carbon_properties (cannot add in table properties as if user sets carbon
+        // properties it won't be reflected as table properties is given higher priority)
+        if (CarbonProperties.getInstance().getProperty(CarbonCommonConstants.LOAD_SORT_SCOPE) == null) {
+          CarbonProperties.getInstance()
+            .addProperty(CarbonCommonConstants.LOAD_SORT_SCOPE, "LOCAL_SORT")
--- End diff ---
There are test cases that set the sort scope only in carbon properties, without sort columns; those test cases fail because the table property I set takes higher priority than carbon properties.
For example, sort_columns are mentioned in table properties, but sort_scope is present as "Batch_sort" in carbon properties; in that case, if I set sort_scope in table properties, the test case fails.
On top of that, those tests set the carbon properties after creating the table!
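To illustrate the precedence problem being discussed, here is a minimal sketch (the object and method names are hypothetical, not CarbonData's actual API) of the resolution order the comment describes: a value in table properties wins over one in carbon (system) properties, which wins over the default, so writing the default into table properties would hide a sort scope the user sets via carbon properties later:

```scala
// Hypothetical helper sketching the precedence described above;
// not the actual CarbonData implementation.
object SortScopeResolution {
  // Assumed default when neither source specifies a sort scope.
  val Default = "NO_SORT"

  // Table properties take priority over carbon (system) properties,
  // which take priority over the default.
  def resolveSortScope(tableProps: Map[String, String],
                       carbonProps: Map[String, String]): String = {
    tableProps.get("sort_scope")
      .orElse(carbonProps.get("carbon.load.sort.scope"))
      .getOrElse(Default)
  }
}
```

Under this model, if the patch stored "LOCAL_SORT" directly in `tableProps`, a later `carbonProps` entry of "Batch_sort" would never be consulted, which is why the failing tests behave as described.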
---