akashrn5 commented on a change in pull request #3787:
URL: https://github.com/apache/carbondata/pull/3787#discussion_r486228364
##########
File path:
index/secondary-index/src/test/scala/org/apache/carbondata/spark/testsuite/secondaryindex/TestSIWithSecondryIndex.scala
##########
@@ -86,6 +86,64 @@ class TestSIWithSecondryIndex extends QueryTest with BeforeAndAfterAll {
       .contains("Alter table drop column operation failed:"))
   }
+  test("test create secondary index global sort after insert") {
+    sql("drop table if exists table1")
+    sql("create table table1 (name string, id string, country string) stored as carbondata")
+    sql("insert into table1 select 'xx', '2', 'china' union all select 'xx', '1', 'india'")
+    sql("create index table1_index on table table1(id, country) as 'carbondata' properties" +
+      "('sort_scope'='global_sort', 'Global_sort_partitions'='3')")
+    checkAnswerWithoutSort(sql("select id, country from table1_index"),
+      Seq(Row("1", "india"), Row("2", "china")))
+    // check for valid sort_scope
+    checkExistence(sql("describe formatted table1_index"), true, "Sort Scope global_sort")
+    // check the invalid sort scope
+    assert(intercept[MalformedCarbonCommandException](sql(
+      "create index index_2 on table table1(id, country) as 'carbondata' properties" +
+        "('sort_scope'='tim_sort', 'Global_sort_partitions'='3')"))
+      .getMessage
+      .contains("Invalid SORT_SCOPE tim_sort"))
+    // check for invalid global_sort_partitions
+    assert(intercept[MalformedCarbonCommandException](sql(
+      "create index index_2 on table table1(id, country) as 'carbondata' properties" +
+        "('sort_scope'='global_sort', 'Global_sort_partitions'='-1')"))
+      .getMessage
+      .contains("Table property global_sort_partitions : -1 is invalid"))
+    sql("drop index table1_index on table1")
Review comment:
You can just do `drop table`; it drops the index too, so there is no need to run `drop index` separately. Also, please use a more meaningful table name and index name, and check the other tests for the same issue.
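
The suggested cleanup could look roughly like the sketch below. This is illustrative only: the names `si_person_table` and `si_id_country_index` are placeholders chosen by the editor, not names from the PR, and the snippet assumes the usual CarbonData test harness (`QueryTest`, `sql`, `checkAnswerWithoutSort`) is in scope.

```scala
// Sketch of the reviewer's suggestion (placeholder names, assumes the
// CarbonData QueryTest harness):
test("test create secondary index global sort after insert") {
  sql("drop table if exists si_person_table")
  sql("create table si_person_table (name string, id string, country string) stored as carbondata")
  sql("insert into si_person_table select 'xx', '2', 'china' union all select 'xx', '1', 'india'")
  sql("create index si_id_country_index on table si_person_table(id, country) as 'carbondata' properties" +
    "('sort_scope'='global_sort', 'Global_sort_partitions'='3')")
  checkAnswerWithoutSort(sql("select id, country from si_id_country_index"),
    Seq(Row("1", "india"), Row("2", "china")))
  // Dropping the main table also drops its secondary index,
  // so no separate `drop index` command is needed.
  sql("drop table if exists si_person_table")
}
```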
##########
File path:
index/secondary-index/src/test/scala/org/apache/carbondata/spark/testsuite/secondaryindex/TestSIWithSecondryIndex.scala
##########
Review comment:
a) This is not about overhead. Since `drop table` already handles it, why take the effort to call another command? Please remove the `drop index` call, and do the same in the other test cases.
b) Even though this is not an example file, we should always use proper, meaningful names. Users who use Carbon and read this code should not see meaningless names.
##########
File path:
index/secondary-index/src/test/scala/org/apache/carbondata/spark/testsuite/secondaryindex/TestSIWithSecondryIndex.scala
##########
Review comment:
It is not about experience; I always prefer code to be very clean and meaningful, so that any reader or developer is happy reading it. Clean, meaningful names are an important aspect of any code.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]