[jira] [Commented] (CARBONDATA-4147) Carbondata 2.1.0 MV ERROR inserting data into table with MV
[ https://issues.apache.org/jira/browse/CARBONDATA-4147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307581#comment-17307581 ]

Sushant Sammanwar commented on CARBONDATA-4147:
-----------------------------------------------

[~Indhumathi27] How can I download and apply the fix? Can you share it?

> Carbondata 2.1.0 MV ERROR inserting data into table with MV
> -----------------------------------------------------------
>
>                 Key: CARBONDATA-4147
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4147
>             Project: CarbonData
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 2.1.0
>        Environment: Apache carbondata 2.1.0
>            Reporter: Sushant Sammanwar
>            Assignee: Indhumathi Muthumurugesh
>            Priority: Major
>             Labels: datatype, double, materializedviews
>             Fix For: 2.1.1
>
>        Attachments: carbondata_210_insert_error_stack-trace
>
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Hi Team,
>
> We are working on a POC where we are using carbon 2.1.0.
> We have created the below table and MV:
>
> create table if not exists fact_365_1_eutrancell_21 (ts timestamp, metric STRING, tags_id STRING, value DOUBLE) partitioned by (ts2 timestamp) stored as carbondata TBLPROPERTIES ('SORT_COLUMNS'='metric')
>
> create materialized view if not exists fact_365_1_eutrancell_21_30_minute as select tags_id, metric, ts2, timeseries(ts,'thirty_minute') as ts, sum(value), avg(value), min(value), max(value) from fact_365_1_eutrancell_21 group by metric, tags_id, timeseries(ts,'thirty_minute'), ts2
>
> When I try to insert data into the above table, the below error is thrown:
>
> scala> carbon.sql("insert into fact_365_1_eutrancell_21 values ('2020-09-25 05:30:00','eUtranCell.HHO.X2.InterFreq.PrepAttOut','ff6cb0f7-fba0-4134-81ee-55e820574627',392.2345,'2020-09-25 05:30:00')").show()
> 21/03/10 22:32:20 AUDIT audit: {"time":"March 10, 2021 10:32:20 PM IST","username":"root","opName":"INSERT INTO","opId":"33474031950342736","opStatus":"START"}
> [Stage 0:> (0 + 1) / 1]
> 21/03/10 22:32:32 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/03/10 22:32:32 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/03/10 22:32:32 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/03/10 22:32:36 WARN log: Updating partition stats fast for: fact_365_1_eutrancell_21
> 21/03/10 22:32:36 WARN log: Updated size to 2699
> 21/03/10 22:32:38 AUDIT audit: {"time":"March 10, 2021 10:32:38 PM IST","username":"root","opName":"INSERT OVERWRITE","opId":"33474049863830951","opStatus":"START"}
> [Stage 3:==>(199 + 1) / 200]
> 21/03/10 22:33:07 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/03/10 22:33:07 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/03/10 22:33:07 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/03/10 22:33:07 ERROR CarbonFactDataHandlerColumnar: Error in producer
> java.lang.ClassCastException: java.lang.Double cannot be cast to java.lang.Long
>     at org.apache.carbondata.core.datastore.page.ColumnPage.putData(ColumnPage.java:402)
>     at org.apache.carbondata.processing.store.TablePage.convertToColumnarAndAddToPages(TablePage.java:239)
>     at org.apache.carbondata.processing.store.TablePage.addRow(TablePage.java:201)
>     at org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processDataRows(CarbonFactDataHandlerColumnar.java:397)
>     at org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.access$500(CarbonFactDataHandlerColumnar.java:60)
>     at org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:637)
>     at org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:614)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
>
> It seems the method is converting the "decimal" data type of the table to a "long" data type for the MV.
> During value conversion it is throwing the error.
> Could you please check if this is a defect/bug, or let me know if I have missed something?
> Note: This was working in carbon 2.0.1

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
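[Editorial note: the stack trace above is a plain ClassCastException at the column-page layer — a boxed Double arrives where the page expects a boxed Long. The following is a minimal standalone sketch of that failure mode only; the class and method names are hypothetical illustrations, not CarbonData's actual API.]

```java
// Minimal sketch of the failure in the stack trace above: a column page
// typed for LONG values receives a boxed Double, and the unboxing cast fails.
// All names here are illustrative, not CarbonData's real ColumnPage API.
public class CastSketch {
    public static long putData(Object value) {
        // Analogous to a LONG-typed ColumnPage.putData: assumes the boxed
        // value is a Long. A Double here throws ClassCastException.
        return (Long) value;
    }

    public static void main(String[] args) {
        System.out.println(putData(392L)); // fine: a Long unboxes to long
        try {
            putData(392.2345d); // a Double where a Long is expected
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the report above");
        }
    }
}
```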
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4111: [CARBONDATA-4155] Fix Create table like table with MV
CarbonDataQA2 commented on pull request #4111: URL: https://github.com/apache/carbondata/pull/4111#issuecomment-804653371 Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5080/ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4111: [CARBONDATA-4155] Fix Create table like table with MV
CarbonDataQA2 commented on pull request #4111: URL: https://github.com/apache/carbondata/pull/4111#issuecomment-804653917 Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3328/
[jira] [Commented] (CARBONDATA-4151) When data sampling is done on large data set using Spark's df.sample function - the size of sampled table is not matching with record size of non sampled (Raw Tabl
[ https://issues.apache.org/jira/browse/CARBONDATA-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306825#comment-17306825 ]

Mahesh Raju Somalaraju commented on CARBONDATA-4151:
----------------------------------------------------

Hi, can you please provide some more details regarding this, such as which operations you are performing on the carbondata side and the input parameters to the API (df.sample)?

> When data sampling is done on large data set using Spark's df.sample function
> - the size of sampled table is not matching with record size of non sampled
> (Raw Table)
> -----------------------------------------------------------------------------
>
>                 Key: CARBONDATA-4151
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4151
>             Project: CarbonData
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 2.0.1
>        Environment: Apache carbondata 2.0.1, spark 2.4.5, hadoop 2.7.2
>            Reporter: Amaranadh Vayyala
>            Priority: Blocker
>             Fix For: 2.1.0, 2.0.1
>
> Hi Team,
> When we perform 5% or 10% data sampling on a large dataset using Spark's df.sample, the size of the sampled table does not match the record size of the non-sampled (raw) table.
> Our raw table size is around 11 GB, so when we perform 5% and 10% sampling, the sampled table sizes should come out to roughly 550 MB and 1.1 GB. However, in our case they come out to 1.5 GB and 3 GB, which is about 3 times higher than expected.
> Could you please check and help us understand where the issue is?
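[Editorial note: for context on the df.sample semantics discussed above — Spark's df.sample performs per-row Bernoulli sampling, so the sampled row count is only approximately fraction × N, and on-disk size need not scale linearly with row count anyway (sort order, encoding and compression differ per segment). A small plain-Java sketch of the Bernoulli behaviour; all names are illustrative, this is not Spark's implementation.]

```java
import java.util.Random;

public class SampleSketch {
    // Per-row Bernoulli sampling, analogous in spirit to Spark's
    // df.sample(fraction): each row is kept independently with
    // probability `fraction`, so the kept count is approximate.
    public static long sampleCount(long totalRows, double fraction, long seed) {
        Random rnd = new Random(seed);
        long kept = 0;
        for (long i = 0; i < totalRows; i++) {
            if (rnd.nextDouble() < fraction) {
                kept++;
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // Roughly 5% of 1,000,000 rows survive, but not exactly 50,000.
        System.out.println(sampleCount(1_000_000L, 0.05, 42L));
    }
}
```

This is why a 5% sample should be compared against the sampled *row count* first; only then does it make sense to ask why the bytes-on-disk ratio diverges.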
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-804676491 Build Failed with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5081/
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-804675918 Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3329/
[GitHub] [carbondata] ShreelekhyaG opened a new pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
ShreelekhyaG opened a new pull request #4112:
URL: https://github.com/apache/carbondata/pull/4112

### Why is this PR needed?
Query with SI after add partition based on an empty location on a partition table gives incorrect results.

### What changes were proposed in this PR?
While creating the blockid, get the segment number from the file name for the external partition. This blockid will be added to SI and used for pruning. To identify an external partition during the compaction process, instead of checking with loadmetapath, check with the filepath.startswith(tablepath) format.

### Does this PR introduce any user interface change?
- No

### Is any new testcase added?
- Yes
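[Editorial note: a tiny sketch of the path-prefix check the PR description mentions. The method and class names are hypothetical paraphrases of the described change, not code from the actual patch.]

```java
public class ExternalPartitionCheck {
    // A partition file is "external" when its path does not live under the
    // table path -- the filepath.startswith(tablepath) check the PR
    // description refers to. Names here are illustrative only.
    public static boolean isExternalPartition(String filePath, String tablePath) {
        return !filePath.startsWith(tablePath);
    }

    public static void main(String[] args) {
        System.out.println(isExternalPartition(
                "/tmp/def/part-0.carbondata",
                "/warehouse/partition_table")); // true: external location
        System.out.println(isExternalPartition(
                "/warehouse/partition_table/email=abc/part-0.carbondata",
                "/warehouse/partition_table")); // false: regular partition
    }
}
```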
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [WIP][CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-804829696 Build Failed with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5086/
[jira] [Resolved] (CARBONDATA-4146) Query fails and the error message "unable to get file status" is displayed. query is normal after the "drop metacache on table" command is executed.
[ https://issues.apache.org/jira/browse/CARBONDATA-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akash R Nilugal resolved CARBONDATA-4146.
-----------------------------------------
    Fix Version/s: 2.1.1
       Resolution: Fixed

> Query fails and the error message "unable to get file status" is displayed.
> query is normal after the "drop metacache on table" command is executed.
> ---------------------------------------------------------------------------
>
>                 Key: CARBONDATA-4146
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4146
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.6.1, 2.0.0, 2.1.0
>            Reporter: liuhe0702
>            Priority: Major
>             Fix For: 2.1.1
>
>          Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> During compaction, the status of the new segment is set to success before index files are merged. After the index files are merged, the carbonindex files are deleted. As a result, the query task cannot find the cached carbonindex files.
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-804742230 Build Failed with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5082/
[GitHub] [carbondata] akashrn5 commented on pull request #4104: [CARBONDATA-4146]Query fails and the error message "unable to get file status" is displayed. query is normal after the "drop metacache o
akashrn5 commented on pull request #4104: URL: https://github.com/apache/carbondata/pull/4104#issuecomment-804763379 LGTM
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
CarbonDataQA2 commented on pull request #4112: URL: https://github.com/apache/carbondata/pull/4112#issuecomment-804811963 Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3332/
[GitHub] [carbondata] Indhumathi27 commented on pull request #4111: [CARBONDATA-4155] Fix Create table like table with MV
Indhumathi27 commented on pull request #4111: URL: https://github.com/apache/carbondata/pull/4111#issuecomment-804726833 retest this please
[GitHub] [carbondata] ShreelekhyaG commented on pull request #4104: [CARBONDATA-4146]Query fails and the error message "unable to get file status" is displayed. query is normal after the "drop metacac
ShreelekhyaG commented on pull request #4104: URL: https://github.com/apache/carbondata/pull/4104#issuecomment-804754945 LGTM
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4111: [CARBONDATA-4155] Fix Create table like table with MV
CarbonDataQA2 commented on pull request #4111: URL: https://github.com/apache/carbondata/pull/4111#issuecomment-804788820 Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3331/
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4111: [CARBONDATA-4155] Fix Create table like table with MV
CarbonDataQA2 commented on pull request #4111: URL: https://github.com/apache/carbondata/pull/4111#issuecomment-804788766 Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5083/
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
CarbonDataQA2 commented on pull request #4112: URL: https://github.com/apache/carbondata/pull/4112#issuecomment-804813300 Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5084/
[GitHub] [carbondata] asfgit closed pull request #4104: [CARBONDATA-4146]Query fails and the error message "unable to get file status" is displayed. query is normal after the "drop metacache on table"
asfgit closed pull request #4104: URL: https://github.com/apache/carbondata/pull/4104
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [WIP][CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-804832548 Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3334/
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-804745412 Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3330/
[GitHub] [carbondata] asfgit closed pull request #4108: [CARBONDATA-4153] Fix DoNot Push down not equal to filter with Cast on SI
asfgit closed pull request #4108: URL: https://github.com/apache/carbondata/pull/4108
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [WIP][CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-804909035 Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3335/
[GitHub] [carbondata] kunal642 commented on pull request #4108: [CARBONDATA-4153] Fix DoNot Push down not equal to filter with Cast on SI
kunal642 commented on pull request #4108: URL: https://github.com/apache/carbondata/pull/4108#issuecomment-804862337 LGTM
[GitHub] [carbondata] ShreelekhyaG commented on a change in pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
ShreelekhyaG commented on a change in pull request #4112:
URL: https://github.com/apache/carbondata/pull/4112#discussion_r599576453

## File path: index/secondary-index/src/test/scala/org/apache/carbondata/spark/testsuite/secondaryindex/TestSIWithPartition.scala

## @@ -460,6 +461,60 @@ class TestSIWithPartition extends QueryTest with BeforeAndAfterAll {
       Row(2, "red", "def2", 22), Row(5, "red", "abc", 22)))
     assert(extSegmentQuery.queryExecution.executedPlan.isInstanceOf[BroadCastSIFilterPushJoin])
     sql("drop table if exists partition_table")
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(sdkWritePath1))
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(sdkWritePath2))
+  }
+
+  test("test si with add partition based on empty location on partition table") {
+    sql("drop table if exists partitionTable")
+    sql(
+      """create table partition_table (id int,name String) partitioned by(email string)
+         stored as carbondata""".stripMargin)
+    sql("CREATE INDEX partitionTable_si on table partition_table (name) as 'carbondata'")
+    sql("insert into partition_table select 1,'blue','abc'")
+    val location = target + "/" + "def"
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(location))
+    sql(s"""alter table partition_table add partition (email='def') location '$location'""")
+    sql("insert into partition_table select 2,'red','def'")
+    var extSegmentQuery = sql("select * from partition_table where name = 'red'")
+    checkAnswer(extSegmentQuery, Seq(Row(2, "red", "def")))
+    sql("insert into partition_table select 4,'grey','bcd'")
+    sql("insert into partition_table select 5,'red','abc'")
+    sql("alter table partition_table compact 'minor'")
+    extSegmentQuery = sql("select * from partition_table where name = 'red'")
+    checkAnswer(extSegmentQuery, Seq(Row(2, "red", "def"), Row(5, "red", "abc")))
+    assert(extSegmentQuery.queryExecution.executedPlan.isInstanceOf[BroadCastSIFilterPushJoin])
+    sql("drop table if exists partition_table")
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(location))
+  }
+
+  test("test si with add multiple partitions based on empty location on partition table") {
+    sql("drop table if exists partition_table")
+    sql("create table partition_table (id int,name String) " +
+        "partitioned by(email string, age int) stored as carbondata")
+    sql("insert into partition_table select 1,'blue','abc', 20")
+    sql("CREATE INDEX partitionTable_si on table partition_table (name) as 'carbondata'")
+    val location1 = target + "/" + "def"
+    val location2 = target + "/" + "def2"
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(location1))
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(location2))
+    sql(

Review comment: Done

## File path: index/secondary-index/src/test/scala/org/apache/carbondata/spark/testsuite/secondaryindex/TestSIWithPartition.scala

## @@ -414,6 +414,7 @@ class TestSIWithPartition extends QueryTest with BeforeAndAfterAll {
     checkAnswer(extSegmentQuery, Seq(Row(2, "red", "def"), Row(5, "red", "abc")))
     assert(extSegmentQuery.queryExecution.executedPlan.isInstanceOf[BroadCastSIFilterPushJoin])
     sql("drop table if exists partition_table")
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(sdkWritePath))

Review comment: Done
[jira] [Updated] (CARBONDATA-4157) load timestamp data didn't consider daylight saving time
[ https://issues.apache.org/jira/browse/CARBONDATA-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yahui Liu updated CARBONDATA-4157:
----------------------------------
    Summary: load timestamp data didn't consider daylight saving time  (was: load data timestamp data didn't consider daylight saving time)

> load timestamp data didn't consider daylight saving time
> --------------------------------------------------------
>
>                 Key: CARBONDATA-4157
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4157
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-load
>    Affects Versions: 2.1.0
>            Reporter: Yahui Liu
>            Priority: Minor
>
> # Prepare one txt file containing one time value that falls in daylight saving time, for example "1991-08-12 00:00:00".
> # Upload the file to an hdfs folder, for example /tmp/test_time.
> # Create the carbon table: create table test_time(t timestamp) stored as carbondata;
> # Create one external txt table with its location pointing to the data file folder: create table test_time_txt(t timestamp) location '/tmp/test_time';
> # Insert the data from the txt table into the carbon table: insert into test_time select * from test_time_txt; then query the carbon table; the result is:
> +------------------------+
> | t                      |
> +------------------------+
> | 1991-08-12 01:00:00.0  |
> +------------------------+
> # Load data directly into the carbon table: load data inpath '/tmp/test_time' into table test_time options('fileheader'='t'); then query the carbon table; the result is:
> +------------------------+
> | t                      |
> +------------------------+
> | 1991-08-12 00:00:00.0  |
> +------------------------+
> # For the same data file, insert into and load data give different results, and because "1991-08-12 00:00:00" is in daylight saving time, most file formats give "1991-08-12 01:00:00" as the result.
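[Editorial note: the one-hour difference reported above is exactly a daylight-saving shift — converting the same wall-clock string with DST-aware zone rules versus the zone's fixed standard offset differs by an hour when the timestamp falls inside DST. A standalone java.time sketch; America/New_York is used here only as an example DST-observing zone, since the reporter's timezone is not stated, and the two conversion paths are an analogy for the reported insert/load difference, not CarbonData's actual code.]

```java
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class DstShift {
    // Seconds of disagreement between a DST-aware conversion and a
    // fixed-standard-offset conversion of the same wall-clock timestamp.
    public static long dstShiftSeconds(String ts, String zoneId) {
        LocalDateTime local = LocalDateTime.parse(
                ts, DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
        ZoneId zone = ZoneId.of(zoneId);
        // Conversion that honours the zone's DST rules.
        Instant dstAware = local.atZone(zone).toInstant();
        // Conversion that always applies the zone's standard (winter) offset.
        ZoneOffset standard = zone.getRules().getStandardOffset(dstAware);
        Instant fixedOffset = local.toInstant(standard);
        return Duration.between(dstAware, fixedOffset).getSeconds();
    }

    public static void main(String[] args) {
        // Summer date in a DST-observing zone: the two readings differ by 1h.
        System.out.println(dstShiftSeconds("1991-08-12 00:00:00", "America/New_York")); // 3600
        // Winter date: no DST in effect, both conversions agree.
        System.out.println(dstShiftSeconds("1991-01-12 00:00:00", "America/New_York")); // 0
    }
}
```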
[GitHub] [carbondata] kunal642 commented on pull request #4111: [CARBONDATA-4155] Fix Create table like table with MV
kunal642 commented on pull request #4111: URL: https://github.com/apache/carbondata/pull/4111#issuecomment-804858854 LGTM
[GitHub] [carbondata] Indhumathi27 commented on a change in pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
Indhumathi27 commented on a change in pull request #4112:
URL: https://github.com/apache/carbondata/pull/4112#discussion_r599523297

## File path: index/secondary-index/src/test/scala/org/apache/carbondata/spark/testsuite/secondaryindex/TestSIWithPartition.scala

## @@ -414,6 +414,7 @@ class TestSIWithPartition extends QueryTest with BeforeAndAfterAll {
     checkAnswer(extSegmentQuery, Seq(Row(2, "red", "def"), Row(5, "red", "abc")))
     assert(extSegmentQuery.queryExecution.executedPlan.isInstanceOf[BroadCastSIFilterPushJoin])
     sql("drop table if exists partition_table")
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(sdkWritePath))

Review comment: can you add insert into existing external partition also
[GitHub] [carbondata] asfgit closed pull request #4111: [CARBONDATA-4155] Fix Create table like table with MV
asfgit closed pull request #4111: URL: https://github.com/apache/carbondata/pull/4111
[jira] [Resolved] (CARBONDATA-4153) DoNot Push down 'not equal to' filter with Cast on SI
[ https://issues.apache.org/jira/browse/CARBONDATA-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kunal Kapoor resolved CARBONDATA-4153.
--------------------------------------
    Fix Version/s: 2.1.1
       Resolution: Fixed

> DoNot Push down 'not equal to' filter with Cast on SI
> -----------------------------------------------------
>
>                 Key: CARBONDATA-4153
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4153
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Indhumathi Muthumurugesh
>            Priority: Minor
>             Fix For: 2.1.1
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> A NOT EQUAL TO filter on an SI index column should not be pushed down to the SI table.
> Currently, where x != '2' is not pushed down to SI, but where x != 2 is pushed down to SI.
[jira] [Created] (CARBONDATA-4157) load data timestamp data didn't consider daylight saving time
Yahui Liu created CARBONDATA-4157:
-------------------------------------

             Summary: load data timestamp data didn't consider daylight saving time
                 Key: CARBONDATA-4157
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4157
             Project: CarbonData
          Issue Type: Bug
          Components: data-load
    Affects Versions: 2.1.0
            Reporter: Yahui Liu

# Prepare one txt file containing one time value that falls in daylight saving time, for example "1991-08-12 00:00:00".
# Upload the file to an hdfs folder, for example /tmp/test_time.
# Create the carbon table: create table test_time(t timestamp) stored as carbondata;
# Create one external txt table with its location pointing to the data file folder: create table test_time_txt(t timestamp) location '/tmp/test_time';
# Insert the data from the txt table into the carbon table: insert into test_time select * from test_time_txt; then query the carbon table; the result is:
+------------------------+
| t                      |
+------------------------+
| 1991-08-12 01:00:00.0  |
+------------------------+
# Load data directly into the carbon table: load data inpath '/tmp/test_time' into table test_time options('fileheader'='t'); then query the carbon table; the result is:
+------------------------+
| t                      |
+------------------------+
| 1991-08-12 00:00:00.0  |
+------------------------+
# For the same data file, insert into and load data give different results, and because "1991-08-12 00:00:00" is in daylight saving time, most file formats give "1991-08-12 01:00:00" as the result.
[jira] [Resolved] (CARBONDATA-4155) Create table like on table with MV fails
[ https://issues.apache.org/jira/browse/CARBONDATA-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kunal Kapoor resolved CARBONDATA-4155.
--------------------------------------
    Fix Version/s: 2.1.1
       Resolution: Fixed

> Create table like on table with MV fails
> ----------------------------------------
>
>                 Key: CARBONDATA-4155
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4155
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Indhumathi Muthumurugesh
>            Assignee: Kunal Kapoor
>            Priority: Minor
>             Fix For: 2.1.1
>
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>
> create table maintable(name string, c_code int, price int) STORED AS carbondata;
> create materialized view mv_table as select name, sum(price) from maintable group by name;
> create table new_Table like maintable;
>
> Result:
>
> 2021-03-22 20:40:06 ERROR CarbonCreateTableCommand:176 -
> org.apache.spark.sql.AnalysisException: == Spark Parser: org.apache.spark.sql.execution.SparkSqlParser ==
> extraneous input 'default' expecting {')', ','}(line 8, pos 25)
>
> == SQL ==
> CREATE TABLE default.new_table
> (`name` string,`c_code` int,`price` int)
> USING carbondata
> OPTIONS (
> indexexists "false",
> sort_columns "",
> comment "",
> relatedmvtablesmap "{"default":["mv_table"]}",
> -^^^
> bad_record_path "",
> local_dictionary_enable "true",
> indextableexists "false",
> tableName "new_table",
> dbName "default",
> tablePath "/home/root1/carbondata/integration/spark/target/warehouse/new_table",
> path "file:/home/root1/carbondata/integration/spark/target/warehouse/new_table",
> isExternal "false",
> isTransactional "true",
> isVisible "true"
> ,carbonSchemaPartsNo '1',carbonSchema0
[jira] [Assigned] (CARBONDATA-4155) Create table like on table with MV fails
[ https://issues.apache.org/jira/browse/CARBONDATA-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kunal Kapoor reassigned CARBONDATA-4155:

    Assignee: Kunal Kapoor

> Create table like on table with MV fails
> ----------------------------------------
>
>                 Key: CARBONDATA-4155
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4155
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Indhumathi Muthumurugesh
>            Assignee: Kunal Kapoor
>            Priority: Minor
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>
> create table maintable(name string, c_code int, price int) STORED AS carbondata;
> create materialized view mv_table as select name, sum(price) from maintable group by name;
> create table new_Table like maintable;
>
> Result:
>
> 2021-03-22 20:40:06 ERROR CarbonCreateTableCommand:176 -
> org.apache.spark.sql.AnalysisException: == Spark Parser: org.apache.spark.sql.execution.SparkSqlParser ==
> extraneous input 'default' expecting {')', ','}(line 8, pos 25)
>
> == SQL ==
> CREATE TABLE default.new_table
> (`name` string,`c_code` int,`price` int)
> USING carbondata
> OPTIONS (
> indexexists "false",
> sort_columns "",
> comment "",
> relatedmvtablesmap "{"default":["mv_table"]}",
> -^^^
> bad_record_path "",
> local_dictionary_enable "true",
> indextableexists "false",
> tableName "new_table",
> dbName "default",
> tablePath "/home/root1/carbondata/integration/spark/target/warehouse/new_table",
> path "file:/home/root1/carbondata/integration/spark/target/warehouse/new_table",
> isExternal "false",
> isTransactional "true",
> isVisible "true"
> ,carbonSchemaPartsNo '1',carbonSchema0
'\{"databaseName":"default","tableUniqueName":"default_new_table","factTable":{"tableId":"4ddbaea5-42b8-4ca2-b0ce-dec0af81d3b6","tableName":"new_table","listOfColumns":[{"dataType":{"id":0,"precedenceOrder":0,"name":"STRING","sizeInBytes":-1},"columnName":"name","columnUniqueId":"2293eee8-41fa-4869-8275-8c16a5dd7222","columnReferenceId":"2293eee8-41fa-4869-8275-8c16a5dd7222","isColumnar":true,"encodingList":[],"isDimensionColumn":true,"scale":-1,"precision":-1,"schemaOrdinal":0,"numberOfChild":0,"columnProperties":{},"invisible":false,"isSortColumn":false,"aggFunction":"","timeSeriesFunction":"","isLocalDictColumn":true},\{"dataType":{"id":5,"precedenceOrder":3,"name":"INT","sizeInBytes":4},"columnName":"c_code","columnUniqueId":"cc3ab016-51e9-4791-8f37-8d697d972b8a","columnReferenceId":"cc3ab016-51e9-4791-8f37-8d697d972b8a","isColumnar":true,"encodingList":[],"isDimensionColumn":false,"scale":-1,"precision":-1,"schemaOrdinal":1,"numberOfChild":0,"columnProperties":{},"invisible":false,"isSortColumn":false,"aggFunction":"","timeSeriesFunction":"","isLocalDictColumn":false},\{"dataType":{"id":5,"precedenceOrder":3,"name":"INT","sizeInBytes":4},"columnName":"price","columnUniqueId":"c67ed6d5-8f10-488f-a990-dfda20739907","columnReferenceId":"c67ed6d5-8f10-488f-a990-dfda20739907","isColumnar":true,"encodingList":[],"isDimensionColumn":false,"scale":-1,"precision":-1,"schemaOrdinal":2,"numberOfChild":0,"columnProperties":{},"invisible":false,"isSortColumn":false,"aggFunction":"","timeSeriesFunction":"","isLocalDictColumn":false}],"schemaEvolution":\{"schemaEvolutionEntryList":[{"timeStamp":1616425806915}]},"tableProperties":\{"indexexists":"false","sort_columns":"","comment":"","relatedmvtablesmap":"{\"default\":[\"mv_table\"]}","bad_record_path":"","local_dictionary_enable":"true","indextableexists":"false"}},"lastUpdatedTime":1616425806915,"tablePath":"file:/home/root1/carbondata/integration/spark/target/warehouse/new_table","isTransactionalTable":true,"hasColumnDrift"
:false,"isSchemaModified":false}') -- This message was sent by Atlassian Jira (v8.3.4#803005)
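The parse failure quoted above comes from the relatedmvtablesmap option: its value is a serialized JSON map whose inner double quotes are not escaped, so the generated string literal ends at the quote right before `default` and the parser rejects the remainder. A minimal Python sketch of the quoting problem (illustrative only; the `escape_for_sql_option` helper is hypothetical and not CarbonData's actual fix, which is not shown in this thread):

```python
import json

def escape_for_sql_option(props: dict) -> str:
    """Serialize a property map for embedding inside a double-quoted SQL
    OPTIONS value, backslash-escaping the inner quotes so they cannot
    terminate the surrounding string literal early."""
    return json.dumps(props, separators=(",", ":")).replace('"', '\\"')

mv_map = {"default": ["mv_table"]}

# Naive embedding: the SQL string literal ends at the quote right before
# `default`, which is exactly where Spark reports `extraneous input 'default'`.
naive = 'relatedmvtablesmap "%s"' % json.dumps(mv_map, separators=(",", ":"))
print(naive)    # relatedmvtablesmap "{"default":["mv_table"]}"

# Escaped embedding keeps the whole JSON map inside one string literal.
escaped = 'relatedmvtablesmap "%s"' % escape_for_sql_option(mv_map)
print(escaped)  # relatedmvtablesmap "{\"default\":[\"mv_table\"]}"
```

The naive form reproduces the option line flagged by the caret in the error output above; either escaping the inner quotes or dropping the MV bookkeeping property from the generated CREATE TABLE statement would avoid the parse error.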
[jira] [Assigned] (CARBONDATA-4155) Create table like on table with MV fails
[ https://issues.apache.org/jira/browse/CARBONDATA-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kunal Kapoor reassigned CARBONDATA-4155:

    Assignee: (was: Kunal Kapoor)

> Create table like on table with MV fails
> ----------------------------------------
>
>                 Key: CARBONDATA-4155
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4155
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Indhumathi Muthumurugesh
>            Priority: Minor
>             Fix For: 2.1.1
>
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>
> create table maintable(name string, c_code int, price int) STORED AS carbondata;
> create materialized view mv_table as select name, sum(price) from maintable group by name;
> create table new_Table like maintable;
>
> Result:
>
> 2021-03-22 20:40:06 ERROR CarbonCreateTableCommand:176 -
> org.apache.spark.sql.AnalysisException: == Spark Parser: org.apache.spark.sql.execution.SparkSqlParser ==
> extraneous input 'default' expecting {')', ','}(line 8, pos 25)
>
> == SQL ==
> CREATE TABLE default.new_table
> (`name` string,`c_code` int,`price` int)
> USING carbondata
> OPTIONS (
> indexexists "false",
> sort_columns "",
> comment "",
> relatedmvtablesmap "{"default":["mv_table"]}",
> -^^^
> bad_record_path "",
> local_dictionary_enable "true",
> indextableexists "false",
> tableName "new_table",
> dbName "default",
> tablePath "/home/root1/carbondata/integration/spark/target/warehouse/new_table",
> path "file:/home/root1/carbondata/integration/spark/target/warehouse/new_table",
> isExternal "false",
> isTransactional "true",
> isVisible "true"
> ,carbonSchemaPartsNo '1',carbonSchema0
'\{"databaseName":"default","tableUniqueName":"default_new_table","factTable":{"tableId":"4ddbaea5-42b8-4ca2-b0ce-dec0af81d3b6","tableName":"new_table","listOfColumns":[{"dataType":{"id":0,"precedenceOrder":0,"name":"STRING","sizeInBytes":-1},"columnName":"name","columnUniqueId":"2293eee8-41fa-4869-8275-8c16a5dd7222","columnReferenceId":"2293eee8-41fa-4869-8275-8c16a5dd7222","isColumnar":true,"encodingList":[],"isDimensionColumn":true,"scale":-1,"precision":-1,"schemaOrdinal":0,"numberOfChild":0,"columnProperties":{},"invisible":false,"isSortColumn":false,"aggFunction":"","timeSeriesFunction":"","isLocalDictColumn":true},\{"dataType":{"id":5,"precedenceOrder":3,"name":"INT","sizeInBytes":4},"columnName":"c_code","columnUniqueId":"cc3ab016-51e9-4791-8f37-8d697d972b8a","columnReferenceId":"cc3ab016-51e9-4791-8f37-8d697d972b8a","isColumnar":true,"encodingList":[],"isDimensionColumn":false,"scale":-1,"precision":-1,"schemaOrdinal":1,"numberOfChild":0,"columnProperties":{},"invisible":false,"isSortColumn":false,"aggFunction":"","timeSeriesFunction":"","isLocalDictColumn":false},\{"dataType":{"id":5,"precedenceOrder":3,"name":"INT","sizeInBytes":4},"columnName":"price","columnUniqueId":"c67ed6d5-8f10-488f-a990-dfda20739907","columnReferenceId":"c67ed6d5-8f10-488f-a990-dfda20739907","isColumnar":true,"encodingList":[],"isDimensionColumn":false,"scale":-1,"precision":-1,"schemaOrdinal":2,"numberOfChild":0,"columnProperties":{},"invisible":false,"isSortColumn":false,"aggFunction":"","timeSeriesFunction":"","isLocalDictColumn":false}],"schemaEvolution":\{"schemaEvolutionEntryList":[{"timeStamp":1616425806915}]},"tableProperties":\{"indexexists":"false","sort_columns":"","comment":"","relatedmvtablesmap":"{\"default\":[\"mv_table\"]}","bad_record_path":"","local_dictionary_enable":"true","indextableexists":"false"}},"lastUpdatedTime":1616425806915,"tablePath":"file:/home/root1/carbondata/integration/spark/target/warehouse/new_table","isTransactionalTable":true,"hasColumnDrift"
:false,"isSchemaModified":false}')
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [WIP][CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-804905022 Build Failed with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5087/ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [carbondata] Indhumathi27 commented on a change in pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
Indhumathi27 commented on a change in pull request #4112:
URL: https://github.com/apache/carbondata/pull/4112#discussion_r599530878

File path: index/secondary-index/src/test/scala/org/apache/carbondata/spark/testsuite/secondaryindex/TestSIWithPartition.scala

@@ -460,6 +461,60 @@ class TestSIWithPartition extends QueryTest with BeforeAndAfterAll {
       Row(2, "red", "def2", 22), Row(5, "red", "abc", 22)))
     assert(extSegmentQuery.queryExecution.executedPlan.isInstanceOf[BroadCastSIFilterPushJoin])
     sql("drop table if exists partition_table")
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(sdkWritePath1))
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(sdkWritePath2))
+  }
+
+  test("test si with add partition based on empty location on partition table") {
+    sql("drop table if exists partitionTable")
+    sql(
+      """create table partition_table (id int,name String) partitioned by(email string)
+        stored as carbondata""".stripMargin)
+    sql("CREATE INDEX partitionTable_si on table partition_table (name) as 'carbondata'")
+    sql("insert into partition_table select 1,'blue','abc'")
+    val location = target + "/" + "def"
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(location))
+    sql(s"""alter table partition_table add partition (email='def') location '$location'""")
+    sql("insert into partition_table select 2,'red','def'")
+    var extSegmentQuery = sql("select * from partition_table where name = 'red'")
+    checkAnswer(extSegmentQuery, Seq(Row(2, "red", "def")))
+    sql("insert into partition_table select 4,'grey','bcd'")
+    sql("insert into partition_table select 5,'red','abc'")
+    sql("alter table partition_table compact 'minor'")
+    extSegmentQuery = sql("select * from partition_table where name = 'red'")
+    checkAnswer(extSegmentQuery, Seq(Row(2, "red", "def"), Row(5, "red", "abc")))
+    assert(extSegmentQuery.queryExecution.executedPlan.isInstanceOf[BroadCastSIFilterPushJoin])
+    sql("drop table if exists partition_table")
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(location))
+  }
+
+  test("test si with add multiple partitions based on empty location on partition table") {
+    sql("drop table if exists partition_table")
+    sql("create table partition_table (id int,name String) " +
+      "partitioned by(email string, age int) stored as carbondata")
+    sql("insert into partition_table select 1,'blue','abc', 20")
+    sql("CREATE INDEX partitionTable_si on table partition_table (name) as 'carbondata'")
+    val location1 = target + "/" + "def"
+    val location2 = target + "/" + "def2"
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(location1))
+    FileFactory.deleteAllCarbonFilesOfDir(FileFactory.getCarbonFile(location2))
+    sql(

Review comment:
    please move these changes to existing testcase and add drop external partition scenario also
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
CarbonDataQA2 commented on pull request #4112: URL: https://github.com/apache/carbondata/pull/4112#issuecomment-804984214 Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5089/
[GitHub] [carbondata] asfgit closed pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
asfgit closed pull request #4112: URL: https://github.com/apache/carbondata/pull/4112
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
CarbonDataQA2 commented on pull request #4112: URL: https://github.com/apache/carbondata/pull/4112#issuecomment-804987718 Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3337/
[GitHub] [carbondata] Indhumathi27 commented on pull request #4112: [CARBONDATA-4149] Fix query issues after alter add empty partition location
Indhumathi27 commented on pull request #4112: URL: https://github.com/apache/carbondata/pull/4112#issuecomment-804996033 LGTM
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [WIP][CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-805073112 Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3339/
[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4101: [WIP][CARBONDATA-4156] Fix Writing Segment Min max with all blocks of a segment
CarbonDataQA2 commented on pull request #4101: URL: https://github.com/apache/carbondata/pull/4101#issuecomment-805072953 Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5091/