[jira] [Closed] (CARBONDATA-3847) Dataload fails for table with data of 10 records having string type bucket column for if number of buckets exceed large no (300).
[ https://issues.apache.org/jira/browse/CARBONDATA-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chetan Bhat closed CARBONDATA-3847.
-----------------------------------
    Resolution: Cannot Reproduce

Can't reproduce this more than once thereafter. Might be related to cluster configuration. Hence closing the issue.

> Dataload fails for table with data of 10 records having string type bucket
> column for if number of buckets exceed large no (300).
> --------------------------------------------------------------------------
>
>                 Key: CARBONDATA-3847
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3847
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 2.0.0
>         Environment: Spark 2.3.2, Spark 2.4.5
>            Reporter: Chetan Bhat
>            Priority: Minor
>
> *Steps -*
> 0: jdbc:hive2://10.20.251.163:23040/default> create table if not exists
> all_data_types1(bool_1 boolean, bool_2 boolean, chinese string, Number int,
> smallNumber smallint, BigNumber bigint, LargeDecimal double, smalldecimal float,
> customdecimal decimal(38,15), words string, smallwords char(8),
> varwords varchar(20), time timestamp, day date, emptyNumber int,
> emptysmallNumber smallint, emptyBigNumber bigint, emptyLargeDecimal double,
> emptysmalldecimal float, emptycustomdecimal decimal(38,38), emptywords string,
> emptysmallwords char(8), emptyvarwords varchar(20)) stored as carbondata
> TBLPROPERTIES (*'BUCKET_NUMBER'='300'*, 'BUCKET_COLUMNS'='chinese');
> +---------+
> | Result  |
> +---------+
> +---------+
> No rows selected (0.241 seconds)
> 0: jdbc:hive2://10.20.251.163:23040/default> LOAD DATA INPATH
> 'hdfs://hacluster/chetan/datafile_0.csv' into table all_data_types1
> OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_ACTION'='FORCE',
> 'FILEHEADER'='bool_1, bool_2, chinese, Number, smallNumber, BigNumber,
> LargeDecimal, smalldecimal, customdecimal, words, smallwords, varwords,
> time, day, emptyNumber, emptysmallNumber, emptyBigNumber, emptyLargeDecimal,
> emptysmalldecimal, emptycustomdecimal, emptywords, emptysmallwords,
> emptyvarwords');
> *Error: java.lang.Exception: DataLoad failure (state=,code=0)*
>
> *Log -*
> java.lang.Exception: DataLoad failure
>   at org.apache.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:565)
>   at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.loadData(CarbonLoadDataCommand.scala:207)
>   at org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:168)
>   at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:148)
>   at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:145)
>   at org.apache.spark.sql.execution.command.Auditable$class.runWithAudit(package.scala:104)
>   at org.apache.spark.sql.execution.command.AtomicRunnableCommand.runWithAudit(package.scala:141)
>   at org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:145)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:71)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:69)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:80)
>   at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:196)
>   at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:196)
>   at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3379)
>   at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:90)
>   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:137)
>   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:85)
>   at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3378)
>   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:196)
>   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
>   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:651)
>   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
>   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:248)
>   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:178)
>   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at
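For context on the report above: bucketing schemes typically assign each row to a bucket by hashing the bucket-column value modulo the bucket count. The sketch below is illustrative only, not CarbonData's actual partitioner; it just shows that 10 rows spread over 300 buckets leave most buckets empty, which is the configuration the reporter was exercising.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of hash-modulo bucket assignment; NOT CarbonData's
// actual partitioner implementation.
public class BucketSketch {
    static int bucketFor(String value, int numBuckets) {
        // Math.floorMod keeps the id non-negative even when hashCode() < 0
        return Math.floorMod(value == null ? 0 : value.hashCode(), numBuckets);
    }

    public static void main(String[] args) {
        // 10 sample values for a string bucket column, 300 buckets as in the report
        String[] rows = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"};
        Map<Integer, Integer> counts = new HashMap<>();
        for (String row : rows) {
            counts.merge(bucketFor(row, 300), 1, Integer::sum);
        }
        // 10 rows can occupy at most 10 of the 300 buckets
        System.out.println("non-empty buckets: " + counts.size());
    }
}
```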
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3832: [CARBONDATA-3893] [IUD] Fix getting block name in compacted segment with dot for horizontal compaction delta files
CarbonDataQA1 commented on pull request #3832: URL: https://github.com/apache/carbondata/pull/3832#issuecomment-656467186 Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1602/ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3832: [CARBONDATA-3893] [IUD] Fix getting block name in compacted segment with dot for horizontal compaction delta files
CarbonDataQA1 commented on pull request #3832:
URL: https://github.com/apache/carbondata/pull/3832#issuecomment-656466931

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3342/
[GitHub] [carbondata] marchpure commented on a change in pull request #3832: [CARBONDATA-3893] [IUD] Fix getting block name in compacted segment with dot for horizontal compaction delta files
marchpure commented on a change in pull request #3832:
URL: https://github.com/apache/carbondata/pull/3832#discussion_r452606626

##########
File path: core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentUpdateStatusManager.java
##########
@@ -450,8 +450,7 @@ public boolean accept(CarbonFile pathName) {
         String fileName = pathName.getName();
         if (fileName.endsWith(CarbonCommonConstants.DELETE_DELTA_FILE_EXT)
             && pathName.getSize() > 0) {

Review comment:
       `if (pathName.getSize() > 0 && fileName.endsWith(CarbonCommonConstants.DELETE_DELTA_FILE_EXT))` will be better
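The review suggestion above is about `&&` short-circuit order: putting the cheap `getSize() > 0` check first rejects empty files before the string suffix test ever runs. A minimal self-contained sketch of the suggested ordering, where `FileInfo` is a stand-in for `CarbonFile`:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in types to illustrate the reviewer's suggested condition order;
// the real filter lives in SegmentUpdateStatusManager.
public class DeltaFilterSketch {
    static final String DELETE_DELTA_FILE_EXT = ".deletedelta";

    record FileInfo(String name, long size) {}

    // Size check first: && short-circuits, so empty files are rejected
    // before the (comparatively costlier) string suffix test runs.
    static boolean accept(FileInfo f) {
        return f.size() > 0 && f.name().endsWith(DELETE_DELTA_FILE_EXT);
    }

    public static void main(String[] args) {
        List<FileInfo> files = List.of(
            new FileInfo("part-0-1.deletedelta", 128),
            new FileInfo("part-0-2.deletedelta", 0),      // empty: rejected cheaply
            new FileInfo("part-0-1.carbondata", 512));    // wrong extension
        List<String> kept = new ArrayList<>();
        for (FileInfo f : files) {
            if (accept(f)) {
                kept.add(f.name());
            }
        }
        System.out.println(kept);  // only the non-empty delete-delta file
    }
}
```

Both orderings are functionally equivalent; the reordering is a micro-optimization that skips the string scan for zero-size files.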
[jira] [Created] (CARBONDATA-3896) Throw an exception using an index server query
li created CARBONDATA-3896:
-------------------------------

             Summary: Throw an exception using an index server query
                 Key: CARBONDATA-3896
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3896
             Project: CarbonData
          Issue Type: Bug
          Components: core
    Affects Versions: 1.6.1
            Reporter: li
             Fix For: 1.6.1

2020-07-10 10:49:02 WARN Server:1853 - Unable to read call parameters for client 10.10.151.15 on connection protocol Server for rpcKind RPC_WRITABLE
java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:197)
    at java.io.DataInputStream.readUTF(DataInputStream.java:609)
    at java.io.DataInputStream.readUTF(DataInputStream.java:564)
    at org.apache.carbondata.core.datamap.DistributableDataMapFormat.readFields(DistributableDataMapFormat.java:286)
    at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invocation.readFields(WritableRpcEngine.java:161)
    at org.apache.hadoop.ipc.Server$Connection.processRpcRequest(Server.java:1851)
    at org.apache.hadoop.ipc.Server$Connection.processOneRpc(Server.java:1783)
    at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1541)
    at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:771)
    at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:637)
    at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:608)
2020-07-10 10:49:02 INFO Server:780 - Socket Reader #1 for port 9596: readAndProcess from client 10.10.151.15 threw exception [org.apache.hadoop.ipc.RpcServerException: IPC server unable to read call parameters: null]
2020-07-10 10:50:00 WARN Server:1853 - Unable to read call parameters for client 10.10.151.15 on connection protocol Server for rpcKind RPC_WRITABLE
java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:197)
    at java.io.DataInputStream.readUTF(DataInputStream.java:609)
    at java.io.DataInputStream.readUTF(DataInputStream.java:564)
    at org.apache.carbondata.core.datamap.DistributableDataMapFormat.readFields(DistributableDataMapFormat.java:286)
    at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invocation.readFields(WritableRpcEngine.java:161)
    at org.apache.hadoop.ipc.Server$Connection.processRpcRequest(Server.java:1851)
    at org.apache.hadoop.ipc.Server$Connection.processOneRpc(Server.java:1783)
    at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1541)
    at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:771)
    at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:637)
    at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:608)
2020-07-10 10:50:00 INFO Server:780 - Socket Reader #1 for port 9596: readAndProcess from client 10.10.151.15 threw exception [org.apache.hadoop.ipc.RpcServerException: IPC server unable to read call parameters: null]

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3822: [CARBONDATA-3887] Fixed insert failure for global sort null data
CarbonDataQA1 commented on pull request #3822:
URL: https://github.com/apache/carbondata/pull/3822#issuecomment-656372214

Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1601/
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3822: [CARBONDATA-3887] Fixed insert failure for global sort null data
CarbonDataQA1 commented on pull request #3822:
URL: https://github.com/apache/carbondata/pull/3822#issuecomment-656371645

Build Failed with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3341/
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3776: [CARBONDATA-3834]Segment directory and the segment file in metadata are not created for partitioned table when 'carbon.merge.index.
CarbonDataQA1 commented on pull request #3776:
URL: https://github.com/apache/carbondata/pull/3776#issuecomment-656260686

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1600/
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3776: [CARBONDATA-3834]Segment directory and the segment file in metadata are not created for partitioned table when 'carbon.merge.index.
CarbonDataQA1 commented on pull request #3776:
URL: https://github.com/apache/carbondata/pull/3776#issuecomment-656260141

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3340/
[GitHub] [carbondata] VenuReddy2103 commented on pull request #3776: [CARBONDATA-3834]Segment directory and the segment file in metadata are not created for partitioned table when 'carbon.merge.index.
VenuReddy2103 commented on pull request #3776:
URL: https://github.com/apache/carbondata/pull/3776#issuecomment-656190931

retest this please
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3834: Sdk iud
CarbonDataQA1 commented on pull request #3834:
URL: https://github.com/apache/carbondata/pull/3834#issuecomment-656182179

Build Failed with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3339/
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3834: Sdk iud
CarbonDataQA1 commented on pull request #3834:
URL: https://github.com/apache/carbondata/pull/3834#issuecomment-656181587

Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1599/
[GitHub] [carbondata] Karan-c980 opened a new pull request #3834: Sdk iud
Karan-c980 opened a new pull request #3834:
URL: https://github.com/apache/carbondata/pull/3834

### Why is this PR needed?
Currently the carbondata SDK doesn't provide a delete/update feature. This PR will let the carbondata SDK delete/update records in carbondata files.

### What changes were proposed in this PR?
With the help of this PR the carbondata SDK will support delete/update features. For more details please refer to https://issues.apache.org/jira/browse/CARBONDATA-3865

### Does this PR introduce any user interface change?
- No

### Is any new testcase added?
- Yes
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3832: [CARBONDATA-3893] [IUD] Fix getting block name in compacted segment with dot for horizontal compaction delta files
CarbonDataQA1 commented on pull request #3832:
URL: https://github.com/apache/carbondata/pull/3832#issuecomment-656120519

Build Failed with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3338/
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3832: [CARBONDATA-3893] [IUD] Fix getting block name in compacted segment with dot for horizontal compaction delta files
CarbonDataQA1 commented on pull request #3832:
URL: https://github.com/apache/carbondata/pull/3832#issuecomment-656119978

Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1598/
[GitHub] [carbondata] kunal642 commented on pull request #3776: [CARBONDATA-3834]Segment directory and the segment file in metadata are not created for partitioned table when 'carbon.merge.index.in.se
kunal642 commented on pull request #3776:
URL: https://github.com/apache/carbondata/pull/3776#issuecomment-656037525

@VenuReddy2103 Build failed..Please check
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3772: [CARBONDATA-3832]Added block and blocket pruning for the polygon expression processing
CarbonDataQA1 commented on pull request #3772:
URL: https://github.com/apache/carbondata/pull/3772#issuecomment-656031544

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1597/
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3772: [CARBONDATA-3832]Added block and blocket pruning for the polygon expression processing
CarbonDataQA1 commented on pull request #3772:
URL: https://github.com/apache/carbondata/pull/3772#issuecomment-656030202

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3337/
[GitHub] [carbondata] kunal642 commented on pull request #3823: [CARBONDATA-3890] Fix MV case sensitive issues with ImplicitCastInputTypes and Add Doc for Show MV
kunal642 commented on pull request #3823:
URL: https://github.com/apache/carbondata/pull/3823#issuecomment-656027397

LGTM
[GitHub] [carbondata] Zhangshunyu commented on pull request #3830: [CARBONDATA-3853] Data load failure when loading with bucket column as DATE data type
Zhangshunyu commented on pull request #3830:
URL: https://github.com/apache/carbondata/pull/3830#issuecomment-656014919

LGTM
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3812: [CARBONDATA-3895]Fx FileNotFound exception in query after global sort compaction
CarbonDataQA1 commented on pull request #3812:
URL: https://github.com/apache/carbondata/pull/3812#issuecomment-655993853

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3336/
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3812: [CARBONDATA-3895]Fx FileNotFound exception in query after global sort compaction
CarbonDataQA1 commented on pull request #3812:
URL: https://github.com/apache/carbondata/pull/3812#issuecomment-655992570

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1596/
[jira] [Resolved] (CARBONDATA-3873) Secondary index compaction with maintable clean files causing exception
[ https://issues.apache.org/jira/browse/CARBONDATA-3873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akash R Nilugal resolved CARBONDATA-3873.
-----------------------------------------
    Fix Version/s: 2.1.0
       Resolution: Fixed

> Secondary index compaction with maintable clean files causing exception
> -----------------------------------------------------------------------
>
>                 Key: CARBONDATA-3873
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3873
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Mahesh Raju Somalaraju
>            Priority: Major
>             Fix For: 2.1.0
>
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> 1) Compaction with secondary index, and cleaning files of the main table after
> compaction fails, then we are getting an Exception.
> If any compaction failed, then disable all SI which are successful. They
> will be enabled after the next load. Below are the defect reproduce steps:
> a) Create the main table
> b) Create 2 or more SI tables
> c) Load the data multiple times
> d) Do compaction [this time we need to make the scenario in such a way that one
> of the compactions fails]
> e) Clean the files of the main table
> 2) Changing Error to a warning in the table creation flow, as it is a success
> case we should not give ERROR.
[jira] [Resolved] (CARBONDATA-3842) Select with limit displays incorrect resultset after datamap creation
[ https://issues.apache.org/jira/browse/CARBONDATA-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akash R Nilugal resolved CARBONDATA-3842.
-----------------------------------------
    Fix Version/s: 2.1.0
       Resolution: Fixed

> Select with limit displays incorrect resultset after datamap creation
> ---------------------------------------------------------------------
>
>                 Key: CARBONDATA-3842
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3842
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 2.0.1
>         Environment: Spark 2.3.2
>            Reporter: Chetan Bhat
>            Priority: Minor
>             Fix For: 2.1.0
>
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> *Steps :-*
> create table tab1(id int, name string, dept string) STORED as carbondata;
> create materialized view datamap31 as select a.id, a.name from tab1 a;
> insert into tab1 select 1,'ram','cs';
> insert into tab1 select 2,'shyam','it';
> select a.id, a.name from tab1 a order by a.id limit 1;
> *Issue :*
> Select with limit displays an incorrect resultset (2 records instead of 1) after
> datamap creation.
> 0: jdbc:hive2://10.20.251.163:23040/default> select a.id, a.name from tab1 a
> order by a.id limit 1;
> INFO : Execution ID: 558
> +-----+--------+
> | id  | name   |
> +-----+--------+
> | 2   | shyam  |
> | 1   | ram    |
> +-----+--------+
> *2 rows selected (0.601 seconds)*
[jira] [Resolved] (CARBONDATA-3874) segment mismatch between maintable and SI table when load with concurrency
[ https://issues.apache.org/jira/browse/CARBONDATA-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akash R Nilugal resolved CARBONDATA-3874.
-----------------------------------------
    Fix Version/s: 2.1.0
       Resolution: Fixed

> segment mismatch between maintable and SI table when load with concurrency
> --------------------------------------------------------------------------
>
>                 Key: CARBONDATA-3874
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3874
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Mahesh Raju Somalaraju
>            Priority: Minor
>             Fix For: 2.1.0
>
>          Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> 1. In concurrent loads, if one of the loads failed for the SI table then
> 'isSITableEnabled' will be disabled (isSITableEnabled = false).
> So in the failed SI event listener case we only check that SI enabled is
> true (isSITableEnabled == true), and then we do not load the current load to the SI
> table. In concurrent scenarios this can happen while the SI enabled state
> is true but a segment difference still exists.
> So instead of checking only that SI enabled is true (isSITableEnabled == true), we
> should also check for any segment difference between the maintable and the SI table.
> The final output flag check will be as follows:
> ``
> if (isSITableEnabled == true || mainTblAndSidiff == true) {
>   ---
> }
> ``
> *Defect reproduce steps:*
> 1) Create the main table
> 2) Create multiple SI tables
> 3) Load the data multiple times [make sure one of the loads fails for one
> of the SI tables]
> 4) Change the flag (isSITableEnabled == true) for the failed segment by alter
> command.
> 5) Load data
> 6) Check the segment difference between the main table and the SI table.
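The fix described in the issue above needs a way to tell whether the SI table is missing segments that the maintable has. A hedged sketch of such a segment-difference check follows; the class and method names are illustrative, not CarbonData's actual API (the real check lives in the SI load listener):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative only: compares segment-id sets of the maintable and an SI
// table to detect the mismatch the issue describes.
public class SegmentDiffSketch {
    // true when the SI table lacks at least one maintable segment
    static boolean hasSegmentMismatch(Set<String> mainSegments, Set<String> siSegments) {
        Set<String> missing = new HashSet<>(mainSegments);
        missing.removeAll(siSegments);
        return !missing.isEmpty();
    }

    public static void main(String[] args) {
        Set<String> main = Set.of("0", "1", "2");
        Set<String> si = Set.of("0", "1");  // segment "2" failed to load into SI
        System.out.println(hasSegmentMismatch(main, si));  // true
    }
}
```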
[GitHub] [carbondata] asfgit closed pull request #3786: [CARBONDATA-3842] Fix incorrect results on mv with limit (Missed code during mv refcatory)
asfgit closed pull request #3786:
URL: https://github.com/apache/carbondata/pull/3786
[GitHub] [carbondata] akashrn5 commented on pull request #3786: [CARBONDATA-3842] Fix incorrect results on mv with limit (Missed code during mv refcatory)
akashrn5 commented on pull request #3786:
URL: https://github.com/apache/carbondata/pull/3786#issuecomment-655974118

LGTM
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3786: [CARBONDATA-3842] Fix incorrect results on mv with limit (Missed code during mv refcatory)
CarbonDataQA1 commented on pull request #3786:
URL: https://github.com/apache/carbondata/pull/3786#issuecomment-655971860

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1595/
[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3786: [CARBONDATA-3842] Fix incorrect results on mv with limit (Missed code during mv refcatory)
CarbonDataQA1 commented on pull request #3786:
URL: https://github.com/apache/carbondata/pull/3786#issuecomment-655968434

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3335/
[GitHub] [carbondata] kunal642 commented on pull request #3772: [CARBONDATA-3832]Added block and blocket pruning for the polygon expression processing
kunal642 commented on pull request #3772:
URL: https://github.com/apache/carbondata/pull/3772#issuecomment-655957238

retest this please
[jira] [Created] (CARBONDATA-3895) Filenotfound exception after global sort compaction
Akash R Nilugal created CARBONDATA-3895:
-------------------------------------------

             Summary: Filenotfound exception after global sort compaction
                 Key: CARBONDATA-3895
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3895
             Project: CarbonData
          Issue Type: New Feature
            Reporter: Akash R Nilugal
            Assignee: Akash R Nilugal

FileNotFound exception after global sort compaction. Execute this test present in the PR:

  test("test global sort compaction, clean files, update delete") {
    sql("DROP TABLE IF EXISTS carbon_global_sort_update")
    sql(
      """
        | CREATE TABLE carbon_global_sort_update(id INT, name STRING, city STRING, age INT)
        | STORED AS carbondata TBLPROPERTIES('SORT_SCOPE'='GLOBAL_SORT', 'sort_columns' = 'name, city')
      """.stripMargin)
    sql(s"LOAD DATA LOCAL INPATH '$filePath' INTO TABLE carbon_global_sort_update")
    sql(s"LOAD DATA LOCAL INPATH '$filePath' INTO TABLE carbon_global_sort_update")
    sql("alter table carbon_global_sort_update compact 'major'")
    sql("clean files for table carbon_global_sort_update")
    assert(sql("select * from carbon_global_sort_update").count() == 24)
    val updatedRows = sql("update carbon_global_sort_update d set (id) = (id + 3) where d.name = 'd'").collect()
    assert(updatedRows.head.get(0) == 2)
    val deletedRows = sql("delete from carbon_global_sort_update d where d.id = 12").collect()
    assert(deletedRows.head.get(0) == 2)
    assert(sql("select * from carbon_global_sort_update").count() == 22)
  }