[jira] [Commented] (CARBONDATA-1744) Carbon1.3.0 Concurrent Load-Delete:Delete query is not working correctly if load is already in process.
[ https://issues.apache.org/jira/browse/CARBONDATA-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299669#comment-16299669 ]

anubhav tarar commented on CARBONDATA-1744:
-------------------------------------------

@Ajeet Rai please provide me the scripts with which you are running your code. In case there are multiple splits, delete will fail in the current master. I have resolved it, but it is not yet merged.

> Carbon1.3.0 Concurrent Load-Delete: Delete query is not working correctly if
> load is already in process.
> ----------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1744
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1744
>             Project: CarbonData
>          Issue Type: Bug
>          Components: sql
>    Affects Versions: 1.3.0
>         Environment: 3 Node ant cluster
>            Reporter: Ajeet Rai
>            Assignee: anubhav tarar
>            Priority: Minor
>              Labels: DFX
>
> Concurrent Load-Delete: Delete query is not working correctly if load is
> already in process.
> Steps:
> 1: Create a table.
> 2: Start a large data load.
> 3: Execute a delete query from another session (delete from table_name).
> 4: Observe that the delete operation does not give any error and completes
> as success.
> 5: Execute a show segments query and observe that the status of the current
> segment is In Progress.
> 6: Execute the delete query again once the load is completed.
> 7: Observe that the delete succeeds but the segments are not marked for
> delete. The current status is still Success, which is wrong.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
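The expected behaviour in steps 5-7 can be sketched in a minimal Python model (all names hypothetical; CarbonData's real segment-status handling lives in its Scala/Java tablestatus management): a delete must skip segments that a concurrent load still owns, and must actually flip finished segments to Marked for Delete.

```python
# Hypothetical sketch of segment-status-aware delete (illustrative only;
# CarbonData tracks segment state in a tablestatus file, not like this).
IN_PROGRESS = "In Progress"
SUCCESS = "Success"
MARKED_FOR_DELETE = "Marked for Delete"

def delete_all_rows(segments):
    """Mark finished segments for delete; refuse segments mid-load."""
    skipped = []
    for seg in segments:
        if seg["status"] == IN_PROGRESS:
            # A concurrent load owns this segment: leave it untouched
            skipped.append(seg["id"])
        elif seg["status"] == SUCCESS:
            seg["status"] = MARKED_FOR_DELETE
    return skipped

segments = [
    {"id": 0, "status": SUCCESS},
    {"id": 1, "status": IN_PROGRESS},  # load still running
]
skipped = delete_all_rows(segments)
# segment 0 is marked for delete; segment 1 is skipped
```

The bug report amounts to segment 0 keeping status Success even after the second delete, where this sketch would flip it.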
[jira] [Assigned] (CARBONDATA-1744) Carbon1.3.0 Concurrent Load-Delete:Delete query is not working correctly if load is already in process.
[ https://issues.apache.org/jira/browse/CARBONDATA-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anubhav tarar reassigned CARBONDATA-1744:
-----------------------------------------

    Assignee: anubhav tarar

> Carbon1.3.0 Concurrent Load-Delete: Delete query is not working correctly if
> load is already in process.
> ----------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1744
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1744
>             Project: CarbonData
>          Issue Type: Bug
>          Components: sql
>    Affects Versions: 1.3.0
>         Environment: 3 Node ant cluster
>            Reporter: Ajeet Rai
>            Assignee: anubhav tarar
>            Priority: Minor
>              Labels: DFX
>
> Concurrent Load-Delete: Delete query is not working correctly if load is
> already in process.
> Steps:
> 1: Create a table
> 2: Start a large data load
> 3: Execute delete query from another session (delete from table_name)
> 4: Observe that the delete operation doesn't give any error and completes as
> success.
> 5: Execute show segments query and observe that the status of the current
> segment is In Progress.
> 6: Execute delete query again once load is completed.
> 7: Observe that the delete is success but segments are not marked for delete.
> The current status is still Success, which is wrong.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1681 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2472/ ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1681 Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/984/ ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1681 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2207/ ---
[GitHub] carbondata issue #1116: [CARBONDATA-1249] Wrong order of columns in redirect...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1116 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2206/ ---
[GitHub] carbondata issue #1116: [CARBONDATA-1249] Wrong order of columns in redirect...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1116 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/983/ ---
[jira] [Commented] (CARBONDATA-1758) Carbon1.3.0- No Inverted Index : Select column with is null for no_inverted_index column throws java.lang.ArrayIndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/CARBONDATA-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299650#comment-16299650 ]

Sangeeta Gulia commented on CARBONDATA-1758:
--------------------------------------------

[~chetdb] This is the result of my query after executing the entire sequence of queries you have mentioned.

0: jdbc:hive2://hadoop-master:1> Select CUST_ID from uniqdata_DI_int where CUST_ID is null;
+----------+
| CUST_ID  |
+----------+
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
| NULL     |
+----------+
26 rows selected (0.408 seconds)
0: jdbc:hive2://hadoop-master:1>

> Carbon1.3.0 - No Inverted Index: Select column with is null for
> no_inverted_index column throws java.lang.ArrayIndexOutOfBoundsException
> ---------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1758
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1758
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.3.0
>         Environment: 3 node cluster
>            Reporter: Chetan Bhat
>              Labels: Functional
>
> Steps:
> In Beeline the user executes the following queries in sequence.
>
> CREATE TABLE uniqdata_DI_int (CUST_ID int, CUST_NAME String, ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint, DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10), Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('DICTIONARY_INCLUDE'='cust_id','NO_INVERTED_INDEX'='cust_id');
>
> LOAD DATA INPATH 'hdfs://hacluster/chetan/3000_UniqData.csv' into table uniqdata_DI_int OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
>
> Select count(CUST_ID) from uniqdata_DI_int;
> Select count(CUST_ID)*10 as multiple from uniqdata_DI_int;
> Select avg(CUST_ID) as average from uniqdata_DI_int;
> Select floor(CUST_ID) as average from uniqdata_DI_int;
> Select ceil(CUST_ID) as average from uniqdata_DI_int;
> Select ceiling(CUST_ID) as average from uniqdata_DI_int;
> Select CUST_ID*integer_column1 as multiple from uniqdata_DI_int;
> Select CUST_ID from uniqdata_DI_int where CUST_ID is null;
>
> *Issue: Select column with is null for a no_inverted_index column throws
> java.lang.ArrayIndexOutOfBoundsException.*
>
> 0: jdbc:hive2://10.18.98.34:23040> Select CUST_ID from uniqdata_DI_int where CUST_ID is null;
> Error: org.apache.spark.SparkException: Job aborted due to stage failure:
> Task 0 in stage 79.0 failed 4 times, most recent failure: Lost task 0.3 in
> stage 79.0 (TID 123, BLR114278, executor 18):
> org.apache.spark.util.TaskCompletionListenerException:
> java.util.concurrent.ExecutionException:
> java.lang.ArrayIndexOutOfBoundsException: 0
>         at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
>         at org.apache.spark.scheduler.Task.run(Task.scala:112)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace: (state=,code=0)
>
> Expected: Select column with is null for a no_inverted_index column should
> be successful, displaying the correct result set.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
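For context, an is-null filter on a dictionary-encoded column is typically answered by comparing stored surrogate values against a reserved null surrogate, with no inverted-index lookup required, which is why NO_INVERTED_INDEX should not change the result. A toy sketch of that idea (hypothetical encoding; CarbonData's actual on-disk format and null surrogate differ):

```python
# Toy dictionary encoding where surrogate 1 is reserved for null
# (illustrative only; not CarbonData's real encoding).
NULL_SURROGATE = 1

def encode(values, dictionary):
    out = []
    for v in values:
        if v is None:
            out.append(NULL_SURROGATE)
        else:
            # real values get surrogates starting from 2
            out.append(dictionary.setdefault(v, len(dictionary) + 2))
    return out

def is_null_rows(encoded):
    # A plain column scan for the null surrogate; no inverted index needed.
    return [i for i, s in enumerate(encoded) if s == NULL_SURROGATE]

dictionary = {}
encoded = encode([9001, None, 9002, None], dictionary)
rows = is_null_rows(encoded)
```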
[GitHub] carbondata issue #1082: [CARBONDATA-1218] In case of data-load failure the B...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1082 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2471/ ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1681 retest this please ---
[GitHub] carbondata pull request #1126: [CARBONDATA-1258] CarbonData should not allow...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1126 ---
[GitHub] carbondata issue #1126: [CARBONDATA-1258] CarbonData should not allow loadin...
Github user manishgupta88 commented on the issue: https://github.com/apache/carbondata/pull/1126 LGTM ---
[GitHub] carbondata issue #1126: [CARBONDATA-1258] CarbonData should not allow loadin...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1126 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2205/ ---
[jira] [Resolved] (CARBONDATA-1899) Add CarbonData concurrency test case
[ https://issues.apache.org/jira/browse/CARBONDATA-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manish Gupta resolved CARBONDATA-1899.
--------------------------------------

       Resolution: Fixed
    Fix Version/s: 1.3.0

> Add CarbonData concurrency test case
> ------------------------------------
>
>                 Key: CARBONDATA-1899
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1899
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: xubo245
>            Assignee: xubo245
>            Priority: Minor
>             Fix For: 1.3.0
>
>          Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Add CarbonData concurrency test case

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
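The merged concurrency tests are Scala code run against a CarbonSession; as a generic illustration of the shape of such a test (hypothetical, using SQLite so the sketch is runnable), many readers are fired from a thread pool and each result is checked for consistency:

```python
# Generic concurrency-test sketch (illustrative; not the merged CarbonData test).
from concurrent.futures import ThreadPoolExecutor
import sqlite3
import threading

lock = threading.Lock()
con = sqlite3.connect(":memory:", check_same_thread=False)
con.execute("CREATE TABLE t(id INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
con.commit()

def run_query(_):
    # serialize access to the shared connection; SQLite connections
    # are not safe for unsynchronized cross-thread use
    with lock:
        return con.execute("SELECT COUNT(*) FROM t").fetchone()[0]

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_query, range(32)))

# every concurrent reader should observe the same committed row count
ok = all(r == 100 for r in results)
```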
[GitHub] carbondata issue #1311: [CARBONDATA-1439] Wrong Error message shown for Bad ...
Github user manishgupta88 commented on the issue: https://github.com/apache/carbondata/pull/1311 retest this please ---
[GitHub] carbondata issue #1126: [CARBONDATA-1258] CarbonData should not allow loadin...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1126 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/982/ ---
[GitHub] carbondata pull request #1670: [CARBONDATA-1899] Add CarbonData concurrency ...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1670 ---
[GitHub] carbondata issue #1670: [CARBONDATA-1899] Add CarbonData concurrency test ca...
Github user manishgupta88 commented on the issue: https://github.com/apache/carbondata/pull/1670 LGTM ---
[jira] [Comment Edited] (CARBONDATA-1775) (Carbon1.3.0 - Streaming) Select query fails with java.io.EOFException when data streaming is in progress
[ https://issues.apache.org/jira/browse/CARBONDATA-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299617#comment-16299617 ]

Jatin edited comment on CARBONDATA-1775 at 12/21/17 6:32 AM:
-------------------------------------------------------------

[~chetdb] Not able to replicate with the latest jar. This issue is fixed with PR: https://github.com/apache/carbondata/pull/1621

was (Author: jatin demla):
[~chetdb] Not able to replicate with the latest jar.

> (Carbon1.3.0 - Streaming) Select query fails with java.io.EOFException when
> data streaming is in progress
> ---------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1775
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1775
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.3.0
>         Environment: 3 node ant cluster
>            Reporter: Chetan Bhat
>              Labels: DFX
>
> Steps:
> The user starts the thrift server using the command:
> bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 5 --driver-memory 5G --num-executors 3 --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer /srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar "hdfs://hacluster/user/hive/warehouse/carbon.store"
>
> The user connects to the spark shell using the command:
> bin/spark-shell --master yarn-client --executor-memory 10G --executor-cores 5 --driver-memory 5G --num-executors 3 --jars /srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
>
> In the spark shell the user creates a table and does a streaming load into the table as per the below socket streaming script.
>
> import java.io.{File, PrintWriter}
> import java.net.ServerSocket
> import org.apache.spark.sql.{CarbonEnv, SparkSession}
> import org.apache.spark.sql.hive.CarbonRelation
> import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
> import org.apache.carbondata.core.constants.CarbonCommonConstants
> import org.apache.carbondata.core.util.CarbonProperties
> import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
>
> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "/MM/dd")
>
> import org.apache.spark.sql.CarbonSession._
> val carbonSession = SparkSession.
>   builder().
>   appName("StreamExample").
>   getOrCreateCarbonSession("hdfs://hacluster/user/hive/warehouse/david")
>
> carbonSession.sparkContext.setLogLevel("INFO")
>
> def sql(sql: String) = carbonSession.sql(sql)
>
> def writeSocket(serverSocket: ServerSocket): Thread = {
>   val thread = new Thread() {
>     override def run(): Unit = {
>       // wait for client connection request and accept
>       val clientSocket = serverSocket.accept()
>       val socketWriter = new PrintWriter(clientSocket.getOutputStream())
>       var index = 0
>       for (_ <- 1 to 1000) {
>         // write records per iteration
>         for (_ <- 0 to 100) {
>           index = index + 1
>           socketWriter.println(index.toString + ",name_" + index
>             + ",city_" + index + "," + (index * 1.00).toString +
>             ",school_" + index + ":school_" + index + index + "$" + index)
>         }
>         socketWriter.flush()
>         Thread.sleep(2000)
>       }
>       socketWriter.close()
>       System.out.println("Socket closed")
>     }
>   }
>   thread.start()
>   thread
> }
>
> def startStreaming(spark: SparkSession, tablePath: CarbonTablePath, tableName: String, port: Int): Thread = {
>   val thread = new Thread() {
>     override def run(): Unit = {
>       var qry: StreamingQuery = null
>       try {
>         val readSocketDF = spark.readStream
>           .format("socket")
>           .option("host", "10.18.98.34")
>           .option("port", port)
>           .load()
>         qry = readSocketDF.writeStream
>           .format("carbondata")
>           .trigger(ProcessingTime("5 seconds"))
>           .option("checkpointLocation", tablePath.getStreamingCheckpointDir)
>           .option("tablePath", tablePath.getPath).option("tableName", tableName)
>           .start()
>         qry.awaitTermination()
>       } catch {
>         case ex: Throwable =>
>           ex.printStackTrace()
>           println("Done reading and writing streaming data")
>       } finally {
>         qry.stop()
>       }
>     }
>   }
>   thread.start()
>   thread
> }
>
> val streamTableName = "stream_table"
> sql(s"CREATE TABLE $streamTableName (id INT, name STRING, city STRING, salary FLOAT) STORED BY 'carbondata' TBLPROPERTIES('streaming'='true', 'sort_columns'='name')")
> sql(s"LOAD DATA LOCAL INPATH
[jira] [Commented] (CARBONDATA-1775) (Carbon1.3.0 - Streaming) Select query fails with java.io.EOFException when data streaming is in progress
[ https://issues.apache.org/jira/browse/CARBONDATA-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299617#comment-16299617 ]

Jatin commented on CARBONDATA-1775:
-----------------------------------

[~chetdb] Not able to replicate with the latest jar.

> (Carbon1.3.0 - Streaming) Select query fails with java.io.EOFException when
> data streaming is in progress
> ---------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1775
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1775
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.3.0
>         Environment: 3 node ant cluster
>            Reporter: Chetan Bhat
>              Labels: DFX
>
> Steps:
> The user starts the thrift server using the command:
> bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 5 --driver-memory 5G --num-executors 3 --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer /srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar "hdfs://hacluster/user/hive/warehouse/carbon.store"
>
> The user connects to the spark shell using the command:
> bin/spark-shell --master yarn-client --executor-memory 10G --executor-cores 5 --driver-memory 5G --num-executors 3 --jars /srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
>
> In the spark shell the user creates a table and does a streaming load into the table as per the below socket streaming script.
>
> import java.io.{File, PrintWriter}
> import java.net.ServerSocket
> import org.apache.spark.sql.{CarbonEnv, SparkSession}
> import org.apache.spark.sql.hive.CarbonRelation
> import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
> import org.apache.carbondata.core.constants.CarbonCommonConstants
> import org.apache.carbondata.core.util.CarbonProperties
> import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
>
> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "/MM/dd")
>
> import org.apache.spark.sql.CarbonSession._
> val carbonSession = SparkSession.
>   builder().
>   appName("StreamExample").
>   getOrCreateCarbonSession("hdfs://hacluster/user/hive/warehouse/david")
>
> carbonSession.sparkContext.setLogLevel("INFO")
>
> def sql(sql: String) = carbonSession.sql(sql)
>
> def writeSocket(serverSocket: ServerSocket): Thread = {
>   val thread = new Thread() {
>     override def run(): Unit = {
>       // wait for client connection request and accept
>       val clientSocket = serverSocket.accept()
>       val socketWriter = new PrintWriter(clientSocket.getOutputStream())
>       var index = 0
>       for (_ <- 1 to 1000) {
>         // write records per iteration
>         for (_ <- 0 to 100) {
>           index = index + 1
>           socketWriter.println(index.toString + ",name_" + index
>             + ",city_" + index + "," + (index * 1.00).toString +
>             ",school_" + index + ":school_" + index + index + "$" + index)
>         }
>         socketWriter.flush()
>         Thread.sleep(2000)
>       }
>       socketWriter.close()
>       System.out.println("Socket closed")
>     }
>   }
>   thread.start()
>   thread
> }
>
> def startStreaming(spark: SparkSession, tablePath: CarbonTablePath, tableName: String, port: Int): Thread = {
>   val thread = new Thread() {
>     override def run(): Unit = {
>       var qry: StreamingQuery = null
>       try {
>         val readSocketDF = spark.readStream
>           .format("socket")
>           .option("host", "10.18.98.34")
>           .option("port", port)
>           .load()
>         qry = readSocketDF.writeStream
>           .format("carbondata")
>           .trigger(ProcessingTime("5 seconds"))
>           .option("checkpointLocation", tablePath.getStreamingCheckpointDir)
>           .option("tablePath", tablePath.getPath).option("tableName", tableName)
>           .start()
>         qry.awaitTermination()
>       } catch {
>         case ex: Throwable =>
>           ex.printStackTrace()
>           println("Done reading and writing streaming data")
>       } finally {
>         qry.stop()
>       }
>     }
>   }
>   thread.start()
>   thread
> }
>
> val streamTableName = "stream_table"
> sql(s"CREATE TABLE $streamTableName (id INT, name STRING, city STRING, salary FLOAT) STORED BY 'carbondata' TBLPROPERTIES('streaming'='true', 'sort_columns'='name')")
> sql(s"LOAD DATA LOCAL INPATH 'hdfs://hacluster/tmp/streamSample.csv' INTO TABLE $streamTableName OPTIONS('HEADER'='true')")
> sql(s"select * from $streamTableName").show
> val carbonTable =
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1681 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2470/ ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1681 Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/981/ ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1681 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2204/ ---
[GitHub] carbondata issue #1690: [WIP] CI random failure
Github user QiangCai commented on the issue: https://github.com/apache/carbondata/pull/1690 retest this please ---
[GitHub] carbondata issue #1104: [CARBONDATA-1239] Add validation for set command par...
Github user mohammadshahidkhan commented on the issue: https://github.com/apache/carbondata/pull/1104 retest this please ---
[GitHub] carbondata issue #1116: [CARBONDATA-1249] Wrong order of columns in redirect...
Github user mohammadshahidkhan commented on the issue: https://github.com/apache/carbondata/pull/1116 retest this please ---
[GitHub] carbondata issue #1126: [CARBONDATA-1258] CarbonData should not allow loadin...
Github user mohammadshahidkhan commented on the issue: https://github.com/apache/carbondata/pull/1126 retest this please ---
[GitHub] carbondata issue #1668: [CARBONDATA-1787] Updated data-management-on-carbond...
Github user vandana7 commented on the issue: https://github.com/apache/carbondata/pull/1668 @sgururajshetty please review this PR ---
[GitHub] carbondata issue #1660: [CARBONDATA-1731,CARBONDATA-1728] [BugFix] Update fa...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1660 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2203/ ---
[GitHub] carbondata issue #1660: [CARBONDATA-1731,CARBONDATA-1728] [BugFix] Update fa...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1660 Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/980/ ---
[GitHub] carbondata issue #1575: [CARBONDATA-1698]Adding support for table level comp...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1575 SDV Build Success, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2469/ ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1681 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2202/ ---
[GitHub] carbondata issue #1670: [CARBONDATA-1899] Add CarbonData concurrency test ca...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1670 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2201/ ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1681 Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/979/ ---
[GitHub] carbondata issue #1575: [CARBONDATA-1698]Adding support for table level comp...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1575 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/978/ ---
[GitHub] carbondata pull request #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/D...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1681#discussion_r158197465

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonLoadDataCommand.scala ---
@@ -488,7 +490,24 @@ case class CarbonLoadDataCommand(
         }
         InternalRow.fromSeq(data)
       }
-      LogicalRDD(attributes, rdd)(sparkSession)
+      if (updateModel.isDefined) {
+        sparkSession.sparkContext.setLocalProperty(EXECUTION_ID_KEY, null)
+        // In case of update, we don't need the segmentid column in case of partitioning
+        val dropAttributes = attributes.dropRight(1)
+        val finalOutput = relation.output.map { attr =>
+          dropAttributes.find { d =>
+            val index = d.name.lastIndexOf("-updatedColumn")
--- End diff --

It requires changing the order in UpdateCommand, so it impacts the actual flow of IUD. So I guess it is better handled here.

---
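The matching logic in the quoted diff strips an internal "-updatedColumn" marker from attribute names before matching them against the relation output. A minimal sketch of just that name handling (hypothetical helper, mirroring only the quoted fragment, not the full command):

```python
def strip_update_marker(name: str) -> str:
    # Mirrors d.name.lastIndexOf("-updatedColumn") in the diff:
    # drop the marker suffix if present, otherwise keep the name as-is.
    index = name.rfind("-updatedColumn")
    return name[:index] if index != -1 else name

names = ["salary-updatedColumn", "city"]
stripped = [strip_update_marker(n) for n in names]
```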
[GitHub] carbondata pull request #1696: [CARBONDATA-1884] SDV test cases for CTAS sup...
Github user pawanmalwal closed the pull request at: https://github.com/apache/carbondata/pull/1696 ---
[GitHub] carbondata pull request #1696: [CARBONDATA-1884] SDV test cases for CTAS sup...
GitHub user pawanmalwal reopened a pull request: https://github.com/apache/carbondata/pull/1696

[CARBONDATA-1884] SDV test cases for CTAS support to carbondata

SDV test cases for CTAS support to carbondata.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [X] Any interfaces changed? None
- [X] Any backward compatibility impacted? NA
- [X] Document update required? NA
- [X] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
      Added SDV test cases for CTAS support to carbondata
- [X] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/pawanmalwal/carbondata sdv_Test_Cases_CreateTableAsSelect

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1696.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1696

commit e8fb2455a42eba6feff76f24b2f0b03635390056
Author: Pawan Malwal
Date: 2017-12-20T10:00:58Z

    [CARBONDATA-1884] SDV test cases for CTAS support to carbondata

---
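The CTAS (CREATE TABLE AS SELECT) behaviour these SDV tests cover can be illustrated generically; the sketch below uses SQLite only so it is runnable, whereas CarbonData's CTAS additionally takes a STORED BY 'carbondata' clause and runs through a CarbonSession:

```python
# Generic CTAS illustration (SQLite stand-in; not CarbonData syntax).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE src(id INTEGER, name TEXT)")
con.executemany("INSERT INTO src VALUES (?, ?)", [(1, "a"), (2, "b")])

# CTAS: the new table's schema and rows both come from the SELECT
con.execute("CREATE TABLE dst AS SELECT id, name FROM src WHERE id > 1")
rows = con.execute("SELECT id, name FROM dst").fetchall()
# rows == [(2, 'b')]
```

A CTAS test then typically checks both that the target table exists and that its contents match the source query, which is the shape the SDV cases follow.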
[GitHub] carbondata issue #1701: [HOTFIX] rename CarbonStandardAlterTableDropPartitio...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1701 @jackylk I will change name of class in my another PR 1681 ---
[GitHub] carbondata issue #1699: [CARBONDATA-1924][PARTITION] Restrict streaming on P...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1699 SDV Build Fail, Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2468/ ---
[GitHub] carbondata pull request #1683: [CARBONDATA-1911] Added Insert into query tes...
Github user ManoharVanam commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1683#discussion_r158196677

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/insertQuery/InsertIntoNonCarbonTableTestCase.scala ---
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.carbondata.spark.testsuite.insertQuery
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.test.util.QueryTest
+import org.scalatest.BeforeAndAfterAll
+
+class InsertIntoNonCarbonTableTestCase extends QueryTest with BeforeAndAfterAll {
+  override def beforeAll {
+    sql("drop table if exists TCarbonSource")
+    sql(
+      "create table TCarbonSource (imei string,deviceInformationId int,MAC string,deviceColor " +
+      "string,device_backColor string,modelId string,marketName string,AMSize string,ROMSize " +
+      "string,CUPAudit string,CPIClocked string,series string,productionDate timestamp,bomCode " +
+      "string,internalModels string, deliveryTime string, channelsId string, channelsName string " +
+      ", deliveryAreaId string, deliveryCountry string, deliveryProvince string, deliveryCity " +
+      "string,deliveryDistrict string, deliveryStreet string, oxSingleNumber string, " +
+      "ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, ActiveProvince string, " +
+      "Activecity string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, " +
+      "Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion string, " +
+      "Active_BacVerNumber string, Active_BacFlashVer string, Active_webUIVersion string, " +
+      "Active_webUITypeCarrVer string,Active_webTypeDataVerNumber string, Active_operatorsVersion" +
+      " string, Active_phonePADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, " +
+      "Latest_DAY Decimal(30,10), Latest_HOUR string, Latest_areaId string, Latest_country " +
+      "string, Latest_province string, Latest_city string, Latest_district string, Latest_street " +
+      "string, Latest_releaseId string, Latest_EMUIVersion string, Latest_operaSysVersion string," +
+      " Latest_BacVerNumber string, Latest_BacFlashVer string, Latest_webUIVersion string, " +
+      "Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, " +
+      "Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, " +
+      "Latest_operatorId string, gamePointDescription string,gamePointId double,contractNumber " +
+      "BigInt) STORED BY 'org.apache.carbondata.format'")
+    sql(
+      s"LOAD DATA INPATH '$resourcesPath/100_olap.csv' INTO table TCarbonSource options " +
+      "('DELIMITER'=',', 'QUOTECHAR'='\', 'FILEHEADER'='imei,deviceInformationId,MAC,deviceColor," +
+      "device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series," +
+      "productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId," +
+      "deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet," +
+      "oxSingleNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity," +
+      "ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion," +
+      "Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion," +
+      "Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion," +
+      "Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR," +
+      "Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street," +
+      "Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber," +
+      "Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer," +
[GitHub] carbondata issue #1693: [CARBONDATA-1909] Load is failing during insert into...
Github user ManoharVanam commented on the issue: https://github.com/apache/carbondata/pull/1693 retest this please ---
[GitHub] carbondata issue #1698: [CARBONDATA-1923] Remove file after running test cla...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1698 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2200/ ---
[GitHub] carbondata issue #1670: [CARBONDATA-1899] Add CarbonData concurrency test ca...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1670 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/977/ ---
[GitHub] carbondata issue #1699: [CARBONDATA-1924][PARTITION] Restrict streaming on P...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1699 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2199/ ---
[GitHub] carbondata issue #1699: [CARBONDATA-1924][PARTITION] Restrict streaming on P...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1699 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/976/ ---
[GitHub] carbondata issue #1690: [WIP] CI random failure
Github user QiangCai commented on the issue: https://github.com/apache/carbondata/pull/1690 retest sdv please ---
[GitHub] carbondata issue #1690: [WIP] CI random failure
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1690 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2467/ ---
[GitHub] carbondata pull request #1701: [HOTFIX] rename CarbonStandardAlterTableDropP...
GitHub user jackylk opened a pull request: https://github.com/apache/carbondata/pull/1701 [HOTFIX] rename CarbonStandardAlterTableDropPartition Rename CarbonStandardAlterTableDropPartition to CarbonAlterTableDropHivePartitionCommand, make it consistent with other commands - [X] Any interfaces changed? No - [X] Any backward compatibility impacted? No - [X] Document update required? No - [X] Testing done No logic is modified, retest all testcase - [X] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA You can merge this pull request into a Git repository by running: $ git pull https://github.com/jackylk/incubator-carbondata hotfix Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/1701.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1701 commit 943aee087d57fb8e05f2e3d1f7d7c49342ca22b0 Author: Jacky LiDate: 2017-12-21T03:53:45Z rename CarbonStandardAlterTableDropPartition.scala ---
[GitHub] carbondata issue #1660: [CARBONDATA-1731,CARBONDATA-1728] [BugFix] Update fa...
Github user chenliang613 commented on the issue: https://github.com/apache/carbondata/pull/1660 retest this please ---
[GitHub] carbondata issue #1690: [WIP] CI random failure
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1690 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/975/ ---
[GitHub] carbondata pull request #1699: [CARBONDATA-1924][PARTITION] Restrict streami...
Github user jackylk commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1699#discussion_r158191780 --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/parser/CarbonSparkSqlParser.scala --- @@ -233,6 +233,9 @@ class CarbonHelperSqlAstBuilder(conf: SQLConf, case _ => // ignore this case } +if (partitionFields.nonEmpty && options.isStreaming) { --- End diff -- maybe it is better to do it in `validateStreamingProperty`, it is invoked in line 213 ---
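The reviewer suggests centralizing the partition/streaming check in `validateStreamingProperty` rather than adding it inline in the parser. A minimal, self-contained sketch of that idea follows; `TableSpec` and the method body are hypothetical simplifications, not the actual code in `CarbonSparkSqlParser`.

```scala
// Hypothetical sketch: reject the streaming + partition combination in one
// central validation method instead of scattering the check across the parser.
case class TableSpec(partitionFields: Seq[String], isStreaming: Boolean)

def validateStreamingProperty(spec: TableSpec): Unit = {
  if (spec.partitionFields.nonEmpty && spec.isStreaming) {
    throw new IllegalArgumentException(
      "Streaming is not supported on a partitioned table")
  }
}
```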
[GitHub] carbondata issue #1695: [CARBONDATA-1920] [PrestoIntegration] Sparksql query...
Github user chenliang613 commented on the issue: https://github.com/apache/carbondata/pull/1695 Sure, I will review it. Thanks for your contribution. ---
[GitHub] carbondata pull request #1683: [CARBONDATA-1911] Added Insert into query tes...
Github user jackylk commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1683#discussion_r158191551 --- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/insertQuery/InsertIntoNonCarbonTableTestCase.scala --- @@ -0,0 +1,109 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.apache.carbondata.spark.testsuite.insertQuery + +import org.apache.spark.sql.Row +import org.apache.spark.sql.test.util.QueryTest +import org.scalatest.BeforeAndAfterAll + + +class InsertIntoNonCarbonTableTestCase extends QueryTest with BeforeAndAfterAll { + override def beforeAll { +sql("drop table if exists TCarbonSource") +sql( + "create table TCarbonSource (imei string,deviceInformationId int,MAC string,deviceColor " + + "string,device_backColor string,modelId string,marketName string,AMSize string,ROMSize " + + "string,CUPAudit string,CPIClocked string,series string,productionDate timestamp,bomCode " + + "string,internalModels string, deliveryTime string, channelsId string, channelsName string " + + ", deliveryAreaId string, deliveryCountry string, deliveryProvince string, deliveryCity " + + "string,deliveryDistrict string, deliveryStreet string, oxSingleNumber string, " + + "ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, ActiveProvince string, " + + "Activecity string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, " + + "Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion string, " + + "Active_BacVerNumber string, Active_BacFlashVer string, Active_webUIVersion string, " + + "Active_webUITypeCarrVer string,Active_webTypeDataVerNumber string, Active_operatorsVersion" + + " string, Active_phonePADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, " + + "Latest_DAY Decimal(30,10), Latest_HOUR string, Latest_areaId string, Latest_country " + + "string, Latest_province string, Latest_city string, Latest_district string, Latest_street " + + "string, Latest_releaseId string, Latest_EMUIVersion string, Latest_operaSysVersion string," + + " Latest_BacVerNumber string, Latest_BacFlashVer string, Latest_webUIVersion string, " + + "Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, " + + "Latest_operatorsVersion string, Latest_phonePADPartitionedVersions 
string, " + + "Latest_operatorId string, gamePointDescription string,gamePointId double,contractNumber " + + "BigInt) STORED BY 'org.apache.carbondata.format'") +sql( + s"LOAD DATA INPATH '$resourcesPath/100_olap.csv' INTO table TCarbonSource options " + + "('DELIMITER'=',', 'QUOTECHAR'='\', 'FILEHEADER'='imei,deviceInformationId,MAC,deviceColor," + + "device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series," + + "productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId," + + "deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet," + + "oxSingleNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity," + + "ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion," + + "Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion," + + "Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion," + + "Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR," + + "Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street," + + "Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber," + + "Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer," + +
[jira] [Resolved] (CARBONDATA-1860) Support insertoverwrite for a specific partition.
[ https://issues.apache.org/jira/browse/CARBONDATA-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Venkata Ramana G resolved CARBONDATA-1860. -- Resolution: Fixed Assignee: Ravindra Pesala Fix Version/s: 1.3.0 > Support insertoverwrite for a specific partition. > - > > Key: CARBONDATA-1860 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1860 > Project: CarbonData > Issue Type: Sub-task >Reporter: Ravindra Pesala >Assignee: Ravindra Pesala > Fix For: 1.3.0 > > Time Spent: 4h > Remaining Estimate: 0h > > Users should be able to overwrite data for a specific partition, like > {code} > INSERT OVERWRITE TABLE partitioned_user > PARTITION (country = 'US') > SELECT * FROM another_user au > WHERE au.country = 'US'; > {code} > In the above example, the user overwrites only the partition (country = 'US') data, so the remaining partitions' data stays intact. > While overwriting a specific partition, Carbon should first load the data into a new segment and then drop that partition from all remaining segments using the partition.map file. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
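The overwrite flow described in the issue (load the new data as a fresh segment, then drop the overwritten partition from every existing segment's partition map) can be sketched as follows. This is an illustrative model only; `SegmentPartitions` and `overwritePartition` are hypothetical stand-ins for the real segment metadata and partition.map handling.

```scala
// Hypothetical, simplified model of insert-overwrite for one partition:
// existing segments lose the overwritten partition, the new segment carries it.
case class SegmentPartitions(segmentId: String, partitions: Set[String])

def overwritePartition(existing: Seq[SegmentPartitions],
                       newSegment: SegmentPartitions,
                       partition: String): Seq[SegmentPartitions] = {
  // Existing segments keep every partition except the one being overwritten.
  existing.map(s => s.copy(partitions = s.partitions - partition)) :+ newSegment
}
```

For example, overwriting partition "US" removes it from the old segments' maps while the data for other partitions (e.g. "IN") stays intact, matching the behavior described in the issue.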
[GitHub] carbondata pull request #1689: [CARBONDATA-1674] Describe formatted shows pa...
Github user jackylk commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1689#discussion_r158191134 --- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala --- @@ -150,6 +150,11 @@ class TestShowPartition extends QueryTest with BeforeAndAfterAll { } + test("show partition table: desc formatted should show partition type"){ +//check for partition type exist in desc formatted +checkExistence(sql("describe formatted hashTable"),true,"Partition Type") --- End diff -- check the output of the type, whether it is correct ---
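The reviewer's point is that the test should assert the partition type's value, not merely that the key "Partition Type" appears in the `DESCRIBE FORMATTED` output. A self-contained sketch of extracting the value (using a simplified `(key, value)` row shape instead of Spark `Row`s, which is an assumption for illustration):

```scala
// Sketch: pull the value of "Partition Type" out of describe-formatted rows
// so the test can assert it equals the expected type (e.g. "HASH").
def partitionType(descRows: Seq[(String, String)]): Option[String] =
  descRows.collectFirst { case (key, value) if key.trim == "Partition Type" => value.trim }
```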
[GitHub] carbondata pull request #1700: [CARBONDATA-1860][PARTITION] Support insertov...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1700 ---
[GitHub] carbondata issue #1700: [CARBONDATA-1860][PARTITION] Support insertoverwrite...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1700 This PR is just a duplicate of https://github.com/apache/carbondata/pull/1677 . The other had a problem merging to master, so I raised a new PR ---
[GitHub] carbondata pull request #1677: [CARBONDATA-1860][PARTITION] Support insertov...
Github user ravipesala closed the pull request at: https://github.com/apache/carbondata/pull/1677 ---
[GitHub] carbondata pull request #1700: [CARBONDATA-1860][PARTITION] Support insertov...
GitHub user ravipesala opened a pull request: https://github.com/apache/carbondata/pull/1700 [CARBONDATA-1860][PARTITION] Support insertoverwrite for a specific partition. This PR depends on https://github.com/apache/carbondata/pull/1672 and https://github.com/apache/carbondata/pull/1674 Users should be able to overwrite data for a specific partition, like INSERT OVERWRITE TABLE partitioned_user PARTITION (country = 'US') SELECT * FROM another_user au WHERE au.country = 'US'; In the above example, the user overwrites only the partition (country = 'US') data, so the remaining partitions' data stays intact. While overwriting a specific partition, Carbon should first load the data into a new segment and then drop that partition from all remaining segments using the partition.map file. Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily: - [X] Any interfaces changed? NO - [X] Any backward compatibility impacted? NO - [X] Document update required? YES - [X] Testing done Tests added - [X] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ravipesala/incubator-carbondata partition-overwrite2 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/1700.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1700 commit 32e23c7e0d1dfb0435ae70b6d1311e68cec4c615 Author: ravipesalaDate: 2017-12-19T07:49:15Z Support insert overwrite partition commit 9f0b7d8b1d28cd452633762057d8c7204765e816 Author: ravipesala Date: 2017-12-20T18:07:30Z handle comments ---
[GitHub] carbondata issue #1575: [CARBONDATA-1698]Adding support for table level comp...
Github user Xaprice commented on the issue: https://github.com/apache/carbondata/pull/1575 retest this please ---
[GitHub] carbondata issue #1645: [CARBONDATA-1885] Fix Test error in AlterTableValida...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1645 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2197/ ---
[GitHub] carbondata issue #1698: [CARBONDATA-1923] Remove file after running test cla...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1698 retest this please ---
[GitHub] carbondata issue #1698: [CARBONDATA-1923] Remove file after running test cla...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1698 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2466/ ---
[GitHub] carbondata pull request #1575: [CARBONDATA-1698]Adding support for table lev...
Github user chenliang613 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1575#discussion_r158188263 --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala --- @@ -205,7 +205,8 @@ object CarbonDataRDDFactory { val newCarbonLoadModel = prepareCarbonLoadModel(table) - val compactionSize = CarbonDataMergerUtil.getCompactionSize(CompactionType.MAJOR) + val compactionSize = CarbonDataMergerUtil +.getCompactionSize(CompactionType.MAJOR, carbonLoadModel) --- End diff -- ok ---
[GitHub] carbondata issue #1677: [CARBONDATA-1860][PARTITION] Support insertoverwrite...
Github user gvramana commented on the issue: https://github.com/apache/carbondata/pull/1677 LGTM ---
[GitHub] carbondata pull request #1699: [CARBONDATA-1924] Restrict streaming on Parti...
GitHub user ravipesala opened a pull request: https://github.com/apache/carbondata/pull/1699 [CARBONDATA-1924] Restrict streaming on Partitioned table and support PARTITION syntax to the LOAD TABLE command Restrict streaming on Partitioned table and support PARTITION syntax to the LOAD TABLE command Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily: - [ ] Any interfaces changed? - [ ] Any backward compatibility impacted? - [ ] Document update required? - [ ] Testing done Please provide details on - Whether new unit test cases have been added or why no new tests are required? - How it is tested? Please attach test report. - Is it a performance related change? Please attach the performance test report. - Any additional information to help reviewers in testing this change. - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ravipesala/incubator-carbondata partition-load-syntax Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/1699.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1699 commit 88414667c29af37fca42d042997fb595bddf224f Author: ravipesalaDate: 2017-12-19T14:31:30Z Added PARTITION syntax to the LOAD TABLE ---
[GitHub] carbondata issue #1670: [CARBONDATA-1899] Add CarbonData concurrency test ca...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1670 Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/973/ ---
[GitHub] carbondata issue #1670: [CARBONDATA-1899] Add CarbonData concurrency test ca...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1670 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2196/ ---
[jira] [Updated] (CARBONDATA-1924) Add restriction for creating streaming table as partition table.And support PARTITION syntax to LOAD command
[ https://issues.apache.org/jira/browse/CARBONDATA-1924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravindra Pesala updated CARBONDATA-1924: Summary: Add restriction for creating streaming table as partition table.And support PARTITION syntax to LOAD command (was: Add restriction for creating streaming table as partition table.) > Add restriction for creating streaming table as partition table.And support > PARTITION syntax to LOAD command > > > Key: CARBONDATA-1924 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1924 > Project: CarbonData > Issue Type: Sub-task >Reporter: Ravindra Pesala > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (CARBONDATA-1858) Support querying data from partition table.
[ https://issues.apache.org/jira/browse/CARBONDATA-1858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravindra Pesala resolved CARBONDATA-1858. - Resolution: Fixed Assignee: Ravindra Pesala Fix Version/s: 1.3.0 > Support querying data from partition table. > --- > > Key: CARBONDATA-1858 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1858 > Project: CarbonData > Issue Type: Sub-task >Reporter: Ravindra Pesala >Assignee: Ravindra Pesala > Fix For: 1.3.0 > > Time Spent: 14h > Remaining Estimate: 0h > > For a partition table, first use the session catalog to prune the partitions. > With the partition information, the datamap should read the partition.map file to get > the index file and the corresponding blocklets to prune -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (CARBONDATA-1857) Create a system level switch for supporting standard partition or carbon custom partition.
[ https://issues.apache.org/jira/browse/CARBONDATA-1857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravindra Pesala resolved CARBONDATA-1857. - Resolution: Fixed Assignee: Ravindra Pesala Fix Version/s: 1.3.0 > Create a system level switch for supporting standard partition or carbon > custom partition. > -- > > Key: CARBONDATA-1857 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1857 > Project: CarbonData > Issue Type: Sub-task >Reporter: Ravindra Pesala >Assignee: Ravindra Pesala > Fix For: 1.3.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (CARBONDATA-1861) Support show partitions
[ https://issues.apache.org/jira/browse/CARBONDATA-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravindra Pesala resolved CARBONDATA-1861. - Resolution: Fixed Assignee: Ravindra Pesala Fix Version/s: 1.3.0 > Support show partitions > > > Key: CARBONDATA-1861 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1861 > Project: CarbonData > Issue Type: Sub-task >Reporter: Ravindra Pesala >Assignee: Ravindra Pesala > Fix For: 1.3.0 > > > Show partition information directly from sessioncatalog -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] carbondata pull request #1575: [CARBONDATA-1698]Adding support for table lev...
Github user Xaprice commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1575#discussion_r158186274 --- Diff: integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonDataRDDFactory.scala --- @@ -205,7 +205,8 @@ object CarbonDataRDDFactory { val newCarbonLoadModel = prepareCarbonLoadModel(table) - val compactionSize = CarbonDataMergerUtil.getCompactionSize(CompactionType.MAJOR) + val compactionSize = CarbonDataMergerUtil +.getCompactionSize(CompactionType.MAJOR, carbonLoadModel) --- End diff -- carbonLoadModel may contain table-level major compaction size if it is specified in create table SQL, so the purpose for adding parameter 'carbonLoadModel' is to get the table-level major compaction size. ---
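The rationale in the comment above (a table-level major compaction size from the CREATE TABLE properties, when present, should take precedence over the system-level default) can be sketched as a simple fallback. The helper name and signature are hypothetical; the real logic lives in `CarbonDataMergerUtil.getCompactionSize`.

```scala
// Sketch of the precedence rule: prefer the table-level major compaction
// size carried in the load model, fall back to the system default.
def effectiveCompactionSize(tableLevelSize: Option[Long], systemDefault: Long): Long =
  tableLevelSize.getOrElse(systemDefault)
```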
[GitHub] carbondata issue #1690: [WIP] CI random failure
Github user QiangCai commented on the issue: https://github.com/apache/carbondata/pull/1690 retest sdv please ---
[GitHub] carbondata issue #1698: [CARBONDATA-1923] Remove file after running test cla...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1698 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/972/ ---
[GitHub] carbondata issue #1698: [CARBONDATA-1923] Remove file after running test cla...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1698 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2195/ ---
[GitHub] carbondata issue #1690: [WIP] CI random failure
Github user QiangCai commented on the issue: https://github.com/apache/carbondata/pull/1690 retest this please ---
[GitHub] carbondata issue #1690: [WIP] CI random failure
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1690 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2194/ ---
[GitHub] carbondata issue #1690: [WIP] CI random failure
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1690 Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/971/ ---
[GitHub] carbondata issue #1698: [CARBONDATA-1923] Remove file after running test cla...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1698 retest sdv please ---
[GitHub] carbondata issue #1645: [CARBONDATA-1885] Fix Test error in AlterTableValida...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1645 retest this please ---
[GitHub] carbondata issue #1670: [CARBONDATA-1899] Add CarbonData concurrency test ca...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1670 retest this please ---
[GitHub] carbondata issue #1690: [WIP] CI random failure
Github user QiangCai commented on the issue: https://github.com/apache/carbondata/pull/1690 retest this please ---
[jira] [Resolved] (CARBONDATA-1680) Carbon 1.3.0-Partitioning:Show Partition for Hash Partition doesn't display the partition id
[ https://issues.apache.org/jira/browse/CARBONDATA-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacky Li resolved CARBONDATA-1680. -- Resolution: Fixed Fix Version/s: 1.3.0 > Carbon 1.3.0-Partitioning:Show Partition for Hash Partition doesn't display > the partition id > > > Key: CARBONDATA-1680 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1680 > Project: CarbonData > Issue Type: Bug > Components: sql >Affects Versions: 1.3.0 >Reporter: Ayushi Sharma >Assignee: Jatin >Priority: Minor > Fix For: 1.3.0 > > Attachments: Show_part_1_doc.PNG, show_part_1.PNG > > Time Spent: 2.5h > Remaining Estimate: 0h > > CREATE TABLE IF NOT EXISTS t9( > id Int, > logdate Timestamp, > phonenumber Int, > country String, > area String > ) > PARTITIONED BY (vin String) > STORED BY 'carbondata' > TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='5'); > show partitions t9; -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] carbondata pull request #1658: [CARBONDATA-1680] Fixed Bug to show partition...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1658 ---
[GitHub] carbondata pull request #1675: [CARBONDATA-1862][PARTITION] Support compacti...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1675 ---
[jira] [Resolved] (CARBONDATA-1862) Support compaction for partition table .
[ https://issues.apache.org/jira/browse/CARBONDATA-1862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacky Li resolved CARBONDATA-1862. -- Resolution: Fixed Assignee: Ravindra Pesala Fix Version/s: 1.3.0 > Support compaction for partition table. > > > Key: CARBONDATA-1862 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1862 > Project: CarbonData > Issue Type: Sub-task >Reporter: Ravindra Pesala >Assignee: Ravindra Pesala > Fix For: 1.3.0 > > Time Spent: 5h 40m > Remaining Estimate: 0h > > There is a change in compaction during block identification and grouping: > all blocks that belong to the same partition must always be grouped into the same > set for compaction. So the compactor needs to get the partition information from > the partition.map file during compaction of a partition table -- This message was sent by Atlassian JIRA (v6.4.14#64029)
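The grouping rule from the issue above (blocks belonging to the same partition must land in the same compaction set) reduces to a group-by over the partition key. `Block` here is a hypothetical stand-in for the real block metadata read from the partition.map file.

```scala
// Minimal sketch of compaction grouping: one compaction set per partition.
case class Block(name: String, partition: String)

def groupForCompaction(blocks: Seq[Block]): Map[String, Seq[Block]] =
  blocks.groupBy(_.partition)
```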
[GitHub] carbondata issue #1675: [CARBONDATA-1862][PARTITION] Support compaction for ...
Github user jackylk commented on the issue: https://github.com/apache/carbondata/pull/1675 LGTM ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1681 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2465/ ---
[GitHub] carbondata issue #1675: [CARBONDATA-1862][PARTITION] Support compaction for ...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1675 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2464/ ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1681 Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/970/ ---
[GitHub] carbondata issue #1675: [CARBONDATA-1862][PARTITION] Support compaction for ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1675 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/969/ ---
[GitHub] carbondata issue #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/DELETE o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1681 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2193/ ---
[GitHub] carbondata issue #1675: [CARBONDATA-1862][PARTITION] Support compaction for ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1675 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2192/ ---
[GitHub] carbondata issue #1677: [CARBONDATA-1860][PARTITION] Support insertoverwrite...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1677 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2463/ ---
[GitHub] carbondata issue #1677: [CARBONDATA-1860][PARTITION] Support insertoverwrite...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1677 Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/968/ ---
[GitHub] carbondata issue #1677: [CARBONDATA-1860][PARTITION] Support insertoverwrite...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1677 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2191/ ---
[GitHub] carbondata pull request #1681: [CARBONDATA-1908][PARTITION] Support UPDATE/D...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1681#discussion_r158108990 --- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableOutputFormat.java --- @@ -87,6 +87,7 @@ "mapreduce.carbontable.dict.server.host"; public static final String DICTIONARY_SERVER_PORT = "mapreduce.carbontable.dict.server.port"; + public static final String UPADTE_TIMESTAMP = "mapreduce.carbontable.update.timestamp"; --- End diff -- ok ---
[GitHub] carbondata issue #1694: [WIP]Added code to support case expression
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1694 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2462/ ---