[GitHub] carbondata pull request #2026: [CARBONDATA-2098] Add datamap managment descr...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2026#discussion_r172009728

--- Diff: docs/datamap/preaggregate-datamap-guide.md ---
@@ -193,8 +230,10 @@
 main table but not performed on pre-aggregate table, all queries still can benefit from
 pre-aggregate tables. To further improve the query performance, compaction on pre-aggregate tables
 can be triggered to merge the segments and files in the pre-aggregate tables.

- Data Management on pre-aggregate tables
-Once there is pre-aggregate table created on the main table, following command on the main table
+## Data Management with pre-aggregate tables
+In current implementation, data consistence need to maintained for both main table and pre-aggregate
--- End diff --

typo `need to be maintained`

---
[GitHub] carbondata issue #1990: [CARBONDATA-2195] Add new test case for partition fe...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1990 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2811/ ---
[GitHub] carbondata issue #1990: [CARBONDATA-2195] Add new test case for partition fe...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1990 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4057/ ---
[GitHub] carbondata issue #2020: [CARBONDATA-2220] Reduce unnecessary audit log
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2020 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4056/ ---
[GitHub] carbondata issue #2020: [CARBONDATA-2220] Reduce unnecessary audit log
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2020 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2810/ ---
[GitHub] carbondata issue #2026: [CARBONDATA-2098] Add datamap managment description
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2026 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4055/ ---
[GitHub] carbondata issue #2026: [CARBONDATA-2098] Add datamap managment description
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2026 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2809/ ---
[jira] [Resolved] (CARBONDATA-2204) Access tablestatus file too many times during query
[ https://issues.apache.org/jira/browse/CARBONDATA-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacky Li resolved CARBONDATA-2204.
----------------------------------
    Resolution: Fixed
    Fix Version/s: 1.3.1

> Access tablestatus file too many times during query
> ---------------------------------------------------
>
>                 Key: CARBONDATA-2204
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2204
>             Project: CarbonData
>          Issue Type: Improvement
>          Components: data-query
>    Affects Versions: 1.3.0
>            Reporter: xuchuanyin
>            Priority: Major
>             Fix For: 1.3.1
>
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> * Problems
> Currently in carbondata, a single query will access the tablestatus file 7 times,
> which will definitely slow down query performance, especially when this
> file is in a remote cluster, since reading this file is a purely client-side
> operation.
>
> * Steps to reproduce
> 1. Add a logger in `AtomicFileOperationsImpl.openForRead` and print out the file
> name to read.
> 2. Run a query on a carbondata table. Here I ran
> `TestLoadDataGeneral.test("test data loading CSV file without extension name")`.
> 3. Observe the output log and search for the keyword 'tablestatus'.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
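The direction of the fix can be sketched as a small memoization layer. This is a hypothetical illustration (the class and method names below are invented, not CarbonData's real API): the tablestatus content is cached per table path, so the seven accesses within one query trigger a single file read.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: memoize the tablestatus content per table path so
// that repeated accesses during one query hit the cache instead of the file.
class TableStatusCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private int fileReads = 0; // counts actual file reads, for illustration only

    // 'loader' stands in for the real read of the tablestatus file
    // (e.g. the code path behind AtomicFileOperationsImpl.openForRead).
    String get(String tablePath, Function<String, String> loader) {
        return cache.computeIfAbsent(tablePath, p -> {
            fileReads++;
            return loader.apply(p);
        });
    }

    int fileReads() { return fileReads; }
}
```

With such a cache in place, the seven reads observed in the reproduction steps collapse to one per table path (invalidation on load/compaction is omitted here for brevity).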
[GitHub] carbondata pull request #1999: [CARBONDATA-2204] Optimized number of reads o...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1999 ---
[GitHub] carbondata issue #1999: [CARBONDATA-2204] Optimized number of reads of table...
Github user jackylk commented on the issue: https://github.com/apache/carbondata/pull/1999 LGTM ---
[GitHub] carbondata issue #2020: [CARBONDATA-2220] Reduce unnecessary audit log
Github user jackylk commented on the issue: https://github.com/apache/carbondata/pull/2020 retest this please ---
[GitHub] carbondata pull request #2023: [HOXFIX] Add show and drop datamap code
Github user jackylk commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2023#discussion_r172007591

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/PreAggregateDataMapExample.scala ---
@@ -72,6 +73,13 @@ object PreAggregateTableExample {
         | select id,max(age) from mainTable group by id"""
       .stripMargin)

+    // show datamap
+    spark.sql("show datamap on table mainTable").show(false)
+
+    // drop datamap
+    spark.sql("drop datamap preagg_count on table mainTable").show()
+    spark.sql("show datamap on table mainTable").show(false)
+
     spark.sql(
       s"""
         | SELECT id,max(age)
--- End diff --

Since you are adding comments, can you also add one for each query, describing which datamap it will hit?

---
[GitHub] carbondata pull request #2023: [HOXFIX] Add show and drop datamap code
Github user jackylk commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2023#discussion_r172007572

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/PreAggregateDataMapExample.scala ---
@@ -51,6 +51,7 @@ object PreAggregateTableExample {
       LOAD DATA LOCAL INPATH '$testData' into table mainTable
       """)

+    // create datamaps of pre-aggregate
--- End diff --

better change to `create pre-aggregate table by datamap`

---
[GitHub] carbondata pull request #2023: [HOXFIX] Add show and drop datamap code
Github user jackylk commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2023#discussion_r172007576

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/PreAggregateDataMapExample.scala ---
@@ -101,7 +109,7 @@ object PreAggregateTableExample {
       .option("compress", "true")
       .mode(SaveMode.Overwrite).save()

-    // Create pre-aggregate table
+    // Create datamap of pre-aggregate
--- End diff --

change as above comment

---
[GitHub] carbondata pull request #2026: [CARBONDATA-2098] Add datamap managment descr...
GitHub user jackylk opened a pull request: https://github.com/apache/carbondata/pull/2026

[CARBONDATA-2098] Add datamap managment description

Enhance document for datamap

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jackylk/incubator-carbondata doc

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2026.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2026

commit 0fad0ec79b1ddd7daf80eda54ccfc3daf20ab220
Author: Jacky Li
Date:   2018-03-03T05:40:59Z

    change

---
[GitHub] carbondata issue #1999: [CARBONDATA-2204] Optimized number of reads of table...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1999 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4054/ ---
[GitHub] carbondata issue #1999: [CARBONDATA-2204] Optimized number of reads of table...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1999 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2808/ ---
[GitHub] carbondata pull request #2010: [CARBONDATA-2206] Fixed lucene datamap evalua...
Github user ravipesala closed the pull request at: https://github.com/apache/carbondata/pull/2010 ---
[GitHub] carbondata issue #2025: [CARBONDATA-2098] Optimize document for datamap
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2025 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2807/ ---
[GitHub] carbondata issue #2025: [CARBONDATA-2098] Optimize document for datamap
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2025 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4053/ ---
[GitHub] carbondata pull request #2025: [CARBONDATA-2098] Optimize document for datam...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/2025 ---
[GitHub] carbondata issue #2025: [CARBONDATA-2098] Optimize document for datamap
Github user chenliang613 commented on the issue: https://github.com/apache/carbondata/pull/2025 LGTM ---
[GitHub] carbondata pull request #2024: [HOTFIX] Fixed all examples
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/2024 ---
[GitHub] carbondata issue #2024: [HOTFIX] Fixed all examples
Github user chenliang613 commented on the issue: https://github.com/apache/carbondata/pull/2024 LGTM ---
[GitHub] carbondata pull request #2025: [CARBONDATA-2098] Optimize document for datam...
GitHub user jackylk opened a pull request: https://github.com/apache/carbondata/pull/2025

[CARBONDATA-2098] Optimize document for datamap

1. Separate document for preaggregate datamap and timeseries datamap, and move them into datamap folder under docs folder.
2. Optimize the document

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jackylk/incubator-carbondata refactory-doc

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2025.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2025

commit 1b2d37898c9229018afc2610d994292e4f19e279
Author: Jacky Li
Date:   2018-03-03T03:34:46Z

    modify doc for datamap

---
[GitHub] carbondata issue #2024: [HOTFIX] Fixed all examples
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2024 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2806/ ---
[GitHub] carbondata issue #2024: [HOTFIX] Fixed all examples
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2024 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4052/ ---
[GitHub] carbondata pull request #2024: [HOTFIX] Fixed all examples
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2024#discussion_r172003787

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/CarbonDataFrameExample.scala ---
@@ -54,13 +54,13 @@ object CarbonDataFrameExample {
     // Saves dataframe to carbondata file
     df.write
       .format("carbondata")
-      .option("tableName", "carbon_table")
+      .option("tableName", "carbon_df_table")
--- End diff --

I changed it because creation sometimes fails when all the examples use the same table name; a distinct name also better identifies the table used in each example.

---
[GitHub] carbondata pull request #2022: [CARBONDATA-2098] Optimize pre-aggregate docu...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/2022 ---
[GitHub] carbondata issue #2022: [CARBONDATA-2098] Optimize pre-aggregate documentati...
Github user jackylk commented on the issue: https://github.com/apache/carbondata/pull/2022 LGTM ---
[GitHub] carbondata pull request #2022: [CARBONDATA-2098] Optimize pre-aggregate docu...
Github user jackylk commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2022#discussion_r172000793

--- Diff: docs/preaggregate-guide.md ---
@@ -0,0 +1,313 @@
+# CarbonData Pre-aggregate tables
+
+## Quick example
+Download and unzip spark-2.2.0-bin-hadoop2.7.tgz, and export $SPARK_HOME
+
+Package carbon jar, and copy assembly/target/scala-2.11/carbondata_2.11-x.x.x-SNAPSHOT-shade-hadoop2.7.2.jar to $SPARK_HOME/jars
+```shell
+mvn clean package -DskipTests -Pspark-2.2
+```
+
+Start spark-shell in new terminal, type :paste, then copy and run the following code.
+```scala
+  import java.io.File
+  import org.apache.spark.sql.{CarbonEnv, SparkSession}
+  import org.apache.spark.sql.CarbonSession._
+  import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
+  import org.apache.carbondata.core.util.path.CarbonStorePath
+
+  val warehouse = new File("./warehouse").getCanonicalPath
+  val metastore = new File("./metastore").getCanonicalPath
+
+  val spark = SparkSession
+    .builder()
+    .master("local")
+    .appName("preAggregateExample")
+    .config("spark.sql.warehouse.dir", warehouse)
+    .getOrCreateCarbonSession(warehouse, metastore)
+
+  spark.sparkContext.setLogLevel("ERROR")
+
+  // drop table if exists previously
+  spark.sql(s"DROP TABLE IF EXISTS sales")
+  // Create target carbon table and populate with initial data
+  spark.sql(
+    s"""
+      | CREATE TABLE sales (
+      | user_id string,
+      | country string,
+      | quantity int,
+      | price bigint)
+      | STORED BY 'carbondata'""".stripMargin)
+
+  spark.sql(
+    s"""
+      | CREATE DATAMAP agg_sales
+      | ON TABLE sales
+      | USING "preaggregate"
+      | AS
+      | SELECT country, sum(quantity), avg(price)
+      | FROM sales
+      | GROUP BY country""".stripMargin)
+
+  import spark.implicits._
+  import org.apache.spark.sql.SaveMode
+  import scala.util.Random
+
+  val r = new Random()
+  val df = spark.sparkContext.parallelize(1 to 10)
+    .map(x => ("ID." + r.nextInt(10), "country" + x % 8, x % 50, x % 60))
+    .toDF("user_id", "country", "quantity", "price")
+
+  // Load data into the main table; its pre-aggregate table is loaded as well
+  df.write.format("carbondata")
+    .option("tableName", "sales")
+    .option("compress", "true")
+    .mode(SaveMode.Append).save()
+
+  spark.sql(
+    s"""
+      | SELECT country, sum(quantity), avg(price)
+      | FROM sales GROUP BY country""".stripMargin).show
+
+  spark.stop
+```
+
+## PRE-AGGREGATE TABLES
+  CarbonData supports pre-aggregating data so that OLAP-style queries can fetch data
+  much faster. Aggregate tables are created as datamaps so that the handling is as efficient as
+  other indexing support. Users can create as many aggregate tables as they require as datamaps to
+  improve their query performance, provided the storage requirements and loading speeds are
+  acceptable.
+
+  For a main table called **sales** which is defined as
+
+  ```
+  CREATE TABLE sales (
+    order_time timestamp,
+    user_id string,
+    sex string,
+    country string,
+    quantity int,
+    price bigint)
+  STORED BY 'carbondata'
+  ```
+
+  user can create pre-aggregate tables using the DDL
+
+  ```
+  CREATE DATAMAP agg_sales
+  ON TABLE sales
+  USING "preaggregate"
+  AS
+    SELECT country, sex, sum(quantity), avg(price)
+    FROM sales
+    GROUP BY country, sex
+  ```
+
+Functions supported in pre-aggregate tables
+
+| Function | Rollup supported |
+|----------|------------------|
+| SUM      | Yes              |
+| AVG      | Yes              |
+| MAX      | Yes              |
+| MIN      | Yes              |
+| COUNT    | Yes              |
+
+# How pre-aggregate tables are selected
+For the main table **sales** and pre-aggregate table **agg_sales** created above, queries of the
+kind
+```
+SELECT country, sex, sum(quantity), avg(price) from sales GROUP BY country, sex
+
+SELECT sex, sum(quantity) from sales GROUP BY sex
+
+SELECT sum(price), country from sales GROUP BY country
+```
+
+will be transformed by Query Planner to fetch data from pre-aggregate table **agg_sales**
+
+But queries of the kind
+```
+SELECT user_id, country, sex, sum(quantity), avg(price) from sales GROUP BY user_id, country, sex
+
+SELECT sex, avg(quantity) from sales GROUP BY sex
+
+SELECT country, max(price) from sales GROUP BY country
+```
+
+will fetch the data from the main table **sales**
+
+# Loading data to pre-aggregate tables
+For existing table with
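The selection behavior described in this guide (which queries hit **agg_sales** and which fall back to **sales**) can be sketched as a simplified matching rule. This is an illustrative model with invented names, not CarbonData's actual query planner (which, among other things, rolls AVG up from stored SUM and COUNT): a query is served by the pre-aggregate table only when its group-by columns and its aggregate expressions are both subsets of what the datamap materializes.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Simplified (hypothetical) pre-aggregate selection rule: a query can be
// rewritten onto the datamap when everything the query needs is stored there.
class PreAggMatcher {
    static boolean canUsePreAgg(Set<String> queryGroupBy, Set<String> queryAggs,
                                Set<String> datamapGroupBy, Set<String> datamapAggs) {
        // Both the group-by columns and the aggregate expressions of the
        // query must be covered by the datamap definition.
        return datamapGroupBy.containsAll(queryGroupBy)
            && datamapAggs.containsAll(queryAggs);
    }
}
```

Under this rule, `SELECT sex, sum(quantity) FROM sales GROUP BY sex` matches the **agg_sales** definition above, while adding `user_id` to the group-by or asking for `avg(quantity)` (not materialized) does not.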
[GitHub] carbondata issue #2024: [HOTFIX] Fixed all examples
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2024 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4051/ ---
[GitHub] carbondata issue #2024: [HOTFIX] Fixed all examples
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2024 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2805/ ---
[GitHub] carbondata pull request #2024: [HOTFIX] Fixed all examples
Github user chenliang613 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2024#discussion_r171992561

--- Diff: examples/spark2/src/main/scala/org/apache/carbondata/examples/CarbonDataFrameExample.scala ---
@@ -54,13 +54,13 @@ object CarbonDataFrameExample {
     // Saves dataframe to carbondata file
     df.write
       .format("carbondata")
-      .option("tableName", "carbon_table")
+      .option("tableName", "carbon_df_table")
--- End diff --

Why does the table need to be renamed? Is there any rule?

---
[GitHub] carbondata issue #2024: [HOTFIX] Fixed all examples
Github user chenliang613 commented on the issue: https://github.com/apache/carbondata/pull/2024 retest this please ---
[GitHub] carbondata issue #2022: [CARBONDATA-2098] Optimize pre-aggregate documentati...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2022 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2804/ ---
[GitHub] carbondata issue #2022: [CARBONDATA-2098] Optimize pre-aggregate documentati...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2022 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4050/ ---
[GitHub] carbondata issue #2024: [HOTFIX] Fixed all examples
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2024 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4049/ ---
[GitHub] carbondata issue #2024: [HOTFIX] Fixed all examples
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2024 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2803/ ---
[GitHub] carbondata pull request #2024: [HOTFIX] Fixed all examples
GitHub user ravipesala opened a pull request: https://github.com/apache/carbondata/pull/2024

[HOTFIX] Fixed all examples

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ravipesala/incubator-carbondata example-fix

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2024.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2024

commit a1312bf7275c4f4108ee73566b92410a0bb1e866
Author: ravipesala
Date:   2018-03-02T16:16:50Z

    Fixed all examples

---
[jira] [Resolved] (CARBONDATA-2209) Rename table with partitions not working issue and batch_sort and no_sort with partition table issue
[ https://issues.apache.org/jira/browse/CARBONDATA-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Venkata Ramana G resolved CARBONDATA-2209.
------------------------------------------
    Resolution: Fixed
    Assignee: Ravindra Pesala
    Fix Version/s: 1.3.1

> Rename table with partitions not working issue and batch_sort and no_sort with partition table issue
>
>                 Key: CARBONDATA-2209
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2209
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Ravindra Pesala
>            Assignee: Ravindra Pesala
>            Priority: Major
>             Fix For: 1.3.1
>
>          Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> 1. After renaming a partitioned table, querying it returns empty data.
> 2. Batch sort and no sort loading are not working on partitioned tables.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata pull request #2006: [CARBONDATA-2209] Fixed rename table with par...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/2006 ---
[GitHub] carbondata issue #2006: [CARBONDATA-2209] Fixed rename table with partitions...
Github user gvramana commented on the issue: https://github.com/apache/carbondata/pull/2006 LGTM ---
[GitHub] carbondata issue #2006: [CARBONDATA-2209] Fixed rename table with partitions...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2006 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2802/ ---
[GitHub] carbondata issue #2006: [CARBONDATA-2209] Fixed rename table with partitions...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2006 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4048/ ---
[GitHub] carbondata issue #2022: [WIP][CARBONDATA-2098] Optimize pre-aggregate docume...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2022 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4046/ ---
[GitHub] carbondata issue #2006: [CARBONDATA-2209] Fixed rename table with partitions...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2006 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2801/ ---
[GitHub] carbondata issue #2006: [CARBONDATA-2209] Fixed rename table with partitions...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2006 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4047/ ---
[jira] [Resolved] (CARBONDATA-2219) Add validation for external partition location to use same schema
[ https://issues.apache.org/jira/browse/CARBONDATA-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Venkata Ramana G resolved CARBONDATA-2219.
------------------------------------------
    Resolution: Fixed
    Assignee: Ravindra Pesala
    Fix Version/s: 1.3.1

> Add validation for external partition location to use same schema
>
>                 Key: CARBONDATA-2219
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2219
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Ravindra Pesala
>            Assignee: Ravindra Pesala
>            Priority: Major
>             Fix For: 1.3.1
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata pull request #2018: [CARBONDATA-2219] Added validation for extern...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/2018 ---
[GitHub] carbondata pull request #2006: [CARBONDATA-2209] Fixed rename table with par...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2006#discussion_r171867026

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableRenameCommand.scala ---
@@ -138,6 +147,27 @@ private[sql] case class CarbonAlterTableRenameCommand(
         sys.error(s"Folder rename failed for table $oldDatabaseName.$oldTableName")
       }
     }
+    val updatedParts = updatePartitionLocations(
--- End diff --

ok

---
[GitHub] carbondata pull request #2006: [CARBONDATA-2209] Fixed rename table with par...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2006#discussion_r171867005 --- Diff: core/src/main/java/org/apache/carbondata/core/writer/CarbonIndexFileMergeWriter.java --- @@ -38,85 +46,158 @@ /** * Merge all the carbonindex files of segment to a merged file - * @param segmentPath + * @param tablePath * @param indexFileNamesTobeAdded while merging it comsiders only these files. *If null then consider all * @param readFileFooterFromCarbonDataFile flag to read file footer information from carbondata * file. This will used in case of upgrade from version * which do not store the blocklet info to current version * @throws IOException */ - private void mergeCarbonIndexFilesOfSegment(String segmentPath, - List indexFileNamesTobeAdded, boolean readFileFooterFromCarbonDataFile) - throws IOException { -CarbonFile[] indexFiles = SegmentIndexFileStore.getCarbonIndexFiles(segmentPath); + private SegmentIndexFIleMergeStatus mergeCarbonIndexFilesOfSegment(String segmentId, + String tablePath, List indexFileNamesTobeAdded, + boolean readFileFooterFromCarbonDataFile) throws IOException { +Segment segment = Segment.getSegment(segmentId, tablePath); +String segmentPath = CarbonTablePath.getSegmentPath(tablePath, segmentId); +CarbonFile[] indexFiles; +SegmentFileStore sfs = null; +if (segment != null && segment.getSegmentFileName() != null) { + sfs = new SegmentFileStore(tablePath, segment.getSegmentFileName()); + List indexCarbonFiles = sfs.getIndexCarbonFiles(); + indexFiles = indexCarbonFiles.toArray(new CarbonFile[indexCarbonFiles.size()]); +} else { + indexFiles = SegmentIndexFileStore.getCarbonIndexFiles(segmentPath); +} if (isCarbonIndexFilePresent(indexFiles) || indexFileNamesTobeAdded != null) { - SegmentIndexFileStore fileStore = new SegmentIndexFileStore(); - if (readFileFooterFromCarbonDataFile) { -// this case will be used in case of upgrade where old store will not have the blocklet -// info in the index file 
and therefore blocklet info need to be read from the file footer -// in the carbondata file -fileStore.readAllIndexAndFillBolckletInfo(segmentPath); + if (sfs == null) { +return mergeNormalSegment(indexFileNamesTobeAdded, readFileFooterFromCarbonDataFile, +segmentPath, indexFiles); } else { -fileStore.readAllIIndexOfSegment(segmentPath); +return mergePartitionSegment(indexFileNamesTobeAdded, sfs, indexFiles); } - MapindexMap = fileStore.getCarbonIndexMap(); - MergedBlockIndexHeader indexHeader = new MergedBlockIndexHeader(); - MergedBlockIndex mergedBlockIndex = new MergedBlockIndex(); - List fileNames = new ArrayList<>(indexMap.size()); - List data = new ArrayList<>(indexMap.size()); - for (Map.Entry entry : indexMap.entrySet()) { -if (indexFileNamesTobeAdded == null || -indexFileNamesTobeAdded.contains(entry.getKey())) { - fileNames.add(entry.getKey()); - data.add(ByteBuffer.wrap(entry.getValue())); -} +} +return null; + } + + + private SegmentIndexFIleMergeStatus mergeNormalSegment(List indexFileNamesTobeAdded, + boolean readFileFooterFromCarbonDataFile, String segmentPath, CarbonFile[] indexFiles) + throws IOException { +SegmentIndexFileStore fileStore = new SegmentIndexFileStore(); +if (readFileFooterFromCarbonDataFile) { + // this case will be used in case of upgrade where old store will not have the blocklet + // info in the index file and therefore blocklet info need to be read from the file footer + // in the carbondata file + fileStore.readAllIndexAndFillBolckletInfo(segmentPath); +} else { + fileStore.readAllIIndexOfSegment(segmentPath); +} +Map indexMap = fileStore.getCarbonIndexMap(); +writeMergeIndexFile(indexFileNamesTobeAdded, segmentPath, indexMap); +for (CarbonFile indexFile : indexFiles) { + indexFile.delete(); --- End diff -- ok ---
[GitHub] carbondata pull request #2006: [CARBONDATA-2209] Fixed rename table with par...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2006#discussion_r171865808 --- Diff: core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/SegmentIndexFileStore.java --- @@ -108,11 +109,20 @@ public void readAllIIndexOfSegment(SegmentFileStore segmentFileStore, SegmentSta location = segmentFileStore.getTablePath() + CarbonCommonConstants.FILE_SEPARATOR + location; } +String mergeFileName = locations.getValue().getMergeFileName(); for (String indexFile : locations.getValue().getFiles()) { CarbonFile carbonFile = FileFactory .getCarbonFile(location + CarbonCommonConstants.FILE_SEPARATOR + indexFile); - if (carbonFile.exists()) { + if (carbonFile.exists() && !indexFiles.contains(carbonFile.getAbsolutePath())) { carbonIndexFiles.add(carbonFile); +indexFiles.add(carbonFile.getAbsolutePath()); + } else if (mergeFileName != null) { --- End diff -- ok ---
[GitHub] carbondata issue #2018: [CARBONDATA-2219] Added validation for external part...
Github user gvramana commented on the issue: https://github.com/apache/carbondata/pull/2018 LGTM ---
[GitHub] carbondata pull request #2015: [CARBONDATA-2103]Make show datamaps configura...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/2015 ---
[GitHub] carbondata issue #2022: [WIP][CARBONDATA-2098] Optimize pre-aggregate docume...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2022 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2800/ ---
[GitHub] carbondata pull request #2006: [CARBONDATA-2209] Fixed rename table with par...
Github user gvramana commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2006#discussion_r171863712

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/schema/CarbonAlterTableRenameCommand.scala ---
@@ -138,6 +147,27 @@ private[sql] case class CarbonAlterTableRenameCommand(
         sys.error(s"Folder rename failed for table $oldDatabaseName.$oldTableName")
       }
     }
+    val updatedParts = updatePartitionLocations(
--- End diff --

We need to check how Hive managed table rename works: does it rename the location or not? If not, how does creating a table with the old table name work?

---
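The `updatePartitionLocations` call under review can be sketched roughly as follows. This is a hypothetical illustration of the idea, not the actual CarbonData implementation: when the table folder is renamed, partition locations that pointed inside the old table path must be rewritten to the new path, while external partition locations are left untouched.

```java
// Hypothetical sketch of rewriting partition locations on table rename:
// internal locations follow the renamed table folder, external ones stay put.
class PartitionLocationUpdater {
    static String updateLocation(String location, String oldTablePath, String newTablePath) {
        if (location.startsWith(oldTablePath)) {
            // Partition lives inside the table folder: follow the rename.
            return newTablePath + location.substring(oldTablePath.length());
        }
        // External partition location: leave as-is.
        return location;
    }
}
```

This also suggests why the reviewer's question matters: if Hive keeps the old location on rename, no rewrite is needed, but then re-creating a table under the old name could collide with the still-occupied folder.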
[GitHub] carbondata issue #1999: [CARBONDATA-2204] Optimized number of reads of table...
Github user zzcclp commented on the issue: https://github.com/apache/carbondata/pull/1999 When can this PR be merged? ---
[GitHub] carbondata issue #2023: [HOXFIX] Add show and drop datamap code
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2023 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2798/ ---
[GitHub] carbondata issue #2023: [HOXFIX] Add show and drop datamap code
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2023 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4044/ ---
[GitHub] carbondata issue #1990: [CARBONDATA-2195] Add new test case for partition fe...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1990 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2797/ ---
[GitHub] carbondata issue #1990: [CARBONDATA-2195] Add new test case for partition fe...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1990 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4043/ ---
[GitHub] carbondata issue #2015: [CARBONDATA-2103]Make show datamaps configurable in ...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2015 LGTM ---
[GitHub] carbondata pull request #2006: [CARBONDATA-2209] Fixed rename table with par...
Github user gvramana commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2006#discussion_r171852959 --- Diff: core/src/main/java/org/apache/carbondata/core/writer/CarbonIndexFileMergeWriter.java --- @@ -38,85 +46,158 @@ /** * Merge all the carbonindex files of segment to a merged file - * @param segmentPath + * @param tablePath * @param indexFileNamesTobeAdded while merging it considers only these files. * If null then consider all * @param readFileFooterFromCarbonDataFile flag to read file footer information from carbondata * file. This will be used in case of upgrade from version * which do not store the blocklet info to current version * @throws IOException */ - private void mergeCarbonIndexFilesOfSegment(String segmentPath, - List<String> indexFileNamesTobeAdded, boolean readFileFooterFromCarbonDataFile) - throws IOException { - CarbonFile[] indexFiles = SegmentIndexFileStore.getCarbonIndexFiles(segmentPath); + private SegmentIndexFIleMergeStatus mergeCarbonIndexFilesOfSegment(String segmentId, + String tablePath, List<String> indexFileNamesTobeAdded, + boolean readFileFooterFromCarbonDataFile) throws IOException { + Segment segment = Segment.getSegment(segmentId, tablePath); + String segmentPath = CarbonTablePath.getSegmentPath(tablePath, segmentId); + CarbonFile[] indexFiles; + SegmentFileStore sfs = null; + if (segment != null && segment.getSegmentFileName() != null) { + sfs = new SegmentFileStore(tablePath, segment.getSegmentFileName()); + List<CarbonFile> indexCarbonFiles = sfs.getIndexCarbonFiles(); + indexFiles = indexCarbonFiles.toArray(new CarbonFile[indexCarbonFiles.size()]); + } else { + indexFiles = SegmentIndexFileStore.getCarbonIndexFiles(segmentPath); + } if (isCarbonIndexFilePresent(indexFiles) || indexFileNamesTobeAdded != null) { - SegmentIndexFileStore fileStore = new SegmentIndexFileStore(); - if (readFileFooterFromCarbonDataFile) { - // this case will be used in case of upgrade where old store will not have the blocklet - // info in the index file and therefore blocklet info need to be read from the file footer - // in the carbondata file - fileStore.readAllIndexAndFillBolckletInfo(segmentPath); + if (sfs == null) { + return mergeNormalSegment(indexFileNamesTobeAdded, readFileFooterFromCarbonDataFile, + segmentPath, indexFiles); } else { - fileStore.readAllIIndexOfSegment(segmentPath); + return mergePartitionSegment(indexFileNamesTobeAdded, sfs, indexFiles); } - Map<String, byte[]> indexMap = fileStore.getCarbonIndexMap(); - MergedBlockIndexHeader indexHeader = new MergedBlockIndexHeader(); - MergedBlockIndex mergedBlockIndex = new MergedBlockIndex(); - List<String> fileNames = new ArrayList<>(indexMap.size()); - List<ByteBuffer> data = new ArrayList<>(indexMap.size()); - for (Map.Entry<String, byte[]> entry : indexMap.entrySet()) { - if (indexFileNamesTobeAdded == null || - indexFileNamesTobeAdded.contains(entry.getKey())) { - fileNames.add(entry.getKey()); - data.add(ByteBuffer.wrap(entry.getValue())); - } + } + return null; + } + + private SegmentIndexFIleMergeStatus mergeNormalSegment(List<String> indexFileNamesTobeAdded, + boolean readFileFooterFromCarbonDataFile, String segmentPath, CarbonFile[] indexFiles) + throws IOException { + SegmentIndexFileStore fileStore = new SegmentIndexFileStore(); + if (readFileFooterFromCarbonDataFile) { + // this case will be used in case of upgrade where old store will not have the blocklet + // info in the index file and therefore blocklet info need to be read from the file footer + // in the carbondata file + fileStore.readAllIndexAndFillBolckletInfo(segmentPath); + } else { + fileStore.readAllIIndexOfSegment(segmentPath); + } + Map<String, byte[]> indexMap = fileStore.getCarbonIndexMap(); + writeMergeIndexFile(indexFileNamesTobeAdded, segmentPath, indexMap); + for (CarbonFile indexFile : indexFiles) { + indexFile.delete(); --- End diff -- deletion should be postponed as parallel read can happen ---
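The review note "deletion should be postponed as parallel read can happen" describes a general ordering pattern: write and commit the merged artifact first, and only then delete the source files, so a concurrent reader always finds either the originals or the merged copy. The sketch below illustrates that ordering under stated assumptions; it uses plain `java.io.File` as a hypothetical stand-in for `CarbonFile`, and `mergeThenDelete` is an invented name, not the actual CarbonIndexFileMergeWriter API.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

public class DeferredDelete {
    // Merge index files into one, but postpone deletion of the originals
    // until the merged file is fully written and closed, so readers running
    // in parallel never observe a half-merged segment.
    public static List<File> mergeThenDelete(List<File> indexFiles, File mergedFile)
            throws IOException {
        List<File> pendingDelete = new ArrayList<>();
        try (FileOutputStream out = new FileOutputStream(mergedFile)) {
            for (File f : indexFiles) {
                out.write(Files.readAllBytes(f.toPath()));
                pendingDelete.add(f);  // remember for later; do NOT delete yet
            }
        }
        // The merged file is durable now; the old files can safely go.
        List<File> deleted = new ArrayList<>();
        for (File f : pendingDelete) {
            if (f.delete()) {
                deleted.add(f);
            }
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("seg").toFile();
        File a = new File(dir, "a.carbonindex");
        File b = new File(dir, "b.carbonindex");
        Files.write(a.toPath(), "A".getBytes());
        Files.write(b.toPath(), "B".getBytes());
        File merged = new File(dir, "merged.carbonindexmerge");
        List<File> deleted = mergeThenDelete(List.of(a, b), merged);
        System.out.println(deleted.size() + " deleted, merged exists: " + merged.exists());
    }
}
```

The key design point is that `pendingDelete` is only drained after the try-with-resources block closes the output stream, i.e. after the merged file is complete on disk.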
[GitHub] carbondata issue #2015: [CARBONDATA-2103]Make show datamaps configurable in ...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2015 @akashrn5 Please rebase it ---
[jira] [Resolved] (CARBONDATA-2217) nullpointer issue drop partition where column does not exists and clean files issue after second level of compaction
[ https://issues.apache.org/jira/browse/CARBONDATA-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravindra Pesala resolved CARBONDATA-2217.
Resolution: Fixed
Fix Version/s: 1.3.1, 1.4.0

> nullpointer issue drop partition where column does not exists and clean files issue after second level of compaction
>
> Key: CARBONDATA-2217
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2217
> Project: CarbonData
> Issue Type: Bug
> Components: core, spark-integration
> Reporter: Akash R Nilugal
> Assignee: Akash R Nilugal
> Priority: Minor
> Fix For: 1.4.0, 1.3.1
> Time Spent: 2h 20m
> Remaining Estimate: 0h
>
> 1) When drop partition is fired for a column which does not exist, it throws a null pointer exception.
> 2) select * is not working when the clean files operation is fired after a second level of compaction.
>
> create table comp_dt2(id int,name string) partitioned by (dt date,c4 int) stored by 'carbondata';
> insert into comp_dt2 select 1,'A','2001-01-01',1;
> insert into comp_dt2 select 2,'B','2001-01-01',1;
> insert into comp_dt2 select 3,'C','2002-01-01',2;
> insert into comp_dt2 select 4,'D','2002-01-01',null;
> insert into comp_dt2 select 5,'E','2003-01-01',3;
> insert into comp_dt2 select 6,'F','2003-01-01',3;
> insert into comp_dt2 select 7,'G','2003-01-01',4;
> insert into comp_dt2 select 8,'H','2004-01-01','';
> insert into comp_dt2 select 9,'H','2001-01-01',1;
> insert into comp_dt2 select 10,'I','2002-01-01',null;
> insert into comp_dt2 select 11,'J','2003-01-01',4;
> insert into comp_dt2 select 12,'K','2003-01-01',5;
>
> clean files for table comp_dt2;
> select * from comp_dt2

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata pull request #2017: [CARBONDATA-2217]fix drop partition for non e...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/2017 ---
[GitHub] carbondata pull request #2006: [CARBONDATA-2209] Fixed rename table with par...
Github user gvramana commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2006#discussion_r171848791 --- Diff: core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/SegmentIndexFileStore.java --- @@ -108,11 +109,20 @@ public void readAllIIndexOfSegment(SegmentFileStore segmentFileStore, SegmentSta location = segmentFileStore.getTablePath() + CarbonCommonConstants.FILE_SEPARATOR + location; } +String mergeFileName = locations.getValue().getMergeFileName(); for (String indexFile : locations.getValue().getFiles()) { CarbonFile carbonFile = FileFactory .getCarbonFile(location + CarbonCommonConstants.FILE_SEPARATOR + indexFile); - if (carbonFile.exists()) { + if (carbonFile.exists() && !indexFiles.contains(carbonFile.getAbsolutePath())) { carbonIndexFiles.add(carbonFile); +indexFiles.add(carbonFile.getAbsolutePath()); + } else if (mergeFileName != null) { --- End diff -- Can move mergeFilename logic out of loop ---
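The "can move mergeFilename logic out of loop" suggestion is plain loop-invariant hoisting: `mergeFileName` does not change across iterations of the per-file loop, so its handling can run once before the loop instead of being re-evaluated per index file. A hedged sketch of the shape, with simplified stand-in types (`String` paths instead of `CarbonFile`; `collect` and `seen` are hypothetical names, not the real SegmentIndexFileStore API):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HoistInvariant {
    // Collect index file paths for one segment location. The merge file is
    // loop-invariant, so it is handled once before iterating the index files;
    // the shared 'seen' set deduplicates paths across repeated calls.
    public static List<String> collect(String location, String mergeFileName,
                                       List<String> files, Set<String> seen) {
        List<String> result = new ArrayList<>();
        // merge-file handling hoisted out of the per-file loop
        if (mergeFileName != null) {
            String mergePath = location + "/" + mergeFileName;
            if (seen.add(mergePath)) {
                result.add(mergePath);
            }
        }
        for (String indexFile : files) {
            String path = location + "/" + indexFile;
            if (seen.add(path)) {  // Set.add returns false for duplicates
                result.add(path);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Set<String> seen = new HashSet<>();
        List<String> r = collect("/seg_0", "0.carbonindexmerge",
                List.of("a.carbonindex", "a.carbonindex"), seen);
        System.out.println(r);
    }
}
```

Besides avoiding redundant work, hoisting makes the invariant explicit to the reader: the merge file belongs to the location, not to any particular index file in it.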
[GitHub] carbondata issue #2017: [CARBONDATA-2217]fix drop partition for non existing...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2017 LGTM ---
[GitHub] carbondata pull request #2023: [HOXFIX] Add show and drop datamap code
GitHub user chenliang613 opened a pull request: https://github.com/apache/carbondata/pull/2023 [HOXFIX] Add show and drop datamap code Add show and drop datamap code. Update assemble pom for changing jar name. Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily: - [X] Any interfaces changed? NA - [X] Any backward compatibility impacted? NA - [X] Document update required? NA - [X] Testing done Please provide details on - Whether new unit test cases have been added or why no new tests are required? - How it is tested? Please attach test report. - Is it a performance related change? Please attach the performance test report. - Any additional information to help reviewers in testing this change. - [X] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA You can merge this pull request into a Git repository by running: $ git pull https://github.com/chenliang613/carbondata datamap Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/2023.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2023 commit 8e747bc24574731ee3cbbdb9bf74ea959facb6df Author: chenliang613 Date: 2018-03-02T13:33:05Z add show and drop datamap code ---
[GitHub] carbondata pull request #1956: [HOTFIX] Add partition usage code
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1956 ---
[GitHub] carbondata issue #1933: [CARBONDATA-2132] [Partition] Fixed Error while load...
Github user anubhav100 commented on the issue: https://github.com/apache/carbondata/pull/1933 @jackylk please review this pr ---
[GitHub] carbondata issue #1933: [CARBONDATA-2132] [Partition] Fixed Error while load...
Github user anubhav100 commented on the issue: https://github.com/apache/carbondata/pull/1933 @jackylk ---
[GitHub] carbondata issue #1956: [HOTFIX] Add partition usage code
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1956 LGTM ---
[jira] [Resolved] (CARBONDATA-2144) There are some improper place in pre-aggregate documentation
[ https://issues.apache.org/jira/browse/CARBONDATA-2144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liang Chen resolved CARBONDATA-2144.
Resolution: Fixed
Fix Version/s: 1.3.1

> There are some improper place in pre-aggregate documentation
>
> Key: CARBONDATA-2144
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2144
> Project: CarbonData
> Issue Type: Improvement
> Components: docs
> Reporter: xubo245
> Assignee: xubo245
> Priority: Major
> Fix For: 1.3.1
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Optimize pre-aggregate documentation:
> * add blank space
> * upper case
> like:
> Carbondata supports pre aggregating of data so that OLAP kind of queries can fetch data much faster.Aggregate tables are created as datamaps so that the handling is as efficient as other indexing support.Users can create as many aggregate tables they require as datamaps to improve their query performance,provided the storage requirements and loading speeds are acceptable.
> For main table called sales which is defined as
> CREATE TABLE sales (
> order_time timestamp,
> user_id string,
> sex string,
> country string,
> quantity int,
> price bigint)
> STORED BY 'carbondata')
> need to

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata pull request #1949: [HOTFIX][CARBONDATA-2144] Optimize preaggrega...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1949 ---
[GitHub] carbondata pull request #1941: [CARBONDATA-1506] fix SDV error in PushUP_FIL...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1941 ---
[GitHub] carbondata issue #1949: [HOTFIX][CARBONDATA-2144] Optimize preaggregate tabl...
Github user chenliang613 commented on the issue: https://github.com/apache/carbondata/pull/1949 LGTM ---
[GitHub] carbondata issue #1857: [CARBONDATA-2073][CARBONDATA-1516][Tests] Add test c...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1857 @jackylk @kumarvishal09 Please review it. ---
[GitHub] carbondata issue #1930: [CARBONDATA-2130] Find some spelling error in Carbon...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1930 @jackylk @chenliang613 Please review it ---
[GitHub] carbondata issue #2022: [CARBONDATA-2098] Optimize pre-aggregate documentati...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2022 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4041/ ---
[GitHub] carbondata issue #2022: [CARBONDATA-2098] Optimize pre-aggregate documentati...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2022 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2795/ ---
[GitHub] carbondata issue #1995: [WIP] File Format Reader
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1995 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4042/ ---
[GitHub] carbondata issue #1956: [HOTFIX] Add partition usage code
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1956 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4040/ ---
[GitHub] carbondata issue #1995: [WIP] File Format Reader
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1995 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2796/ ---
[GitHub] carbondata issue #1857: [CARBONDATA-2073][CARBONDATA-1516][Tests] Add test c...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1857 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4039/ ---
[GitHub] carbondata issue #1956: [HOTFIX] Add partition usage code
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1956 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2794/ ---
[GitHub] carbondata issue #1857: [CARBONDATA-2073][CARBONDATA-1516][Tests] Add test c...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1857 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2793/ ---
[GitHub] carbondata issue #1949: [HOTFIX][CARBONDATA-2144] Optimize preaggregate tabl...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1949 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4038/ ---
[GitHub] carbondata pull request #2022: [CARBONDATA-2098] Optimize pre-aggregate docu...
GitHub user sraghunandan opened a pull request: https://github.com/apache/carbondata/pull/2022 [CARBONDATA-2098] Optimize pre-aggregate documentation optimize pre-aggregate documentation move to separate file add more examples Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily: - [x] Any interfaces changed? No - [x] Any backward compatibility impacted? No - [x] Document update required? Updating docs - [x] Testing done Please provide details on - Whether new unit test cases have been added or why no new tests are required? - How it is tested? Please attach test report. - Is it a performance related change? Please attach the performance test report. - Any additional information to help reviewers in testing this change. NA - [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA You can merge this pull request into a Git repository by running: $ git pull https://github.com/sraghunandan/carbondata-1 agg_doc_new_file Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/2022.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2022 commit 742359d1640bab97b3c0d40d948b0bedf8fe6a30 Author: sraghunandan Date: 2018-03-02T11:32:39Z optimize pre-aggregate documentation;move to separate file;add more examples ---
[GitHub] carbondata issue #1949: [HOTFIX][CARBONDATA-2144] Optimize preaggregate tabl...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1949 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2792/ ---
[GitHub] carbondata issue #1990: [CARBONDATA-2195] Add new test case for partition fe...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1990 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4037/ ---
[GitHub] carbondata issue #1990: [CARBONDATA-2195] Add new test case for partition fe...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1990 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2791/ ---
[GitHub] carbondata issue #2017: [CARBONDATA-2217]fix drop partition for non existing...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2017 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4036/ ---
[GitHub] carbondata issue #2017: [CARBONDATA-2217]fix drop partition for non existing...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2017 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2790/ ---
[GitHub] carbondata issue #1930: [CARBONDATA-2130] Find some spelling error in Carbon...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1930 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2789/ ---
[GitHub] carbondata issue #1930: [CARBONDATA-2130] Find some spelling error in Carbon...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1930 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4035/ ---
[GitHub] carbondata issue #1949: [HOTFIX][CARBONDATA-2144] Optimize preaggregate tabl...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1949 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2788/ ---
[GitHub] carbondata issue #2017: [CARBONDATA-2217]fix drop partition for non existing...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2017 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2787/ ---
[GitHub] carbondata issue #2018: [CARBONDATA-2219] Added validation for external part...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2018 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3748/ ---