[CARBONDATA-1997] Add CarbonWriter SDK API
Added a new module called store-sdk, along with a CarbonWriter API that can be
used to write CarbonData files to a specified folder without any Spark or
Hadoop dependency, so users can call this API from any environment.
This closes #1967
Project:
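The actual CarbonWriter API is not shown in this excerpt. As a rough, hypothetical sketch of the idea — a builder-style writer that emits data files into a plain folder with nothing but the JDK on the classpath — the following toy stand-in may help (`FolderWriter` and its methods are invented names for illustration, not the real SDK classes):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for the builder-style writer idea: collect rows and flush
// them as a data file into a target folder, with no Spark/Hadoop involved.
public class FolderWriter {
    private final Path outputFolder;
    private final List<String> rows = new ArrayList<>();

    private FolderWriter(Path outputFolder) { this.outputFolder = outputFolder; }

    // Builder entry point, mirroring the "point the writer at a folder" usage.
    public static FolderWriter builder(String folder) throws IOException {
        Path p = Paths.get(folder);
        Files.createDirectories(p);
        return new FolderWriter(p);
    }

    // Buffer one row; fields are joined into a simple CSV line.
    public void write(String... fields) {
        rows.add(String.join(",", fields));
    }

    // close() persists everything collected so far as one file in the folder.
    public Path close() throws IOException {
        Path file = outputFolder.resolve("part-0.csv");
        Files.write(file, rows);
        return file;
    }

    public static void main(String[] args) throws IOException {
        FolderWriter writer = FolderWriter.builder("target-folder");
        writer.write("1", "alice");
        writer.write("2", "bob");
        Path written = writer.close();
        System.out.println("wrote " + Files.readAllLines(written).size()
                + " rows to " + written);
    }
}
```

The real SDK writes CarbonData-format files rather than CSV; the sketch only shows why no cluster dependency is needed for this style of API.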
[CARBONDATA-1827] S3 Carbon Implementation
1. Provide support for S3 in CarbonData.
2. Added S3Example to create a carbon table on S3.
3. Added S3CSVExample to load a carbon table using CSV from S3.
This closes #1805
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2099] Refactor query scan process to improve readability
Unified concepts in the scan process flow:
1. QueryModel contains all parameters for the scan; it is created by an API in
CarbonTable. (In the future, CarbonTable will be the entry point for various
table operations)
2. Use the term ColumnChunk to
[HotFix][CheckStyle] Fix import-related checkstyle
This closes #1952
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d88d5bb9
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/d88d5bb9
Diff:
[CARBONDATA-1480]Min Max Index Example for DataMap
DataMap example: an implementation of a Min Max index through DataMap, using
the index while pruning.
This closes #1359
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2186] Add InterfaceAudience.Internal to annotate internal interface
This closes #1986
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/623a1f93
Tree:
[CARBONDATA-1992] Remove partitionId in CarbonTablePath
In CarbonTablePath there is a deprecated partition id which is always 0; it
should be removed to avoid confusion.
This closes #1765
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2023][DataLoad] Add size base block allocation in data loading
CarbonData assigns blocks to nodes at the beginning of data loading. The
previous block allocation strategy was based on block count, and it suffers
from data skew when the sizes of the input files differ a lot.
We introduced a
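The size-based idea described above can be sketched as a greedy balancer: give each block to the node holding the fewest bytes so far, so nodes end up balanced by data size rather than by block count. This is an illustration only, not CarbonData's actual allocation code, and all names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SizeBasedAllocator {
    // Assign each block (by size) to the currently least-loaded node.
    public static Map<String, List<Long>> allocate(List<Long> blockSizes,
                                                   List<String> nodes) {
        Map<String, List<Long>> assignment = new LinkedHashMap<>();
        Map<String, Long> load = new LinkedHashMap<>();
        for (String n : nodes) {
            assignment.put(n, new ArrayList<>());
            load.put(n, 0L);
        }
        // Placing the largest blocks first makes the greedy balance tighter.
        List<Long> sorted = new ArrayList<>(blockSizes);
        sorted.sort(Collections.reverseOrder());
        for (long size : sorted) {
            String target = Collections.min(load.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
            assignment.get(target).add(size);
            load.put(target, load.get(target) + size);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // One huge input file and three small ones: a block-count-based
        // strategy could give one node far more data than the other.
        Map<String, List<Long>> a = allocate(
                Arrays.asList(900L, 100L, 100L, 100L),
                Arrays.asList("node1", "node2"));
        System.out.println(a);
    }
}
```

With the sizes above, one node receives the 900-byte block alone while the other receives the three 100-byte blocks, instead of a 2/2 split by count.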
[REBASE] Solve conflict after rebasing master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/0bb4aed6
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/0bb4aed6
Diff:
[CARBONDATA-2018][DataLoad] Optimization in reading/writing for sort temp row
Pick up the no-sort fields in the row, pack them as a byte array, and skip
parsing them during merge sort to reduce CPU consumption.
This closes #1792
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
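The packing idea described above — keep the sort fields typed for comparison and carry everything else as one opaque byte array that merge sort never parses — can be sketched as follows (an illustration of the technique, not CarbonData's actual sort temp row format; all names are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SortTempRow implements Comparable<SortTempRow> {
    final String sortKey;      // parsed field, used during merge sort
    final byte[] noSortFields; // opaque payload, decoded only after sorting

    SortTempRow(String sortKey, String... noSortFields) {
        this.sortKey = sortKey;
        // Pack the no-sort fields once, with a control-char separator.
        this.noSortFields = String.join("\u0001", noSortFields)
                .getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public int compareTo(SortTempRow other) {
        // Only the sort key is compared; no CPU is spent on the payload.
        return sortKey.compareTo(other.sortKey);
    }

    // Decode the packed fields after sorting is finished.
    String[] unpack() {
        return new String(noSortFields, StandardCharsets.UTF_8).split("\u0001");
    }

    public static void main(String[] args) {
        List<SortTempRow> rows = new ArrayList<>(Arrays.asList(
                new SortTempRow("b", "2", "bob"),
                new SortTempRow("a", "1", "alice")));
        Collections.sort(rows); // merge sort would compare sortKey the same way
        System.out.println(rows.get(0).sortKey + " -> "
                + Arrays.toString(rows.get(0).unpack()));
    }
}
```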
[CARBONDATA-2156] Add interface annotation
InterfaceAudience and InterfaceStability annotations should be added for users
and developers:
1. InterfaceAudience can be User or Developer.
2. InterfaceStability can be Stable, Evolving, or Unstable.
This closes #1968
Project:
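A minimal local sketch of this annotation scheme follows. The real annotations live in CarbonData's codebase; these stripped-down copies merely show the pattern of tagging a public API type with an audience and a stability level:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationDemo {
    // Simplified stand-in: who the interface is intended for.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface InterfaceAudience { String value(); } // "User" or "Developer"

    // Simplified stand-in: how much the interface may still change.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface InterfaceStability { String value(); } // "Stable", "Evolving", "Unstable"

    // A user-facing API class would be tagged like this:
    @InterfaceAudience("User")
    @InterfaceStability("Evolving")
    public static class SomePublicApi { }

    public static void main(String[] args) {
        InterfaceAudience a =
                SomePublicApi.class.getAnnotation(InterfaceAudience.class);
        InterfaceStability s =
                SomePublicApi.class.getAnnotation(InterfaceStability.class);
        System.out.println("audience=" + a.value() + " stability=" + s.value());
    }
}
```

Tooling and reviewers can then read the tags via reflection or javadoc to decide which interfaces are safe for external use.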
[CARBONDATA-1968] Add external table support
This PR adds support for creating an external table over existing CarbonData
files, using Hive syntax:
CREATE EXTERNAL TABLE tableName STORED BY 'carbondata' LOCATION 'path'
This closes #1749
Project:
[CARBONDATA-2091][DataLoad] Support specifying sort column bounds in data loading
Enhance data loading performance by specifying sort column bounds:
1. Add a row range number during the convert-process step.
2. Dispatch rows to each sorter by range number.
3. Sort/Write process step can be done
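The dispatch step can be illustrated like this: user-provided sort column bounds split the key space into ranges, each row gets a range id, and all rows with the same id go to the same sorter so the sorters work independently. This is a hypothetical sketch of bound-based range assignment, not CarbonData's actual sorter code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class RangeDispatcher {
    final String[] bounds; // sorted upper bounds; number of ranges = bounds.length + 1

    RangeDispatcher(String... bounds) { this.bounds = bounds; }

    // Range id of a row: index of the first bound its sort key falls below.
    int rangeId(String sortKey) {
        int i = 0;
        while (i < bounds.length && sortKey.compareTo(bounds[i]) >= 0) {
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        // Bounds "g" and "p" split keys into 3 ranges: [..g), [g..p), [p..).
        RangeDispatcher d = new RangeDispatcher("g", "p");
        Map<Integer, List<String>> sorters = new TreeMap<>();
        for (String key : Arrays.asList("apple", "kiwi", "zebra", "fig", "pear")) {
            sorters.computeIfAbsent(d.rangeId(key), k -> new ArrayList<>()).add(key);
        }
        System.out.println(sorters);
    }
}
```

Because each sorter receives a disjoint key range, its locally sorted output can be written out directly without a global merge across sorters.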
[CARBONDATA-2080][S3-Implementation] Propagate hadoopConf from driver to
executor for the S3 implementation in cluster mode.
Problem: hadoopConf was not getting propagated from the driver to the
executor, which is why load was failing in the distributed environment.
Solution: Setting the Hadoop conf in
[REBASE] resolve conflict after rebasing to master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6216294c
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/6216294c
Diff:
Repository: carbondata
Updated Branches:
refs/heads/carbonstore-rebase5 7f92fde49 -> 8104735fd (forced update)
[REBASE] Solve conflict after rebasing master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[REBASE] Solve conflict after merging master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/7f92fde4
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/7f92fde4
Diff:
[CARBONDATA-1114][Tests] Fix bugs in tests in Windows env
Fix bugs in tests that cause failures under the Windows environment.
This closes #1994
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/87bda960
Tree:
[REBASE] Solve conflict after rebasing master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/ef812484
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/ef812484
Diff:
Revert "[CARBONDATA-2018][DataLoad] Optimization in reading/writing for sort
temp row"
This reverts commit de92ea9a123b17d903f2d1d4662299315c792954.
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/bfdf3e35
Tree:
[CARBONDATA-1544][Datamap] Datamap FineGrain implementation
Implemented interfaces for the FG datamap and integrated them with the filter
scanner to use the pruned bitset from the FG datamap.
The FG query flow is as follows:
1. The user can add an FG datamap to any table and implement these interfaces.
2. Any filter query
[REBASE] Solve conflict after rebasing master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/03157f91
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/03157f91
Diff:
[CARBONDATA-2159] Remove carbon-spark dependency in store-sdk module
To make an assembling JAR of the store-sdk module, it should not depend on the
carbon-spark module.
This closes #1970
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2025] Unify all path construction through CarbonTablePath static
method
Refactor CarbonTablePath:
1. Remove CarbonStorePath and use CarbonTablePath only.
2. Make CarbonTablePath a utility without object creation; this avoids
creating an object before using it, so the code is cleaner.
Support generating assembling JAR for store-sdk module
Support generating an assembling JAR for the store-sdk module and remove the
junit dependency.
This closes #1976
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d32c0cf3
Revert "[CARBONDATA-2023][DataLoad] Add size base block allocation in data
loading"
This reverts commit 6dd8b038fc898dbf48ad30adfc870c19eb38e3d0.
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/3f1d101d
Tree:
Repository: carbondata
Updated Branches:
refs/heads/master bfd77f69f -> c125f0caa
[CARBONDATA-2204] Optimized number of reads of tablestatus file while querying
This PR avoids reading the status file multiple times. For a first-time query,
it reads it 2 times (needed for the datamap refresher) and 1 time
Repository: carbondata
Updated Branches:
refs/heads/master 74f5d67c0 -> 3e36639ed
[CARBONDATA-2098] Optimize pre-aggregate documentation
Optimize the pre-aggregate documentation: move it to a separate file and add
more examples.
This closes #2022
Project:
[REBASE] Solve conflict after merging master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/c305f309
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/c305f309
Diff:
Repository: carbondata
Updated Branches:
refs/heads/carbonstore-rebase5 7540cc9ca -> c305f309e (forced update)
[CARBONDATA-2186] Add InterfaceAudience.Internal to annotate internal interface
This closes #1986
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/b1bc9c79
Tree:
[REBASE] Solve conflict after rebasing master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/65daaca7
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/65daaca7
Diff:
Support generating assembling JAR for store-sdk module
Support generating an assembling JAR for the store-sdk module and remove the
junit dependency.
This closes #1976
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d85215a1
[CARBONDATA-1114][Tests] Fix bugs in tests in Windows env
Fix bugs in tests that cause failures under the Windows environment.
This closes #1994
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/8a9dd8b2
Tree:
Revert "[CARBONDATA-2023][DataLoad] Add size base block allocation in data
loading"
This reverts commit 6dd8b038fc898dbf48ad30adfc870c19eb38e3d0.
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/22bb333a
Tree:
[REBASE] Solve conflict after rebasing master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6944dd42
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/6944dd42
Diff:
[REBASE] resolve conflict after rebasing to master
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/90629314
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/90629314
Diff:
Repository: carbondata
Updated Branches:
refs/heads/carbonstore-rebase5 8b94788a5 -> 7540cc9ca (forced update)
[CARBONDATA-2211] In case of DDL, handoff should not be executed in a thread
1. DDL handoff will be executed in the blocking thread.
2. Auto handoff will be executed in a new