Repository: carbondata
Updated Branches:
refs/heads/master 0668e7d71 -> 3262230cb
[CARBONDATA-2260] CarbonThriftServer should support store carbon table on S3
CarbonThriftServer should support storing carbon tables on S3.
Add config for AK, SK, and EndPoint when starting CarbonSession.
This closes #2073
Repository: carbondata
Updated Branches:
refs/heads/master b08ef0012 -> 21c5fb1db
[CARBONDATA-2390] Refresh Lucene data map for the exists table with data
If the table has old data from before the creation of the Lucene datamap, we should
use the REFRESH command to build the datamap manually.
This
Repository: carbondata
Updated Branches:
refs/heads/master d0f88a154 -> b08ef0012
[CARBONDATA-2380][DataMap] Support visible/invisible datamap for performance
tuning
Support making a datamap visible/invisible through the session env.
An invisible datamap will only be ignored during query, and the user can
Repository: carbondata
Updated Branches:
refs/heads/master 84267dc7a -> f2bb9f4eb
[HOTFIX] Avoid adding status if there are no datamaps on the table.
This closes #
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 8b33ab240 -> 42bf13719
[CARBONDATA-2338][Test] Add example to upload data to S3 by using SDK
Add example to write carbondata files into S3 using SDK
This closes #2165
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Repository: carbondata
Updated Branches:
refs/heads/master 668bfdd50 -> 8b33ab240
[CARBONDATA-2376] Improve Lucene datamap performance by eliminating blockid
while writing and reading index
Problem:
Currently DataMap interface implementations use blockid and blockletid while
writing index
Repository: carbondata
Updated Branches:
refs/heads/master b7b8073d6 -> 4a47630d3
[CARBONDATA-2375] Added CG prune before FG prune
This PR adds CG prune before FG prune, and passes the pruned segments and
indexfiles to FG DataMap for further pruning.
This closes #2204
Project:
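The two-stage flow this change describes -- a cheap coarse-grained (CG) prune running first, with only its surviving segments handed to the more expensive fine-grained (FG) prune -- can be sketched as follows. The types and method names here are illustrative, not CarbonData's actual DataMap interfaces:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Sketch of CG-before-FG pruning: the FG stage never sees candidates
// that the cheaper CG stage already ruled out.
class TwoStagePrune {
    static <T> List<T> prune(List<T> segments,
                             Predicate<T> cgPrune,   // cheap segment-level check
                             Predicate<T> fgPrune) { // expensive fine-grained check
        List<T> cgSurvivors = segments.stream()
            .filter(cgPrune)
            .collect(Collectors.toList());
        // Only CG survivors are passed on for fine-grained pruning.
        return cgSurvivors.stream()
            .filter(fgPrune)
            .collect(Collectors.toList());
    }
}
```

The benefit is that the expensive predicate runs on a (hopefully much) smaller candidate set.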
Repository: carbondata
Updated Branches:
refs/heads/master 3ff574d29 -> b86ff926d
[CARBONDATA-2373][DataMap] Add bloom datamap to support precise equal query
For each indexed column, a bloom filter is added per blocklet to
indicate whether a value belongs to that blocklet.
Currently bloom
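The per-blocklet bloom filter idea above can be sketched as a tiny self-contained example. The class, hash scheme, and method names here are hypothetical, not CarbonData's actual bloom datamap code:

```java
import java.util.BitSet;

// Minimal bloom-filter sketch: one filter per blocklet records the indexed
// column's values, so an equality query can skip blocklets whose filter
// answers "definitely not present".
class BlockletBloomSketch {
    private final BitSet bits;
    private final int size;

    BlockletBloomSketch(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    // Two simple probes derived from hashCode; a real implementation uses
    // stronger hash functions and a tuned number of probes.
    private int h1(Object v) { return Math.floorMod(v.hashCode(), size); }
    private int h2(Object v) { return Math.floorMod(v.hashCode() * 31 + 17, size); }

    void add(Object value) {
        bits.set(h1(value));
        bits.set(h2(value));
    }

    // false => the blocklet certainly lacks the value and can be pruned;
    // true  => the blocklet must still be scanned (may be a false positive).
    boolean mightContain(Object value) {
        return bits.get(h1(value)) && bits.get(h2(value));
    }
}
```

Note the asymmetry: a bloom filter can prune precisely (no false negatives) but cannot confirm membership, which is why it suits precise equal queries.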
Repository: carbondata
Updated Branches:
refs/heads/master 94973c587 -> 12c164958
[CARBONDATA-2374][DataMap] Fix errors in minmax datamap example
This closes #2201
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 2fc0ad306 -> c58eb43ba
http://git-wip-us.apache.org/repos/asf/carbondata/blob/c58eb43b/processing/src/main/java/org/apache/carbondata/processing/datamap/DataMapWriterListener.java
Repository: carbondata
Updated Branches:
refs/heads/master a9d5e9dec -> e6d03d112
[CARBONDATA-2312]Support In Memory Catalog
Support storing the catalog in memory (not in Hive) for each session; after session
restart the user can create an external table and run select queries.
This closes #2103
Project:
Repository: carbondata
Updated Branches:
refs/heads/master e6d03d112 -> c471386fa
[CARBONDATA-2364][HotFix] Remove useless code in dataloading
Remove useless and performance related code in dataloading
This closes #2193
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Repository: carbondata
Updated Branches:
refs/heads/master 366415691 -> 8999a8aff
[CARBONDATA-2363] Add CarbonStreamingQueryListener to SparkSession
This closes #2188
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/branch-1.3 6493892af -> 25d9adb9c
[CARBONDATA-2363][branch-1.3] Add CarbonStreamingQueryListener to SparkSession
This closes #2189
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 860e144d4 -> 5f2a748f6
[CARBONDATA-2353] Added cache for datamap schema provider and added tests
Problem:
Currently, there is no cache for the datamap schema provider, so every time it
reads the schema from disk.
Solution:
Add cache to the
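The caching approach described here -- consult an in-memory map first and fall back to the slow disk read only on a miss -- can be sketched as below. The names are hypothetical; the real provider caches parsed schema objects:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Sketch of a schema cache: computeIfAbsent guarantees the disk loader
// runs at most once per key, even under concurrent access.
class SchemaCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> diskLoader;
    final AtomicInteger diskReads = new AtomicInteger(); // for observation only

    SchemaCache(Function<String, String> diskLoader) {
        this.diskLoader = diskLoader;
    }

    String getSchema(String dataMapName) {
        return cache.computeIfAbsent(dataMapName, name -> {
            diskReads.incrementAndGet();  // counts the slow path
            return diskLoader.apply(name);
        });
    }
}
```

Repeated lookups for the same datamap then cost a hash-map read instead of a disk read.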
Repository: carbondata
Updated Branches:
refs/heads/master 4a9adce46 -> ceac8abf6
[CARBONDATA-2325]Page level uncompress and Improve query performance for unsafe
no-dictionary columns
Page Level Decoder for query
Added page-level on-demand decoding; in the current code, all pages of a blocklet are
Repository: carbondata
Updated Branches:
refs/heads/master e2641 -> 4a9adce46
[CARBONDATA-2230][Documentation]add documentation for segment lock files clean
up configuration
added documentation for segment lock files clean up configuration
This closes #2138
Project:
Repository: carbondata
Updated Branches:
refs/heads/master cf1e4d4ca -> e2641
[CARBONDATA-2304][Compaction] Prefetch rowbatch during compaction
Add a configuration to enable prefetch during compaction.
During compaction, carbondata will query on the segments and retrieve a row,
then
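The prefetch idea can be sketched as a background thread that reads the next row batch into a small bounded queue while the caller merges the current one, overlapping I/O with CPU work. This is illustrative, not the actual compaction code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of row-batch prefetching with a bounded producer/consumer queue.
class BatchPrefetcher {
    private static final int[] POISON = new int[0]; // end-of-stream marker
    private final BlockingQueue<int[]> queue = new ArrayBlockingQueue<>(2);

    BatchPrefetcher(Iterable<int[]> source) {
        Thread producer = new Thread(() -> {
            try {
                for (int[] batch : source) queue.put(batch); // blocks when queue is full
                queue.put(POISON);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.setDaemon(true);
        producer.start();
    }

    // Returns the next prefetched batch, or null when the source is exhausted.
    int[] nextBatch() {
        try {
            int[] batch = queue.take();
            return batch == POISON ? null : batch;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

The bounded queue caps memory use: the reader runs at most two batches ahead of the consumer in this sketch.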
Repository: carbondata
Updated Branches:
refs/heads/master 4c9bed8bc -> ecd6c0c54
[CARBONDATA-2350][DataMap] Fix bugs in minmax datamap example
Fix bugs in minmax datamap example
This closes #2174
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 5c058acc7 -> 9ee74fe07
[CARBONDATA-2343][DataMap]Improper filter resolver cause more filter scan on
data that could be skipped
Currently DataMapChooser will choose and combine datamap for
expressions and it will wrap the expression
Repository: carbondata
Updated Branches:
refs/heads/master 687118a1c -> 520481838
[CARBONDATA-2324] Support config ExecutorService in search mode
Make scan thread configurable in search mode
This closes #2150
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master a5645875b -> 94ea913a0
[CARBONDATA-2308] Support concurrent loading and compaction
When data loading (or insert into) is in progress, the user should be able to do
compaction on the same table.
This PR supports it.
This closes #2132
Repository: carbondata
Updated Branches:
refs/heads/master 2ad621df5 -> a5645875b
[CARBONDATA-2320][Datamap] Fix error in lucene coarse grain datamap suite
Add DM-properties while creating the datamap, otherwise the test will fail
This closes #2145
Project:
Repository: carbondata
Updated Branches:
refs/heads/master f6990d622 -> 57c54fb7f
[CARBONDATA-2299]Support showing all segment information(include visible and
invisible segments)
Use command 'SHOW HISTORY SEGMENTS' to show all segment information(include
visible and invisible segments)
Repository: carbondata
Updated Branches:
refs/heads/master 85a958ee3 -> f6990d622
[CARBONDATA-2319][Profiler] Fix carbon_scan_time and carbon_IO_time in task
statistics
carbon_scan_time and carbon_IO_time are incorrect in task statistics. This PR
fixes it.
This closes #2144
Project:
Repository: carbondata
Updated Branches:
refs/heads/carbonstore [created] 638ed1fa7
Repository: carbondata
Updated Branches:
refs/heads/master 0311c439a -> 638ed1fa7
[CARBONDATA-2297] Support SEARCH_MODE for basic filter query
1. Add a new spark schedule type.
2. Add a new Query Executor
This closes #2123
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Repository: carbondata
Updated Branches:
refs/heads/master 6374d361b -> 32405f4f5
[CARBONDATA-2300] Add ENABLE_UNSAFE_IN_QUERY_EXECUTION as a configuration
parameter in presto integration
Add ENABLE_UNSAFE_IN_QUERY_EXECUTION as a configuration parameter in presto
integration
Provide this
Repository: carbondata
Updated Branches:
refs/heads/master 0992b3b23 -> 6374d361b
[CARBONDATA-2298]Delete segment lock files before update metadata
If there are some COMPACTED segments and their last modified time is within one
hour, the segment lock files deletion operation will not be
Repository: carbondata
Updated Branches:
refs/heads/branch-1.3 b704c1aa8 -> 30bc68c8b
[CARBONDATA-2298][BACKPORT-1.3]Delete segment lock files before update metadata
If there are some COMPACTED segments and their last modified time is within one
hour, the segment lock files deletion
Repository: carbondata
Updated Branches:
refs/heads/master f910cfa98 -> 0992b3b23
[CARBONDATA-2302]Fix some bugs when separate visible and invisible segments
info into two files
There are some bugs when separating visible and invisible segment info into two
files:
1. It will not delete
Repository: carbondata
Updated Branches:
refs/heads/master fca960e37 -> 52f8d7111
[CARBONDATA-1899] Optimize CarbonData concurrency test case
This closes #1713
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master cd509d5db -> f187856e3
[Documentation] The syntax and the example are corrected
Overwrite syntax and examples were corrected as they were throwing errors
This closes #2116
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Repository: carbondata
Updated Branches:
refs/heads/master e8da88002 -> cd509d5db
[CARBONDATA-2296] Fix create datamap command with out on table syntax and
correct the target location of test framework
Problem:
1. Create datamap command fails if the user does not mention ON TABLE
2. Test framework
Repository: carbondata
Updated Branches:
refs/heads/master 7b56126b7 -> e8da88002
[CARBONDATA-2130] Fix some spelling error in CarbonData
Fix some spelling error in CarbonData:
cloumn => column
realtion => relation
parition=>partition
Dimesion =>Dimension
dictionay=>dictionary
This closes
Repository: carbondata
Updated Branches:
refs/heads/master 9fba68454 -> 05eda0014
[CARBONDATA-2292] Different module dependencies different version spark jar
Fix the problem in presto integration module pom to keep one spark version
This closes #2111
Project:
Repository: carbondata
Updated Branches:
refs/heads/master 5daae9515 -> 9fba68454
[CARBONDATA-2291] Added datamap status and refresh command to sync data
manually to datamaps
In order to maintain data consistency, we need to enable or disable datamaps
when data is not synchronized
Repository: carbondata
Updated Branches:
refs/heads/master e43be5e74 -> 7e0803fec
http://git-wip-us.apache.org/repos/asf/carbondata/blob/7e0803fe/integration/spark2/src/main/scala/org/apache/carbondata/spark/rdd/CarbonTableCompactor.scala
[CARBONDATA-2270] Write segment file in loading for non-partition table
Currently when loading into partition table, carbon is writing a segment file
to record the segment and index file location mapping.
This can avoid frequent listFile operation when querying. The same should be
done for
Repository: carbondata
Updated Branches:
refs/heads/master 0c200d834 -> e43be5e74
[CARBONDATA-2073][CARBONDATA-1516][Tests] Add test cases for pre-aggregate
datamap
This closes #1857
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 0e6fe6cae -> 0c200d834
[CARBONDATA-2285] Spark integration code refactor
This closes #2104
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/0c200d83
[CARBONDATA-2073][CARBONDATA-1516][Tests] Add test cases for timeseries datamap
This closes #1856
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/0e6fe6ca
Tree:
Repository: carbondata
Updated Branches:
refs/heads/master 8bda43b05 -> 0e6fe6cae
http://git-wip-us.apache.org/repos/asf/carbondata/blob/0e6fe6ca/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeseriesTableSelection.scala
Repository: carbondata
Updated Branches:
refs/heads/master 3647aee3c -> 8bda43b05
[CARBONDATA-2262] Support the syntax of 'using CARBONDATA' to create table
Add new function to Support the syntax of 'using CARBONDATA' to create table,
for example:
CREATE TABLE src_carbondata1(key INT, value
Repository: carbondata
Updated Branches:
refs/heads/master c723947a7 -> 05086e536
http://git-wip-us.apache.org/repos/asf/carbondata/blob/05086e53/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/datamap/CarbonDataMapShowCommand.scala
http://git-wip-us.apache.org/repos/asf/carbondata/blob/c723947a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/StreamHandoffRDD.scala
--
diff --git
[CARBONDATA-2165]Remove spark in carbon-hadoop module
1. Streaming relation RecordReader is moved to carbon-streaming module.
2. RDD related class is moved to carbon-spark2 module
This closes #2074
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 2e1ddb542 -> c723947a7
http://git-wip-us.apache.org/repos/asf/carbondata/blob/c723947a/streaming/src/main/scala/org/apache/carbondata/streaming/StreamSinkFactory.scala
http://git-wip-us.apache.org/repos/asf/carbondata/blob/c723947a/streaming/src/main/java/org/apache/carbondata/streaming/CarbonStreamRecordReader.java
--
diff --git
Repository: carbondata
Updated Branches:
refs/heads/master 2eb8f047c -> ee9df2e2c
[CARBONDATA-2262] Support the syntax of 'STORED AS CARBONDATA' to create table
This closes #2078
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 5da419149 -> 2eb8f047c
[CARBONDATA-2264] Support create table using CarbonSource Without TableName
CarbonData should work when creating a table without TableName in options.
This closes #2080
Project:
Repository: carbondata
Updated Branches:
refs/heads/master d5bec4dd7 -> 72f50b507
[CARBONDATA-2258] Separate visible and invisible segments info into two files
to reduce the size of the tablestatus file.
The size of the tablestatus file is getting larger; there are many places that will
scan this
[CARBONDATA-2271] Collect SQL execution information to driver side
This PR adds support for collecting SQL execution information for profiling
purposes. See CarbonSessionExample.scala; it will generate a separate log file
containing profiling information
This closes #2087
Project:
Repository: carbondata
Updated Branches:
refs/heads/master e58ca9f0c -> d5bec4dd7
http://git-wip-us.apache.org/repos/asf/carbondata/blob/d5bec4dd/integration/spark-common/src/main/scala/org/apache/spark/sql/profiler/Profiler.scala
Repository: carbondata
Updated Branches:
refs/heads/carbonfile b384b6e1f -> fcdda5460
[CARBONDATA-1998][SDK] Support CarbonReader to read carbondata files
Support CarbonReader to read carbondata files
This closes #2072
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2253][SDK] Support write JSON/Avro data to carbon files
This PR adds AvroCarbonWriter in SDK, it can be used to write JSON or Avro data
to carbon files
This closes #2061
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
http://git-wip-us.apache.org/repos/asf/carbondata/blob/223c25de/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
--
diff --git
[CARBONDATA-2236]added sdv test cases for standard partition
added sdv test cases for standard partition
This closes #2042
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/98b85501
Tree:
Repository: carbondata
Updated Branches:
refs/heads/carbonfile 99766b8af -> b384b6e1f (forced update)
[HOTFIX] Fix CI random failure
This closes #2068
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/04ff3676
[CARBONDATA-2254][DOC] Optimize CarbonData documentation
Optimize CarbonData documentation
This closes #2062
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/dbec6f9f
Tree:
http://git-wip-us.apache.org/repos/asf/carbondata/blob/b384b6e1/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/createTable/TestCreateTableUsingSparkCarbonFileFormat.scala
--
diff --git
[CARBONDATA-2230]Add a path into table path to store lock files and delete
useless segment lock files before loading
This closes #2045
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/0609fc52
Tree:
[CARBONDATA-2224][File Level Reader Support] Refactoring of #2055
Review comment fixes and refactoring of #2055
This closes #2069
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/b384b6e1
Tree:
[CARBONDATA-2223] Adding Listener Support for Partition
Adding Listener Support for Partition
This closes #2031
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/f5cdd5ca
Tree:
[CARBONDATA-2224][File Level Reader Support] External File level reader support
File level reader reads any carbondata file placed in any external file path.
This closes #2055
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/carbonfile 7a124ecd8 -> 99766b8af
http://git-wip-us.apache.org/repos/asf/carbondata/blob/99766b8a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/createTable/TestCreateTableUsingSparkCarbonFileFormat.scala
[CARBONDATA-2224][File Level Reader Support] Refactoring of #2055
Review comment fixes and refactoring of #2055
This closes #2069
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/99766b8a
Tree:
Repository: carbondata
Updated Branches:
refs/heads/master dbec6f9f2 -> 0609fc52c
[CARBONDATA-2230]Add a path into table path to store lock files and delete
useless segment lock files before loading
This closes #2045
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 04ff36764 -> e39b0a14a
[CARBONDATA-2253][SDK] Support write JSON/Avro data to carbon files
This PR adds AvroCarbonWriter in SDK, it can be used to write JSON or Avro data
to carbon files
This closes #2061
Project:
Repository: carbondata
Updated Branches:
refs/heads/master a386f1f4e -> 04ff36764
[HOTFIX] Fix CI random failure
This closes #2068
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/04ff3676
Tree:
http://git-wip-us.apache.org/repos/asf/carbondata/blob/7a124ecd/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
--
diff --git
Repository: carbondata
Updated Branches:
refs/heads/carbonfile 6cb6f8380 -> 7a124ecd8
[CARBONDATA-2255] Rename the streaming examples
optimize streaming examples
This closes #2064
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2224][File Level Reader Support] External File level reader support
File level reader reads any carbondata file placed in any external file path.
This closes #2055
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2250][DataLoad] Reduce massive object generation in global sort
Generate the comparator outside the function, otherwise it will be generated for
every row
This closes #2059
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
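The fix described above -- build the comparator once instead of once per row -- can be sketched as below. The types are illustrative; the real change is in CarbonData's global-sort code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of comparator reuse: the comparator is created once, as a shared
// constant, instead of being allocated inside the per-row code path.
class ComparatorReuse {
    // Created once and reused by every sort call -- the buggy pattern
    // constructed an equivalent object again for every row processed.
    static final Comparator<int[]> ROW_COMPARATOR =
        Comparator.comparingInt((int[] row) -> row[0]);

    static List<int[]> sortRows(List<int[]> rows) {
        List<int[]> copy = new ArrayList<>(rows);
        copy.sort(ROW_COMPARATOR);
        return copy;
    }
}
```

In a global sort over millions of rows, hoisting the allocation out of the hot path avoids massive short-lived object churn and the GC pressure that comes with it.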
[CARBONDATA-2244]fix creating pre-aggregate table bug when there are
invisibility INSERT_IN_PROGRESS/INSERT_OVERWRITE_IN_PROGRESS segments on main
table
When there are some invisible
INSERT_IN_PROGRESS/INSERT_OVERWRITE_IN_PROGRESS segments on the main table, it
cannot create a preaggregate
Repository: carbondata
Updated Branches:
refs/heads/branch-1.3 0283c938b -> 96e26d7a5
[CARBONDATA-2244]fix creating pre-aggregate table bug when there are
invisibility INSERT_IN_PROGRESS/INSERT_OVERWRITE_IN_PROGRESS segments on main
table
When there are some invisibility
Repository: carbondata
Updated Branches:
refs/heads/master 31011fc29 -> a386f1f4e
[CARBONDATA-2244]fix creating pre-aggregate table bug when there are
invisibility INSERT_IN_PROGRESS/INSERT_OVERWRITE_IN_PROGRESS segments on main
table
When there are some invisibility
Repository: carbondata
Updated Branches:
refs/heads/master 5b48e70a8 -> 31011fc29
[CARBONDATA-2250][DataLoad] Reduce massive object generation in global sort
Generate the comparator outside the function, otherwise it will be generated for
every row
This closes #2059
Project:
Repository: carbondata
Updated Branches:
refs/heads/master d4f9003af -> 5b48e70a8
[CARBONDATA-2248]Fixed Memory leak in parser/CarbonSparkSqlParser.scala
In some scenarios where more sessions are created, many parser failure objects
accumulate in memory inside thread locals
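The leak pattern and its usual fix -- calling ThreadLocal.remove() once the per-request state is no longer needed -- can be sketched as below. This is illustrative, not the actual parser code:

```java
// Sketch of the ThreadLocal leak: values parked in a ThreadLocal live as
// long as the thread does unless remove() is called, so per-session state
// accumulates on long-lived (e.g. pooled) threads.
class ThreadLocalCleanup {
    static final ThreadLocal<StringBuilder> PARSE_BUFFER =
        ThreadLocal.withInitial(StringBuilder::new);

    static String parse(String sql) {
        StringBuilder buf = PARSE_BUFFER.get();
        try {
            buf.append(sql);
            return buf.toString();
        } finally {
            // Without this, the buffer (and whatever it references) leaks
            // on threads that never die; with it, each call starts clean.
            PARSE_BUFFER.remove();
        }
    }
}
```

Without the remove() in the finally block, a second call on the same thread would see the first call's leftover contents.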
Repository: carbondata
Updated Branches:
refs/heads/carbonfile [created] 6cb6f8380
[CARBONDATA-2231] Removed redundant code from streaming test cases to improve
CI time
This closes #2036
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/07d4da7a
Tree:
[CARBONDATA-2288] [Test] Exception is Masked Inside
StandardPartitionTableQueryTestCase
This closes #2034
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/334a420a
Tree:
[CARBONDATA-2032][DataLoad] directly write carbon data files to HDFS
Currently in data loading, carbondata writes the final data files to local disk
and then copies them to HDFS.
To save disk IO, carbondata can skip this procedure and directly write these
files to HDFS.
This closes #1825
[CARBONDATA-2139] Optimize CTAS documentation and test case
Optimize CTAS:
optimize documentation
add test case
drop table after finishing the test case run, remove the file of the table from disk
This closes #1939
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 030ef947e -> 881ea1e12 (forced update)
[Documentation] Updated Readme for Datamap Feature
Readme is updated with the links to the following new topics on Datamap
This closes #2044
Project:
[HOTFIX] Fix unsafe load in test case
Unsafe Load fails for dictionary columns because of refactoring
This closes #2051
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/18380a6b
Tree:
[CARBONDATA-2226] Removed redundant and unnecessary test cases to improve CI
time for PreAggregation Create and Drop datamap feature
Description: Removed redundant and unnecessary test cases to improve CI time
for PreAggregation Create and Drop datamap feature
This closes #2035
Project:
[HOTFIX] Fix findbugs
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/b509ad8d
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/b509ad8d
Diff:
[CARBONDATA-2194] Exception is improper when using an incorrect bad record action
type
This closes #1989
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/5ab30995
Tree:
[CARBONDATA-2235]Update configuration-parameters.md
carbon.query.show.datamaps
This property is a system configuration. If it is set to true, show tables will
list all the tables including datamaps (e.g. Preaggregate); if it is false,
show tables will filter out the datamaps and only show
[CARBONDATA-2241][Docs][BugFix] Updated Doc for query which will execute on
datamap
Fix: Corrected the query so that it will execute using datamap.
This closes #2048
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2232][DataLoad] Fix incorrect logic in spilling unsafe pages to disk
The unsafe row page should only be written to disk when the memory is
unavailable -- the previous logic had it reversed.
This closes #2037
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
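The corrected spill condition can be sketched in isolation (illustrative names, not the actual unsafe-sort classes):

```java
// Sketch of the fixed spill decision: spill the unsafe row page to disk
// only under memory pressure. The buggy version tested the inverse and
// spilled exactly when memory was still free.
class SpillDecision {
    static boolean shouldSpill(boolean memoryAvailable) {
        return !memoryAvailable;   // fixed condition
        // buggy version was: return memoryAvailable;
    }
}
```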
[CARBONDATA-] Update the FAQ doc for some mistakes
Update the FAQ doc for some mistakes
This closes #2029
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6ae56b92
Tree:
Repository: carbondata
Updated Branches:
refs/heads/master f16fd5423 -> 030ef947e
[CARBONDATA-2139] Optimize CTAS documentation and test case
Optimize CTAS:
optimize documentation
add test case
drop table after finishing the test case run, remove the file of the table from disk
This closes #1939
Repository: carbondata
Updated Branches:
refs/heads/master c5b21ff67 -> f16fd5423
[CARBONDATA-2194] Exception is improper when using an incorrect bad record action
type
This closes #1989
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 94707f2da -> c5b21ff67
[CARBONDATA-2288] [Test] Exception is Masked Inside
StandardPartitionTableQueryTestCase
This closes #2034
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master b6b796162 -> 94707f2da
[CARBONDATA-2226] Removed redundant and unnecessary test cases to improve CI
time for PreAggregation Create and Drop datamap feature
Description: Removed redundant and unnecessary test cases to improve CI time
Repository: carbondata
Updated Branches:
refs/heads/master 4f672eb72 -> fddf3bab8
[CARBONDATA-2241][Docs][BugFix] Updated Doc for query which will execute on
datamap
Fix: Corrected the query so that it will execute using datamap.
This closes #2048
Project:
Repository: carbondata
Updated Branches:
refs/heads/master a01aabd53 -> 4f672eb72
[CARBONDATA-2032][DataLoad] directly write carbon data files to HDFS
Currently in data loading, carbondata writes the final data files to local disk
and then copies them to HDFS.
To save disk IO, carbondata can
Repository: carbondata
Updated Branches:
refs/heads/master 1c8d5f0bb -> a01aabd53
[CARBONDATA-2231] Removed redundant code from streaming test cases to improve
CI time
This closes #2036
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: