Repository: carbondata
Updated Branches:
refs/heads/master 6351c3a07 -> e580d64ef
[CARBONDATA-2625] Avoid listing files multiple times while loading
BlockletDataMap
CarbonReader is very slow when there are many files, because BlockletDataMap
lists the files of the folder while loading each segment. This
Repository: carbondata
Updated Branches:
refs/heads/master 0c363bd18 -> 43285bbd1
[CARBONDATA-2760] Reduce memory footprint and store size for local dictionary
encoded columns
Problem:
A local dictionary encoded page uses an unsafe variable-length column page,
which internally maintains offset
Repository: carbondata
Updated Branches:
refs/heads/master 63b930c89 -> 892594743
[CARBONDATA-2735] Fixed performance issue for the complex array data type when
the number of elements in the array is large
A GC issue due to repeated ArrayList resizing is fixed
This closes #2487
Project:
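The resize problem behind this fix is generic Java behavior: ArrayList doubles and copies its backing array each time it overflows, producing garbage on every growth. A minimal sketch of the pre-sizing idea (plain Java, not the CarbonData patch):

```java
import java.util.ArrayList;
import java.util.List;

public class PreSizedList {
    // Builds a list of n elements. With a capacity hint the backing
    // array is allocated once; without it, ArrayList grows its array
    // repeatedly, copying elements and creating garbage each time.
    public static List<Integer> fill(int n) {
        List<Integer> out = new ArrayList<>(n); // single allocation
        for (int i = 0; i < n; i++) {
            out.add(i);
        }
        return out;
    }

    public static void main(String[] args) {
        assert fill(100000).size() == 100000;
    }
}
```

The same pattern applies whenever the element count (here, the array length read from the data) is known before the loop.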
Repository: carbondata
Updated Branches:
refs/heads/master 98c758190 -> 637a97469
[CARBONDATA-2717] Fixed empty table id problem while taking the drop lock
This closes #2472
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2720] Remove dead code
For accurate coverage results and easy maintenance
This closes #2354
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/f9114036
Tree:
[CARBONDATA-2532][Integration] Carbon to support spark 2.3 version,
compatibility issues
Changes continued...
All compatibility issues when supporting 2.3 addressed
Supported pom profile -P"spark-2.3"
This closes #2366
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2532][Integration] Carbon to support spark 2.3 version,
ColumnVector Interface
ColumnVector and ColumnarBatch interface compatibility issues have been
addressed in this PR. The changes relate to the following modifications
done in the Spark interface:
a) This is a refactoring of
http://git-wip-us.apache.org/repos/asf/carbondata/blob/d0fa5239/integration/spark2/src/main/spark2.2/org/apache/spark/sql/execution/BatchedDataSourceScanExec.scala
Repository: carbondata
Updated Branches:
refs/heads/carbonstore 6fa86381f -> d0fa52396
Repository: carbondata
Updated Branches:
refs/heads/master 75126c6ca -> 438b4421e
[CARBONDATA-2607][Complex Column Enhancements] Complex Primitive DataType
Adaptive Encoding
This PR improves how complex types are saved so that reading becomes more
efficient.
The changes are:
Primitive types inside complex types are stored as separate pages.
Repository: carbondata
Updated Branches:
refs/heads/master c82e3e85f -> cb10d03a7
[CARBONDATA-2642] Added configurable Lock path property
A new property, "carbon.lock.path", is exposed to allow the user to configure
the lock path.
Refactored code to create a separate
[HOTFIX] Implementing getMemorySize in BlockletDataMapIndexWrapper
This closes #2330
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/16ed99a1
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/16ed99a1
[CARBONDATA-2577] [CARBONDATA-2579] Fixed issue in Avro logical types for
nested Array, and documentation update
Problem: Nested array logical types of date, timestamp-millis and
timestamp-micros are not working.
Root cause: during the preparation of the carbon schema from the avro schema,
for array nested type
[CARBONDATA-2500] Add new API to read user's schema in SDK
The field order in the schema that the SDK returns is different between write
and read
data type of schema in SDK
This closes #2341
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[HOTFIX] Changes in selecting the carbonindex files
Currently, in the query flow, while getting the index files we check for
either the mergeFileName or the list of files. After this change, we
check for both the files and the mergeFileName.
This closes #2333
Project:
[CARBONDATA-2524] Support create carbonReader with default projection
This closes #2338
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/8b80b12e
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/8b80b12e
[CARBONDATA-2529] Fixed S3 Issue for Hadoop 2.8.3
This fixes an issue while loading data with S3 as the backend
This closes #2340
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/4d22ddc9
Tree:
[CARBONDATA-2559] Task id set for each CarbonReader in a ThreadLocal
1. A task id is set for each CarbonReader, because each CarbonReader object
should have its own ThreadLocal variable.
2. Fixed: if sort columns are not given to CarbonWriter, DESCRIBE FORMATTED
was showing default sort_cols.
3. Issue:
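The per-reader task id in point 1 can be sketched with a plain ThreadLocal, so that a value set on one reader's thread never leaks into another (class and method names here are illustrative, not the CarbonData API):

```java
public class TaskIdHolder {
    // Each thread sees its own copy of the task id; setting it in
    // one thread does not affect the value seen by another thread.
    private static final ThreadLocal<Long> TASK_ID =
        ThreadLocal.withInitial(() -> 0L);

    public static void setTaskId(long id) { TASK_ID.set(id); }
    public static long getTaskId() { return TASK_ID.get(); }

    public static void main(String[] args) throws InterruptedException {
        setTaskId(1L);
        Thread other = new Thread(() -> {
            setTaskId(2L);
            assert getTaskId() == 2L;
        });
        other.start();
        other.join();
        assert getTaskId() == 1L; // untouched by the other thread
    }
}
```

A reader object holding such a ThreadLocal gives every reader thread an isolated task id, which is the isolation the fix describes.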
[CARBONDATA-2491] Fix the error when the reader reads twice with the SDK
CarbonReader
This PR includes:
1. Fix the out-of-bounds error when the reader reads twice with the SDK
CarbonReader
2. Fix a java.lang.NegativeArraySizeException
3. Add timestamp and bad-record test cases
4. Support parallel read of two
[CARBONDATA-2389] Search mode support FG datamap
Search mode support FG datamap
This closes #2290
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/b3384593
Tree:
[CARBONDATA-2503] Fixed: data write fails if an empty value is provided for
sort columns in the SDK
A sort column with an empty value was throwing an exception
This closes #2326
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2552]Fix Data Mismatch for Complex Data type Array of Timestamp
with Dictionary Include
Fix Data Mismatch for Complex Data type Array and Struct of Timestamp with
Dictionary Include
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2545] Fix some spelling errors in CarbonData
Change Inerface to Interface
This closes #2346
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/33b825d7
Tree:
[CARBONDATA-2546] Fixed the ArrayIndexOutOfBoundsException when the same
column is given twice in the projection of CarbonReader
This closes #2348
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/a7faef8a
Tree:
[CARBONDATA-2514] Added condition to check for duplicate column names
1. A duplicate-columns check was not present.
2. IndexFileReader was not being closed, due to which the index file could not
be deleted.
This closes #2332
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
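The duplicate-columns check in point 1 can be sketched with a HashSet, whose add() returns false on the second insertion of an equal element (a generic illustration; the case-insensitive comparison is an assumption, not taken from the commit):

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateColumnCheck {
    // Returns true if any column name repeats. Comparison is
    // case-insensitive here, as SQL identifiers usually are
    // (an assumption for this sketch).
    public static boolean hasDuplicates(String[] columns) {
        Set<String> seen = new HashSet<>();
        for (String col : columns) {
            if (!seen.add(col.toLowerCase())) {
                return true; // second occurrence of this name
            }
        }
        return false;
    }

    public static void main(String[] args) {
        assert hasDuplicates(new String[]{"id", "name", "ID"});
        assert !hasDuplicates(new String[]{"id", "name"});
    }
}
```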
[CARBONDATA-2496] Changed to hadoop bloom implementation and added compress
option to compress bloom on disk
This PR removes the Guava bloom filter and adds the Hadoop one. It also adds a
compress option to compress the bloom filter on disk and in memory as well.
The user can use bloom_compress
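Why compressing helps can be illustrated with plain DEFLATE over a sparse bit array: a bloom filter's bits are mostly zeros at low fill rates, so they compress well. A BitSet stands in for the Hadoop bloom filter here; this is a sketch of the idea, not the actual implementation:

```java
import java.io.ByteArrayOutputStream;
import java.util.BitSet;
import java.util.zip.Deflater;

public class BloomCompressSketch {
    // Packs the bit array into a fixed-size byte buffer and
    // DEFLATE-compresses it, as a stand-in for compressing a
    // bloom filter's bits before writing them to disk.
    public static byte[] compress(BitSet bits, int sizeInBytes) {
        byte[] raw = new byte[sizeInBytes];
        byte[] set = bits.toByteArray();
        System.arraycopy(set, 0, raw, 0, set.length);
        Deflater deflater = new Deflater();
        deflater.setInput(raw);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        BitSet bits = new BitSet();
        bits.set(10); bits.set(999); bits.set(20000);
        byte[] packed = compress(bits, 4096);
        assert packed.length < 4096; // sparse bits compress well
    }
}
```

The trade-off is decompression cost on read, which is why the commit makes compression an option rather than the default behavior it replaces.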
[CARBONDATA-2136] Fixed bug related to data load for bad_record_action as
REDIRECT or IGNORE and sort scope as NO_SORT
Problem: When data loading is performed with bad_record_action as REDIRECT or
IGNORE and
with the sort_scope option as NO_SORT, it was throwing an error because our
row batch was
[CARBONDATA-2538] added filter while listing files from writer path
1. Added a filter to list only index and carbondata files, so that even if
lock files are present a proper exception can be thrown.
2. Updated complex type docs
This closes #2344
Project:
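The filter in point 1 can be sketched with a plain FilenameFilter that accepts only the two CarbonData file suffixes and ignores stray lock files (the class is illustrative, not the project's code):

```java
import java.io.File;
import java.io.FilenameFilter;

public class CarbonFileFilter {
    // Accepts only .carbonindex and .carbondata files, so lock
    // files or other leftovers in the writer path are skipped
    // when listing the directory.
    public static final FilenameFilter FILTER = (dir, name) ->
        name.endsWith(".carbonindex") || name.endsWith(".carbondata");

    public static boolean accepted(String name) {
        return FILTER.accept(new File("."), name);
    }

    public static void main(String[] args) {
        assert accepted("part-0-0.carbondata");
        assert accepted("0-0.carbonindex");
        assert !accepted("tablestatus.lock");
    }
}
```

With only the expected files listed, an unexpected-content condition can be detected and reported cleanly instead of failing on a lock file.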
[CARBONDATA-2507] enable.offheap.sort is not validated in CarbonData
In #2274, the value of enable.offheap.sort is turned into false whenever
args[0] is not equal to "true", including "false" and any other string such as
"f" or "any". So we should validate it.
This closes #2331
Project:
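The gap comes from Boolean.parseBoolean mapping every non-"true" string to false, silently swallowing typos. A minimal sketch of the stricter check this entry calls for (names are illustrative):

```java
public class BooleanPropertyValidator {
    // Accepts only "true"/"false" (case-insensitive); any other
    // string is treated as invalid and the default is kept,
    // instead of silently becoming false.
    public static boolean parse(String value, boolean defaultValue) {
        if ("true".equalsIgnoreCase(value)) return true;
        if ("false".equalsIgnoreCase(value)) return false;
        return defaultValue; // invalid input: keep the default
    }

    public static void main(String[] args) {
        assert parse("true", false);
        assert !parse("false", true);
        assert parse("f", true); // invalid value, default preserved
    }
}
```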
[CARBONDATA-2481] Adding SDV for SDKwriter
Adding SDV testcases for SDKwriter
This closes #2308
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6cc86db8
Tree:
[CARBONDATA-2558] Optimize carbon schema reader interface of SDK
Optimize carbon schema reader interface of SDK
1. Create CarbonSchemaReader and move the schema read interface from
CarbonReader to CarbonSchemaReader
2. Change the return type from List to SDK Schema, and remove the tableInfo
return type
[CARBONDATA-2499][Test] Validate the visible/invisible status of datamap
This closes #2325
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/1b6ce8cd
Tree:
[CARBONDATA-2495][Doc][BloomDataMap] Add document for bloomfilter datamap
add document for bloomfilter datamap
This closes #2323
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d9534c2c
Tree:
[CARBONDATA-2555] SDK reader set default isTransactional as false
The SDK writer's default value of isTransactional is false, but the reader's
is not.
So, fix this by making the SDK use a flat folder structure by default.
This closes #2352
Project:
[CARBONDATA-2198] Fixed bug for streaming data for bad_records_action as
REDIRECT or IGNORE
1. Refactored streaming functionality for bad_records_action as IGNORE or
REDIRECT
2. Added related test cases
This closes #2014
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2493] DataType.equals() fails for complex types
This closes #2319
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/33941281
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/33941281
Diff:
[CARBONDATA-2227] Added support to show partition details in describe formatted
Added detailed information to the describe formatted command, such as
partition location and partition values.
Example usage: DESCRIBE FORMATTED partition(partition_col_name=partition_value)
This closes #2033
Project:
[CARBONDATA-2566] Optimize CarbonReaderExample
Optimize CarbonReaderExample
1. Add different data types, including date and timestamp
2. Update the doc
3. Invoke:
Schema schema = CarbonSchemaReader
    .readSchemaInSchemaFile(dataFiles[0].getAbsolutePath())
    .asOriginOrder();
This closes #2356
[CARBONDATA-2575] Add document to explain DataMap Management
Add document to explain DataMap Management
This closes #2360
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d401e060
Tree:
[CARBONDATA-2557] [CARBONDATA-2472] [CARBONDATA-2570] Improve Carbon Reader
performance on S3 and fixed datamap clear issue in reader
[CARBONDATA-2557] [CARBONDATA-2472] Problem: CarbonReaderBuilder.build() is
slow on S3; it takes around 8 seconds to finish build().
Solution: S3 is slow in
[CARBONDATA-2487] Block filters for lucene with more than one text_match udf
This closes #2311
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/7cba44b9
Tree:
[HOTFIX] Upgrade dev version to 1.5.0-SNAPSHOT and fix some small issues
1. Upgrade dev version to 1.5.0-SNAPSHOT
2. Fix carbon-spark-sql issue
3. Remove hadoop 2.2 profile
This closes #2359
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2498] Change CarbonWriterBuilder interface to take schema while
creating writer
This closes #2316
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/26eb2d0b
Tree:
[Documentation] Editorial Review comment fixed
Editorial Review comment fixed
This closes #2320
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/5ad70095
Tree:
[CARBONDATA-2206]add documentation for lucene datamap
added documentation for lucene datamap
This closes #2215
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/061871ed
Tree:
[CARBONDATA-2355] Support run SQL on carbondata files directly
Support run SQL on carbondata files directly
This closes #2181
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/9469e6bd
Tree:
[CARBONDATA-2521] Support create carbonReader without tableName
Add a new method for creating a CarbonReader without a table name
1. Add new interface: public static CarbonReaderBuilder builder(String tablePath)
2. The default table name is UnknownTable + time
This closes #2336
Project:
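The default naming in point 2 can be sketched with a hypothetical builder; only the UnknownTable + time fallback is taken from the commit message, everything else is illustrative:

```java
public class ReaderBuilderSketch {
    // A builder created from just a path; the table name is
    // optional and defaults to "UnknownTable" plus a timestamp.
    private final String tablePath;
    private String tableName;

    public ReaderBuilderSketch(String tablePath) {
        this.tablePath = tablePath;
    }

    public ReaderBuilderSketch tableName(String name) {
        this.tableName = name;
        return this;
    }

    public String resolvedTableName() {
        if (tableName == null) {
            // fallback when the caller never set a name
            return "UnknownTable" + System.currentTimeMillis();
        }
        return tableName;
    }

    public static void main(String[] args) {
        assert new ReaderBuilderSketch("/tmp/t").resolvedTableName()
            .startsWith("UnknownTable");
        assert new ReaderBuilderSketch("/tmp/t").tableName("t1")
            .resolvedTableName().equals("t1");
    }
}
```

Appending a timestamp keeps generated names unique across readers created in the same process.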
[CARBONDATA-2554] Added support for logical type
Added support for date and timestamp logical types in AvroCarbonWriter.
This closes #2347
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/2f234869
Tree:
[CARBONDATA-2433] [Lucene GC Issue] Executor OOM because of GC when blocklet
pruning is done using Lucene datamap
Problem
Executor OOM because of GC when blocklet pruning is done using Lucene datamap
Analysis
While searching, Lucene creates a PriorityQueue to hold the documents.
As
Repository: carbondata
Updated Branches:
refs/heads/spark-2.3 8fe165668 -> 041603dcc
[CARBONDATA-2413] After running CarbonWriter, there is null/_system directory
about datamap
After running CarbonWriter, there is null directory:
***null/_system# ls
datamap.mdtfile
**# git status
Fix:
[CARBONDATA-2440] doc updated to set the property for SDK user
doc updated to set the property for SDK user
This closes #2274
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/e1ef85ac
Tree:
http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ef91645/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java
--
diff --git
[CARBONDATA-2508] Fix the exception that the executorService cannot be
obtained when search mode is started twice
This closes #2355
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6aadfe70
Tree:
[CARBONDATA-2571] Calculating the carbonindex and carbondata file size of a
table is wrong
Problem:
While calculating the carbonindex files size, we check either the index file
or the merge file. But in PR#2333 the implementation was changed to fill both
the file name and the merge file name. So, we
[CARBONDATA-2494] Fix lucene datasize and performance
Improved Lucene datamap size and performance by using the following parameters.
New DM properties:
1. flush_cache: size of the cache to maintain in the Lucene writer; if
specified, it tries to aggregate the unique data up to the cache limit and
[CARBONDATA-2433][LUCENE]close the lucene index reader after every task and
clean the resource and other functional issues
Problem:
The Lucene IndexReader opened during a query is never closed, which impacts
performance and leads to memory issues.
The Lucene index does not index the stop words
[CARBONDATA-2519] Add document for CarbonReader
Add document for CarbonReader
This closes #2337
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/ddf3e859
Tree:
[CARBONDATA-2520] Clean and close datamap writers on any task failure during
load
Problem: The datamap writers registered to the listener are closed or finished
only in the load success case and not in any failure case. So when testing
Lucene, it is found that, after the task fails and the
[CARBONDATA-2489] Coverity scan fixes
https://scan4.coverity.com/reports.htm#v29367/p11911
This closes #2313
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/7ef91645
Tree:
Repository: carbondata
Updated Branches:
refs/heads/master 7cba44b90 -> 33941281e
[CARBONDATA-2493] DataType.equals() fails for complex types
Repository: carbondata
Updated Branches:
refs/heads/master f184de885 -> 7ef916455
Repository: carbondata
Updated Branches:
refs/heads/master f2bb9f4eb -> 1b8271726
[CARBONDATA-2369] Add a document for Non Transactional table with SDK writer
guide
This closes #2198
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 4a47630d3 -> 5f3264799
[CARBONDATA-2359][SDK] Support applicable load options and table properties for
Non-Transactional table
Support reading the output of multiple SDK writers placed at the same path
This closes #2190
Project:
Repository: carbondata
Updated Branches:
refs/heads/master b86ff926d -> b7b8073d6
[CARBONDATA-2360][Non Transactional Table] Insert into Non-Transactional Table
Also supports overwrite clause
This closes #2177
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/b7b8073d
Tree:
Repository: carbondata
Updated Branches:
refs/heads/master 78e4d0da3 -> cf1e4d4ca
Blocklet size and block size issue fix in the SDK writer and other unmanaged
table fixes
* Decimal data type issue fix in the SDK writer
* Drop unmanaged table issue in cluster
* Added comments for SDK writer API methods
Repository: carbondata
Updated Branches:
refs/heads/master 359f6e6b2 -> 13cdeb9f4
[CARBONDATA-2303] clean files issue resolved for partition folder
This closes #2128
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2313] Support unmanaged carbon table read and write
The Carbon SDK writer takes the input data and writes the carbondata
and carbonindex files to the path specified.
This output doesn't have a metadata folder, so it is called an unmanaged
carbon table.
This can be read by creating
Repository: carbondata
Updated Branches:
refs/heads/master 550846029 -> 280a4003a
Repository: carbondata
Updated Branches:
refs/heads/branch-1.3 c36e944d6 -> 0283c938b
[CARBONDATA-1993] Carbon properties default values fix and corresponding
template and document correction
This closes #1831
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 39fa1eb58 -> be600bc90
[CARBONDATA-1993] Carbon properties default values fix and corresponding
template and document correction
This closes #1831
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/branch-1.3 a781515c5 -> c36e944d6
[CARBONDATA-2237] Removing parsers thread local objects after parsing of carbon
query
This closes #2040
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 910f26171 -> 3fb406618
[CARBONDATA-2237] Removing parsers thread local objects after parsing of carbon
query
This closes #2040
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/branch-1.3 ce9695633 -> a781515c5
[CARBONDATA-2234] Support UTF-8 with BOM in CSVInputFormat
This closes #2038
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 9f2884a04 -> 910f26171
[CARBONDATA-2234] Support UTF-8 with BOM in CSVInputFormat
This closes #2038
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2209] Fixed rename table with partitions not working issue and
batch_sort and no_sort with partition table issue
This closes #2006
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/5b44e810
Tree:
Repository: carbondata
Updated Branches:
refs/heads/branch-1.3 660190fb5 -> 5b44e8105
[CARBONDATA-2219] Added validation for external partition location to use same
schema.
This closes #2018
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
Repository: carbondata
Updated Branches:
refs/heads/master 7bfe4afe4 -> 74f5d67c0
[CARBONDATA-2209] Fixed rename table with partitions not working issue and
batch_sort and no_sort with partition table issue
This closes #2006
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Repository: carbondata
Updated Branches:
refs/heads/master ac30e3e72 -> 7bfe4afe4
[CARBONDATA-2219] Added validation for external partition location to use same
schema.
This closes #2018
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit:
[CARBONDATA-2200] Fix bug of LIKE operation on streaming table
Fix a bug in the LIKE operation on streaming tables.
A LIKE operation is converted to a StartsWith / EndsWith / Contains expression,
and Carbon uses RowLevelFilterExecuterImpl to evaluate this expression.
The streaming table also should
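The conversion described above can be sketched by evaluating the three simple LIKE shapes directly against a row value (a generic illustration, not RowLevelFilterExecuterImpl):

```java
public class LikeRewrite {
    // Maps "abc%" to a prefix test (StartsWith), "%abc" to a
    // suffix test (EndsWith), "%abc%" to a substring test
    // (Contains), and a pattern with no wildcard to equality.
    // Patterns with '%' in the middle are out of scope here.
    public static boolean evalLike(String value, String pattern) {
        boolean leading = pattern.startsWith("%");
        boolean trailing = pattern.endsWith("%");
        String body = pattern.substring(leading ? 1 : 0,
            pattern.length() - (trailing ? 1 : 0));
        if (leading && trailing) return value.contains(body);
        if (trailing) return value.startsWith(body);
        if (leading) return value.endsWith(body);
        return value.equals(body);
    }

    public static void main(String[] args) {
        assert evalLike("carbondata", "carbon%");
        assert evalLike("carbondata", "%data");
        assert evalLike("carbondata", "%bond%");
        assert !evalLike("carbondata", "spark%");
    }
}
```

A row-level filter executor would apply exactly such a predicate to each incoming row of the streaming segment.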
[CARBONDATA-2168] Support global sort for standard hive partitioning
This closes #1972
Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/758d03e7
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/758d03e7
[CARBONDATA-2201] NPE fixed while triggering the LoadTablePreExecutionEvent
before Streaming
While triggering the LoadTablePreExecutionEvent we require the options
provided by the user and the finalOptions.
In the streaming case both are the same; if we pass null, it may cause an NPE.
This closes #1997