http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/44eed099/src/site/markdown/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/configuration-parameters.md
b/src/site/markdown/configuration-parameters.md
index 46b8bd0..de72439 100644
--- a/src/site/markdown/configuration-parameters.md
+++ b/src/site/markdown/configuration-parameters.md
@@ -7,7 +7,7 @@
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
-
+
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -16,152 +16,135 @@
-->
# Configuring CarbonData
- This tutorial guides you through the advanced configurations of CarbonData :
-
+ This guide explains the configurations that can be used to tune CarbonData for better performance. Some of the properties can be set dynamically and are explained in the section Dynamic Configuration In CarbonData Using SET-RESET; a brief example follows the contents list below. Most of the properties that control the internal settings have reasonable default values. They are listed below along with their default values and an explanation.
+
* [System Configuration](#system-configuration)
- * [Performance Configuration](#performance-configuration)
- * [Miscellaneous Configuration](#miscellaneous-configuration)
- * [Spark Configuration](#spark-configuration)
+ * [Data Loading Configuration](#data-loading-configuration)
+ * [Compaction Configuration](#compaction-configuration)
+ * [Query Configuration](#query-configuration)
+ * [Data Mutation Configuration](#data-mutation-configuration)
* [Dynamic Configuration In CarbonData Using
SET-RESET](#dynamic-configuration-in-carbondata-using-set-reset)
-
-
+
+
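+As a quick illustration of the dynamic configuration mentioned above, the following is a minimal sketch assuming a SparkSession named `spark` with CarbonData enabled and a property that is in the dynamically configurable list (refer to the Dynamic Configuration In CarbonData Using SET-RESET section for the authoritative list).
+
+```scala
+// Set a CarbonData property for the current session only.
+spark.sql("SET carbon.options.bad.records.logger.enable=true")
+
+// Inspect the current value of the property.
+spark.sql("SET carbon.options.bad.records.logger.enable").show(false)
+
+// Revert all dynamically set properties back to the values in carbon.properties.
+spark.sql("RESET")
+```
+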
## System Configuration
This section provides the details of all the configurations required for the
CarbonData System.
-<b><p align="center">System Configuration in carbon.properties</p></b>
-
| Property | Default Value | Description |
|----------------------------|-------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| carbon.storelocation | | Location where CarbonData will create the store,
and write the data in its own format. If not specified then it takes
spark.sql.warehouse.dir path. NOTE: Store location should be in HDFS. |
-| carbon.ddl.base.hdfs.url | | This property is used to configure the HDFS
relative path, the path configured in carbon.ddl.base.hdfs.url will be appended
to the HDFS path configured in fs.defaultFS. If this path is configured, then
user need not pass the complete path while dataload. For example: If absolute
path of the csv file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the
path "hdfs://10.18.101.155:54310" will come from property fs.defaultFS and user
can configure the /data/cnbc/ as carbon.ddl.base.hdfs.url. Now while dataload
user can specify the csv path as /2016/xyz.csv. |
-| carbon.badRecords.location | | Path where the bad records are stored. |
-| carbon.data.file.version | V3 | If this parameter value is set to 1, then
CarbonData will support the data load which is in old format(0.x version). If
the value is set to 2(1.x onwards version), then CarbonData will support the
data load of new format only.|
-| carbon.streaming.auto.handoff.enabled | true | If this parameter value is
set to true, auto trigger handoff function will be enabled.|
-| carbon.streaming.segment.max.size | 1024000000 | This parameter defines the
maximum size of the streaming segment. Setting this parameter to appropriate
value will avoid impacting the streaming ingestion. The value is in bytes.|
-| carbon.query.show.datamaps | true | If this parameter value is set to true,
show tables command will list all the tables including datatmaps(eg:
Preaggregate table), else datamaps will be excluded from the table list. |
-| carbon.segment.lock.files.preserve.hours | 48 | This property value
indicates the number of hours the segment lock files will be preserved after
dataload. These lock files will be deleted with the clean command after the
configured number of hours. |
-| carbon.unsafe.working.memory.in.mb | 512 | Specifies the size of executor
unsafe working memory. Used for sorting data, storing column pages,etc. This
value is expressed in MB. |
-| carbon.unsafe.driver.working.memory.in.mb | 512 | Specifies the size of
driver unsafe working memory. Used for storing block or blocklet datamap cache.
If not configured then carbon.unsafe.working.memory.in.mb value is considered.
This value is expressed in MB. |
-
-## Performance Configuration
-This section provides the details of all the configurations required for
CarbonData Performance Optimization.
-
-<b><p align="center">Performance Configuration in carbon.properties</p></b>
-
-* **Data Loading Configuration**
-
-| Parameter | Default Value | Description | Range |
-|--------------------------------------|---------------|----------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| carbon.number.of.cores.while.loading | 2 | Number of cores to be used while
loading data. | |
-| carbon.sort.size | 100000 | Record count to sort and write intermediate
files to temp. | |
-| carbon.max.driver.lru.cache.size | -1 | Max LRU cache size upto which data
will be loaded at the driver side. This value is expressed in MB. Default value
of -1 means there is no memory limit for caching. Only integer values greater
than 0 are accepted. | |
-| carbon.max.executor.lru.cache.size | -1 | Max LRU cache size upto which data
will be loaded at the executor side. This value is expressed in MB. Default
value of -1 means there is no memory limit for caching. Only integer values
greater than 0 are accepted. If this parameter is not configured, then the
carbon.max.driver.lru.cache.size value will be considered. | |
-| carbon.merge.sort.prefetch | true | Enable prefetch of data during merge
sort while reading data from sort temp files in data loading. | |
-| carbon.insert.persist.enable | false | Enabling this parameter considers
persistent data. If we are executing insert into query from source table using
select statement & loading the same source table concurrently, when select
happens on source table during the data load, it gets new record for which
dictionary is not generated, so there will be inconsistency. To avoid this
condition we can persist the dataframe into MEMORY_AND_DISK(default value) and
perform insert into operation. By default this value will be false because no
need to persist the dataframe in all cases. If user wants to run load and
insert queries on source table concurrently then user can enable this
parameter. | |
-| carbon.insert.storage.level | MEMORY_AND_DISK | Which storage level to
persist dataframe when 'carbon.insert.persist.enable'=true, if user's executor
has less memory, set this parameter to 'MEMORY_AND_DISK_SER' or other storage
level to correspond to different environment. [See
detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence).
| |
-| carbon.update.persist.enable | true | Enabling this parameter considers
persistent data. Enabling this will reduce the execution time of UPDATE
operation. | |
-| carbon.update.storage.level | MEMORY_AND_DISK | Which storage level to
persist dataframe when 'carbon.update.persist.enable'=true, if user's executor
has less memory, set this parameter to 'MEMORY_AND_DISK_SER' or other storage
level to correspond to different environment. [See
detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence).
| |
-| carbon.global.sort.rdd.storage.level | MEMORY_ONLY | Which storage level to
persist rdd when loading data with 'sort_scope'='global_sort', if user's
executor has less memory, set this parameter to 'MEMORY_AND_DISK_SER' or other
storage level to correspond to different environment. [See
detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence).
| |
-| carbon.load.global.sort.partitions | 0 | The Number of partitions to use
when shuffling data for sort. If user don't configurate or configurate it less
than 1, it uses the number of map tasks as reduce tasks. In general, we
recommend 2-3 tasks per CPU core in your cluster.
-| carbon.options.bad.records.logger.enable | false | Whether to create logs
with details about bad records. | |
-| carbon.bad.records.action | FORCE | This property can have four types of
actions for bad records FORCE, REDIRECT, IGNORE and FAIL. If set to FORCE then
it auto-corrects the data by storing the bad records as NULL. If set to
REDIRECT then bad records are written to the raw CSV instead of being loaded.
If set to IGNORE then bad records are neither loaded nor written to the raw
CSV. If set to FAIL then data loading fails if any bad records are found. | |
-| carbon.options.is.empty.data.bad.record | false | If false, then empty (""
or '' or ,,) data will not be considered as bad record and vice versa. | |
-| carbon.options.bad.record.path | | Specifies the HDFS path where bad
records are stored. By default the value is Null. This path must to be
configured by the user if bad record logger is enabled or bad record action
redirect. | |
-| carbon.enable.vector.reader | true | This parameter increases the
performance of select queries as it fetch columnar batch of size 4*1024 rows
instead of fetching data row by row. | |
-| carbon.blockletgroup.size.in.mb | 64 MB | The data are read as a group of
blocklets which are called blocklet groups. This parameter specifies the size
of the blocklet group. Higher value results in better sequential IO access.The
minimum value is 16MB, any value lesser than 16MB will reset to the default
value (64MB). | |
-| carbon.task.distribution | block | **block**: Setting this value will launch
one task per block. This setting is suggested in case of concurrent queries and
queries having big shuffling scenarios. **custom**: Setting this value will
group the blocks and distribute it uniformly to the available resources in the
cluster. This enhances the query performance but not suggested in case of
concurrent queries and queries having big shuffling scenarios. **blocklet**:
Setting this value will launch one task per blocklet. This setting is suggested
in case of concurrent queries and queries having big shuffling scenarios.
**merge_small_files**: Setting this value will merge all the small partitions
to a size of (128 MB is the default value of
"spark.sql.files.maxPartitionBytes",it is configurable) during querying. The
small partitions are combined to a map task to reduce the number of read task.
This enhances the performance. | |
-| carbon.load.sortmemory.spill.percentage | 0 | If we use unsafe memory during
data loading, this configuration will be used to control the behavior of
spilling inmemory pages to disk. Internally in Carbondata, during sorting
carbondata will sort data in pages and add them in unsafe memory. If the memory
is insufficient, carbondata will spill the pages to disk and generate sort temp
file. This configuration controls how many pages in memory will be spilled to
disk based size. The size can be calculated by multiplying this configuration
value with 'carbon.sort.storage.inmemory.size.inmb'. For example, default value
0 means that no pages in unsafe memory will be spilled and all the newly sorted
data will be spilled to disk; Value 50 means that if the unsafe memory is
insufficient, about half of pages in the unsafe memory will be spilled to disk
while value 100 means that almost all pages in unsafe memory will be spilled.
**Note**: This configuration only works for 'LOCAL_SORT' and 'BATCH_SORT' and the actual spilling behavior may slightly be different in each
data loading. | Integer values between 0 and 100 |
-
-* **Compaction Configuration**
-
-| Parameter | Default Value | Description | Range |
-|-----------------------------------------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
-| carbon.number.of.cores.while.compacting | 2 | Number of cores which are used
to write data during compaction. | |
-| carbon.compaction.level.threshold | 4, 3 | This property is for minor
compaction which decides how many segments to be merged. Example: If it is set
as 2, 3 then minor compaction will be triggered for every 2 segments. 3 is the
number of level 1 compacted segment which is further compacted to new segment.
| Valid values are from 0-100. |
-| carbon.major.compaction.size | 1024 | Major compaction size can be
configured using this parameter. Sum of the segments which is below this
threshold will be merged. This value is expressed in MB. | |
-| carbon.horizontal.compaction.enable | true | This property is used to turn
ON/OFF horizontal compaction. After every DELETE and UPDATE statement,
horizontal compaction may occur in case the delta (DELETE/ UPDATE) files
becomes more than specified threshold. | |
-| carbon.horizontal.UPDATE.compaction.threshold | 1 | This property specifies
the threshold limit on number of UPDATE delta files within a segment. In case
the number of delta files goes beyond the threshold, the UPDATE delta files
within the segment becomes eligible for horizontal compaction and compacted
into single UPDATE delta file. | Values between 1 to 10000. |
-| carbon.horizontal.DELETE.compaction.threshold | 1 | This property specifies
the threshold limit on number of DELETE delta files within a block of a
segment. In case the number of delta files goes beyond the threshold, the
DELETE delta files for the particular block of the segment becomes eligible for
horizontal compaction and compacted into single DELETE delta file. | Values
between 1 to 10000. |
-| carbon.update.segment.parallelism | 1 | This property specifies the
parallelism for each segment during update. If there are segments that contain
too many records to update and the spark job encounter data-spill related
errors, it is better to increase this property value. It is recommended to set
this value to a multiple of the number of executors for balance. | Values
between 1 to 1000. |
-| carbon.merge.index.in.segment | true | This property is used to merge all
carbon index files (.carbonindex) inside a segment to a single carbon index
merge file (.carbonindexmerge).| Values true or false |
-
-* **Query Configuration**
-
-| Parameter | Default Value | Description | Range |
-|--------------------------------------|---------------|---------------------------------------------------|---------------------------|
-| carbon.number.of.cores | 4 | Number of cores to be used while querying. | |
-| carbon.enable.quick.filter | false | Improves the performance of filter
query. | |
-
-
-## Miscellaneous Configuration
-
-<b><p align="center">Extra Configuration in carbon.properties</p></b>
-
-* **Time format for CarbonData**
-
-| Parameter | Default Format | Description |
-|-------------------------|---------------------|--------------------------------------------------------------|
-| carbon.timestamp.format | yyyy-MM-dd HH:mm:ss | Timestamp format of input
data used for timestamp data type. |
-
-* **Dataload Configuration**
-
-| Parameter | Default Value | Description |
-|---------------------------------------------|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| carbon.sort.file.write.buffer.size | 16384 | File write buffer size used
during sorting. Minimum allowed buffer size is 10240 byte and Maximum allowed
buffer size is 10485760 byte. |
+| carbon.storelocation | spark.sql.warehouse.dir property value | Location where CarbonData will create the store and write the data in its custom format. If not specified, the path defaults to the spark.sql.warehouse.dir property. NOTE: Store location should be in HDFS. |
+| carbon.ddl.base.hdfs.url | (none) | This property simplifies and shortens the path to be specified in DDL/DML commands. It is used to configure an HDFS relative path; the path configured in carbon.ddl.base.hdfs.url will be appended to the HDFS path configured in fs.defaultFS of core-site.xml. If this path is configured, the user need not pass the complete path while loading data. For example: if the absolute path of the csv file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" will come from the property fs.defaultFS and the user can configure /data/cnbc/ as carbon.ddl.base.hdfs.url. Then, while loading data, the user can specify the csv path as /2016/xyz.csv. |
+| carbon.badRecords.location | (none) | CarbonData can detect records that do not conform to the defined table schema and isolate them as bad records. This property is used to specify where to store such bad records. |
+| carbon.streaming.auto.handoff.enabled | true | CarbonData supports storing of streaming data. To have high throughput for streaming, the data is written in row format, which is highly optimized for write but performs poorly for query. When this property is true and the streaming data size reaches ***carbon.streaming.segment.max.size***, CarbonData will automatically convert the data to columnar format and optimize it for faster querying. **NOTE:** It is not recommended to change the default value, which is true. |
+| carbon.streaming.segment.max.size | 1024000000 | CarbonData writes streaming data in row format, which is optimized for high write throughput. This property defines the maximum size of data to be held in row format, beyond which it will be converted to columnar format in order to support high performance query, provided ***carbon.streaming.auto.handoff.enabled*** is true. **NOTE:** Setting a higher value will impact the streaming ingestion. The value has to be configured in bytes. |
+| carbon.query.show.datamaps | true | CarbonData stores datamaps as independent tables so as to allow independent maintenance to some extent. When this property is true, which is the default, the show tables command will list all the tables including datamaps (eg: Preaggregate table); else datamaps will be excluded from the table list. **NOTE:** It is generally not required for the user to do any maintenance operations on these tables and hence they need not be seen. But they are shown by default so that the user or admin can get a clear understanding of the system for capacity planning. |
+| carbon.segment.lock.files.preserve.hours | 48 | In order to support parallel data loading onto the same table, CarbonData sequences (locks) operations at the granularity of segments. Operations affecting a segment (like IUD, alter) are blocked from running in parallel. This property value indicates the number of hours the segment lock files will be preserved after data load. These lock files will be deleted with the clean command after the configured number of hours. |
+| carbon.timestamp.format | yyyy-MM-dd HH:mm:ss | CarbonData can understand data of timestamp type and process it in a special manner. The format of timestamp data in the input may differ from the format understood by CarbonData by default. This configuration allows users to specify the timestamp format of their data. |
| carbon.lock.type | LOCALLOCK | This configuration specifies the type of lock
to be acquired during concurrent operations on table. There are following types
of lock implementation: - LOCALLOCK: Lock is created on local file system as
file. This lock is useful when only one spark driver (thrift server) runs on a
machine and no other CarbonData spark application is launched concurrently. -
HDFSLOCK: Lock is created on HDFS file system as file. This lock is useful when
multiple CarbonData spark applications are launched and no ZooKeeper is running
on cluster and HDFS supports file based locking. |
-| carbon.lock.path | TABLEPATH | Locks on the files are used to prevent
concurrent operation from modifying the same files. This
-configuration specifies the path where lock files have to be created.
Recommended to configure
-HDFS lock path(to this property) in case of S3 file system as locking is not
feasible on S3.
-**Note:** If this property is not set to HDFS location for S3 store, then
there is a possibility
-of data corruption because multiple data manipulation calls might try to
update the status file
-and as lock is not acquired before updation data might get overwritten. |
-| carbon.sort.intermediate.files.limit | 20 | Minimum number of intermediate
files after which merged sort can be started (minValue = 2, maxValue=50). |
-| carbon.block.meta.size.reserved.percentage | 10 | Space reserved in
percentage for writing block meta data in CarbonData file. |
-| carbon.csv.read.buffersize.byte | 1048576 | csv reading buffer size. |
-| carbon.merge.sort.reader.thread | 3 | Maximum no of threads used for reading
intermediate files for final merging. |
-| carbon.concurrent.lock.retries | 100 | Specifies the maximum number of
retries to obtain the lock for concurrent operations. This is used for
concurrent loading. |
-| carbon.concurrent.lock.retry.timeout.sec | 1 | Specifies the interval
between the retries to obtain the lock for concurrent operations. |
-| carbon.lock.retries | 3 | Specifies the maximum number of retries to obtain
the lock for any operations other than load. |
-| carbon.lock.retry.timeout.sec | 5 | Specifies the interval between the
retries to obtain the lock for any operation other than load. |
-| carbon.skip.empty.line | false | Setting this property ignores the empty
lines in the CSV file during the data load |
-| carbon.enable.calculate.size | true | **For Load Operation**: Setting this
property calculates the size of the carbon data file (.carbondata) and carbon
index file (.carbonindex) for every load and updates the table status file.
**For Describe Formatted**: Setting this property calculates the total size of
the carbon data files and carbon index files for the respective table and
displays in describe formatted command. |
-
-
-
-* **Compaction Configuration**
+| carbon.lock.path | TABLEPATH | This configuration specifies the path where lock files have to be created. It is recommended to configure the zookeeper lock type or to configure an HDFS lock path (in this property) in case of S3 file system, as locking is not feasible on S3. |
+| carbon.unsafe.working.memory.in.mb | 512 | CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. The minimum value recommended is 512MB. Any value below this is reset to the default value of 512MB. **NOTE:** The following formulas explain how to arrive at the off-heap size required (see the worked example after this table). <u>Memory Required For Data Loading:</u> (*carbon.number.of.cores.while.loading*) * (Number of tables to load in parallel) * (*offheap.sort.chunk.size.inmb* + *carbon.blockletgroup.size.in.mb* + *carbon.blockletgroup.size.in.mb*/3.5). <u>Memory required for Query:</u> SPARK_EXECUTOR_INSTANCES * (*carbon.blockletgroup.size.in.mb* + *carbon.blockletgroup.size.in.mb* * 3.5) * spark.executor.cores |
+| carbon.update.sync.folder | /tmp/carbondata | CarbonData maintains last modification time entries in modifiedTime.mdt to determine the schema changes and reload only when necessary. This configuration specifies the path where the file needs to be written. |
+| carbon.invisible.segments.preserve.count | 200 | CarbonData maintains each data load entry in the tablestatus file. The entries from this file are not deleted for segments that are compacted or dropped, but are made invisible. If the number of data loads is very high, the size and number of entries in the tablestatus file can become too large, causing unnecessary reading of all data. This configuration specifies the number of segment entries to be maintained after they are compacted or dropped. Beyond this, the entries are moved to a separate history tablestatus file. **NOTE:** The entries in the tablestatus file help to identify the operations performed on a CarbonData table and are also used for checkpointing during various data manipulation operations. This is similar to an AUDIT file maintaining all the operations and their status. Hence the entries are never deleted but moved to a separate history file. |
+| carbon.lock.retries | 3 | CarbonData ensures consistency of operations by blocking certain operations from running in parallel. In order to block the operations from running in parallel, a lock is obtained on the table. This configuration specifies the maximum number of retries to obtain the lock for any operation other than load. **NOTE:** Data manipulation operations like Compaction, UPDATE, DELETE or LOADING, UPDATE, DELETE are not allowed to run in parallel. However, data loading can happen in parallel to compaction. |
+| carbon.lock.retry.timeout.sec | 5 | Specifies the interval between the retries to obtain the lock for any operation other than load. **NOTE:** Refer to ***carbon.lock.retries*** for understanding why CarbonData uses locks for operations. |
+
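+The memory sizing formulas given for ***carbon.unsafe.working.memory.in.mb*** above can be worked through with concrete numbers. The sketch below plugs in assumed values (the default offheap.sort.chunk.size.inmb of 64 MB, the default blocklet group size of 64 MB, one table loaded at a time, 2 executors with 4 cores each); treat it as an estimation aid under those assumptions, not an exact requirement.
+
+```scala
+// Rough sizing sketch for carbon.unsafe.working.memory.in.mb, using the
+// formulas from the table above. Figures are illustrative assumptions.
+val coresWhileLoading  = 2      // carbon.number.of.cores.while.loading
+val tablesInParallel   = 1      // number of tables loaded concurrently
+val offheapSortChunkMb = 64.0   // offheap.sort.chunk.size.inmb (assumed default)
+val blockletGroupMb    = 64.0   // carbon.blockletgroup.size.in.mb
+
+val loadMemoryMb = coresWhileLoading * tablesInParallel *
+  (offheapSortChunkMb + blockletGroupMb + blockletGroupMb / 3.5)   // ~293 MB
+
+val executorInstances = 2       // SPARK_EXECUTOR_INSTANCES (assumed)
+val executorCores     = 4       // spark.executor.cores (assumed)
+
+val queryMemoryMb = executorInstances *
+  (blockletGroupMb + blockletGroupMb * 3.5) * executorCores        // 2304 MB
+
+println(s"loading: ~$loadMemoryMb MB, query: ~$queryMemoryMb MB")
+```
+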
+## Data Loading Configuration
| Parameter | Default Value | Description |
-|-----------------------------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| carbon.numberof.preserve.segments | 0 | If the user wants to preserve some
number of segments from being compacted then he can set this property. Example:
carbon.numberof.preserve.segments = 2 then 2 latest segments will always be
excluded from the compaction. No segments will be preserved by default. |
-| carbon.allowed.compaction.days | 0 | Compaction will merge the segments
which are loaded with in the specific number of days configured. Example: If
the configuration is 2, then the segments which are loaded in the time frame of
2 days only will get merged. Segments which are loaded 2 days apart will not be
merged. This is disabled by default. |
-| carbon.enable.auto.load.merge | false | To enable compaction while data
loading. |
-|carbon.enable.page.level.reader.in.compaction|true|Enabling page level reader
for compaction reduces the memory usage while compacting more number of
segments. It allows reading only page by page instead of reading whole blocklet
to memory.|
+|--------------------------------------|---------------|----------------------------------------------------------------------------------------------------------------------|
+| carbon.number.of.cores.while.loading | 2 | Number of cores to be used while loading data. This also determines the number of threads to be used to read the input files (csv) in parallel. **NOTE:** This configured value is used in every data loading step to parallelize the operations. Configuring a higher value can lead to increased early thread pre-emption by the OS and thereby reduce the overall performance. |
+| carbon.sort.size | 100000 | Number of records to hold in memory to sort and write intermediate temp files. **NOTE:** Memory required for data loading increases with an increase in the configured value, as each thread would cache the configured number of records. |
+| carbon.global.sort.rdd.storage.level | MEMORY_ONLY | Storage level to persist the dataset of RDD/dataframe when loading data with 'sort_scope'='global_sort'. If the user's executor has less memory, set this parameter to 'MEMORY_AND_DISK_SER' or another storage level corresponding to the environment. [See detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence). |
+| carbon.load.global.sort.partitions | 0 | The number of partitions to use when shuffling data for sort. Default value 0 means to use the same number of map tasks as reduce tasks. **NOTE:** In general, it is recommended to have 2-3 tasks per CPU core in your cluster. |
+| carbon.options.bad.records.logger.enable | false | CarbonData can identify the records that are not conformant to the schema and isolate them as bad records. Enabling this configuration will make CarbonData log such bad records. **NOTE:** If the input data contains many bad records, logging them will slow down the overall data loading throughput. The data load operation status would depend on the configuration in ***carbon.bad.records.action***. |
+| carbon.bad.records.action | FAIL | CarbonData, in addition to identifying the bad records, can take certain actions on such data. This configuration can have four types of actions for bad records namely FORCE, REDIRECT, IGNORE and FAIL. If set to FORCE then it auto-corrects the data by storing the bad records as NULL. If set to REDIRECT then bad records are written to the raw CSV instead of being loaded. If set to IGNORE then bad records are neither loaded nor written to the raw CSV. If set to FAIL then data loading fails if any bad records are found. A usage sketch follows this table. |
+| carbon.options.is.empty.data.bad.record | false | Based on the business scenarios, empty ("" or '' or ,,) data can be valid or invalid. This configuration controls how empty data should be treated by CarbonData. If false, then empty ("" or '' or ,,) data will not be considered as bad record, and vice versa. |
+| carbon.options.bad.record.path | (none) | Specifies the HDFS path where bad records are to be stored. By default the value is Null. This path must be configured by the user if ***carbon.options.bad.records.logger.enable*** is **true** or ***carbon.bad.records.action*** is **REDIRECT**. |
+| carbon.blockletgroup.size.in.mb | 64 | Please refer to [file-structure-of-carbondata](./file-structure-of-carbondata.md) to understand the storage format of CarbonData. The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of each blocklet group. A higher value results in better sequential IO access. The minimum value is 16MB; any value lesser than 16MB will reset to the default value (64MB). **NOTE:** Configuring a higher value might lead to poor performance as an entire blocklet group will have to be read into memory before processing. For filter queries with limit, it is **not advisable** to have a bigger blocklet size. For aggregation queries which need to return more rows, a bigger blocklet size is advisable. |
+| carbon.sort.file.write.buffer.size | 16384 | CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files. **NOTE:** This configuration is useful to tune IO and derive optimal performance. Based on the OS and underlying hard disk type, these values can significantly affect the overall performance. It is ideal to tune the buffer size to be equivalent to the IO buffer size of the OS. Recommended range is between 10240 and 10485760 bytes. |
+| carbon.sort.intermediate.files.limit | 20 | CarbonData sorts and writes data to intermediate files to limit the memory usage. Before writing the target carbondata file, the data in these intermediate files needs to be sorted again so as to ensure the entire data in the data load is sorted. This configuration determines the minimum number of intermediate files after which merged sort is applied on them to sort the data. **NOTE:** Intermediate merging happens on a separate thread in the background. The number of threads used is determined by ***carbon.merge.sort.reader.thread***. Configuring a low value will cause more time to be spent in merging these intermediate merged files, which can cause more IO. Configuring a high value would cause the idle threads not to be used for intermediate sort merges. The range of recommended values is between 2 and 50. |
+| carbon.csv.read.buffersize.byte | 1048576 | CarbonData uses Hadoop InputFormat to read the csv files. This configuration value is used to pass the buffer size as input to the Hadoop MR job when reading the csv files. This value is configured in bytes. **NOTE:** Refer to ***org.apache.hadoop.mapreduce.InputFormat*** documentation for additional information. |
+| carbon.merge.sort.reader.thread | 3 | CarbonData sorts and writes data to intermediate files to limit the memory usage. When the intermediate files reach ***carbon.sort.intermediate.files.limit***, the files will be merged; the number of threads specified in this configuration will be used to read the intermediate files for performing merge sort. **NOTE:** Refer to ***carbon.sort.intermediate.files.limit*** for the operation description. Configuring fewer threads can cause merging to lag behind the loading process, whereas configuring more threads can cause thread contention with threads in other data loading steps. Hence configure a fraction of ***carbon.number.of.cores.while.loading***. |
+| carbon.concurrent.lock.retries | 100 | CarbonData supports concurrent data loading onto the same table. To ensure the loading status is correctly updated into the system, locks are used to sequence the status update step. This configuration specifies the maximum number of retries to obtain the lock for updating the load status. **NOTE:** This value is kept high because, as more concurrent loading happens, the chances of not being able to obtain the lock when tried also increase. Adjust this value according to the number of concurrent loads to be supported by the system. |
+| carbon.concurrent.lock.retry.timeout.sec | 1 | Specifies the interval between the retries to obtain the lock for concurrent operations. **NOTE:** Refer to ***carbon.concurrent.lock.retries*** for understanding why CarbonData uses locks during data loading operations. |
+| carbon.skip.empty.line | false | The csv files given to CarbonData for loading can contain empty lines. Based on the business scenario, these empty lines might have to be ignored or need to be treated as NULL values for all columns. This configuration is provided to define that business behavior. **NOTE:** In order to consider NULL values for non string columns and continue with the data load, ***carbon.bad.records.action*** needs to be set to **FORCE**; else the data load will fail as bad records are encountered. |
+| carbon.enable.calculate.size | true | **For Load Operation**: Setting this property calculates the size of the carbon data file (.carbondata) and carbon index file (.carbonindex) for every load and updates the table status file. **For Describe Formatted**: Setting this property calculates the total size of the carbon data files and carbon index files for the respective table and displays it in the describe formatted command. **NOTE:** This is useful to determine the overall size of the carbondata table and also get an idea of how the table is growing in order to make backup strategy decisions. |
+| carbon.cutOffTimestamp | (none) | CarbonData has the capability to generate the dictionary values for the timestamp columns from the data itself without the need to store the computed dictionary values. This configuration sets the start date for calculating the timestamp. Java counts the number of milliseconds from the start of "1970-01-01 00:00:00". This property is used to customize the starting position, for example "2000-01-01 00:00:00". **NOTE:** The date must be in the form ***carbon.timestamp.format***. CarbonData supports storing data for up to 68 years. For example, if the cut-off time is 1970-01-01 05:30:00, then data up to 2038-01-01 05:30:00 will be supported by CarbonData. |
+| carbon.timegranularity | SECOND | The configuration is used to specify the data granularity level such as DAY, HOUR, MINUTE, or SECOND. This helps to store more than 68 years of data into CarbonData. |
+| carbon.use.local.dir | false | During data loading, CarbonData writes files to local temp directories before copying the files to HDFS. This configuration is used to specify whether CarbonData can write locally to the tmp directory of the container or to the YARN application directory. |
+| carbon.use.multiple.temp.dir | false | When multiple disks are present in the system, YARN is generally configured with multiple disks to be used as temp directories for managing the containers. This configuration specifies whether to use multiple YARN local directories during data loading for disk IO load balancing. Enable ***carbon.use.local.dir*** for this configuration to take effect. **NOTE:** Data loading is an IO intensive operation whose performance can be limited by the disk IO threshold, particularly during multi table concurrent data load. Configuring this parameter balances the disk IO across multiple disks, thereby improving the overall load performance. |
+| carbon.sort.temp.compressor | (none) | CarbonData writes every ***carbon.sort.size*** number of records to intermediate temp files during data loading to ensure the memory footprint is within limits. These temporary files can be compressed before being written in order to save storage space. This configuration specifies the name of the compressor to be used to compress the intermediate sort temp files during the sort procedure in data loading. The valid values are 'SNAPPY', 'GZIP', 'BZIP2', 'LZ4', 'ZSTD' and empty. By default, empty means that CarbonData will not compress the sort temp files. **NOTE:** A compressor will be useful if you encounter a disk bottleneck. Since the data needs to be compressed and decompressed, it involves additional CPU cycles, but this is compensated by the high IO throughput due to less data being written to or read from the disks. |
+| carbon.load.skewedDataOptimization.enabled | false | During data loading, CarbonData would divide the number of blocks equally so as to ensure all executors process the same number of blocks. This mechanism satisfies most of the scenarios and ensures maximum parallel processing for optimal data loading performance. In some business scenarios, the size of blocks may vary significantly and hence some executors would have to do more work if they get blocks containing more data. This configuration enables a size based block allocation strategy for data loading. When loading, carbondata will use a file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data. **NOTE:** This configuration is useful if the size of your input data files varies widely, say 1MB~1GB. For this configuration to work effectively, knowing the data pattern and size is important and necessary. |
+| carbon.load.min.size.enabled | false | During data loading, CarbonData would divide the number of files among the available executors to parallelize the loading operation. When the input data files are very small, this action causes many small carbondata files to be generated. This configuration determines whether to enable the node minimum input data size allocation strategy for data loading. It will make sure that each node loads a minimum amount of data, thereby reducing the number of carbondata files. **NOTE:** This configuration is useful if the size of the input data files is very small, like 1MB~256MB. Refer to ***load_min_size_inmb*** to configure the minimum size to be considered for splitting files among executors. |
+| enable.data.loading.statistics | false | CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. When this configuration is ***true***, additional data loading statistics information is logged to more accurately locate the issues being debugged. **NOTE:** Enabling this would log more debug information to log files, thereby increasing the log file size significantly in a short span of time. It is advised to configure the log file size and log retention parameters in log4j properties appropriately. Also, extensive logging means increased IO and hence the overall data loading performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging. |
+| carbon.dictionary.chunk.size | 10000 | CarbonData generates dictionary keys and writes them to a separate dictionary file during data loading. To optimize the IO, this configuration determines the number of dictionary keys to be persisted to the dictionary file at a time. **NOTE:** Writing to file also serves as a commit point for the dictionary generated. Keeping more values in memory causes more data loss during a system or application failure. It is advised to alter this configuration judiciously. |
+| dictionary.worker.threads | 1 | CarbonData supports optimized data loading by relying on a dictionary server. The dictionary server helps to maintain dictionary values independent of the data loading and thereby avoids reading the same input data multiple times. This configuration determines the number of concurrent dictionary generation requests that need to be served by the dictionary server. **NOTE:** This configuration takes effect when ***carbon.options.single.pass*** is configured as true. Please refer to *carbon.options.single.pass* to understand how the dictionary server optimizes data loading. |
+| enable.unsafe.sort | true | CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables the use of unsafe functions in CarbonData. **NOTE:** For operations like data loading, which generate many short lived Java objects, Java GC can be a bottleneck. Using unsafe can overcome the GC overhead and improve the overall performance. |
+| enable.offheap.sort | true | CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. This configuration enables using off-heap memory for sorting of data during data loading. **NOTE:** ***enable.unsafe.sort*** needs to be configured to true for using off-heap. |
+| enable.inmemory.merge.sort | false | CarbonData sorts and writes data to
intermediate files to limit the memory usage.These intermediate files needs to
be sorted again using merge sort before writing to the final carbondata
file.Performing merge sort in memory would increase the sorting performance at
the cost of increased memory footprint. This Configuration specifies to do
in-memory merge sort or to do file based merge sort. |
+| carbon.load.sort.scope | LOCAL_SORT | CarbonData can support various sorting options to balance load and query performance. LOCAL_SORT: All the data given to an executor in a single load is fully sorted and written to carbondata files. Data loading performance is reduced a little as the entire data needs to be sorted in the executor. BATCH_SORT: Sorts the data in batches of configured size and writes to carbondata files. Data loading performance increases as the entire data need not be sorted. But query performance will get reduced due to false positives in block pruning and also due to the larger number of carbondata files written. Due to more carbondata files, if identified blocks > cluster parallelism, query performance and concurrency will get reduced. GLOBAL_SORT: The entire data in the data load is fully sorted and written to carbondata files. Data loading performance would get reduced as the entire data needs to be sorted. But the query performance increases significantly due to very few false positives and concurrency is also improved. **NOTE:** When BATCH_SORT is configured, it is recommended to keep ***carbon.load.batch.sort.size.inmb*** > ***carbon.blockletgroup.size.in.mb*** |
+| carbon.load.batch.sort.size.inmb | 0 | When ***carbon.load.sort.scope*** is configured as ***BATCH_SORT***, this configuration needs to be added to specify the batch size for sorting and writing to carbondata files. **NOTE:** It is recommended to keep the value around 45% of ***carbon.sort.storage.inmemory.size.inmb*** to avoid spill to disk. Also it is recommended to keep the value higher than ***carbon.blockletgroup.size.in.mb***. Refer to *carbon.load.sort.scope* for more information on sort options and the advantages/disadvantages of each option. |
+| carbon.dictionary.server.port | 2030 | Single pass loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in scenarios where the subsequent data loading after the initial load involves fewer incremental updates on the dictionary. Single pass loading can be enabled using the option ***carbon.options.single.pass***. When this option is specified, a dictionary server will be started internally to handle the dictionary generation and query requests. This configuration specifies the port on which the server needs to listen for incoming requests. Port values range between 0-65535. |
+| carbon.merge.sort.prefetch | true | CarbonData writes every ***carbon.sort.size*** number of records to intermediate temp files during data loading to ensure the memory footprint is within limits. These intermediate temp files will have to be sorted using merge sort before writing into CarbonData format. This configuration enables prefetching of data from these temp files in order to optimize IO and speed up the data loading process. |
+| carbon.loading.prefetch | false | CarbonData uses the univocity parser to read csv files. This configuration is used to inform the parser whether it can prefetch the data from csv files to speed up the reading. **NOTE:** Enabling prefetch improves the data loading performance, but needs higher memory to keep more records which are read ahead from disk. |
+| carbon.prefetch.buffersize | 1000 | When the configuration ***carbon.merge.sort.prefetch*** is true, this configuration specifies the number of records to be prefetched. **NOTE:** Configuring more records to be prefetched increases the memory footprint as more records will have to be kept in memory. |
+| load_min_size_inmb | 256 | This configuration is used along with
***carbon.load.min.size.enabled***.This determines the minimum size of input
files to be considered for distribution among executors while data
loading.**NOTE:** Refer to ***carbon.load.min.size.enabled*** for understanding
when this configuration needs to be used and its advantages and disadvantages. |
+| carbon.load.sortmemory.spill.percentage | 0 | During data loading, some data pages are kept in memory up to the memory configured in ***carbon.sort.storage.inmemory.size.inmb***, beyond which they are spilled to disk as intermediate temporary sort files. This configuration determines after what percentage the data needs to be spilled to disk. **NOTE:** Without this configuration, when the data pages occupy up to the configured memory, new data pages would be dumped to disk while the old pages are still maintained in memory. |
+| carbon.load.directWriteHdfs.enabled | false | During data load, all the carbondata files are written to local disk and finally copied to the target location in HDFS. Enabling this parameter will make carbondata files be written directly onto the target HDFS location, bypassing the local disk. **NOTE:** Writing directly to HDFS saves local disk IO (once for writing the files and again for copying to HDFS), thereby improving the performance. But the drawback is that when data loading fails or the application crashes, unwanted carbondata files will remain in the target HDFS location until they are cleared during the next data load or by running the *CLEAN FILES* DDL command. |
+| carbon.options.serialization.null.format | \N | Based on the business
scenarios, some columns might need to be loaded with null values.As null value
cannot be written in csv files, some special characters might be adopted to
specify null values.This configuration can be used to specify the null values
format in the data being loaded. |
+| carbon.sort.storage.inmemory.size.inmb | 512 | CarbonData writes every ***carbon.sort.size*** number of records to intermediate temp files during data loading to ensure the memory footprint is within limits. When the ***enable.unsafe.sort*** configuration is enabled, instead of using ***carbon.sort.size***, which is based on row count, the size occupied in memory is used to determine when to flush data pages to intermediate temp files. This configuration determines the memory to be used for storing data pages in memory. **NOTE:** Configuring a higher value ensures more data is maintained in memory and hence increases data loading performance due to reduced or no IO. Based on the memory availability in the nodes of the cluster, configure the values accordingly. |
+
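+Several of the data loading properties above, such as the bad records handling options, can also be supplied per session before issuing a load. The following is a minimal sketch assuming a SparkSession named `spark` with CarbonData enabled, a hypothetical table `sales`, an illustrative HDFS path, and that these properties are in the dynamically configurable list (see the SET-RESET section).
+
+```scala
+// Configure bad records handling for the session, then run the load.
+spark.sql("SET carbon.options.bad.records.logger.enable=true")
+spark.sql("SET carbon.bad.records.action=REDIRECT")
+spark.sql("SET carbon.options.bad.record.path=hdfs://hacluster/data/badrecords")
+
+// Load a csv file; rows not conforming to the schema are redirected to the
+// bad records path instead of failing the whole load.
+spark.sql("LOAD DATA INPATH 'hdfs://hacluster/data/cnbc/2016/xyz.csv' INTO TABLE sales")
+```
+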
+## Compaction Configuration
-
-* **Query Configuration**
+| Parameter | Default Value | Description |
+|-----------------------------------------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| carbon.number.of.cores.while.compacting | 2 | Number of cores to be used while compacting data. This also determines the number of threads to be used to read carbondata files in parallel. |
+| carbon.compaction.level.threshold | 4, 3 | Each CarbonData load will create one segment; if every load is small in size it will generate many small files over a period of time, impacting the query performance. This configuration is for minor compaction which decides how many segments to be merged. The configuration is of the form (x,y). Compaction will be triggered for every x segments and form a single level 1 compacted segment. When the number of compacted level 1 segments reaches y, compaction will be triggered again to merge them to form a single level 2 segment. For example: if it is set as 2, 3 then minor compaction will be triggered for every 2 segments. 3 is the number of level 1 compacted segments which are further compacted into a new segment. See the sketch after this table. **NOTE:** When ***carbon.enable.auto.load.merge*** is **true**, configuring higher values causes the overall data loading time to increase as compaction will be triggered after data loading is complete but the status is not returned till compaction is complete. But compacting more segments can increase query performance. Hence optimal values need to be configured based on the business scenario. Valid values are between 0 to 100. |
+| carbon.major.compaction.size | 1024 | To improve query performance, all the segments can be merged and compacted into a single segment up to the configured size. This major compaction size can be configured using this parameter. Segments whose total size is below this threshold will be merged. This value is expressed in MB. |
+| carbon.horizontal.compaction.enable | true | CarbonData supports DELETE/UPDATE functionality by creating delta data files for existing carbondata files. These delta files would grow as more DELETE/UPDATE operations are performed. Compaction of these delta files is termed horizontal compaction. This configuration is used to turn ON/OFF horizontal compaction. After every DELETE and UPDATE statement, horizontal compaction may occur in case the delta (DELETE/UPDATE) files become more than the specified threshold. **NOTE:** Having many delta files will reduce the query performance as the scan has to happen on all these files before the final state of data can be decided. Hence it is advisable to keep horizontal compaction enabled and configure reasonable values for ***carbon.horizontal.UPDATE.compaction.threshold*** and ***carbon.horizontal.DELETE.compaction.threshold*** |
+| carbon.horizontal.update.compaction.threshold | 1 | This configuration specifies the threshold limit on the number of UPDATE delta files within a segment. In case the number of delta files goes beyond the threshold, the UPDATE delta files within the segment become eligible for horizontal compaction and are compacted into a single UPDATE delta file. Values range between 1 and 10000. |
+| carbon.horizontal.delete.compaction.threshold | 1 | This configuration specifies the threshold limit on the number of DELETE delta files within a block of a segment. In case the number of delta files goes beyond the threshold, the DELETE delta files for the particular block of the segment become eligible for horizontal compaction and are compacted into a single DELETE delta file. Values range between 1 and 10000. |
+| carbon.update.segment.parallelism | 1 | CarbonData processes UPDATE operations by grouping records belonging to a segment into a single executor task. When the amount of data to be updated is large, this behavior causes problems like restarting of the executor due to low memory and data-spill related errors. This property specifies the parallelism for each segment during update. **NOTE:** It is recommended to set this value to a multiple of the number of executors for balance. Values range between 1 and 1000. |
+| carbon.numberof.preserve.segments | 0 | If the user wants to preserve some number of segments from being compacted, then they can set this configuration. Example: carbon.numberof.preserve.segments = 2 means the 2 latest segments will always be excluded from the compaction. No segments are preserved by default. **NOTE:** This configuration is useful when there is a chance that the input data is wrong due to environment scenarios. Preserving some of the latest segments from being compacted can help to easily delete the wrongly loaded segments. Once compacted, it becomes more difficult to determine the exact data to be deleted (except when data is incrementing according to time). |
+| carbon.allowed.compaction.days | 0 | This configuration is used to control the number of recent segments that need to be compacted, ignoring the older ones. This configuration is in days. For example: if the configuration is 2, then only the segments which are loaded in the time frame of the past 2 days will get merged. Segments which are loaded earlier than 2 days will not be merged. This configuration is disabled by default. **NOTE:** This configuration is useful when a bulk of history data is loaded into carbondata and queries on this data are less frequent. In such cases, involving these segments in compaction will increase resource consumption and the overall compaction time. |
+| carbon.enable.auto.load.merge | false | Compaction can be automatically triggered once a data load completes. This ensures that the segments are merged in time and thus query times don't increase with an increase in segments. This configuration enables compaction along with data loading. **NOTE:** Compaction will be triggered once the data load completes, but the status of the data load waits till the compaction is completed. Hence it might look like the data loading time has increased, but that is not the case. Moreover, failure of compaction will not affect the data loading status. If the data load had completed successfully, the status will be updated and segments are committed. However, a failure while data loading will not trigger compaction and an error is returned immediately. |
+| carbon.enable.page.level.reader.in.compaction | true | Enabling page level reader for compaction reduces the memory usage while compacting more segments. It allows reading only page by page instead of reading the whole blocklet into memory. **NOTE:** Please refer to [file-structure-of-carbondata](./file-structure-of-carbondata.md) to understand the storage format of CarbonData and the concept of pages. |
+| carbon.concurrent.compaction | true | Compaction of different tables can be executed concurrently. This configuration determines whether to compact all qualifying tables in parallel or not. **NOTE:** Compacting tables concurrently is a resource demanding operation and needs more resources, thereby affecting query performance as well. This configuration is **deprecated** and might be removed in future releases. |
+| carbon.compaction.prefetch.enable | false | A compaction operation is similar to a query plus a data load, wherein the data from the qualifying segments is queried and a data load is performed to generate a new single segment. This configuration determines whether to prefetch the data from the segments and feed it to the data load. **NOTE:** This configuration is disabled by default as querying ahead for the extra data needs additional resources. Based on the memory available in the cluster, the user can enable it to improve compaction performance. |
+| carbon.merge.index.in.segment | true | Each CarbonData file has a companion CarbonIndex file which maintains the metadata about the data. These CarbonIndex files are read and loaded into the driver and are subsequently used for pruning data during queries. The CarbonIndex files are very small in size (a few KB) and numerous; reading many small files from HDFS is not efficient and leads to poor IO performance. Hence the CarbonIndex files belonging to a segment can be combined into a single file and read once, thereby increasing the IO throughput. This configuration enables merging all the CarbonIndex files into a single MergeIndex file upon data loading completion. **NOTE:** Reading a single big file is more efficient in HDFS and the IO throughput is very high. This significantly reduces the time needed to load the index files into memory when the first query is received on that table, and thereby the delay in serving the first query. |
+
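+A rough illustration of how these compaction settings might be combined, assuming they are set in carbon.properties (the values below are arbitrary examples for illustration, not tuned recommendations):
+
+```
+# Example only: enable auto compaction, preserve the 2 latest segments,
+# compact only segments loaded in the last 2 days, and allow 3 delta files
+# before horizontal compaction kicks in.
+carbon.enable.auto.load.merge = true
+carbon.numberof.preserve.segments = 2
+carbon.allowed.compaction.days = 2
+carbon.horizontal.update.compaction.threshold = 3
+carbon.horizontal.delete.compaction.threshold = 3
+```
+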
+## Query Configuration
| Parameter | Default Value | Description |
-|--------------------------|---------------|-----------------------------------------------------------------------------------------------|
+|--------------------------------------|---------------|---------------------------------------------------|
+| carbon.max.driver.lru.cache.size | -1 | Maximum memory **(in MB)** up to which the driver process can cache the data (BTree and dictionary values). Beyond this, the least recently used data is removed from the cache before loading a new set of values. The default value of -1 means there is no memory limit for caching; otherwise only integer values greater than 0 are accepted. **NOTE:** The minimum number of entries that needs to be removed from the cache in order to load the new set of data is determined and unloaded, i.e. if 3 cache entries qualify for pre-emption, the entries among them that free up more cache memory are removed before the others. (A sample carbon.properties snippet covering this and related settings follows this table.) |
+| carbon.max.executor.lru.cache.size | -1 | Maximum memory **(in MB)** up to which the executor process can cache the data (BTree and reverse dictionary values). The default value of -1 means there is no memory limit for caching; otherwise only integer values greater than 0 are accepted. **NOTE:** If this parameter is not configured, then the value of ***carbon.max.driver.lru.cache.size*** will be used. |
| max.query.execution.time | 60 | Maximum time allowed for one query to be
executed. The value is in minutes. |
-| carbon.enableMinMax | true | Min max is feature added to enhance query
performance. To disable this feature, set it false. |
-| carbon.dynamicallocation.schedulertimeout | 5 | Specifies the maximum time
(unit in seconds) the scheduler can wait for executor to be active. Minimum
value is 5 sec and maximum value is 15 sec. |
-| carbon.scheduler.minregisteredresourcesratio | 0.8 | Specifies the minimum
resource (executor) ratio needed for starting the block distribution. The
default value is 0.8, which indicates 80% of the requested resource is
allocated for starting block distribution. The minimum value is 0.1 min and
the maximum value is 1.0. |
+| carbon.enableMinMax | true | CarbonData maintains metadata which enables pruning unnecessary files from being scanned as per the query conditions. To achieve this pruning, the Min and Max values of each column are maintained. Based on the filter condition in the query, certain data can be skipped from scanning by matching the filter value against the min and max values of the column(s) present in that carbondata file. This pruning enhances query performance significantly. |
+| carbon.dynamicallocation.schedulertimeout | 5 | CarbonData has its own scheduling algorithm to suggest to Spark how many tasks need to be launched and how much work each task needs to do for any query on CarbonData in a Spark cluster. To determine the number of tasks that can be scheduled, the count of active executors must be known. When dynamic allocation is enabled on a YARN based Spark cluster, executor processes are shut down if no request is received for a particular amount of time and are brought up again when a request arrives. This configuration specifies the maximum time (in seconds) the carbon scheduler can wait for executors to become active. The minimum value is 5 sec and the maximum value is 15 sec. **NOTE:** Waiting for a longer time leads to slow query response times; moreover it is possible that YARN is not able to start the executors at all, in which case waiting is not beneficial. |
+| carbon.scheduler.minregisteredresourcesratio | 0.8 | Specifies the minimum resource (executor) ratio needed for starting the block distribution. The default value is 0.8, which indicates that 80% of the requested resources must be allocated before block distribution starts. The minimum value is 0.1 and the maximum value is 1.0. |
| carbon.search.enabled (Alpha Feature) | false | If set to true, it will use
CarbonReader to do distributed scan directly instead of using compute framework
like spark, thus avoiding limitation of compute framework like SQL optimizer
and task scheduling overhead. |
-
-* **Global Dictionary Configurations**
-
-| Parameter | Default Value | Description |
-|---------------------------------------|---------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| carbon.cutOffTimestamp | | Sets the start date for calculating the
timestamp. Java counts the number of milliseconds from start of "1970-01-01
00:00:00". This property is used to customize the start of position. For
example "2000-01-01 00:00:00". The date must be in the form
"carbon.timestamp.format". |
-| carbon.timegranularity | SECOND | The property used to set the data
granularity level DAY, HOUR, MINUTE, or SECOND. |
-
-## Spark Configuration
- <b><p align="center">Spark Configuration Reference in
spark-defaults.conf</p></b>
-
+| carbon.search.query.timeout | 10s | Time within which the result is expected from the workers; beyond which the query is terminated. |
+| carbon.search.scan.thread | num of cores available in worker node | Number
of cores to be used in each worker for performing scan. |
+| carbon.search.master.port | 10020 | Port on which the search master listens
for incoming query requests |
+| carbon.search.worker.port | 10021 | Port on which search master communicates
with the workers. |
+| carbon.search.worker.workload.limit | 10 * *carbon.search.scan.thread* | Maximum number of active requests that can be sent to a worker, beyond which the request needs to be rescheduled to a later time or to a different worker. |
+| carbon.detail.batch.size | 100 | The buffer size used to store records returned from the block scan. This parameter is particularly important in LIMIT scenarios. For example, if the query limit is 1000 but this value is set to 3000, then 3000 records are fetched from the scan while Spark takes only 1000 rows, so the remaining 2000 are wasted. In one finance test case, after setting it to 100, performance in the LIMIT 1000 scenario increased about 2 times compared to setting it to 12000. |
+| carbon.enable.vector.reader | true | Spark added vectorized processing to reduce CPU cache misses and thereby increase query performance. This configuration enables fetching the data as a columnar batch of 4*1024 rows instead of row by row and providing it to Spark, which improves the performance of select queries. |
+| carbon.task.distribution | block | CarbonData has its own scheduling algorithm to suggest to Spark how many tasks need to be launched and how much work each task needs to do for any query on CarbonData in a Spark cluster. Each of these task distribution suggestions has its own advantages and disadvantages, and the appropriate one can be configured based on the use case. **block**: launches one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **custom**: groups the blocks and distributes them uniformly to the available resources in the cluster. This enhances query performance but is not suggested in case of concurrent queries and queries having big shuffling scenarios. **blocklet**: launches one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **merge_small_files**: merges the small carbondata files up to a bigger size configured by ***spark.sql.files.maxPartitionBytes*** (128 MB is the default value; it is configurable) during querying. The small carbondata files are combined into one map task to reduce the number of read tasks, which enhances performance. |
+| carbon.custom.block.distribution | false | CarbonData has its own scheduling algorithm to suggest to Spark how many tasks need to be launched and how much work each task needs to do for any query on CarbonData in a Spark cluster. When this configuration is true, CarbonData distributes the available blocks to be scanned among the available number of cores. For example: if there are 10 blocks to be scanned and only 3 tasks can run (only 3 executor cores are available in the cluster), CarbonData combines the blocks as 4, 3 and 3 and gives them to the 3 tasks. **NOTE:** When this configuration is false, each block/blocklet is given to a task as per the ***carbon.task.distribution*** configuration. |
+| enable.query.statistics | false | CarbonData has extensive logging which is useful for debugging performance problems or hard to locate issues. When this configuration is ***true***, additional query statistics are logged to more accurately locate the issue being debugged. **NOTE:** Enabling this logs more debug information, thereby increasing the log file size significantly in a short span of time. It is advised to configure the log file size and log retention parameters in the log4j properties appropriately. Extensive logging also increases IO, so the overall query performance might drop; therefore it is recommended to enable this configuration only for the duration of debugging. |
+| enable.unsafe.in.query.processing | true | CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables the use of unsafe functions while scanning the data during queries. |
+| carbon.query.validate.directqueryondatamap | true | CarbonData supports creating pre-aggregate table datamaps as independent tables. For some debugging purposes, it might be required to query such datamap tables directly. This configuration allows querying such datamaps. |
+| carbon.heap.memory.pooling.threshold.bytes | 1048576 | CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. Using unsafe, memory can be allocated on the Java heap or off heap. This configuration controls the allocation mechanism on the Java heap: heap allocations of a size greater than or equal to this value go through the pooling mechanism, while setting this value to -1 disables pooling. The default value is 1048576 (1 MB, the same as Spark). The value is specified in bytes. |
+
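+A minimal sketch of how a few of the query-side settings above might be combined, assuming they are set in carbon.properties (the values are illustrative examples only, not tuned recommendations):
+
+```
+# Example only: cap the LRU caches, keep the vector reader on,
+# and merge small files into fewer read tasks during queries.
+carbon.max.driver.lru.cache.size = 2048
+carbon.max.executor.lru.cache.size = 4096
+carbon.enable.vector.reader = true
+carbon.task.distribution = merge_small_files
+```
+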
+## Data Mutation Configuration
| Parameter | Default Value | Description |
-|----------------------------------------|--------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| spark.driver.memory | 1g | Amount of memory to be used by the driver
process. |
-| spark.executor.memory | 1g | Amount of memory to be used per executor
process. |
+|--------------------------------------|---------------|---------------------------------------------------|
+| carbon.insert.persist.enable | false | CarbonData performs loading in 2 major steps. The 1st step reads from the input source and generates the dictionary values. The 2nd step reads from the source again, encodes the data with the dictionary values, performs the index calculations and writes the data in CarbonData format. Suppose the CarbonData table is being loaded using another table as the source (using insert into) and the source table is being loaded in parallel; there can be cases where some data is inserted into the source table after the dictionary has been generated for the target table, in which case new records without generated dictionary values get read, leading to inconsistency. To avoid this condition, the dataset of the RDD/dataframe can be persisted to MEMORY_AND_DISK (the default storage level) before the insert into operation is performed. This ensures that the data read from the source table is cached and is not read again from the source, thereby ensuring consistency between the dictionary generation and CarbonData writing steps. By default this value is false, as concurrent loading into the source table is not the common scenario. **NOTE:** This configuration can reduce the insert into execution time as the data need not be re-read, but it increases the memory footprint (see the sample snippet after this table). |
+| carbon.insert.storage.level | MEMORY_AND_DISK | Storage level used to persist the dataset of the RDD/dataframe. Applicable when ***carbon.insert.persist.enable*** is **true**. If the executors have less memory, set this parameter to 'MEMORY_AND_DISK_SER' or another storage level suited to the environment. [See detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence). |
+| carbon.update.persist.enable | true | Configuration to enable persisting the dataset of the RDD/dataframe during UPDATE operations. Enabling this reduces the execution time of the UPDATE operation. |
+| carbon.update.storage.level | MEMORY_AND_DISK | Storage level used to persist the dataset of the RDD/dataframe. Applicable when ***carbon.update.persist.enable*** is **true**. If the executors have less memory, set this parameter to 'MEMORY_AND_DISK_SER' or another storage level suited to the environment. [See detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence). |
+
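+A minimal sketch of how the persist and storage-level settings above pair up, assuming they are set in carbon.properties and the executors are short on memory (example values only, not recommendations):
+
+```
+# Example only: persist source data for insert/update and use a
+# serialized storage level to lower the memory footprint.
+carbon.insert.persist.enable = true
+carbon.insert.storage.level = MEMORY_AND_DISK_SER
+carbon.update.persist.enable = true
+carbon.update.storage.level = MEMORY_AND_DISK_SER
+```
+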
## Dynamic Configuration In CarbonData Using SET-RESET
@@ -208,16 +191,24 @@ RESET
<b><p align="center">Dynamically Configurable Properties of CarbonData</p></b>
-| Properties | Description
|
-|------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| carbon.options.bad.records.logger.enable | To enable or disable bad record
logger.
|
-| carbon.options.bad.records.action | This property can have four types
of actions for bad records FORCE, REDIRECT, IGNORE and FAIL. If set to FORCE
then it auto-corrects the data by storing the bad records as NULL. If set to
REDIRECT then bad records are written to the raw CSV instead of being loaded.
If set to IGNORE then bad records are neither loaded nor written to the raw
CSV. If set to FAIL then data loading fails if any bad records are found.
|
-| carbon.options.is.empty.data.bad.record | If false, then empty ("" or '' or
,,) data will not be considered as bad record and vice versa.
|
-| carbon.options.batch.sort.size.inmb | Size of batch data to keep in
memory, as a thumb rule it supposed to be less than 45% of
sort.inmemory.size.inmb otherwise it may spill intermediate data to disk.
|
-| carbon.options.single.pass | Single Pass Loading enables
single job to finish data loading with dictionary generation on the fly. It
enhances performance in the scenarios where the subsequent data loading after
initial load involves fewer incremental updates on the dictionary. This option
specifies whether to use single pass for loading data or not. By default this
option is set to FALSE.
|
-| carbon.options.bad.record.path | Specifies the HDFS path where bad
records needs to be stored.
|
-| carbon.custom.block.distribution | Specifies whether to use the
Spark or Carbon block distribution feature.
|
-| enable.unsafe.sort | Specifies whether to use unsafe
sort during data loading. Unsafe sort reduces the garbage collection during
data load operation, resulting in better performance.
|
+
+| Properties | Description
|
+| ----------------------------------------- |
------------------------------------------------------------ |
+| carbon.options.bad.records.logger.enable | CarbonData can identify records that do not conform to the schema and isolate them as bad records. Enabling this configuration makes CarbonData log such bad records. **NOTE:** If the input data contains many bad records, logging them slows down the overall data loading throughput. The data load operation status depends on the configuration in ***carbon.bad.records.action***. |
+| carbon.options.bad.records.action | This property can have four types of actions for bad records: FORCE, REDIRECT, IGNORE and FAIL. If set to FORCE then it auto-corrects the data by storing the bad records as NULL. If set to REDIRECT then bad records are written to the raw CSV instead of being loaded. If set to IGNORE then bad records are neither loaded nor written to the raw CSV. If set to FAIL then data loading fails if any bad records are found. |
+| carbon.options.is.empty.data.bad.record | If false, then empty ("" or '' or ,,) data will not be considered as a bad record, and vice versa. |
+| carbon.options.batch.sort.size.inmb | Size of batch data to keep in memory; as a rule of thumb it should be less than 45% of sort.inmemory.size.inmb, otherwise it may spill intermediate data to disk. |
+| carbon.options.single.pass | Single Pass Loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in scenarios where subsequent data loads after the initial load involve fewer incremental updates to the dictionary. This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE. **NOTE:** Enabling this starts a new dictionary server to handle dictionary generation requests during data loading. Without this option, the input csv files have to be read twice: once for generating the dictionary and persisting it to the dictionary files, and a second time when the data load converts the input data into carbondata format. Enabling this option reads the input data only once, thereby reducing IO and hence the overall data loading time. If concurrent data loading needs to be supported, consider tuning ***dictionary.worker.threads***. The port on which the dictionary server listens can be configured using the configuration ***carbon.dictionary.server.port***. |
+| carbon.options.bad.record.path | Specifies the HDFS path where bad records need to be stored. |
+| carbon.custom.block.distribution | Specifies whether to use the Spark or Carbon block distribution feature. **NOTE:** Refer to [Query Configuration](#query-configuration)#carbon.custom.block.distribution for more details on the CarbonData scheduler. |
+| enable.unsafe.sort | Specifies whether to use unsafe
sort during data loading. Unsafe sort reduces the garbage collection during
data load operation, resulting in better performance. |
+| carbon.options.dateformat | Specifies the date format of the date columns in the data being loaded. |
+| carbon.options.timestampformat | Specifies the timestamp format of the timestamp columns in the data being loaded. |
+| carbon.options.sort.scope | Specifies the sort scope to be used for the current data load. **NOTE:** Refer to [Data Loading Configuration](#data-loading-configuration)#carbon.sort.scope for detailed information. |
+| carbon.options.global.sort.partitions | Specifies the number of partitions to be used while sorting the data for the current load when the sort scope is global sort. |
+| carbon.options.serialization.null.format | Default NULL value representation in the data being loaded. **NOTE:** Refer to [Data Loading Configuration](#data-loading-configuration)#carbon.options.serialization.null.format for detailed information. |
+| carbon.query.directQueryOnDataMap.enabled | Specifies whether a datamap can be queried directly. This is useful for debugging purposes. **NOTE:** Refer to [Query Configuration](#query-configuration)#carbon.query.validate.directqueryondatamap for detailed information. |
**Examples:**
@@ -244,3 +235,16 @@ RESET
* Success will be recorded in the driver log.
* Failure will be displayed in the UI.
+
+
+<script>
+$(function() {
+ // Show selected style on nav item
+ $('.b-nav__docs').addClass('selected');
+
+ // Display docs subnav items
+ if (!$('.b-nav__docs').parent().hasClass('nav__item__with__subs--expanded'))
{
+ $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+ }
+});
+</script>
\ No newline at end of file