[CARBONDATA-1840]Updated configuration-parameters.md for V3 format

Updated configuration-parameters.md for V3 format

This closes #1883


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/f34ea5c7
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/f34ea5c7
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/f34ea5c7

Branch: refs/heads/branch-1.3
Commit: f34ea5c70b38ac6934d9203264de4626d22f68e4
Parents: cdff193
Author: vandana <[email protected]>
Authored: Tue Jan 30 15:12:34 2018 +0530
Committer: chenliang613 <[email protected]>
Committed: Thu Feb 1 11:03:29 2018 +0800

----------------------------------------------------------------------
 docs/configuration-parameters.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/f34ea5c7/docs/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index 522d222..fe207f2 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -35,7 +35,7 @@ This section provides the details of all the configurations required for the Car
 | carbon.storelocation | /user/hive/warehouse/carbon.store | Location where CarbonData will create the store, and write the data in its own format. NOTE: Store location should be in HDFS. |
 | carbon.ddl.base.hdfs.url | hdfs://hacluster/opt/data | This property is used to configure the HDFS relative path; the path configured in carbon.ddl.base.hdfs.url will be appended to the HDFS path configured in fs.defaultFS. If this path is configured, then the user need not pass the complete path while loading data. For example: if the absolute path of the CSV file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" will come from the property fs.defaultFS and the user can configure /data/cnbc/ as carbon.ddl.base.hdfs.url. Then, while loading data, the user can specify the CSV path as /2016/xyz.csv. |
 | carbon.badRecords.location | /opt/Carbon/Spark/badrecords | Path where the bad records are stored. |
-| carbon.data.file.version | 2 | If this parameter value is set to 1, then CarbonData will support the data load which is in old format(0.x version). If the value is set to 2(1.x onwards version), then CarbonData will support the data load of new format only.|
+| carbon.data.file.version | 3 | If this parameter value is set to 1, CarbonData supports loading data in the old format (0.x version). If the value is set to 2 (1.x onwards), CarbonData supports loading data in the new format only. The default value is 3 (the latest version is set as the default); the V3 format improves query performance by roughly 20% to 50%. To configure the V3 format explicitly, add carbon.data.file.version = V3 in the carbon.properties file. |
 | carbon.streaming.auto.handoff.enabled | true | If this parameter value is set to true, the auto trigger handoff function will be enabled.|
 | carbon.streaming.segment.max.size | 1024000000 | This parameter defines the maximum size of the streaming segment. Setting this parameter to an appropriate value will avoid impacting the streaming ingestion. The value is in bytes.|
 
@@ -60,6 +60,7 @@ This section provides the details of all the configurations required for CarbonD
 | carbon.options.is.empty.data.bad.record | false | If false, then empty ("" or '' or ,,) data will not be considered as a bad record, and vice versa. | |
 | carbon.options.bad.record.path |  | Specifies the HDFS path where bad records are stored. By default the value is Null. This path must be configured by the user if the bad record logger is enabled or the bad record action is redirect. | |
 | carbon.enable.vector.reader | true | This parameter increases the performance of select queries as it fetches a columnar batch of 4*1024 rows instead of fetching data row by row. | |
+| carbon.blockletgroup.size.in.mb | 64 MB | Data is read as a group of blocklets, called a blocklet group. This parameter specifies the size of the blocklet group. A higher value results in better sequential IO access. The minimum value is 16 MB; any value less than 16 MB will be reset to the default value (64 MB). |  |
 
 * **Compaction Configuration**
   

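Taken together, the two documented parameters above amount to a couple of lines in carbon.properties. A minimal sketch of what the relevant entries might look like after this change (values are the documented defaults from the table; the exact location of carbon.properties varies by deployment and is an assumption here):

```properties
# Explicitly select the V3 data file format (the default from this change onwards);
# the doc cites a ~20% to 50% query performance improvement.
carbon.data.file.version = V3

# Blocklet group size in MB; minimum is 16, and values below 16 reset to the 64 default.
carbon.blockletgroup.size.in.mb = 64
```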