Github user sraghunandan commented on a diff in the pull request:
https://github.com/apache/carbondata/pull/2576#discussion_r207071907
--- Diff: docs/configuration-parameters.md ---
@@ -106,7 +106,10 @@ This section provides the details of all the configurations required for CarbonD
|---------------------------------------------|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| carbon.sort.file.write.buffer.size | 16384 | File write buffer size used during sorting. Minimum allowed buffer size is 10240 bytes and maximum allowed buffer size is 10485760 bytes. |
| carbon.lock.type | LOCALLOCK | This configuration specifies the type of lock to be acquired during concurrent operations on a table. The following lock implementations are available: - LOCALLOCK: Lock is created on the local file system as a file. This lock is useful when only one Spark driver (thrift server) runs on a machine and no other CarbonData Spark application is launched concurrently. - HDFSLOCK: Lock is created on the HDFS file system as a file. This lock is useful when multiple CarbonData Spark applications are launched, no ZooKeeper is running on the cluster, and HDFS supports file-based locking. |
-| carbon.lock.path | TABLEPATH | This configuration specifies the path where lock files have to be created. Recommended to configure zookeeper lock type or configure HDFS lock path(to this property) in case of S3 file system as locking is not feasible on S3. |
+| carbon.lock.path | TABLEPATH | This configuration specifies the path where lock files have to
+be created. Recommended to configure an HDFS lock path (to this property) in case of an S3 file system,
+as locking is not feasible on S3.
+**Note:** If this property is not set to an HDFS location for an S3 store, then there is a possibility of data corruption. |
--- End diff ---
Can add a brief sentence as to why corruption might happen.
---
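
For reference, the setup the diff recommends can be sketched as a minimal `carbon.properties` fragment; the HDFS path below is an illustrative assumption, not a value taken from the diff:

```properties
# Use HDFS-based locking, since file-based locking is not feasible on S3
carbon.lock.type=HDFSLOCK

# Place lock files on HDFS instead of the (S3) table path;
# this path is an example and must point at a real HDFS location
carbon.lock.path=hdfs://namenode:8020/carbon/locks
```

With `carbon.lock.path` left at its default (the table path on S3), concurrent operations could fail to acquire a reliable lock, which is why the diff warns about possible data corruption.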