This is an automated email from the ASF dual-hosted git repository.

lakshsingla pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new d6c73ca6e5 Cleanup the documentation for deep storage
d6c73ca6e5 is described below

commit d6c73ca6e5f6d9c97d590f0936f2d1e491c78001
Author: Laksh Singla <[email protected]>
AuthorDate: Fri Aug 4 10:20:01 2023 +0000

    Cleanup the documentation for deep storage
---
 docs/multi-stage-query/reference.md | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/docs/multi-stage-query/reference.md b/docs/multi-stage-query/reference.md
index 5389da48af..592aed9a2a 100644
--- a/docs/multi-stage-query/reference.md
+++ b/docs/multi-stage-query/reference.md
@@ -361,24 +361,6 @@ The following common service properties control how durable storage behaves:
 |`druid.msq.intermediate.storage.maxRetry` | 10 | Optional. Defines the max number times to attempt S3 API calls to avoid failures due to transient errors. |
 |`druid.msq.intermediate.storage.chunkSize` | 100MiB | Optional. Defines the size of each chunk to temporarily store in `druid.msq.intermediate.storage.tempDir`. The chunk size must be between 5 MiB and 5 GiB. A large chunk size reduces the API calls made to the durable storage, however it requires more disk space to store the temporary chunks. Druid uses a default of 100MiB if the value is not provided.|
 
-The following common service properties control how durable storage behaves:
-
-|Parameter          |Default                                 | Description          |
-|-------------------|----------------------------------------|----------------------|
-|`druid.msq.intermediate.storage.enable` | true | Required. Whether to enable durable storage for the cluster.|
-|`druid.msq.intermediate.storage.type` | `s3` if your deep storage is S3 | Required. The type of storage to use. Currently only `s3` is supported.  |
-|`druid.msq.intermediate.storage.chunkSize` | 100MiB | Optional. Defines the size of each chunk to temporarily store in `druid.msq.intermediate.storage.tempDir`. The chunk size must be between 5 MiB and 5 GiB. A large chunk size reduces the API calls made to the durable storage, however it requires more disk space to store the temporary chunks. Druid uses a default of 100MiB if the value is not provided.|
-|`druid.msq.intermediate.storage.maxRetry` | 10 | Optional. Defines the max number times to attempt S3 API calls to avoid failures due to transient errors. |
-|`druid.msq.intermediate.storage.bucket` | n/a | The bucket in S3 where you want to store intermediate files.  |
-|`druid.msq.intermediate.storage.prefix` | n/a | S3 prefix to store intermediate stage results. Provide a unique value for the prefix. Don't share the same prefix between clusters. If the location includes other files or directories, then they will get cleaned up as well.  |
-|`druid.msq.intermediate.storage.tempDir`| n/a | Required. Directory path on the local disk to temporarily store intermediate stage results.  |
-
-In addition to the common service properties, there are certain properties that you configure on the Overlord specifically to clean up intermediate files:
-
-|Parameter          |Default                                 | Description          |
-|-------------------|----------------------------------------|----------------------|
-|`druid.msq.intermediate.storage.cleaner.enabled`| false | Optional. Whether durable storage cleaner should be enabled for the cluster.  |
-|`druid.msq.intermediate.storage.cleaner.delaySeconds`| 86400 | Optional. The delay (in seconds) after the last run post which the durable storage cleaner would clean the outputs.  |
 
 In addition to the common service properties, there are certain properties that you configure on the Overlord specifically to clean up intermediate files:
 
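For readers skimming this change: the duplicated tables above were removed, and the surviving documentation maps to cluster configuration roughly like the following sketch of a `common.runtime.properties` fragment. The bucket, prefix, and tempDir values are hypothetical placeholders; only the property names and defaults come from the tables in the diff.

```properties
# Durable storage for MSQ intermediate results (sketch; values are placeholders)
druid.msq.intermediate.storage.enable=true
druid.msq.intermediate.storage.type=s3
druid.msq.intermediate.storage.bucket=example-bucket
druid.msq.intermediate.storage.prefix=example-prefix
druid.msq.intermediate.storage.tempDir=/path/to/local/tempdir

# Optional tuning; defaults shown are the ones listed in the table above
druid.msq.intermediate.storage.chunkSize=100MiB
druid.msq.intermediate.storage.maxRetry=10
```

The cleaner properties (`druid.msq.intermediate.storage.cleaner.enabled`, `druid.msq.intermediate.storage.cleaner.delaySeconds`) are configured on the Overlord specifically, per the text retained in the diff.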

