maytasm commented on a change in pull request #10935:
URL: https://github.com/apache/druid/pull/10935#discussion_r597879612



##########
File path: docs/configuration/index.md
##########
@@ -886,6 +886,17 @@ The below is a list of the supported configurations for 
auto compaction.
 |`chatHandlerTimeout`|Timeout for reporting the pushed segments in worker 
tasks.|no (default = PT10S)|
 |`chatHandlerNumRetries`|Retries for reporting the pushed segments in worker 
tasks.|no (default = 5)|
 
+###### Automatic compaction TuningConfig

Review comment:
       Should this say Automatic compaction Granularity Spec?
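       (For context, a minimal sketch of what the auto-compaction granularity spec being documented here might look like in the compaction config; the `dataSource` value is just a placeholder:)

```json
{
  "dataSource": "wikipedia",
  "granularitySpec": {
    "segmentGranularity": "day"
  }
}
```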

##########
File path: docs/ingestion/compaction.md
##########
@@ -22,36 +22,45 @@ description: "Defines compaction and automatic compaction 
(auto-compaction or au
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
-Query performance in Apache Druid depends on optimally sized segments. 
Compaction is one strategy you can use to optimize segment size for your Druid 
database. Compaction tasks read an existing set of segments for a given time 
interval and combine the data into a new "compacted" set of segments. The 
compacted segments are generally larger, but there are fewer of them. Here 
compaction increases performance because fewer segments require less the 
per-segment processing and the memory overhead for ingestion and for querying 
paths.
+Query performance in Apache Druid depends on optimally sized segments. 
Compaction is one strategy you can use to optimize segment size for your Druid 
database. Compaction tasks read an existing set of segments for a given time 
interval and combine the data into a new "compacted" set of segments. In some 
cases the compacted segments are larger, but there are fewer of them. In other 
cases the compacted segments may be smaller. Compaction tends to increase 
performance because optimized segments require less per-segment processing and 
less memory overhead for ingestion and for querying paths.
 
-As a general strategy, compaction is effective when you have data arriving out 
of chronological order resulting in lots of small segments. This often happens, 
for example, if you are appending data using `appendToExisting` for [native 
batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data 
with each ingestion task, you don't need to use compaction.
+## Compaction strategies
+There are several cases to consider compaction for segment optimization:
+- With streaming ingestion, data can arrive out of chronological order, creating lots of small segments.
+- When you append data using `appendToExisting` for [native batch](native-batch.md) ingestion, creating suboptimal segments.
+- When you use `index_parallel` for parallel batch indexing and the parallel 
ingestion tasks create many small segments.
+- When a misconfigured ingestion task creates oversized segments.
 
-In some cases you can use compaction to reduce segment size. For example, if a 
misconfigured ingestion task creates oversized segments, you can create a 
compaction task to split the segment files into smaller, more optimally sized 
ones.
+By default, compaction does not modify the underlying data of the segments. 
However, there are cases when you may want to modify data during compaction to 
improve query performance:
+- If, after ingestion, you realize realize that data for the time interval is 
sparse, you can use compaction to increase the segment granularity.
+- Over time you don't need fine-grained granularity for older data, so you use compaction to change older segments to a coarser query granularity. For example, from `minute` to `hour`, or `hour` to `day`. You cannot go from coarser granularity to finer granularity.

Review comment:
       Should mention that the reason for this is to enable more rollup, which results in less storage space.
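       (For illustration, a sketch of how a coarser query granularity could be expressed in a manual compaction task's `granularitySpec`; the `day` and `hour` values are examples only:)

```json
{
  "granularitySpec": {
    "segmentGranularity": "day",
    "queryGranularity": "hour"
  }
}
```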

##########
File path: docs/configuration/index.md
##########
@@ -843,7 +843,7 @@ A description of the compaction config is:
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per 
compaction task. Since a time chunk must be processed in its entirety, if the 
segments for a particular time chunk have a total size in bytes greater than 
this parameter, compaction will not run for that time chunk. Because each 
compaction task runs with a single thread, setting this value too far above 
1–2GB will result in compaction tasks taking an excessive amount of time.|no 
(default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
 |`skipOffsetFromLatest`|The offset for searching segments to be compacted in 
[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly 
recommended to set for realtime dataSources. See [Data handling with 
compaction](../ingestion/compaction.md#data-handling-with-compaction)|no 
(default = "P1D")|
-|`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task 
TuningConfig](#compaction-tuningconfig).|no|
+|`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task 
TuningConfig](#automatic-compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction 
tasks.|no|
 |`granularitySpec`|Custom `granularitySpec` to describe the 
`segmentGranularity` for the compacted segments.|No|

Review comment:
       Should this link to the newly added Automatic compaction granularity 
spec section?
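       (If so, perhaps something like `[Automatic compaction granularitySpec](#automatic-compaction-granularityspec)`; the anchor name here is a guess at whatever the new heading ends up being.)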

##########
File path: docs/ingestion/compaction.md
##########
@@ -22,36 +22,45 @@ description: "Defines compaction and automatic compaction 
(auto-compaction or au
   ~ specific language governing permissions and limitations
   ~ under the License.
   -->
-Query performance in Apache Druid depends on optimally sized segments. 
Compaction is one strategy you can use to optimize segment size for your Druid 
database. Compaction tasks read an existing set of segments for a given time 
interval and combine the data into a new "compacted" set of segments. The 
compacted segments are generally larger, but there are fewer of them. Here 
compaction increases performance because fewer segments require less the 
per-segment processing and the memory overhead for ingestion and for querying 
paths.
+Query performance in Apache Druid depends on optimally sized segments. 
Compaction is one strategy you can use to optimize segment size for your Druid 
database. Compaction tasks read an existing set of segments for a given time 
interval and combine the data into a new "compacted" set of segments. In some 
cases the compacted segments are larger, but there are fewer of them. In other 
cases the compacted segments may be smaller. Compaction tends to increase 
performance because optimized segments require less per-segment processing and 
less memory overhead for ingestion and for querying paths.
 
-As a general strategy, compaction is effective when you have data arriving out 
of chronological order resulting in lots of small segments. This often happens, 
for example, if you are appending data using `appendToExisting` for [native 
batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data 
with each ingestion task, you don't need to use compaction.
+## Compaction strategies
+There are several cases to consider compaction for segment optimization:
+- With streaming ingestion, data can arrive out of chronological order, creating lots of small segments.
+- When you append data using `appendToExisting` for [native batch](native-batch.md) ingestion, creating suboptimal segments.
+- When you use `index_parallel` for parallel batch indexing and the parallel 
ingestion tasks create many small segments.
+- When a misconfigured ingestion task creates oversized segments.
 
-In some cases you can use compaction to reduce segment size. For example, if a 
misconfigured ingestion task creates oversized segments, you can create a 
compaction task to split the segment files into smaller, more optimally sized 
ones.
+By default, compaction does not modify the underlying data of the segments. 
However, there are cases when you may want to modify data during compaction to 
improve query performance:
+- If, after ingestion, you realize realize that data for the time interval is 
sparse, you can use compaction to increase the segment granularity.

Review comment:
       Typo: "realize" appears twice.

##########
File path: docs/ingestion/compaction.md
##########
@@ -0,0 +1,210 @@
+---
+id: compaction
+title: "Compaction"
+description: "Defines compaction and automatic compaction (auto-compaction or 
autocompaction) as a strategy for segment optimization. Use cases for 
compaction. Describes compaction task configuration."
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+Query performance in Apache Druid depends on optimally sized segments. 
Compaction is one strategy you can use to optimize segment size for your Druid 
database. Compaction tasks read an existing set of segments for a given time 
interval and combine the data into a new "compacted" set of segments. The 
compacted segments are generally larger, but there are fewer of them. Here 
compaction increases performance because fewer segments require less the 
per-segment processing and the memory overhead for ingestion and for querying 
paths.
+
+As a general strategy, compaction is effective when you have data arriving out 
of chronological order resulting in lots of small segments. This often happens, 
for example, if you are appending data using `appendToExisting` for [native 
batch](./native_batch.md) ingestion. Conversely, if you are rewriting your data 
with each ingestion task, you don't need to use compaction.
+
+In some cases you can use compaction to reduce segment size. For example, if a 
misconfigured ingestion task creates oversized segments, you can create a 
compaction task to split the segment files into smaller, more optimally sized 
ones.
+
+See [Segment optimization](../operations/segment-optimization.md) for guidance 
to determine if compaction will help in your environment.
+
+
+## Types of segment compaction
+You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using a segment search policy, the coordinator periodically identifies segments for compaction, starting with the newest segments and moving toward the oldest. When segments can benefit from compaction, the coordinator automatically submits a compaction task.
+
+Automatic compaction works in most use cases and should be your first option. 
To learn more about automatic compaction, see [Compacting 
Segments](../design/coordinator.md#compacting-segments).
+
+In cases where you require more control over compaction, you can manually 
submit compaction tasks. For example:
+- Automatic compaction is too slow.
+- You want to force compaction for a specific time range.
+- You want to compact data out of chronological order.
+
+See [Setting up a manual compaction task](#setting-up-manual-compaction) for 
more about manual compaction tasks.
+
+
+## Data handling with compaction
+During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. Druid locks the segments for the time interval being compacted to ensure data consistency.
+
+If an ingestion task needs to write data to a segment for a time interval 
locked for compaction, the ingestion task supersedes the compaction task and 
the compaction task fails without finishing. For manual compaction tasks you 
can adjust the input spec interval to avoid conflicts between ingestion and 
compaction. For automatic compaction, you can set the `skipOffsetFromLatest` 
key to adjust the auto compaction starting point from the current time to
reduce the chance of conflicts between ingestion and compaction. See 
[Compaction dynamic 
configuration](../configuration/index.md#compaction-dynamic-configuration) for 
more information.
+
+### Segment granularity handling
+
+Unless you modify the segment granularity in the [granularity spec](#compaction-granularity-spec), Druid attempts to retain the granularity for the compacted segments. When segments have different segment granularities with no overlap in interval, Druid creates a separate compaction task for each to retain the segment granularity in the compacted segment. If segments have different segment granularities before compaction but there is some overlap in interval, Druid attempts to find the start and end of the overlapping interval and uses the closest segment granularity level for the compacted segment.
+
+### Query granularity handling
+
+Unless you modify the query granularity in the [granularity 
spec](#compaction-granularity-spec), Druid retains the query granularity for 
the compacted segments. If segments have different query granularities before 
compaction, Druid chooses the finest level of granularity for the resulting 
compacted segment. For example, if a compaction task combines two segments, one
with day query granularity and one with minute query granularity, the resulting 
segment uses minute query granularity.
+
+> In Apache Druid 0.21.0 and prior, Druid sets the granularity for compacted 
segments to the default granularity of `NONE` regardless of the query 
granularity of the original segments.
+
+If you configure query granularity in compaction to go from a finer 
granularity like month to a coarser query granularity like year, then Druid 
overshadows the original segment with coarser granularity. Because the new 
segments have a coarser granularity, running a kill task to remove the 
overshadowed segments for those intervals will cause you to permanently lose 
the finer granularity data.
+
+### Dimension handling
+Apache Druid supports schema changes. Therefore, dimensions can be different 
across segments even if they are a part of the same data source. See [Different 
schemas among 
segments](../design/segments.md#different-schemas-among-segments). If the input 
segments have different dimensions, the resulting compacted segment includes 
all dimensions of the input segments. 
+
+Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of recent segments precede those of old segments in terms of data types and the ordering because more recent segments are more likely to have the preferred order and data types.
+
+If you want to use your own ordering and types, you can specify a custom 
`dimensionsSpec` in the compaction task spec.
+
+### Rollup
+Druid only rolls up the output segment when `rollup` is set for all input 
segments. See [Roll-up](../ingestion/index.md#rollup) for more details.
+You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
+
+## Setting up manual compaction
+
+To perform a manual compaction, you submit a compaction task. Compaction tasks 
merge all segments for the defined interval according to the following syntax:
+
+```json

Review comment:
       Can you add `granularitySpec` to this JSON example?
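       Something along these lines would work (a sketch only; the `dataSource` and interval values are placeholders):

```json
{
  "type": "compact",
  "dataSource": "wikipedia",
  "ioConfig": {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2020-01-01/2021-01-01"
    }
  },
  "granularitySpec": {
    "segmentGranularity": "day",
    "queryGranularity": "hour"
  }
}
```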

##########
File path: docs/ingestion/compaction.md
##########
@@ -62,16 +71,12 @@ Unless you modify the query granularity in the [granularity 
spec](#compaction-gr
 If you configure query granularity in compaction to go from a finer 
granularity like month to a coarser query granularity like year, then Druid 
overshadows the original segment with coarser granularity. Because the new 
segments have a coarser granularity, running a kill task to remove the 
overshadowed segments for those intervals will cause you to permanently lose 
the finer granularity data.
 
 ### Dimension handling
-Apache Druid supports schema changes. Therefore, dimensions can be different 
across segments even if they are a part of the same data source. See [Different 
schemas among 
segments](../design/segments.md#different-schemas-among-segments). If the input 
segments have different dimensions, the resulting compacted segment includes 
all dimensions of the input segments. 
+Apache Druid supports schema changes. Therefore, dimensions can be different 
across segments even if they are a part of the same data source. See [Different 
schemas among 
segments](../design/segments.md#different-schemas-among-segments). If the input 
segments have different dimensions, the resulting compacted segment include all 
dimensions of the input segments. 
 
 Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of recent segments precede those of old segments in terms of data types and the ordering because more recent segments are more likely to have the preferred order and data types.
 
 If you want to use your own ordering and types, you can specify a custom 
`dimensionsSpec` in the compaction task spec.
 
-### Rollup

Review comment:
       Why is this removed?

##########
File path: docs/ingestion/compaction.md
##########
@@ -62,16 +71,12 @@ Unless you modify the query granularity in the [granularity 
spec](#compaction-gr
 If you configure query granularity in compaction to go from a finer 
granularity like month to a coarser query granularity like year, then Druid 
overshadows the original segment with coarser granularity. Because the new 
segments have a coarser granularity, running a kill task to remove the 
overshadowed segments for those intervals will cause you to permanently lose 
the finer granularity data.
 
 ### Dimension handling
-Apache Druid supports schema changes. Therefore, dimensions can be different 
across segments even if they are a part of the same data source. See [Different 
schemas among 
segments](../design/segments.md#different-schemas-among-segments). If the input 
segments have different dimensions, the resulting compacted segment includes 
all dimensions of the input segments. 
+Apache Druid supports schema changes. Therefore, dimensions can be different 
across segments even if they are a part of the same data source. See [Different 
schemas among 
segments](../design/segments.md#different-schemas-among-segments). If the input 
segments have different dimensions, the resulting compacted segment include all 
dimensions of the input segments. 
 
 Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. The dimensions of recent segments precede those of old segments in terms of data types and the ordering because more recent segments are more likely to have the preferred order and data types.
 
 If you want to use your own ordering and types, you can specify a custom 
`dimensionsSpec` in the compaction task spec.
 
-### Rollup

Review comment:
       This is still true. What I meant to say is that compaction cannot change the `rollup` setting, but the `rollup` setting on the original segments is important and determines how compaction works.
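       (Related: to check the `rollup` setting on existing segments, a segment metadata query with the `rollup` analysis type should work; the `dataSource` and interval below are placeholders:)

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "wikipedia",
  "intervals": ["2020-01-01/2021-01-01"],
  "analysisTypes": ["rollup"]
}
```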




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


