This is an automated email from the ASF dual-hosted git repository.

techdocsmith pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 42fa5c26e1 remove arbitrary granularity spec from docs (#12460)
42fa5c26e1 is described below

commit 42fa5c26e1f4e09692c9752ccba6126a9ae96905
Author: Charles Smith <[email protected]>
AuthorDate: Thu Apr 28 16:36:54 2022 -0700

    remove arbitrary granularity spec from docs (#12460)
    
    * remove arbitrary granularity spec from docs
    
    * Update docs/ingestion/ingestion-spec.md
    
    Co-authored-by: Victoria Lim <[email protected]>
    
    Co-authored-by: Victoria Lim <[email protected]>
---
 docs/ingestion/ingestion-spec.md          | 6 +++---
 docs/tutorials/tutorial-ingestion-spec.md | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/ingestion/ingestion-spec.md b/docs/ingestion/ingestion-spec.md
index 184ccb3dfb..c87fb6141c 100644
--- a/docs/ingestion/ingestion-spec.md
+++ b/docs/ingestion/ingestion-spec.md
@@ -298,11 +298,11 @@ A `granularitySpec` can have the following components:
 
 | Field | Description | Default |
 |-------|-------------|---------|
-| type | Either `uniform` or `arbitrary`. In most cases you want to use `uniform`.| `uniform` |
-| segmentGranularity | [Time chunking](../design/architecture.md#datasources-and-segments) granularity for this datasource. Multiple segments can be created per time chunk. For example, when set to `day`, the events of the same day fall into the same time chunk which can be optionally further partitioned into multiple segments based on other configurations and input size. Any [granularity](../querying/granularities.md) can be provided here. Note that all segments in the same time chunk s [...]
+| type |`uniform`| `uniform` |
+| segmentGranularity | [Time chunking](../design/architecture.md#datasources-and-segments) granularity for this datasource. Multiple segments can be created per time chunk. For example, when set to `day`, the events of the same day fall into the same time chunk which can be optionally further partitioned into multiple segments based on other configurations and input size. Any [granularity](../querying/granularities.md) can be provided here. Note that all segments in the same time chunk s [...]
 | queryGranularity | The resolution of timestamp storage within each segment. This must be equal to, or finer, than `segmentGranularity`. This will be the finest granularity that you can query at and still receive sensible results, but note that you can still query at anything coarser than this granularity. E.g., a value of `minute` will mean that records will be stored at minutely granularity, and can be sensibly queried at any multiple of minutes (including minutely, 5-minutely, hourly [...]
 | rollup | Whether to use ingestion-time [rollup](./rollup.md) or not. Note that rollup is still effective even when `queryGranularity` is set to `none`. Your data will be rolled up if they have the exactly same timestamp. | `true` |
-| intervals | A list of intervals defining time chunks for segments. Specify interval values using ISO8601 format. For example, `["2021-12-06T21:27:10+00:00/2021-12-07T00:00:00+00:00"]`. If you omit the time, the time defaults to "00:00:00".<br><br>If `type` is set to `uniform`, Druid breaks the list up and rounds-off the list values based on the `segmentGranularity`. If `type` is set to `arbitrary`, Druid uses the list as-is.<br><br>If `null` or not provided, batch ingestion tasks gener [...]
+| intervals | A list of intervals defining time chunks for segments. Specify interval values using ISO8601 format. For example, `["2021-12-06T21:27:10+00:00/2021-12-07T00:00:00+00:00"]`. If you omit the time, the time defaults to "00:00:00".<br><br>Druid breaks the list up and rounds off the list values based on the `segmentGranularity`.<br><br>If `null` or not provided, batch ingestion tasks generally determine which time chunks to output based on the timestamps found in the input data. [...]
 
 ### `transformSpec`
 
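The `intervals` row above says that, with a `uniform` spec, Druid rounds the listed interval endpoints to the `segmentGranularity`. As an illustrative sketch only (plain Python, not Druid code; `round_interval_to_day` is a hypothetical helper), here is what that alignment looks like for `day` granularity, using the example interval from the table:

```python
from datetime import datetime, timedelta

def round_interval_to_day(start: datetime, end: datetime):
    """Sketch of `uniform` rounding at `day` segmentGranularity:
    floor the start to midnight, ceil the end to the next midnight."""
    floor = start.replace(hour=0, minute=0, second=0, microsecond=0)
    ceil = end.replace(hour=0, minute=0, second=0, microsecond=0)
    if ceil < end:  # end was not already on a day boundary
        ceil += timedelta(days=1)
    return floor, ceil

# The docs' example interval: 2021-12-06T21:27:10 / 2021-12-07T00:00:00
start = datetime(2021, 12, 6, 21, 27, 10)
end = datetime(2021, 12, 7, 0, 0, 0)
print(round_interval_to_day(start, end))
# -> (2021-12-06T00:00:00, 2021-12-07T00:00:00): the full day's time chunk
```

So a partial-day interval expands to cover the whole day chunk, which is why the removed `arbitrary` wording (use the list as-is) no longer appears in the table.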
diff --git a/docs/tutorials/tutorial-ingestion-spec.md b/docs/tutorials/tutorial-ingestion-spec.md
index 821c376f63..13a0b8700d 100644
--- a/docs/tutorials/tutorial-ingestion-spec.md
+++ b/docs/tutorials/tutorial-ingestion-spec.md
@@ -251,7 +251,7 @@ If we were not using rollup, all columns would be specified in the `dimensionsSp
 At this point, we are done defining the `parser` and `metricsSpec` within the `dataSchema` and we are almost done writing the ingestion spec.
 
 There are some additional properties we need to set in the `granularitySpec`:
-* Type of granularitySpec: `uniform` and `arbitrary` are the two supported types. For this tutorial, we will use a `uniform` granularity spec, where all segments have uniform interval sizes (for example, all segments cover an hour's worth of data).
+* Type of granularitySpec: the `uniform` granularity spec defines segments with uniform interval sizes. For example, all segments cover an hour's worth of data.
 * The segment granularity: what size of time interval should a single segment contain data for? e.g., `DAY`, `WEEK`
 * The bucketing granularity of the timestamps in the time column (referred to as `queryGranularity`)
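
The granularitySpec properties documented in this commit can be sketched as a JSON fragment. This is an illustrative example only, not taken from the patch; the field names come from the table above, and the values (`day`, `minute`, the example interval) are placeholders drawn from the surrounding docs:

```json
{
  "granularitySpec": {
    "type": "uniform",
    "segmentGranularity": "day",
    "queryGranularity": "minute",
    "rollup": true,
    "intervals": ["2021-12-06T21:27:10+00:00/2021-12-07T00:00:00+00:00"]
  }
}
```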
 

