vtlim commented on code in PR #12712:
URL: https://github.com/apache/druid/pull/12712#discussion_r911472495


##########
docs/querying/granularities.md:
##########
@@ -41,7 +41,7 @@ Simple granularities are specified as a string and bucket timestamps by their UT
 Supported granularity strings are: `all`, `none`, `second`, `minute`, `fifteen_minute`, `thirty_minute`, `hour`, `day`, `week`, `month`, `quarter` and `year`.
 
 * `all` buckets everything into a single bucket
-* `none` does not bucket data (it actually uses the granularity of the index - minimum here is `none` which means millisecond granularity). Using `none` in a [TimeseriesQuery](../querying/timeseriesquery.md) is currently not recommended (the system will try to generate 0 values for all milliseconds that didn’t exist, which is often a lot).
+* `none` actually does bucket data - to the granularity of the internal index - which means millisecond granularity. `none` can be thought of as `millisecond`. Using `none` in a [TimeseriesQuery](../querying/timeseriesquery.md) is currently not recommended (the system will try to generate 0 values for all milliseconds that didn’t exist, which is often a lot).

Review Comment:
   my two cents
   ```suggestion
   * `none` is a slight misnomer since it buckets data to millisecond granularity—the granularity of the internal index. You can think of `none` as equivalent to `millisecond`. Do not use `none` in a [timeseries query](../querying/timeseriesquery.md). Druid fills empty interior time buckets with zeroes, meaning the output will contain results for every single millisecond in the requested interval.
   ```
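
   For a sense of why that zero-filling is "often a lot", here is a quick back-of-the-envelope calculation (plain Python, not Druid code; the helper name is made up for illustration) counting how many millisecond buckets a modest query interval contains:

   ```python
   from datetime import timedelta

   # Hypothetical helper: number of result rows a timeseries query must
   # zero-fill when every millisecond in the interval is its own bucket.
   def millisecond_buckets(interval: timedelta) -> int:
       return int(interval.total_seconds() * 1000)

   print(millisecond_buckets(timedelta(days=1)))    # 86,400,000 rows for one day
   print(millisecond_buckets(timedelta(weeks=1)))   # 604,800,000 rows for one week
   ```

   Even a single-day interval yields tens of millions of mostly-zero rows, which is why the docs steer readers away from `none` in timeseries queries.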



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

