techdocsmith commented on code in PR #12712:
URL: https://github.com/apache/druid/pull/12712#discussion_r910230238


##########
docs/querying/granularities.md:
##########
@@ -41,7 +41,7 @@ Simple granularities are specified as a string and bucket timestamps by their UT
 Supported granularity strings are: `all`, `none`, `second`, `minute`, `fifteen_minute`, `thirty_minute`, `hour`, `day`, `week`, `month`, `quarter` and `year`.
 
 * `all` buckets everything into a single bucket
-* `none` does not bucket data (it actually uses the granularity of the index - minimum here is `none` which means millisecond granularity). Using `none` in a [TimeseriesQuery](../querying/timeseriesquery.md) is currently not recommended (the system will try to generate 0 values for all milliseconds that didn’t exist, which is often a lot).
+* `none` actually does bucket data - to the granularity of the internal index - which means millisecond granularity. `none` can be thought of as `millisecond`. Using `none` in a [TimeseriesQuery](../querying/timeseriesquery.md) is currently not recommended (the system will try to generate 0 values for all milliseconds that didn’t exist, which is often a lot).

Review Comment:
   ```suggestion
   * `none` buckets data to the granularity of the internal index, which is millisecond granularity. You can think of `none` as equivalent to `millisecond`. Using `none` in a [TimeseriesQuery](../querying/timeseriesquery.md) is currently not recommended because the system will try to generate 0 values for all milliseconds that didn’t exist, which can be a lot.
   ```
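
   For readers unfamiliar with where this `granularity` field sits, here is a minimal sketch of a native Druid timeseries query body. The datasource name, interval, and aggregator are hypothetical placeholders, not taken from the docs page under review:

   ```python
   import json

   # Hedged sketch of a Druid timeseries query body. "my_datasource" and the
   # interval are illustrative placeholders. With "granularity": "none", Druid
   # would bucket results at millisecond granularity, which is why the doc text
   # above advises against it for timeseries queries.
   query = {
       "queryType": "timeseries",
       "dataSource": "my_datasource",
       "granularity": "day",  # a coarse granularity such as "day" is typical
       "intervals": ["2023-01-01/2023-01-02"],
       "aggregations": [{"type": "count", "name": "rows"}],
   }

   print(json.dumps(query, indent=2))
   ```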



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
