sthetland commented on a change in pull request #11053:
URL: https://github.com/apache/druid/pull/11053#discussion_r605285191



##########
File path: docs/ingestion/faq.md
##########
@@ -100,20 +91,22 @@ Or, if you use hadoop based ingestion, then you can use "dataSource" input spec
 
 See the [Update existing data](../ingestion/data-management.md#update) section of the data management page for more details.
 
-## How can I change the granularity of existing data in Druid?
+## How can I change the query granularity of existing data in Druid?
 
-In a lot of situations you may want to lower the granularity of older data. Example, any data older than 1 month has only hour level granularity but newer data has minute level granularity. This use case is same as re-indexing.
+In a lot of situations you may want coarser granularity for older data. For example, any data older than one month may have only hour-level granularity while newer data has minute-level granularity. This use case is the same as re-indexing.
 
 To do this use the [DruidInputSource](../ingestion/native-batch.md#druid-input-source) and run a [Parallel task](../ingestion/native-batch.md). The DruidInputSource will allow you to take in existing segments from Druid and aggregate them and feed them back into Druid. It will also allow you to filter the data in those segments while feeding it back in. This means if there are rows you want to delete, you can just filter them away during re-ingestion.
 Typically the above will be run as a batch job to say everyday feed in a chunk of data and aggregate it.
 Or, if you use hadoop based ingestion, then you can use "dataSource" input spec to do reindexing.
 
 See the [Update existing data](../ingestion/data-management.md#update) section of the data management page for more details.
 
+You can also change the query granularity using compaction. See [Query granularity handling](../ingestion/compaction.md#query-granularity-handling).
+
 ## Real-time ingestion seems to be stuck
 
 There are a few ways this can occur. Druid will throttle ingestion to prevent out of memory problems if the intermediate persists are taking too long or if hand-off is taking too long. If your process logs indicate certain columns are taking a very long time to build (for example, if your segment granularity is hourly, but creating a single column takes 30 minutes), you should re-evaluate your configuration or scale up your real-time ingestion.
 
 ## More information
 
-Getting data into Druid can definitely be difficult for first time users. Please don't hesitate to ask questions in our IRC channel or on our [google groups page](https://groups.google.com/forum/#!forum/druid-user).
+Data ingestion for Druid can be difficult for first time users. Please don't hesitate to ask questions the [Druid Forum](https://www.druidforum.org/).
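
To make the reindexing answer in the hunk above concrete, here is a minimal sketch of the approach it describes: a Parallel task whose `druid` input source reads existing segments, optionally filters out unwanted rows, and rolls the data up to hour-level query granularity. The datasource name `my_datasource`, the interval, the dimension and metric names, and the `is_spam` filter are hypothetical placeholders, not values from the PR:

```json
{
  "type": "index_parallel",
  "spec": {
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": {
        "type": "druid",
        "dataSource": "my_datasource",
        "interval": "2021-01-01/2021-02-01",
        "filter": {
          "type": "not",
          "field": { "type": "selector", "dimension": "is_spam", "value": "true" }
        }
      }
    },
    "dataSchema": {
      "dataSource": "my_datasource",
      "timestampSpec": { "column": "__time", "format": "millis" },
      "dimensionsSpec": { "dimensions": ["channel", "page"] },
      "metricsSpec": [ { "type": "longSum", "name": "edits", "fieldName": "edits" } ],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "day",
        "queryGranularity": "hour",
        "rollup": true
      }
    },
    "tuningConfig": { "type": "index_parallel" }
  }
}
```

The `queryGranularity` in `granularitySpec` is what coarsens the re-ingested rows; the optional `filter` on the input source is how rows you want to delete are dropped during re-ingestion.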
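
The compaction alternative added in the hunk can be sketched the same way. Assuming the same hypothetical datasource and interval, a compaction task sets the new query granularity in its `granularitySpec`:

```json
{
  "type": "compact",
  "dataSource": "my_datasource",
  "ioConfig": {
    "type": "compact",
    "inputSpec": { "type": "interval", "interval": "2021-01-01/2021-02-01" }
  },
  "granularitySpec": { "queryGranularity": "hour" }
}
```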

Review comment:
   ```suggestion
   Data ingestion for Druid can be difficult for first time users. Please don't hesitate to ask questions in the [Druid Forum](https://www.druidforum.org/).
   ```



