abhishekrb19 commented on code in PR #16584:
URL: https://github.com/apache/druid/pull/16584#discussion_r1633942024


##########
docs/development/modules.md:
##########
@@ -115,17 +114,17 @@ It's recommended to use batch ingestion tasks to validate 
your implementation.
 The segment will be automatically rolled up to Historical node after ~20 
seconds.
 In this way, you can validate both push (at realtime process) and pull (at 
Historical process) segments.
 
-* DataSegmentPusher
+#### DataSegmentPusher
 
 Wherever your data storage (cloud storage service, distributed file system, 
etc.) is, you should be able to see one new file: `index.zip` 
(`partitionNum_index.zip` for HDFS data storage) after your ingestion task ends.
 
-* DataSegmentPuller
+#### DataSegmentPuller

Review Comment:
   The extension point is `URIDataPuller`; `DataSegmentPuller` was removed a 
while ago - https://github.com/apache/druid/pull/5461/files
   ```suggestion
   #### URIDataPuller
   ```



##########
docs/development/modules.md:
##########
@@ -115,17 +114,17 @@ It's recommended to use batch ingestion tasks to validate 
your implementation.
 The segment will be automatically rolled up to Historical node after ~20 
seconds.
 In this way, you can validate both push (at realtime process) and pull (at 
Historical process) segments.
 
-* DataSegmentPusher
+#### DataSegmentPusher
 
 Wherever your data storage (cloud storage service, distributed file system, 
etc.) is, you should be able to see one new file: `index.zip` 
(`partitionNum_index.zip` for HDFS data storage) after your ingestion task ends.
 
-* DataSegmentPuller
+#### DataSegmentPuller
 
 After ~20 secs your ingestion task ends, you should be able to see your 
Historical process trying to load the new segment.

Review Comment:
   The default coordinator cycle that runs the historical duties is 1 minute:
   ```suggestion
   ~1 minute after your ingestion task ends, you should be able to see your 
Historical process trying to load the new segment.
   ```
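   For context, the cycle comes from the Coordinator's run period. A sketch of the relevant runtime property, with what I believe is the default (worth double-checking against the current configuration reference):
   ```properties
   # Coordinator duty cycle; segment load/drop decisions run on this period.
   druid.coordinator.period=PT60S
   ```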



##########
docs/development/modules.md:
##########
@@ -115,17 +114,17 @@ It's recommended to use batch ingestion tasks to validate 
your implementation.
 The segment will be automatically rolled up to Historical node after ~20 
seconds.
 In this way, you can validate both push (at realtime process) and pull (at 
Historical process) segments.
 
-* DataSegmentPusher
+#### DataSegmentPusher
 
 Wherever your data storage (cloud storage service, distributed file system, 
etc.) is, you should be able to see one new file: `index.zip` 
(`partitionNum_index.zip` for HDFS data storage) after your ingestion task ends.
 
-* DataSegmentPuller
+#### DataSegmentPuller
 
 After ~20 secs your ingestion task ends, you should be able to see your 
Historical process trying to load the new segment.
 
 The following example was retrieved from a Historical process configured to 
use Azure for deep storage:
 
-```
+```txt
 2015-04-14T02:42:33,450 INFO [ZkCoordinator-0] 
org.apache.druid.server.coordination.ZkCoordinator - New request[LOAD: 
dde_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00

Review Comment:
   Some of these example logs have changed, and it's hard to keep them in sync 
with the code over time. Assuming they don't add much value, should we just 
remove them?
   
   Instead, we could direct the developer to the quickstart, which uses the 
`LocalDataSegmentPuller` and doesn't require any special setup.
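   For reference, the quickstart's deep storage configuration is roughly the following (exact paths may differ per install):
   ```properties
   # Local deep storage, served by LocalDataSegmentPuller; no cloud setup needed.
   druid.storage.type=local
   druid.storage.storageDirectory=var/druid/segments
   ```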



##########
docs/development/modules.md:
##########
@@ -73,19 +72,19 @@ The file that should exist in your jar is
 
 It should be a text file with a new-line delimited list of package-qualified 
classes that implement DruidModule like
 
-```
+```txt
 org.apache.druid.storage.cassandra.CassandraDruidModule
 ```
 
 If your jar has this file, then when it is added to the classpath or as an 
extension, Druid will notice the file and will instantiate instances of the 
Module.  Your Module should have a default constructor, but if you need access 
to runtime configuration properties, it can have a method with @Inject on it to 
get a Properties object injected into it from Guice.
 
 ### Adding a new deep storage implementation
 
-Check the `azure-storage`, `google-storage`, `cassandra-storage`, 
`hdfs-storage` and `s3-extensions` modules for examples of how to do this.
+Check the `druid-azure-extensions`, `druid-google-extensions`, 
`druid-cassandra-storage`, `druid-hdfs-storage` and `druid-s3-extensions` 
modules for examples of how to do this.

Review Comment:
   Good catch 👍 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

