suneet-s commented on a change in pull request #9449:
URL: https://github.com/apache/druid/pull/9449#discussion_r435641233



##########
File path: docs/ingestion/native-batch.md
##########
@@ -1310,6 +1311,56 @@ A spec that applies a filter and reads a subset of the original datasource's col
 This spec above will only return the `page`, `user` dimensions and `added` metric.
 Only rows where `page` = `Druid` will be returned.
 
+### SQL Input Source
+
+The SQL input source is used to read data directly from an RDBMS.
+The SQL input source is _splittable_ and can be used by the [Parallel task](#parallel-task), where each worker task reads data from one SQL query in the list of queries.
+Since this input source has a fixed input format for reading events, no `inputFormat` field needs to be specified in the ingestion spec when using this input source.
+
+|property|description|required?|
+|--------|-----------|---------|
+|type|This should be "sql".|Yes|
+|database|Specifies the database connection details. The database type corresponds to the extension that supplies the `connectorConfig` support, and this extension must be loaded into Druid. For database types `mysql` and `postgresql`, the `connectorConfig` support is provided by the [mysql-metadata-storage](../development/extensions-core/mysql.md) and [postgresql-metadata-storage](../development/extensions-core/postgresql.md) extensions respectively.|Yes|
+|foldCase|Toggle case folding of database column names. This may be enabled in cases where the database returns case-insensitive column names in query results.|No|
+|sqls|List of SQL queries, where each query retrieves data to be indexed.|Yes|
+
+An example SqlInputSource spec is shown below:
+
+```json
+...
+    "ioConfig": {
+      "type": "index_parallel",
+      "inputSource": {
+        "type": "sql",
+        "database": {
+            "type": "mysql",
+            "connectorConfig": {
+                "connectURI": "jdbc:mysql://host:port/schema",
+                "user": "user",
+                "password": "password"
+            }
+        },
+        "sqls": ["SELECT * FROM table1 WHERE timestamp BETWEEN '2013-01-01 00:00:00' AND '2013-01-01 11:59:59'", "SELECT * FROM table2 WHERE timestamp BETWEEN '2013-01-01 00:00:00' AND '2013-01-01 11:59:59'"]
+      }
+    },
+...
+```
+
+The spec above will read all events from two separate SQL queries for the interval `2013-01-01/2013-01-02`.
+Each SQL query runs in its own sub-task, so the example above would spawn two sub-tasks.
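+
+Because each SQL query becomes one split, the number of sub-tasks running at once can be capped through the Parallel task's `tuningConfig`. A minimal sketch, where `maxNumConcurrentSubTasks` is the standard Parallel task setting and the value `2` is illustrative:
+
+```json
+    "tuningConfig": {
+      "type": "index_parallel",
+      "maxNumConcurrentSubTasks": 2
+    }
+```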
+
+Compared to the other native batch input sources, the SQL input source behaves differently when reading input data, so it is helpful to consider the following points before using it in a production environment:

Review comment:
   Maybe instead of experimental, we should call this feature `Beta`. It indicates that there is future work coming and we may want to change the behavior or configuration of the InputSource, so users shouldn't rely on the API / behavior being consistent across releases.

   Since there are no integration tests, I'm concerned about calling this GA, because someone would have to run through these tests manually before each release to make sure we did not accidentally break the SqlInputSource.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]