Indhumathi27 commented on a change in pull request #4243:
URL: https://github.com/apache/carbondata/pull/4243#discussion_r767598589



##########
File path: docs/configuration-parameters.md
##########
@@ -179,6 +179,33 @@ This section provides the details of all the configurations required for the Car
 | carbon.update.storage.level | MEMORY_AND_DISK | Storage level to persist dataset of a RDD/dataframe. Applicable when ***carbon.update.persist.enable*** is **true**, if user's executor has less memory, set this parameter to 'MEMORY_AND_DISK_SER' or other storage level to correspond to different environment. [See detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence). |
 | carbon.update.check.unique.value | true | By default this property is true, so update will validate key value mapping. This validation might have slight degrade in performance of update query. If user knows that key value mapping is correct, can disable this validation for better update performance by setting this property to false. |
 
+## Streamer tool Configuration
+| Parameter | Default Value | Description |
+|-----------|---------------|-------------|
+ | carbon.streamer.target.database | <none> | The database name where the target table is present to merge the incoming data. If not given by user, system will take the current database in the spark session. |

Review comment:
       Looks like `<none>` is not displayed in the document; can change it to `(none)`. Please handle this in other places also.

##########
File path: docs/configuration-parameters.md
##########
@@ -179,6 +179,33 @@ This section provides the details of all the configurations required for the Car
 | carbon.update.storage.level | MEMORY_AND_DISK | Storage level to persist dataset of a RDD/dataframe. Applicable when ***carbon.update.persist.enable*** is **true**, if user's executor has less memory, set this parameter to 'MEMORY_AND_DISK_SER' or other storage level to correspond to different environment. [See detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence). |
 | carbon.update.check.unique.value | true | By default this property is true, so update will validate key value mapping. This validation might have slight degrade in performance of update query. If user knows that key value mapping is correct, can disable this validation for better update performance by setting this property to false. |
 
+## Streamer tool Configuration
+| Parameter | Default Value | Description |
+|-----------|---------------|-------------|
+ | carbon.streamer.target.database | <none> | The database name where the target table is present to merge the incoming data. If not given by user, system will take the current database in the spark session. |
+ | carbon.streamer.target.table | <none> | The target carbondata table where the data has to be merged. If this is not configured by user, the operation will fail. |
+ | carbon.streamer.source.type | kafka | Streamer tool currently supports 2 different types of data sources. One can ingest data from either kafka or DFS into target carbondata table using streamer tool. |

Review comment:
```suggestion
 | carbon.streamer.source.type | kafka | Streamer tool currently supports two types of data sources. One can ingest data from either kafka or DFS into target carbondata table using streamer tool. |
```
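
For readers skimming this thread, here is a minimal sketch of how the three streamer-tool options documented in the diff above might be supplied programmatically. It assumes the streamer tool picks them up from `CarbonProperties` like other `carbon.*` options; the property keys come from the documentation table, while the database/table values and the `"dfs"` spelling are purely illustrative assumptions, not taken from this PR.

```scala
// Minimal sketch, assuming the streamer tool reads its settings from
// CarbonProperties like other carbon.* options. Keys are taken from the
// documentation table above; the values are illustrative only.
import org.apache.carbondata.core.util.CarbonProperties

object StreamerToolConfigSketch {
  def main(args: Array[String]): Unit = {
    val props = CarbonProperties.getInstance()

    // Database holding the target table; per the docs, the current Spark
    // session database is used when this is not set.
    props.addProperty("carbon.streamer.target.database", "default")

    // Target CarbonData table to merge incoming data into; the docs say the
    // operation fails if this is left unconfigured.
    props.addProperty("carbon.streamer.target.table", "sales_target")

    // Source of the incoming stream: "kafka" is the default; "dfs" is an
    // assumed spelling for the DFS source mentioned in the docs.
    props.addProperty("carbon.streamer.source.type", "kafka")
  }
}
```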




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@carbondata.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

