turbaszek commented on a change in pull request #18278:
URL: https://github.com/apache/airflow/pull/18278#discussion_r709592871



##########
File path: airflow/providers/apache/hive/example_dags/example_twitter_README.md
##########
@@ -35,7 +35,7 @@
 ***Screenshot:***
 <img src="http://i.imgur.com/rRpSO12.png" width="99%"/>
 
-***Example Structure:*** In this example dag, we are collecting tweets for
-four users account or twitter handle. Each twitter handle has two channels,
-incoming tweets and outgoing tweets. Hence, in this example, by running the
-fetch_tweet task, we should have eight output files. For better management,
-each of the eight output files should be saved with the yesterday's date (we
-are collecting tweets from yesterday), i.e. toTwitter_A_2016-03-21.csv. We are
-using three kind of operators: PythonOperator, BashOperator, and HiveOperator.
-However, for this example only the Python scripts are stored externally. Hence
-this example DAG only has the following directory structure:
+***Example Structure:*** In this example dag, we are collecting tweets for
+four users account or twitter handle. Each twitter handle has two channels,
+incoming tweets and outgoing tweets. Hence, in this example, by running the
+fetch_tweet task, we should have eight output files. For better management,
+each of the eight output files should be saved with the yesterday's date (we
+are collecting tweets from yesterday), i.e. toTwitter_A_2016-03-21.csv. We are
+using two kinds of operators -- BashOperator and HiveOperator -- along with
+task-decorated functions. However, for this example only the Python scripts are
+stored externally. Hence this example DAG only has the following directory
+structure:
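
For readers unfamiliar with the mix the new wording describes, here is a
minimal sketch of a DAG that combines a task-decorated function with a
BashOperator and a HiveOperator. This assumes Airflow 2.x with the Hive
provider installed; the task ids, command, and HQL are illustrative only and
are not taken from the actual example DAG in this PR:

```python
# Illustrative sketch only -- not the example_twitter DAG from this PR.
# Assumes Airflow 2.x and the apache-airflow-providers-apache-hive package.
from datetime import datetime

from airflow.decorators import dag, task
from airflow.operators.bash import BashOperator
from airflow.providers.apache.hive.operators.hive import HiveOperator


@dag(schedule_interval="@daily", start_date=datetime(2021, 1, 1), catchup=False)
def twitter_sketch():
    @task
    def fetch_tweet():
        # Placeholder: would write one CSV per handle/channel,
        # e.g. toTwitter_A_2016-03-21.csv for yesterday's tweets.
        ...

    # Hypothetical follow-up steps, shown only to illustrate the operator mix.
    move_to_hdfs = BashOperator(
        task_id="move_to_hdfs",
        bash_command="echo 'copy the CSV files to HDFS here'",
    )
    load_to_hive = HiveOperator(
        task_id="load_to_hive",
        hql="SELECT 1",  # placeholder HQL
    )

    fetch_tweet() >> move_to_hdfs >> load_to_hive


dag = twitter_sketch()
```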

Review comment:
Not an expert, but maybe using `(...)` would be better than `--`?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.


