feluelle commented on a change in pull request #4405: [AIRFLOW-3598] Add tests
for MsSqlToHiveTransfer
URL: https://github.com/apache/airflow/pull/4405#discussion_r256519777
##########
File path: airflow/operators/mssql_to_hive.py
##########
@@ -98,38 +99,34 @@ def __init__(
     @classmethod
     def type_map(cls, mssql_type):
-        t = pymssql
-        d = {
-            t.BINARY.value: 'INT',
-            t.DECIMAL.value: 'FLOAT',
-            t.NUMBER.value: 'INT',
+        map_dict = {
+            pymssql.BINARY.value: 'INT',
+            pymssql.DECIMAL.value: 'FLOAT',
+            pymssql.NUMBER.value: 'INT',
         }
-        return d[mssql_type] if mssql_type in d else 'STRING'
+        return map_dict[mssql_type] if mssql_type in map_dict else 'STRING'

     def execute(self, context):
-        hive = HiveCliHook(hive_cli_conn_id=self.hive_cli_conn_id)
         mssql = MsSqlHook(mssql_conn_id=self.mssql_conn_id)
-
         self.log.info("Dumping Microsoft SQL Server query results to local file")
-        conn = mssql.get_conn()
-        cursor = conn.cursor()
-        cursor.execute(self.sql)
-        with NamedTemporaryFile("w") as f:
-            csv_writer = csv.writer(f, delimiter=self.delimiter, encoding='utf-8')
-            field_dict = OrderedDict()
-            col_count = 0
-            for field in cursor.description:
-                col_count += 1
-                col_position = "Column{position}".format(position=col_count)
-                field_dict[col_position if field[0] == '' else field[0]] \
-                    = self.type_map(field[1])
-            csv_writer.writerows(cursor)
-            f.flush()
-        cursor.close()
-        conn.close()
+        with closing(mssql.get_conn()) as conn:
+            with closing(conn.cursor()) as cursor:
+                cursor.execute(self.sql)
+                with NamedTemporaryFile("w") as tmp_file:
+                    csv_writer = csv.writer(tmp_file, delimiter=self.delimiter, encoding='utf-8')
+                    field_dict = OrderedDict()
+                    col_count = 0
+                    for field in cursor.description:
+                        col_count += 1
+                        col_position = "Column{position}".format(position=col_count)
+                        field_dict[col_position if field[0] == '' else field[0]] = self.type_map(field[1])
+                    csv_writer.writerows(cursor)
+                    tmp_file.flush()
+
+        hive = HiveCliHook(hive_cli_conn_id=self.hive_cli_conn_id)
Review comment:
@zhongjiajie I think I got your point.
* Replacing `conn = mssql.get_conn()` ... `conn.close()` with the `with`
statement isn't a functional change, though. See the [docs
](http://pymssql.org/en/stable/pymssql_examples.html#using-the-with-statement-context-managers)
for more information.
* Moving the instantiation of a hook is indeed a functional change. We can
discuss that :)
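On the first point: `contextlib.closing` just calls `.close()` on exit, so it matches the explicit `cursor.close()`/`conn.close()` calls the diff removes, while also closing on exceptions. A minimal sketch of the same pattern, using the stdlib `sqlite3` as a stand-in for `pymssql` (the hypothetical `dump_rows` helper is for illustration only):

```python
from contextlib import closing
import sqlite3


def dump_rows(db_path, sql):
    """Fetch all rows for a query, guaranteeing that the cursor and the
    connection are closed even if execute() raises. This mirrors the
    with-closing() nesting used in the PR's execute() method."""
    with closing(sqlite3.connect(db_path)) as conn:
        with closing(conn.cursor()) as cursor:
            cursor.execute(sql)
            return list(cursor)


rows = dump_rows(":memory:", "SELECT 1, 'a'")
print(rows)  # [(1, 'a')]
```

Unlike a plain `try/finally`, the nested `closing()` blocks also keep the cleanup order explicit: the cursor is closed before the connection that owns it.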
In my opinion it is better to create the connection only when you really need
it: what does it get you to instantiate the destination hook if, due to issues
with the source hook, it never gets used?
On the other hand, you could argue that, done my way, you are already pulling
data from the source without knowing whether the destination connection works
at all.
In the end I prefer having two blocks in the transfer's execute function: one
retrieving data from the source and one sending it to the destination.
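The two-block structure can be sketched as follows. `SourceHook` and `DestinationHook` here are hypothetical stand-ins, not Airflow's real `MsSqlHook`/`HiveCliHook`; the point is only the ordering, i.e. the destination hook is not constructed until the source read has succeeded:

```python
class SourceHook:
    """Stand-in for the source-side hook (e.g. a SQL hook)."""
    def get_records(self):
        return [(1, 'a'), (2, 'b')]


class DestinationHook:
    """Stand-in for the destination-side hook; constructing it is
    where the destination connection cost would be paid."""
    def load(self, rows):
        return len(rows)


def execute():
    # Block 1: pull everything from the source first; if this raises,
    # the destination hook is never created.
    rows = SourceHook().get_records()

    # Block 2: only now instantiate the destination hook and load.
    dest = DestinationHook()
    return dest.load(rows)


print(execute())  # 2
```

The trade-off discussed above is visible here: an eager `DestinationHook()` at the top of `execute()` would fail fast on a broken destination, while the lazy placement avoids a pointless destination connection when the source read fails.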
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services