tegdev <[email protected]> added the comment:
Correct handling of None values belongs in the csv module.
There is a use case to migrate a DB2 database to PostgreSQL.
DB2 has a command line tool "db2 export ..." which produces CSV files.
A row
['Hello', null, 'world']
is exported to
"Hello",,"world"
I would like to read these exports with Python and load them into PostgreSQL.
But with the csv library I can't read them in correctly. The input is
converted to:
['Hello', '', 'world']
It should be read as:
['Hello', None, 'world']
It is pretty easy to write a correct CSV reader with ANTLR, but it's terribly
slow.
And last but not least: if someone writes out a list, reading it back should
be the identity. That's not true for the csv library.
Example:
import csv

hello_out_lst = ['Hello', None, 'world']

with open('hello.csv', 'w', newline='') as ofh:
    writer = csv.writer(ofh, delimiter=',')
    writer.writerow(hello_out_lst)

with open('hello.csv', 'r', newline='') as ifh:
    reader = csv.reader(ifh, delimiter=',')
    for row in reader:
        hello_in_lst = row

is_equal = hello_out_lst == hello_in_lst
print(f'{hello_out_lst} is equal {hello_in_lst} ? {is_equal}')
The result is:
['Hello', None, 'world'] is equal ['Hello', '', 'world'] ? False
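Until the csv module itself can round-trip None, one workaround is to
post-process each row and map empty fields back to None. A minimal sketch
(read_rows_none is an illustrative helper name, not part of the csv API),
with the obvious caveat that it conflates genuinely empty strings with None,
since csv.writer emits both as an empty field:

```python
import csv

def read_rows_none(path):
    """Read a CSV file, mapping empty fields back to None.

    Caveat: csv.writer emits both None and '' as an empty field, so
    this round trip is only the identity when the data contains no
    genuinely empty strings.
    """
    with open(path, newline='') as ifh:
        return [[None if field == '' else field for field in row]
                for row in csv.reader(ifh)]
```

With this helper, the example above compares equal again, but only because
the row happens to contain no empty strings.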
----------
nosy: +tegdev
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue23041>
_______________________________________