You will need two text utilities, dos2unix and sed, to do this the simple way. Both are fairly standard and are probably already on your machine.
This is how I would do it:
sed "s/\"//g" file_name.txt \
| dos2unix \
| psql -c "COPY table_name FROM STDIN USING DELIMITERS ',';" db
Where "file_name.txt" is the csv file you want to import and "table_name" is the previously created table you want to insert the data into and db is the database name.
How this works is "sed" {stream editor} removes all the double quote characters '"' then pipes the output through "dos2unix" which converts all the CRLF {DOS EOL} sequences into CR {UNIX EOL} characters, then pipes the data to "pgsql" with a command that does a bulk insert into the table of the database you have selected.
Guy
Oliver Vecernik wrote:
Hi again!
After investigating a little bit further, my CSV import couldn't work for the following reasons:
1. CSV files are delimited with CR/LF
2. text fields are surrounded by double quotes
Is there a direct way to import such files into PostgreSQL?
I would like to have something like MySQL provides:
LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name.txt'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [FIELDS
        [TERMINATED BY '\t']
        [[OPTIONALLY] ENCLOSED BY '']
        [ESCAPED BY '\\' ]
    ]
    [LINES TERMINATED BY '\n']
    [IGNORE number LINES]
    [(col_name,...)]
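For a comma-delimited, double-quoted, CRLF-terminated file like the one described, the statement would presumably look something along these lines (untested, column list omitted):

LOAD DATA LOCAL INFILE 'file_name.txt'
    INTO TABLE tbl_name
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\r\n';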
Has anybody written such a function already?
Regards,
Oliver