So here's what I'm trying to do.
a) I have a CSV file on disk.
b) I read the headers from this file into a string array.
c) I create a table, using a generated name and the fields I just
read, plus an extra column called 'ROW_ID', which is an
auto-increment field.
d) I then want to read the data from the CSV file into this new table.
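For reference, steps (c) and (d) would look roughly like this in H2 (FIELD1..FIELD3 and the file path are placeholders; the real column names come from the CSV header):

```sql
-- Step (c): create the table up front, with the auto-increment key
-- plus the columns read from the CSV header.
CREATE TABLE AMP_20091124 (
    ROW_ID BIGINT AUTO_INCREMENT PRIMARY KEY,
    FIELD1 VARCHAR,
    FIELD2 VARCHAR,
    FIELD3 VARCHAR
);

-- Step (d): load the data. This is where the column count is wrong,
-- because the table has one more column (ROW_ID) than the file.
INSERT INTO AMP_20091124 SELECT * FROM CSVREAD('C:\Temp\data.csv');
```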
The problem is that the column count is incorrect: the table has one
more column (ROW_ID) than the file does.
I tried: "INSERT INTO amp_20091124 VALUES (0, SELECT * FROM
CSVREAD('C:\Temp\Mark Issue 11302009\amp_20091124.csv'))", but this
had the same problem.
It's possible for me to specify all field names (except ROW_ID) in the
"INSERT INTO" clause, and then "SELECT field1, field2, ..." in place
of the VALUES clause, but I'm dealing with files that have hundreds or
thousands of columns, and that creates very unwieldy, hard-to-read
strings when I'm trying to debug my SQL, so I'd like to avoid it if
possible.
(Meaning, I could do "INSERT INTO MYFILE (field1, field2, ... fieldN)
SELECT field1, field2, ... fieldN FROM CSVREAD(...)". At least, I
assume I could; I haven't tried it yet.)
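If that form works, it would presumably look something like the sketch below (FIELD1..FIELD3 stand in for the real header names, and the path is a placeholder). Since ROW_ID is omitted from the target column list, H2 should fill it in from the auto-increment default, and the column counts then match:

```sql
-- ROW_ID is left out of the target column list, so the auto-increment
-- default supplies it; the SELECT's column count matches the target
-- list exactly.
INSERT INTO AMP_20091124 (FIELD1, FIELD2, FIELD3)
SELECT FIELD1, FIELD2, FIELD3
FROM CSVREAD('C:\Temp\data.csv');
```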
The other option is to create the table from the data and then add the
ROW_ID column afterwards. My reason for not doing it this way is that
I plan to deal with very large tables (hundreds of thousands to
millions of rows), and my experience has been that adding all columns,
and specifying indexes, is much better done before you add data. So
that's the worst-case solution at the moment.
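For completeness, that worst-case approach would be roughly this (an untested sketch; the path is a placeholder, and I'm not certain H2 backfills ROW_ID for the already-loaded rows when the column is added):

```sql
-- Create and populate the table straight from the file; the columns
-- are taken from the CSV header.
CREATE TABLE AMP_20091124 AS
    SELECT * FROM CSVREAD('C:\Temp\data.csv');

-- ...then bolt the auto-increment column on afterwards.
ALTER TABLE AMP_20091124 ADD COLUMN ROW_ID BIGINT AUTO_INCREMENT;
```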
Thank you for any advice on this.
--
You received this message because you are subscribed to the Google Groups "H2
Database" group.