I'm having an issue where a very large (30 GB) CSV import aborts at various points because of small errors in a tiny number of rows. As each one is discovered, I can rewrite the import query to deal with it, but it's a slow process: wait ten hours, get an error, track it down, start over, wait eleven hours, get another error, and so on. The size of the file makes it difficult to correct manually, and the export that produced it is finished and cannot easily be redone, so the errors can't be fixed by regenerating the file from scratch.
Is there any option, flag, or setting that will cause H2 to simply skip invalid rows, ideally logging which rows were skipped and why? There are other solutions, including ways to clean the data somewhat or to split the file, but they're generally inferior to a "skip and log" approach. So far we're talking about fewer than a dozen bad rows out of ~40 million records.
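For context, if H2 can't do this itself, the fallback I'm trying to avoid is a one-pass pre-filter over the file: copy rows that pass a basic sanity check to a clean file and log the rest. A minimal sketch of that idea (the column count and the validation rule here are placeholders, not my real schema):

```python
import csv
import io

EXPECTED_FIELDS = 3  # placeholder; set to the real column count


def filter_csv(src, good_out, log_out, expected=EXPECTED_FIELDS):
    """Copy rows with the expected field count to good_out.

    Rows that don't match are skipped and recorded in log_out
    with their line number and a reason. Returns the skip count.
    """
    reader = csv.reader(src)
    writer = csv.writer(good_out)
    skipped = 0
    for lineno, row in enumerate(reader, start=1):
        if len(row) != expected:
            log_out.write(
                f"line {lineno}: expected {expected} fields, got {len(row)}\n"
            )
            skipped += 1
        else:
            writer.writerow(row)
    return skipped


# Small demonstration on an in-memory file; on the real 30 GB file
# this would stream from disk to disk instead.
src = io.StringIO("a,b,c\n1,2\n4,5,6\n")
good = io.StringIO()
log = io.StringIO()
n_skipped = filter_csv(src, good, log)
```

This streams line by line, so memory use stays flat regardless of file size, but it still means another full pass over 30 GB, which is why a built-in "skip and log" would be preferable.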
