Excerpt from the PostgreSQL manual (screenshot): <http://postgresql.1045698.n5.nabble.com/file/n5798002/PG_man_excerpt.png>
These were my results (screenshot of the log excerpt): <http://postgresql.1045698.n5.nabble.com/file/n5798002/PG_embedded_copy_log_excerpt.png>

I'd advise anyone contemplating this feature to test it very seriously and to examine the logs after each test run before moving it into your baseline. Maybe you'll have better luck than I did.

For what it's worth, I got very good performance from INSERT statements with multiple VALUES clauses, inserting 1000 records at a time (a rough sketch is at the end of this message). For example, one error test (of many) that purposefully attempted to insert alphabetic data into a numeric field yielded an explicit, correct message identifying the exact line of bad data within the 1000-record batch. With that information in hand, it is eminently feasible to go back to the baseline and examine any recent source-code changes that might have altered the generation of the offending data.

Hopefully this helps anyone trying to load large amounts of data quickly and wondering what a viable solution might be.

Best regards to everyone, and thank you all for your time,

Steve K.
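P.S. Here is a minimal sketch of the batched multi-VALUES INSERT with per-batch error checking, using libpq. The connection string, table name ("measurements"), and column layout are placeholders I made up for illustration; real code would escape the values with PQescapeLiteral (or use parameters) rather than format them with snprintf.

    /* Minimal sketch: load data with batched multi-row INSERTs and
     * check each batch for errors via libpq.  Connection string,
     * table, and columns are made-up placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    #define BATCH_SIZE 1000

    int main(void)
    {
        static char sql[65536];   /* ample room for one 1000-row batch */
        PGconn *conn = PQconnectdb("dbname=test");

        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* Build one INSERT carrying BATCH_SIZE rows.  Real code would
         * take the values from the data source and escape or
         * parameterize them instead of using snprintf. */
        strcpy(sql, "INSERT INTO measurements (id, reading) VALUES ");
        for (int i = 0; i < BATCH_SIZE; i++) {
            char row[32];
            snprintf(row, sizeof row, "%s(%d, %d)",
                     i ? ", " : "", i, i * 10);
            strcat(sql, row);
        }

        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK) {
            /* A bad value (e.g. alphabetic text bound for a numeric
             * column) fails the whole batch; the error text names the
             * offending value, and PG_DIAG_STATEMENT_POSITION gives
             * its offset within the statement. */
            char *pos = PQresultErrorField(res, PG_DIAG_STATEMENT_POSITION);
            fprintf(stderr, "batch failed%s%s: %s",
                    pos ? " at position " : "", pos ? pos : "",
                    PQresultErrorMessage(res));
        }
        PQclear(res);
        PQfinish(conn);
        return 0;
    }

One statement per 1000 rows keeps round trips and per-statement overhead low, while still failing in small enough units that the bad source row is easy to track down.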