I had dismissed your recommendation for bulk and batch sizes because I was 
confusing them with the commit size.  I tried using bulkAll() but it did 
not make any difference.  Then I turned debug logging back on and noticed 
that with bulkAll() it was still logging 80k individual debug inserts.  I 
changed the call to use bulkAfter(1000), which took the load time down to 
18 seconds!  This is still 35 times longer than the psql \copy command, but 
a perfectly usable number for our use case!
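For anyone else hitting this, here is roughly the call chain that worked for me. This is a sketch, not my exact code: the table name (MY_TABLE), the fields, the file name, and the commitAfter() value are placeholders, and it assumes a jOOQ DSLContext (ctx) with generated table classes.

```java
// Sketch of the working Loader configuration (placeholder names throughout).
// The key change: bulkAfter(1000) groups 1000 rows into each multi-row
// INSERT, instead of one INSERT statement per row.
ctx.loadInto(MY_TABLE)
   .bulkAfter(1000)               // 1000 rows per bulk INSERT
   .commitAfter(10_000)           // commit size is independent of bulk size
   .loadCSV(new File("data.csv"))
   .fields(MY_TABLE.ID, MY_TABLE.NAME)
   .execute();
```

The distinction I had been missing is that commit size, batch size, and bulk size are three independent knobs on the Loader.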

Seems like bulkAll() is not doing what it should in this situation.

Any chance that jOOQ will support the PostgreSQL CopyManager class (see 
http://stackoverflow.com/questions/6958965/how-to-copy-a-data-from-file-to-postgresql-using-jdbc 
) in some future release?  Sounds like it can lead to some worthwhile speed 
improvements for people who need every last drop.
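For reference, using CopyManager directly through the PostgreSQL JDBC driver looks roughly like this. A sketch, not tested here: it assumes a plain JDBC Connection to PostgreSQL, a table "my_table" whose columns match the CSV, and it bypasses jOOQ entirely.

```java
// Sketch: feed a CSV file straight into PostgreSQL's COPY protocol.
// Assumes url/user/password and "my_table"/"data.csv" are filled in.
Connection conn = DriverManager.getConnection(url, user, password);

// Unwrap the driver-specific connection to get at the copy API.
CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();

try (Reader reader = new FileReader("data.csv")) {
    long rows = copy.copyIn(
        "COPY my_table FROM STDIN WITH (FORMAT csv, HEADER true)",
        reader);
    System.out.println("Copied " + rows + " rows");
}
```

That would presumably get much closer to the psql \copy numbers, since it is the same server-side mechanism.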

Thanks again for the help, loadInto()/loadCSV()/bulkAfter() was just what I 
needed!

  - Aner
