Mike, Derby is running in embedded mode on our dual-processor Solaris app server. The files are ~700 MB per table.
I am going to drop all indexes and keys, then try a limited subset (~1000 records).

- Derek

-----Original Message-----
From: Mike Matrigali [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 14, 2007 4:41 PM
To: Derby Discussion
Subject: Re: Large multi-record insert performance

Lance J. Andersen wrote:
>
> Even if the backend does not provide optimization for batch
> processing, I would hope there would still be some efficiency,
> especially in a networked environment, versus building the strings
> and invoking execute() 1000 times, in terms of the amount of data
> on the wire...
>

I could not tell from the question whether this was network or not. I agree that in the network case, limiting the number of executions is probably best. In embedded mode I am not sure - I would not be surprised if doing 1000 inserts in a batch is slower than just doing the executes. In either case I really would stay away from string manipulation as much as possible, and also stay away from things that create very long SQL statements, like a VALUES clause with 1000 terms.
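
For concreteness, here is a minimal sketch of the batched, parameter-marker approach being discussed - one reused PreparedStatement with addBatch()/executeBatch() instead of concatenated SQL strings. The table, columns, and JDBC URL below are hypothetical, not taken from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertSketch {
    public static void main(String[] args) throws SQLException {
        // Hypothetical embedded Derby database name.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:sampledb")) {
            conn.setAutoCommit(false); // commit once per batch, not per row
            String sql = "INSERT INTO big_table (id, payload) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < 1000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();     // queue the row; no string concatenation
                }
                ps.executeBatch();     // submit all 1000 inserts in one call
            }
            conn.commit();
        }
    }
}

The same loop with ps.executeUpdate() in place of addBatch()/executeBatch() gives the one-execute-per-row alternative; per Mike's point, in embedded mode the two may perform similarly, while over the network the batch form reduces round trips. Either way the statement text stays short and constant, avoiding the string building and 1000-term VALUES clause he warns against.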
