Joe Conway wrote:
Tom Lane wrote:

Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:

Strange. Last time I checked, I thought MySQL dump used multi-value lists in its INSERTs for dumps, for the same reason that we use COPY

I think Andrew identified the critical point upthread: they don't try
to put an unlimited number of rows into one INSERT, only a megabyte
or so's worth.  Typical klugy-but-effective mysql design approach ...
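For context, the behavior Tom describes matches mysqldump's extended-insert output: rows are packed into multi-value INSERTs, but each statement is capped at roughly a buffer's worth, so a big table becomes a series of statements rather than one giant one. A hypothetical fragment (table, columns, and row counts invented purely for illustration):

CREATE TABLE foo (id INT, t TEXT);

-- Hypothetical dump output: many rows per INSERT, but each statement
-- is kept to roughly a megabyte, so the dump emits several of them.
INSERT INTO foo (id, t) VALUES
    (1, 'aaa'), (2, 'bbb'), (3, 'ccc'), /* ... up to ~1MB ... */ (50000, 'mmm');
INSERT INTO foo (id, t) VALUES
    (50001, 'nnn'), (50002, 'ooo'), /* ... next ~1MB batch ... */ (100000, 'zzz');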

OK, so given that we don't need to support INSERT statements carrying a million targetlists, here is rev 2 of the patch.
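For anyone following along, the syntax at issue is the SQL-standard multi-row VALUES list, something like (table and values are just an example):

CREATE TABLE testtab (id integer, val text);

-- One INSERT carrying several targetlists instead of one row per statement
INSERT INTO testtab (id, val) VALUES
    (1, 'one'),
    (2, 'two'),
    (3, 'three');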

I did some testing today against MySQL and found that it will easily absorb INSERT statements with 1 million targetlists, provided you set max_allowed_packet high enough for the server. It peaked at about 600MB, compared to my similar test last night where it was using about 3.8GB when I killed it.
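For reference, max_allowed_packet is a MySQL server setting; one way to raise it for a test like this (the exact value below is just an example) is:

-- Raise the server's maximum accepted packet/statement size so a very
-- large multi-value INSERT isn't rejected for exceeding the limit.
-- 1073741824 bytes = 1GB; pick whatever comfortably exceeds the statement size.
SET GLOBAL max_allowed_packet = 1073741824;

-- New connections pick up the larger limit; verify it with:
SHOW VARIABLES LIKE 'max_allowed_packet';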

So the question is, do we care?

If we do, I'll start looking for a new rev 3 strategy (ideas/pointers etc. very welcome). If not, I'll start working on docs and regression tests.

Thanks,

Joe


