Chris Browne wrote:
> [EMAIL PROTECTED] (Christopher Kings-Lynne) writes:
>>> The major downside is that somewhere between 9000 and 10000
>>> VALUES-targetlists produces "ERROR: stack depth limit
>>> exceeded". Perhaps for the typical use-case this is sufficient
>>> though.
>>> I'm open to better ideas, comments, objections...
>> If the use case is people running MySQL dumps, then there will be
>> millions of values-targetlists in MySQL dumps.
> Curiosity: How *does* TheirSQL parse that, and not have the One
> Gigantic Query blow up their query parser?
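(For reference, the limit mentioned above is easy to probe from a shell.
This is only a sketch: the table foo and database test are assumptions,
and the exact point of failure depends on your max_stack_depth setting.)

perl -e 'print "insert into foo values ",
    join(",", map { "($_)" } 1..10_000), ";\n"' | psql test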
Experimentation shows that mysqldump doesn't emit one giant INSERT at
all: it breaks the data up into multiple multi-row INSERT statements.
Example with 10 million rows:
[EMAIL PROTECTED] ~]# perl -e '
    print "drop table if exists foo; create table foo (x int);\n";
    foreach my $i (0..99_999) {
        print "insert into foo values \n";
        foreach my $j (0..99) { print "," if $j; print "(", 100*$i+$j+1, ")"; }
        print ";\n";
    }' > gggggg
[EMAIL PROTECTED] ~]# mysql test < gggggg
[EMAIL PROTECTED] ~]# mysqldump test foo > aaaaaa
[EMAIL PROTECTED] ~]# mysql test < aaaaaa
[EMAIL PROTECTED] ~]# grep INSERT aaaaaa | wc -l
104
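That works out to roughly 10,000,000 / 104 = ~96,000 rows, or on the
order of 1MB of SQL text, per INSERT statement, so the chunking appears
to be by byte size rather than row count (presumably bounded by
mysqldump's net_buffer_length, whose default is roughly 1MB). A quick
way to eyeball the statement sizes in the dump:

perl -ne 'print length($_), "\n" if /^INSERT/' aaaaaa | sort -n | tail -1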
cheers
andrew