Hi,

I'm importing a dump file that contains statements like the following
(excerpt):

  DROP TABLE IF EXISTS `mytable`;
  CREATE TABLE `mytable` (
    ...
  ) ENGINE=InnoDB AUTO_INCREMENT=872 DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;
  INSERT INTO `mytable` VALUES (1, ...), (2, ...), (3, ...), ...;

and I get the following error on the INSERT statement:

  ERROR 1118 (42000): Row size too large (> 8126). Changing some columns to
  TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help.
  In current row format, BLOB prefix of 0 bytes is stored inline.

However, if I split the INSERT statement, or more precisely, if I take one
particular row out and insert it separately, the import works fine.
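To illustrate with placeholder values (the real rows have many more columns
and much longer strings), the split that works looks roughly like this:

  -- Inserting the problematic row on its own succeeds:
  INSERT INTO `mytable` VALUES (2, 'some varchar value', 'some long text');
  -- ...and so does a multi-row INSERT with the remaining rows:
  INSERT INTO `mytable` VALUES
    (1, 'some varchar value', 'some long text'),
    (3, 'some varchar value', 'some long text');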

I've been reading the troubleshooting guide for this error,

https://mariadb.com/docs/server/server-usage/storage-engines/innodb/innodb-row-formats/troubleshooting-row-size-too-large-errors-with-innodb

but it does not mention anything about this particular case.

The table schema contains many VARCHAR(255) and LONGTEXT columns, and the
row format is (InnoDB) DYNAMIC, so I would expect the long values to be
stored off-page. Moreover, since I can execute the inserts separately, the
rows do seem to fit within the row size limit.
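(For reference, the row format can be checked with a query along these
lines; 'mydb' is just a placeholder for the actual database name:

  -- Check the row format reported for the table:
  SELECT TABLE_NAME, ENGINE, ROW_FORMAT
    FROM information_schema.TABLES
   WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';
)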

So it looks like the row size estimation is not accurate when using a bulk
insert with many rows: perhaps the maximum row size is estimated as the sum
of the maximum field sizes across all the rows in the statement, instead of
per individual row. (For example, if each row could be up to ~5000 bytes,
two rows added together would exceed the 8126-byte limit even though each
one fits on its own.)

Is the row size estimation affected by bulk inserts?
How can I avoid this issue when importing large datasets?

Thanks in advance,
-Eduardo