Patrick,

people have created files of at least 500 GB using InnoDB's auto-extend feature.

What does:

ulimit -a

say about the 'file size' of the user running mysqld?
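A quick way to check just that limit (a sketch; run it as the same user that runs mysqld):

```shell
# Show the maximum file size the current shell (and the processes it
# starts) may create. "unlimited" means no per-process cap; a numeric
# value is the limit in 512-byte blocks and could explain a
# "table is full" error on a large ibdata or .ibd file.
ulimit -f
```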

Have you put disk space quotas on the MySQL datadir? Please correct me if I am wrong, but I think one can restrict how much disk space a directory can use on Linux.
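One way to rule quotas and free space out (a sketch; /var/lib/mysql and the 'mysql' user name are example assumptions, and the quota tools may not be installed):

```shell
# Example datadir path; adjust to your actual MySQL datadir.
DATADIR=/var/lib/mysql

# Free space on the filesystem holding the datadir; falls back to /
# if the example path does not exist on this machine.
df -h "$DATADIR" 2>/dev/null || df -h /

# Per-user quota for the mysql user, if quotas are enabled.
# Exits quietly when the quota tools are not installed.
quota -s -u mysql 2>/dev/null || true
```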

Best regards,

Heikki

Oracle Corp./Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL

InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM tables
http://www.innodb.com/order.php


----- Original Message ----- From: "Patrick Herber" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Sunday, January 15, 2006 4:16 PM
Subject: RE: ERROR 1114 (HY000): The table is full converting a big table from MyISAM to InnoDB on 5.0.18


Thanks a lot for your answer!
However, when I used the option innodb_file_per_table I saw that the temp
file (#sql...) was created in my DB directory, and on this partition I
still have plenty of space (more than 200 GB).
Do you think I CAN'T use this option for such a big table and have to use
innodb_data_file_path instead?

Thanks a lot and regards,
Patrick

-----Original Message-----
From: Jocelyn Fournier [mailto:[EMAIL PROTECTED]
Sent: Sunday, 15 January 2006 15:09
To: Patrick Herber
Cc: mysql@lists.mysql.com
Subject: Re: ERROR 1114 (HY000): The table is full converting a big table from MyISAM to InnoDB on 5.0.18

Hi,

I think you should change the tmpdir variable value to a directory which
has enough room to create your big temp table (by default, it points to
the /tmp dir).

Regards,
   Jocelyn

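In my.cnf that would look something like this (a sketch; /data/mysqltmp is just an example path with enough free space, not from the thread):

```ini
[mysqld]
# Point temporary table creation at a partition with room for the
# rebuilt table; /data/mysqltmp is an example path.
tmpdir = /data/mysqltmp
```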
Patrick Herber wrote:
> Hello!
> I have a database with a big table (Data File 45 GB, Index File 30 GB).
> Since I have some performance troubles with "table-locking" in a
> multi-user environment (when one of them performs a complex query all
> the others have to wait up to 1 minute, which is not very nice...), I
> would like to convert this (and other tables) to the InnoDB engine.
>
> I first tried using the innodb_file_per_table option, but when running
> the statement
>
> ALTER TABLE invoice ENGINE=INNODB;
>
> I got:
>
> ERROR 1114 (HY000): The table '#sql...' is full
>
> (this happened about one hour after the start of the command, when the
> size of the file was bigger than ca. 70 GB (I don't know the exact size))
>
> I then tried without the innodb_file_per_table option, setting my
> innodb_data_file_path as follows:
>
> innodb_data_file_path=ibdata1:500M;ibdata2:500M;ibdata3:500M;ibdata4:500M;ibdata5:500M;ibdata6:500M;ibdata7:500M;ibdata8:500M;ibdata9:500M;ibdata10:500M:autoextend
>
> Also in this case I got the same error message.
>
> What should I do in order to convert this table?
>
> Should I set in innodb_data_file_path, for example, 50 files of 4 GB
> each?
>
> Thanks a lot for your help.
>
> Best regards,
> Patrick
>
> PS: I'm running MySQL 5.0.18 on a Linux 2.6.13-15.7-smp server.
>
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]

