Michael, I was able to repeat the bug on Linux now.
It seems to happen if I set max_binlog_size to 2M in the SLAVE. The relay
binlog then gets split into several 2 MB pieces. It does not always happen,
but I have a randomized test which produces the error within a minute. I was
not able to repeat the bug when I had not set max_binlog_size in the slave,
in which case I think it defaults to 1 GB.

heikki@hundin:~/mysql-4.0/sql> mysqld --defaults-file=/home/heikki/slavemy.cnf
021204 23:55:45 InnoDB: Started
mysqld: ready for connections
021204 23:55:45 Slave I/O thread: connected to master 'slaveuser@hundin:3307', replication started in log 'FIRST' at position 4
021204 23:57:42 Error in Log_event::read_log_event(): 'Event too big', data_len=1447971143,event_type=115
021204 23:58:03 Slave SQL thread: I/O error reading event(errno: -1 cur_log->error: 12)
021204 23:58:03 Error reading relay log event: Aborting slave SQL thread because of partial event read
021204 23:58:03 Could not parse log event entry, check the master for binlog corruption
This may also be a network problem, or just a bug in the master or slave code.
021204 23:58:03 Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'binlog.002' position 13659061

heikki@hundin:~/data> ls -l
total 51832
-rw-rw----    1 heikki   users   24086965 Dec  4 23:57 binlog.001
-rw-rw----    1 heikki   users   28925234 Dec  4 23:58 binlog.002
-rw-rw----    1 heikki   users         26 Dec  4 23:57 binlog.index
-rw-rw----    1 heikki   users          5 Dec  4 23:55 hundin.pid
drwxr-xr-x    2 heikki   users        619 Sep  5 20:51 mysql
drwxr-xr-x    2 heikki   users        513 Dec  4 23:57 test
heikki@hundin:~/data>

Also, I observed that if I do a big LOAD DATA INFILE with AUTOCOMMIT=1, then
the master splits the master binlog into 2 MB pieces as I have instructed,
and since I have set max_allowed_packet to 1M in both the master and the
slave, the slave complains:

heikki@hundin:~/mysql-4.0/sql> mysqld --defaults-file=/home/heikki/slavemy.cnf
021204 23:48:21 InnoDB: Started
mysqld: ready for connections
021204 23:48:21 Slave I/O thread: connected to master 'slaveuser@hundin:3307', replication started in log 'FIRST' at position 4
021204 23:52:08 Error reading packet from server: log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master (server_errno=1236)
021204 23:52:08 Got fatal error 1236: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master' from master when reading data from binary log
021204 23:52:08 Slave I/O thread exiting, read up to log 'binlog.002', position 4

This does NOT happen if I set AUTOCOMMIT=0. I think the above should also be
fixed: the slave should read the binlog in smaller pieces also in the case
where AUTOCOMMIT=1.

Yet another problem: when LOAD DATA INFILE failed in the master
(AUTOCOMMIT=1):

mysql> load data infile '/home/heikki/rtdump' into table replt3;
ERROR 1114: The table 'replt3' is full
mysql>

the slave failed like this:

heikki@hundin:~/mysql-4.0/sql> mysqld --defaults-file=~/slavemy.cnf
021204 21:45:35 InnoDB: Started
mysqld: ready for connections
021204 21:45:35 Slave I/O thread: connected to master 'slaveuser@hundin:3307', replication started in log 'FIRST' at position 4
021204 22:04:22 Slave: Could not open file '/tmp/SQL_LOAD-2-1-4.data', error_code=2
021204 22:04:22 Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'binlog.026' position 27
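For reference, the relevant part of the slave configuration in these runs was
roughly the following. This is only a sketch: the server-id and the master-*
lines are placeholders reconstructed from the slave log above, and the rest of
the file is omitted. The last two settings are the ones that matter for the
problems described here, and the master has the same 1M max_allowed_packet.

[mysqld]
# placeholder for my test setup
server-id          = 2
# replication source as shown in the slave log: slaveuser@hundin:3307
master-host        = hundin
master-port        = 3307
master-user        = slaveuser
# the two settings relevant to the problems described above
max_binlog_size    = 2M
max_allowed_packet = 1M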
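The LOAD DATA test which hits the max_allowed_packet complaint is essentially
just the following (the table and dump file names are simply the ones from my
test setup):

mysql> SET AUTOCOMMIT=1;
mysql> LOAD DATA INFILE '/home/heikki/rtdump' INTO TABLE replt3;

With AUTOCOMMIT=0 and an explicit COMMIT the same load goes through without
the slave complaining:

mysql> SET AUTOCOMMIT=0;
mysql> LOAD DATA INFILE '/home/heikki/rtdump' INTO TABLE replt3;
mysql> COMMIT;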
I am forwarding these to the replication developer of MySQL AB. I hope he can
fix them for 4.0.6.

Best regards,

Heikki Tuuri
Innobase Oy
---
InnoDB - transactions, row level locking, and foreign key support for MySQL
See http://www.innodb.com, download MySQL-Max from http://www.mysql.com

----- Original Message -----
From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, December 05, 2002 12:24 AM
Subject: Re: Bug Report: Replication in 4.0.5beta

> Michael,
>
> I have been running tests on 4.0.6 with big insert transactions on Linux.
> I set max_binlog_size to 2M and max_packet_size to 16M. So far no errors
> with tables up to 400 MB in size.
>
> Looks like MySQL always writes a big transaction as one big block to the
> current binlog file, and does not cut the binlog file into 2 MB pieces.
> Thus, it looks like the binlog file rotation cannot be the source of the
> bug you have observed.
>
> If you look in the datadir with
>
> ls -l
>
> at the actual sizes of the master's binlogs, could it be that there really
> is a 1.3 GB file there?
>
> Can you make a script which would always repeat the replication failure?
>
> What is the CREATE TABLE statement of your table?
>
> What is your my.cnf like?
>
> Regards,
>
> Heikki
>
> ----- Original Message -----
> From: "Heikki Tuuri" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Cc: <[EMAIL PROTECTED]>
> Sent: Thursday, November 28, 2002 1:27 PM
> Subject: Re: Bug Report: Replication in 4.0.5beta
>
> > Michael,
> >
> > ----- Original Message -----
> > From: "Michael Ryan" <[EMAIL PROTECTED]>
> > Newsgroups: mailing.database.mysql
> > Sent: Thursday, November 28, 2002 12:34 PM
> > Subject: Bug Report: Replication in 4.0.5beta
> >
> > > The environment info was copied from the "mysqlbug" command by our
> > > external hosting company, who truncated the lines; therefore the last
> > > couple of characters from each line is not there. However, it was a
> > > Solaris 2.8 binary download of 4.0.5beta, so you would have all of the
> > > info anyway.
> > >
> > > >Description:
> > > I am using MySQL 4.0.5beta on Solaris 2.8 from a binary version
> > > downloaded from www.mysql.com on the 19th of November 2002. I have one
> > > database set up as the master database and 2 databases set up as slave
> > > databases. Each database is on a separate SUN server. I am performing
> > > intense load testing on MySQL replicated databases using InnoDB tables
> > > and transactions and I have come across what is most likely a bug.
> > >
> > > The replication is failing on the slaves with the following error
> > > (this appears in both slaves' error logs at the same time):
> > >
> > > 021127 13:48:28 Error in Log_event::read_log_event(): 'Event too big',
> > > data_len=1397639424,event_type=111
> > > 021127 13:55:36 Error in Log_event::read_log_event(): 'Event too big',
> > > data_len=1397639424,event_type=111
> >
> > This definitely looks like a bug in replication.
> >
> > From New Zealand we got the following bug report, which might be
> > connected to this:
> >
> > > > 021111 18:32:54 Error reading packet from server: log event entry
> > > > exceeded max_allowed_packet - increase max_allowed_packet on master
> > > > (server_errno=2000)
> >
> > The above errors might happen if the pointer to the binlog becomes
> > displaced. It will then read garbage from the event length field.
> >
> > I think a transaction can consist of many log events.
> >
> > I will run tests on our SunOS-5.8 computer to see if I can repeat this
> > bug.
> >
> > Best regards,
> >
> > Heikki Tuuri
> > Innobase Oy
> > ---
> > InnoDB - transactions, hot backup, and foreign key support for MySQL
> > See http://www.innodb.com, download MySQL-Max from http://www.mysql.com
> >
> > ...
> >
> > > BBCi at http://www.bbc.co.uk/