When you use --master-data=1, the dump executes the CHANGE MASTER command 
first, before any data is loaded.

Do the following to verify this:

Run 'mysqldump --single-transaction --master-data=1 -h... -u... -p... > 
DataDump1.sql'
Run 'mysqldump --single-transaction --master-data=2 -h... -u... -p... > 
DataDump2.sql'

Run 'head -30 DataDump1.sql'
You will see the CHANGE MASTER command before all CREATE TABLEs and INSERTs.
Therefore, it will execute.
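
For example, near the top of DataDump1.sql you will find something like the 
following (the binary log file name and position here are only illustrative; 
yours will differ):

    CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;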

Run 'head -30 DataDump2.sql'
You will again see the CHANGE MASTER command before all CREATE TABLEs and 
INSERTs. However, this time the command is commented out!
Therefore, it will not execute.
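
In DataDump2.sql the same statement is present, but written as an SQL comment, 
so the mysql client skips it on import (again, illustrative values):

    -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;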

After loading DataDump2.sql, you can take the replication coordinates (log 
file name and log position) from the commented-out CHANGE MASTER command and 
run it yourself once the data are loaded.
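
One quick way to pull those coordinates out of the dump is to grep for the 
statement and then run it by hand in the mysql client on the slave (the file 
name and coordinates below are only examples):

    grep 'CHANGE MASTER TO' DataDump2.sql

    mysql> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;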

Arguably, this is a design bug: with --master-data=1 the CHANGE MASTER command 
should appear at the bottom of the mysqldump output, not at the top. Instead 
it sits at the top and executes immediately, so the slave tries to load your 
data and read from the master's binary logs at the same time, which practically 
guarantees a duplicate key collision.

This is why importing mysqldump straight to mysql via a pipe produces the error 
you are experiencing.

Try this:

1) In mysql session 1, Run FLUSH TABLES WITH READ LOCK on the master.

2) In mysql session 1, run SHOW MASTER STATUS.

3) Record the log file and position from mysql session 1 (see the example 
output after these steps).

4) In mysql session 2, run 'STOP SLAVE;'

5) Run 'mysqldump --single-transaction -u root -pmypass -h masterHost dbName | 
mysql -u root -pmypass -h slaveHost dbName'. Let it run to completion.

Notice I did not use --master-data in the mysqldump command.

6) In mysql session 2, run "CHANGE MASTER TO MASTER_LOG_FILE='<log file from 
SHOW MASTER STATUS>', MASTER_LOG_POS=<log position from SHOW MASTER STATUS>;"

7) In mysql session 2, run 'START SLAVE;'

8) In mysql session 1, run 'UNLOCK TABLES;'
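
For reference, here is roughly what steps 2, 3 and 6 look like with made-up 
coordinates:

    mysql session 1> SHOW MASTER STATUS;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000042 |     1547 |              |                  |
    +------------------+----------+--------------+------------------+

    mysql session 2> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042',
                     MASTER_LOG_POS=1547;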

Give it a try !!!

-----Original Message-----
From: Mark Maunder [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 05, 2008 3:02 AM
To: mysql@lists.mysql.com
Subject: Possible bug in mysqldump?

Hi all,

I'm busy setting up replication and have encountered what looks like a bug
in mysqldump. The following commands work perfectly:

Running the following commands in the mysql client on the slave:
stop slave;
reset slave;
create database dbName;
CHANGE MASTER TO MASTER_HOST='masterHost', MASTER_USER='root',
MASTER_PASSWORD='mypass';

Then running the following on the command line on the slave:

mysqldump --single-transaction --master-data=1 -u root -pmypass -h
masterHost dbName >masterDB.sql ;

mysql -u root -pmypass -h slaveHost dbName< masterDB.sql

Then running the following in the mysql client on the slave:

start slave;

At this point the slave comes up perfectly and is in sync with the master.

However, if I do exactly the same thing, but import the data using a pipe
command:

mysqldump --single-transaction --master-data=1 -u root -pmypass -h
masterHost dbName | mysql -u root -pmypass -h slaveHost dbName

When I start the slave I get a duplicate key error. In other words, the
slave is trying to execute entries in the master's log that have already been
run.

I can't figure out why this is a problem, and it has forced me to store the
dump on disk as an intermediate step when setting up slaves.

The only difference between the two methods is that in the first case the
data is stored on disk and then imported via the client and in the second
case it's piped directly to the client. In both cases the data that
mysqldump produces is the same. Both include the CHANGE MASTER command that
sets the log file and position.

Is this a bug in mysqldump, or am I missing something?

Thanks in advance,

Mark.
