xy720 opened a new issue #6287:
URL: https://github.com/apache/incubator-doris/issues/6287
As we know, the binlog is the basic infrastructure of the MySQL replication
architecture. Synchronization between replicas is carried out by reading and
writing the binary log files (binlogs) stored on the MySQL master server.
In MySQL cluster mode, only one replica is responsible for writing and the
other replicas are responsible for reading. Therefore, the replication
architecture is usually composed of one master (responsible for writing) and
one or more slaves (responsible for reading).
All data changes on the master node are first written to the local binlog and
then copied to the slave nodes.
1. On the master node, the binlog files are named mysql-bin.000001,
mysql-bin.000002, and so on; MySQL automatically segments the binlog.
2. On the slave node, the binlog file name and position (offset) are saved
in a file or table to record the latest consumption location.
```
---------------------                            ---------------------
|       Slave       |           read             |       Master      |
| FileName/Position | <<<----------------------- |    Binlog Files   |
---------------------                            ---------------------
```
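The slave-side bookkeeping above can be sketched as a tiny checkpoint that
persists the binlog file name and offset. This is only an illustrative sketch,
not MySQL's actual master.info/relay-log.info format:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: a consumer persists the last-consumed binlog
// position (file name + offset) so it can resume after a restart.
public class BinlogCheckpoint {
    private final Path file;

    public BinlogCheckpoint(Path file) { this.file = file; }

    // Persist e.g. "mysql-bin.000002:4521".
    public void save(String binlogFile, long position) {
        try {
            Files.write(file, (binlogFile + ":" + position).getBytes());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Returns {fileName, offset} or null if no checkpoint exists yet.
    public String[] load() {
        try {
            if (!Files.exists(file)) return null;
            return new String(Files.readAllBytes(file)).trim().split(":");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

On resume, the consumer reads the checkpoint back and asks the master to
stream binlogs starting from that file and offset.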
In order to get binlogs, the slave node sends the "MySQL binlog dump
command" to the master node, and the dump thread of the master server then
starts pushing binlogs to the slave server continuously.
That is to say, we can get binlogs from the master node by forging this dump
command. We can use Alibaba's Canal to achieve this goal.
Canal forges the dump protocol to disguise itself as a slave node, fetching
and parsing the master server's binlog. It then stores the parsed data in an
in-memory ring queue, waiting for clients to subscribe and consume it.
Therefore, with Canal as the intermediary, FE can fetch and synchronize the
binlogs from the master node. The blueprint for the first stage is below:
```
 ----------              -----------        ------------------------------------------
 |        |  Binlog Get  |         | -----> |  ----------  ---> channel 1 ---> table1 |
 | Mysql  | -----------> |  canal  |        |  | client |  ---> channel 2 ---> table2 |
 |        |              |         | <----- |  ----------  ---> channel 3 ---> table3 |
 ----------              -----------   Ack  |                                  Doris  |
                                            ------------------------------------------
```
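Canal's in-memory store with batch get/ack semantics can be sketched roughly
as follows; the class and method names here are simplified illustrations, not
Canal's real client API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of a bounded in-memory queue of parsed binlog entries.
// A client fetches batches without acknowledging them; it acks a batch
// only after the data is durably applied downstream (e.g. loaded into
// Doris), and rolls back to re-deliver unacked entries after a failure.
public class RingStore {
    private final Deque<String> pending = new ArrayDeque<>(); // not yet fetched
    private final Deque<String> unacked = new ArrayDeque<>(); // fetched, not acked
    private final int capacity;

    public RingStore(int capacity) { this.capacity = capacity; }

    // Producer side: a parsed binlog entry arrives; reject when full.
    public boolean put(String entry) {
        if (pending.size() + unacked.size() >= capacity) return false;
        pending.addLast(entry);
        return true;
    }

    // Consumer side: fetch up to batchSize entries without acknowledging.
    public List<String> getWithoutAck(int batchSize) {
        List<String> batch = new ArrayList<>();
        while (batch.size() < batchSize && !pending.isEmpty()) {
            String e = pending.removeFirst();
            unacked.addLast(e);
            batch.add(e);
        }
        return batch;
    }

    // Ack: the first n fetched entries were applied and can be discarded.
    public void ack(int n) {
        for (int i = 0; i < n && !unacked.isEmpty(); i++) unacked.removeFirst();
    }

    // Rollback: re-queue all unacked entries, preserving original order.
    public void rollback() {
        while (!unacked.isEmpty()) pending.addFirst(unacked.removeLast());
    }
}
```

The ack/rollback pair is what lets the FE-side channels consume at their own
pace without losing entries when a channel fails mid-batch.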
The work we need to do will be divided into two stages:
stage 1:
1. Support creating consumption jobs and data channels in FE to get the
parsed data from Canal.
2. Support incrementally synchronizing the data changed in MySQL, ensuring
that no data is lost or duplicated.
stage 2:
1. Support synchronizing and executing MySQL DDL statements.
2. Embed Canal into FE, so a Canal server no longer needs to be deployed
independently.
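One common way to meet the "not lost and not duplicated" requirement in
stage 1 is at-least-once delivery from Canal combined with a position-checked,
idempotent apply on the consumer side. A hypothetical sketch (the names are
ours, not Doris's API):

```java
import java.util.List;

// Sketch: batches are tagged with the binlog offset of their last event.
// A re-delivered batch (offset <= committedOffset) is skipped, so replays
// after a crash do not duplicate data; in a real system the rows and the
// new offset would be committed in one downstream transaction.
public class IdempotentApplier {
    private long committedOffset = -1; // highest offset already applied

    public boolean apply(long batchEndOffset, List<String> rows) {
        if (batchEndOffset <= committedOffset) {
            return false; // duplicate delivery, safely ignored
        }
        // ... write rows and batchEndOffset atomically downstream ...
        committedOffset = batchEndOffset;
        return true;
    }

    public long committedOffset() { return committedOffset; }
}
```

At-least-once delivery guarantees nothing is lost; the offset check turns
replays into no-ops, which together give effectively-once application.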
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]