<p>A minimal MMM installation needs at least two database servers and one monitoring server. The MySQL cluster configured below consists of four database servers and one monitoring server:</p> <table class="inline"> <tbody style="padding: 0px"> <tr style="padding: 0px" class="firstRow"> <th class="col0 leftalign">function</th> <th class="col1 leftalign">ip</th> <th>hostname</th> <th>server id</th> </tr> <tr style="padding: 0px"> <td>monitoring host</td> <td>192.168.0.10</td> <td class="col2 leftalign">mon</td> <td class="col3 centeralign">–</td> </tr> <tr style="padding: 0px"> <td class="col0 leftalign">master 1</td> <td>192.168.0.11</td> <td class="col2 leftalign">db1</td> <td class="col3 centeralign">1</td> </tr> <tr style="padding: 0px"> <td class="col0 leftalign">master 2</td> <td>192.168.0.12</td> <td class="col2 leftalign">db2</td> <td class="col3 centeralign">2</td> </tr> <tr style="padding: 0px"> <td class="col0 leftalign">slave 1</td> <td>192.168.0.13</td> <td class="col2 leftalign">db3</td> <td class="col3 centeralign">3</td> </tr> <tr style="padding: 0px"> <td class="col0 leftalign">slave 2</td> <td>192.168.0.14</td> <td class="col2 leftalign">db4</td> <td class="col3 centeralign">4</td> </tr> </tbody> </table> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">If you are setting this up for personal study, finding five machines at once is not easy; virtual machines work just as well.</p> <p><span style="font-family: Arial;font-size: 14px;line-height: 26px">After configuration, the cluster is accessed through the virtual IPs listed below, which MMM assigns to the individual servers:</span></p> <table class="inline"> <tbody style="padding: 0px"> <tr style="padding: 0px" class="firstRow"> <th class="col0 leftalign">ip</th> <th class="col1 leftalign">role</th> <th>description</th> </tr> <tr style="padding: 0px"> <td>192.168.0.100</td> <td>writer</td> <td><span style="line-height: 26px;text-indent: 20px;font-family: 
应用程序应该连接到这个">
宋体">Applications should connect to this IP for write operations</span></td> </tr> <tr style="padding: 0px"> <td>192.168.0.101</td> <td>reader</td> <td rowspan="4"><span style="line-height: 26px;text-indent: 20px;font-family: 宋体">Applications should connect to one of these IPs for read operations</span></td> </tr> <tr style="padding: 0px"> <td>192.168.0.102</td> <td>reader</td> </tr> <tr style="padding: 0px"> <td>192.168.0.103</td> <td>reader</td> </tr> <tr style="padding: 0px"> <td>192.168.0.104</td> <td>reader</td> </tr> </tbody> </table> <p>The structure is illustrated below:</p> <p><img src="//cto.wang/usr/uploads/2016/07/20160703180324-25.jpg" title="1428809632447729.gif" alt="1.gif" /></p> <h2> <p class="headline-1 bk-sidecatalog-title"><span style="line-height: 36px;font-size: 22px">2. Basic configuration of master 1</span></p> </h2> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">First we install MySQL on all hosts:</p> <pre class="brush:python;toolbar:false">aptitude install mysql-server</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Then we edit the configuration file /etc/mysql/my.cnf and add the following lines – be sure to use a different server id on each host:</p> <pre class="brush:python;toolbar:false">server_id           = 1
log_bin             = /var/log/mysql/mysql-bin.log
log_bin_index       = /var/log/mysql/mysql-bin.log.index
relay_log           = /var/log/mysql/mysql-relay-bin
relay_log_index     = /var/log/mysql/mysql-relay-bin.index
expire_logs_days    = 10
max_binlog_size     = 100M
log_slave_updates   = 1</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Then remove the following entry:</p> <pre class="brush:python;toolbar:false">bind-address = 127.0.0.1</pre> <p 
style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Set auto_increment_increment to the number of masters:</p> <pre class="brush:python;toolbar:false">auto_increment_increment = 2</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Set auto_increment_offset to a unique number (between 1 and auto_increment_increment) on each server:</p> <pre class="brush:python;toolbar:false">auto_increment_offset = 1</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Do not bind to any specific IP; use 0.0.0.0 instead:</p> <pre class="brush:python;toolbar:false">bind-address = 0.0.0.0</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Afterwards we need to restart MySQL for our changes to take effect:</p> <pre class="brush:python;toolbar:false">/etc/init.d/mysql restart</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Now we can create the required users. We'll need three different users:</p> <table class="inline"> <tbody style="padding: 0px"> <tr style="padding: 0px" class="firstRow"> <th class="col0 leftalign">function</th> <th>description</th> <th>privileges</th> </tr> <tr style="padding: 0px"> <td class="col0 leftalign">monitor user</td> <td>used by the mmm monitor to check the health of the MySQL servers</td> <td>REPLICATION CLIENT</td> </tr> <tr style="padding: 0px"> <td class="col0 leftalign">agent user</td> <td>used by the mmm agent to change read-only mode, replication master, etc.</td> <td>SUPER, REPLICATION CLIENT, PROCESS</td> </tr> <tr style="padding: 0px"> <td>replication user</td> <td>used for replication</td> <td>REPLICATION SLAVE</td> </tr> </tbody> </table> <pre class="brush:python;toolbar:false">GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.%' IDENTIFIED BY 'agent_password';
GRANT REPLICATION 
SLAVE ON *.* TO 'replication'@'192.168.0.%' IDENTIFIED BY 'replication_password';</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em"><span style="padding: 0px">Note:</span> We could be more restrictive here about the hosts from which these users are allowed to connect: mmm_monitor is used from 192.168.0.10; mmm_agent and replication are used from 192.168.0.11 – 192.168.0.14.</p> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em"><span style="padding: 0px">Note:</span> Don't use a replication_password longer than 32 characters.</p> <h2> <p class="headline-1 bk-sidecatalog-title"><span style="line-height: 36px;font-size: 22px">4. Synchronisation of data between the databases</span></p> </h2> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">I'll assume that db1 contains the correct data. If you have an empty database, you still have to synchronize the accounts we have just created.</p> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">First make sure that no one is altering the data while we create a backup.</p> <pre class="brush:python;toolbar:false">(db1) mysql> FLUSH TABLES WITH READ LOCK;</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Then get the current position in the binary log. 
We will need these values when we set up replication on db2, db3 and db4.</p> <pre class="brush:python;toolbar:false">(db1) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em"><span style="padding: 0px">DON'T CLOSE this MySQL shell.</span> If you close it, the database lock will be released. Open a second console and type:</p> <pre class="brush:python;toolbar:false">db1$ mysqldump -u root -p --all-databases > /tmp/database-backup.sql</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Now we can remove the database lock. Go back to the first shell:</p> <pre class="brush:python;toolbar:false">(db1) mysql> UNLOCK TABLES;</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Copy the database backup to db2, db3 and db4:</p> <pre class="brush:python;toolbar:false">db1$ scp /tmp/database-backup.sql <user>@192.168.0.12:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.13:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.14:/tmp</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Then import it on db2, db3 and db4:</p> <pre class="brush:python;toolbar:false">db2$ mysql -u root -p < /tmp/database-backup.sql
db3$ mysql -u root -p < /tmp/database-backup.sql
db4$ mysql -u root -p < /tmp/database-backup.sql</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Then flush the privileges on db2, db3 and db4. 
We have altered the user table and MySQL has to reread it.</p> <pre class="brush:python;toolbar:false">(db2) mysql> FLUSH PRIVILEGES;
(db3) mysql> FLUSH PRIVILEGES;
(db4) mysql> FLUSH PRIVILEGES;</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">On Debian and Ubuntu, copy the password in /etc/mysql/debian.cnf from db1 to db2, db3 and db4. This password is used for starting and stopping MySQL.</p> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">All four databases now contain the same data. We can now set up replication to keep it that way.</p> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em"><span style="padding: 0px">Note:</span> The import only <span style="padding: 0px">adds</span> records from the dump file. You should drop all existing databases on the target hosts before importing it.</p> <h2> <p class="headline-1 bk-sidecatalog-title"><span style="line-height: 36px;font-size: 22px">5. Setup replication</span></p> </h2> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Configure replication on db2, db3 and db4 with the following commands:</p> <pre class="brush:python;toolbar:false">(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication', master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db3) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication', master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db4) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication', master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Insert the values returned by “SHOW MASTER STATUS” on db1 at the <file> and <position> tags.</p> 
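<p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Filling in the <file> and <position> tags by hand is error-prone. As an illustration of how the two commands fit together, here is a small Python sketch (the helpers parse_master_status and change_master_sql are my own names, not part of MMM or MySQL) that extracts both values from the text output of SHOW MASTER STATUS and renders the matching CHANGE MASTER TO statement:</p>

```python
def parse_master_status(output):
    """Return (log_file, log_pos) from the ASCII table printed by
    SHOW MASTER STATUS. Rows start with '|'; separators start with '+'."""
    rows = [line for line in output.splitlines() if line.startswith("|")]
    header = [c.strip() for c in rows[0].strip("|").split("|")]
    values = [c.strip() for c in rows[1].strip("|").split("|")]
    record = dict(zip(header, values))
    return record["File"], int(record["Position"])

def change_master_sql(host, log_file, log_pos):
    """Render the CHANGE MASTER TO statement used in this guide."""
    return ("CHANGE MASTER TO master_host='%s', master_port=3306, "
            "master_user='replication', "
            "master_password='replication_password', "
            "master_log_file='%s', master_log_pos=%d;"
            % (host, log_file, log_pos))

# Illustrative sample, mirroring the output shown above for db1.
sample = """\
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      374 |              |                  |
+------------------+----------+--------------+------------------+"""

log_file, log_pos = parse_master_status(sample)
print(change_master_sql("192.168.0.11", log_file, log_pos))
```

<p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">In practice you would feed it the real output, e.g. captured with mysql -e "SHOW MASTER STATUS", and run the resulting statement on db2, db3 and db4.</p>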
<p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Start the slave process on all three hosts:</p> <pre class="brush:python;toolbar:false">(db2) mysql> START SLAVE;
(db3) mysql> START SLAVE;
(db4) mysql> START SLAVE;</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Now check whether replication is running correctly on all hosts:</p> <pre class="brush:python;toolbar:false">(db2) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
   Master_Host: 192.168.0.11
   Master_User: replication
   Master_Port: 3306
 Connect_Retry: 60
…
(db3) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
   Master_Host: 192.168.0.11
   Master_User: replication
   Master_Port: 3306
 Connect_Retry: 60
…
(db4) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
   Master_Host: 192.168.0.11
   Master_User: replication
   Master_Port: 3306
 Connect_Retry: 60
…</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Now we have to make db1 replicate from db2. 
First we have to determine the values for master_log_file and master_log_pos:</p> <pre class="brush:python;toolbar:false">(db2) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |       98 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Now we configure replication on db1 with the following command:</p> <pre class="brush:python;toolbar:false">(db1) mysql> CHANGE MASTER TO master_host='192.168.0.12', master_port=3306, master_user='replication', master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Insert the values returned by “SHOW MASTER STATUS” on db2 at the <file> and <position> tags.</p> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Start the slave process:</p> <pre class="brush:python;toolbar:false">(db1) mysql> START SLAVE;</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Now check whether replication is running correctly on db1:</p> <pre class="brush:python;toolbar:false">(db1) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
   Master_Host: 192.168.0.12
   Master_User: replication
   Master_Port: 3306
 Connect_Retry: 60
…</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Replication between the nodes should now be complete. 
Try it by inserting some data into both db1 and db2 and check that the data appears on all other nodes.</p> <h2> <p class="headline-1 bk-sidecatalog-title"><span style="line-height: 36px;font-size: 22px">6. Install MMM</span></p> </h2> <h3>Create user</h3> <p>Optional: create a user that will own the MMM scripts and configuration files. This provides an easier way to manage the monitor scripts securely.</p> <pre class="brush:python;toolbar:false">useradd --comment "MMM Script owner" --shell /sbin/nologin mmmd</pre> <h3>Monitoring host</h3> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">First install the dependencies:</p> <pre class="brush:python;toolbar:false">aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl libclass-singleton-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-monitor*.deb and install them:</p> <pre class="brush:python;toolbar:false">dpkg -i mysql-mmm-common_*.deb mysql-mmm-monitor*.deb</pre> <h3>Database hosts</h3> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em"><span style="padding: 0px">On Ubuntu</span>, first install the dependencies:</p> <pre class="brush:python;toolbar:false">aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl iproute libnet-arp-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-agent*.deb and install them:</p> <pre class="brush:python;toolbar:false">dpkg -i mysql-mmm-common_*.deb mysql-mmm-agent_*.deb</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em"><span style="padding: 0px">On RedHat:</span></p> <pre 
class="brush:python;toolbar:false">yum install -y mysql-mmm-agent</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">This will take care of all the dependencies, which may include:</p> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Installed:</p> <pre class="brush:python;toolbar:false">mysql-mmm-agent.noarch 0:2.2.1-1.el5</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Dependency Installed:</p> <pre class="brush:python;toolbar:false;">libart_lgpl.x86_64 0:2.3.17-4
mysql-mmm.noarch 0:2.2.1-1.el5
perl-Algorithm-Diff.noarch 0:1.1902-2.el5
perl-DBD-mysql.x86_64 0:4.008-1.rf
perl-DateManip.noarch 0:5.44-1.2.1
perl-IPC-Shareable.noarch 0:0.60-3.el5
perl-Log-Dispatch.noarch 0:2.20-1.el5
perl-Log-Dispatch-FileRotate.noarch 0:1.16-1.el5
perl-Log-Log4perl.noarch 0:1.13-2.el5
perl-MIME-Lite.noarch 0:3.01-5.el5
perl-Mail-Sender.noarch 0:0.8.13-2.el5.1
perl-Mail-Sendmail.noarch 0:0.79-9.el5.1
perl-MailTools.noarch 0:1.77-1.el5
perl-Net-ARP.x86_64 0:1.0.6-2.1.el5
perl-Params-Validate.x86_64 0:0.88-3.el5
perl-Proc-Daemon.noarch 0:0.03-1.el5
perl-TimeDate.noarch 1:1.16-5.el5
perl-XML-DOM.noarch 0:1.44-2.el5
perl-XML-Parser.x86_64 0:2.34-6.1.2.2.1
perl-XML-RegExp.noarch 0:0.03-2.el5
rrdtool.x86_64 0:1.2.27-3.el5
rrdtool-perl.x86_64 0:1.2.27-3.el5</pre> <h3>Configure MMM</h3> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">All generic configuration options are grouped in a separate file called /etc/mysql-mmm/mmm_common.conf. 
This file will be the same on all hosts in the system:</p> <pre class="brush:python;toolbar:false">active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmmd_agent.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        replication
    replication_password    replication_password
    agent_user              mmm_agent
    agent_password          agent_password
</host>

<host db1>
    ip      192.168.0.11
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.0.12
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.0.13
    mode    slave
</host>

<host db4>
    ip      192.168.0.14
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.0.100
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2, db3, db4
    ips     192.168.0.101, 192.168.0.102, 192.168.0.103, 192.168.0.104
    mode    balanced
</role></pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Don't forget to copy this file to all other hosts (including the monitoring host).</p> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">On the database hosts we need to edit /etc/mysql-mmm/mmm_agent.conf. Change “db1” accordingly on the other hosts:</p> <pre class="brush:python;toolbar:false">include mmm_common.conf
this db1</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">On the monitor host we need to edit /etc/mysql-mmm/mmm_mon.conf:</p> <pre class="brush:python;toolbar:false">include mmm_common.conf

<monitor>
    ip              127.0.0.1
    pid_path        /var/run/mmmd_mon.pid
    bin_path        /usr/lib/mysql-mmm/
    status_path     /var/lib/misc/mmmd_mon.status
    ping_ips        192.168.0.1, 192.168.0.11, 192.168.0.12, 192.168.0.13, 192.168.0.14
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    monitor_password
</host>

debug 0</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">ping_ips lists addresses that are pinged to determine whether the monitor's network connection is OK. 
I used my switch (192.168.0.1) and the four database servers.</p> <h2 style="padding: 0px;font-size: 14px"> <p class="headline-1 bk-sidecatalog-title"><span style="line-height: 36px;font-size: 22px">7. Start MMM</span></p> </h2> <h3>Start the agents</h3> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em"><span style="padding: 0px">(On the database hosts)</span></p> <h4>Debian/Ubuntu</h4> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Edit /etc/default/mysql-mmm-agent to enable the agent:</p> <pre class="brush:python;toolbar:false">ENABLED=1</pre> <h4>Red Hat</h4> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">RHEL/Fedora does not enable packages to start at boot time by default, so you may have to turn this on manually so the agents start automatically when the server is rebooted:</p> <pre class="brush:python;toolbar:false">chkconfig mysql-mmm-agent on</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Then start it:</p> <pre class="brush:python;toolbar:false">/etc/init.d/mysql-mmm-agent start</pre> <h3>Start the monitor</h3> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em"><span style="padding: 0px">(On the monitoring host)</span> Edit /etc/default/mysql-mmm-monitor to enable the monitor:</p> <pre class="brush:python;toolbar:false">ENABLED=1</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Then start it:</p> <pre class="brush:python;toolbar:false">/etc/init.d/mysql-mmm-monitor start</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Wait a few seconds for mmmd_mon to start up; then you can use mmm_control to check the status of the cluster:</p> <pre class="brush:python;toolbar:false">mon$ mmm_control show
db1(192.168.0.11) master/AWAITING_RECOVERY. 
Roles: 
db2(192.168.0.12) master/AWAITING_RECOVERY. Roles: 
db3(192.168.0.13) slave/AWAITING_RECOVERY. Roles: 
db4(192.168.0.14) slave/AWAITING_RECOVERY. Roles: </pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Because this is the first startup, the monitor does not know our hosts, so it sets them all to state AWAITING_RECOVERY and logs a warning message:</p> <pre class="brush:python;toolbar:false">mon$ tail /var/log/mysql-mmm/mmm_mon.warn
…
2009/10/28 23:15:28 WARN Detected new host 'db1': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db1' to switch it online.
2009/10/28 23:15:28 WARN Detected new host 'db2': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db2' to switch it online.
2009/10/28 23:15:28 WARN Detected new host 'db3': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db3' to switch it online.
2009/10/28 23:15:28 WARN Detected new host 'db4': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db4' to switch it online.</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Now we set our hosts online (db1 first, because the slaves replicate from this host):</p> <pre class="brush:python;toolbar:false">mon$ mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db4
OK: State of 'db4' changed to ONLINE. Now you can wait some time and check its new roles!</pre> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Reference: http://mysql-mmm.org/mmm2:guide</p> <p style="padding-top: 0px;padding-bottom: 0px;margin-top: 0px;margin-bottom: 1em">Last modified: 10 December 2021, 10:53 AM. © Reproduction with proper attribution permitted.</p>