<p>About MMM:</p> <p>MMM (Multi-Master Replication Manager for MySQL) is a replication manager for master-master MySQL setups. According to the MMM project site, its working principle resembles LVS in that both rely on virtual IP (VIP) addresses; but where LVS works with a single component, MMM uses three: mysql-mmm, mysql-mmm-agent, and mysql-mmm-monitor. The shared mysql-mmm configuration maps each MySQL node's physical IP to the VIPs; mysql-mmm-monitor checks the health of the nodes and triggers failover; and mysql-mmm-agent runs on every DB node and carries out the role changes (moving the VIPs). Together they provide failover and read/write splitting.</p> <p>Test environment:</p> <table width="414"> <tbody> <tr class="firstRow"> <td width="393" valign="top">node1: 172.16.2.12 MySQL master DB1</td> </tr> <tr> <td width="393" valign="top">node2: 172.16.2.13 MySQL master DB2</td> </tr> <tr> <td width="393" valign="top">node3: 172.16.2.14 MySQL slave DB3</td> </tr> <tr> <td width="393" valign="top">node4: 172.16.2.15 monitor node</td> </tr> </tbody> </table> <table width="414"> <tbody> <tr class="firstRow"> <td width="393" valign="top"> <p>Writer VIP:</p> <p>172.16.2.21</p> </td> </tr> <tr> <td width="393" valign="top"> <p>Reader VIPs:</p> <p>172.16.2.22</p> <p>172.16.2.23</p> <p>172.16.2.24</p> </td> </tr> </tbody> </table> <p>Prerequisites:</p> <p>DB1 and DB2 form a master-master pair, and DB3 is a slave of DB1; make sure all of this replication is working correctly before you begin.</p> <p>I. Installing MMM</p> <p>First, create the accounts MMM uses for monitoring and for the agents (run this on the DB nodes):</p> <pre class="brush:bash;toolbar:false">MariaDB [(none)]> GRANT REPLICATION CLIENT ON *.* TO 'monitor'@'172.16.2.%' IDENTIFIED BY 'monitor';   # account used by the monitor to watch each node
MariaDB [(none)]> GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'agent'@'172.16.2.%' IDENTIFIED BY 'agent';   # account used by mysql-mmm-agent</pre> <p>1. Install mysql-mmm and mysql-mmm-agent on DB1, DB2, and DB3:</p> <pre class="brush:bash;toolbar:false">[root@node1 ~]# yum -y install mysql-mmm mysql-mmm-agent</pre> <p>2. Edit the shared mysql-mmm configuration file as follows:</p> <pre class="brush:bash;toolbar:false">[root@node1 ~]# vim /etc/mysql-mmm/mmm_common.conf
active_master_role      writer          # role held by the active master
<host default>                          # defaults for all hosts
    cluster_interface       eth0        # interface the VIPs are bound to
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid    # pid file location
    bin_path                /usr/libexec/mysql-mmm/              # path to the mmm executables
    replication_user        test        # user of the master-master/master-slave replication
    replication_password    test        # password of the replication user
    agent_user              agent       # user the agent (mysql-mmm-agent) connects as
    agent_password          agent       # password of the agent user
</host>

<host db1>                              # per-host definition
    ip      172.16.2.12                 # address of db1
    mode    master                      # role of db1
    peer    db2                         # name of the peer host
</host>

<host db2>
    ip      172.16.2.13
    mode    master
    peer    db1
</host>

<host db3>
    ip      172.16.2.14
    mode    slave                       # db3 is a slave
</host>

<role writer>                           # hosts eligible for the writer role
    hosts   db1, db2
    ips     172.16.2.21                 # writer VIP
    mode    exclusive                   # exactly one host holds the writer VIP
</role>

<role reader>                           # hosts that serve reads
    hosts   db1, db2, db3
    ips     172.16.2.22, 172.16.2.23, 172.16.2.24    # reader VIPs
    mode    balanced                    # reader VIPs are balanced across the hosts
</role></pre> <p>3. Copy this configuration file to every node, including the monitor node:</p> <pre class="brush:bash;toolbar:false">[root@node1 ~]# scp /etc/mysql-mmm/mmm_common.conf node2:/etc/mysql-mmm/
[root@node1 ~]# scp /etc/mysql-mmm/mmm_common.conf node3:/etc/mysql-mmm/
[root@node1 ~]# scp /etc/mysql-mmm/mmm_common.conf node4:/etc/mysql-mmm/</pre> <p>4. Edit the mysql-mmm-agent configuration file as follows:</p> <pre class="brush:bash;toolbar:false">[root@node1 ~]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf

# The 'this' variable refers to this server.  Proper operation requires
# that 'this' server (db1 by default), as well as all other servers, have the
# proper IP addresses set in mmm_common.conf.
this db1        # use the host name defined in mmm_common.conf: db1, db2, db3 on node1, node2, node3 respectively</pre> <p>5. Install mysql-mmm-monitor on the monitor host:</p> <pre class="brush:bash;toolbar:false">[root@node4 ~]# yum -y install mysql-mmm-monitor</pre> <p>5.1 Edit the mysql-mmm-monitor configuration file:</p> <pre class="brush:bash;toolbar:false">[root@node4 ~]# vim /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip              127.0.0.1           # keep the default
    pid_path        /var/run/mysql-mmm/mmm_mond.pid      # monitor pid file
    bin_path        /usr/libexec/mysql-mmm               # path to the monitor executables
    status_path     /var/lib/mysql-mmm/mmm_mond.status   # where the monitor state is stored
    ping_ips        172.16.2.1, 172.16.2.12, 172.16.2.13, 172.16.2.14    # hosts to check; 172.16.2.1 is the gateway, monitored here as well
    auto_set_online 60                  # seconds after which a recovered host is brought online automatically

    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing.  See the section 5.10 "Kill Host
    # Functionality" in the PDF documentation.
    #
    # kill_host_bin /usr/libexec/mysql-mmm/monitor/kill_host
    #
</monitor>

<host default>
    monitor_user        monitor         # user the monitor connects as
    monitor_password    monitor         # password of the monitor user
</host>

debug 0                                 # debugging: 0 = off, 1 = on</pre> <p>6. Start the agent on every DB node and the monitor on the monitor node:</p> <pre class="brush:bash;toolbar:false">[root@node1 ~]# /etc/init.d/mysql-mmm-agent start
[root@node2 ~]# /etc/init.d/mysql-mmm-agent start
[root@node3 ~]# /etc/init.d/mysql-mmm-agent start</pre> <pre class="brush:bash;toolbar:false">[root@node4 ~]# /etc/init.d/mysql-mmm-monitor start</pre> <p>7. Run mmm_control show on the monitor node to check the cluster state:</p> <pre class="brush:bash;toolbar:false">[root@node4 ~]# mmm_control show
  db1(172.16.2.12) master/ONLINE. Roles: reader(172.16.2.23), writer(172.16.2.21)
  db2(172.16.2.13) master/ONLINE. Roles: reader(172.16.2.24)
  db3(172.16.2.14) slave/ONLINE. Roles: reader(172.16.2.22)</pre> <p>8. Check the IP addresses on each DB node:</p> <pre class="brush:bash;toolbar:false">[root@node1 ~]# ip addr list
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:0e:c4:29 brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.12/24 brd 172.16.2.255 scope global eth0
    inet 172.16.2.23/32 scope global eth0
    inet 172.16.2.21/32 scope global eth0
    inet6 fe80::20c:29ff:fe0e:c429/64 scope link 
       valid_lft forever preferred_lft forever</pre> <pre class="brush:bash;toolbar:false">[root@node2 ~]# ip addr list
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f8:d4:88 brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.13/16 brd 172.16.255.255 scope global eth0
    inet 172.16.2.24/32 scope global eth0
    inet6 fe80::20c:29ff:fef8:d488/64 scope link 
       valid_lft forever preferred_lft forever</pre> <pre class="brush:bash;toolbar:false">[root@node3 ~]# ip addr list
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:5a:4f:48 brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.14/24 brd 172.16.2.255 scope global eth0
    inet 172.16.2.22/32 scope global eth0
    inet6
fe80::20c:29ff:fe5a:4f48/64 scope link 
       valid_lft forever preferred_lft forever</pre> <p>9. Simulate a failure of node1:</p> <pre class="brush:bash;toolbar:false">[root@node4 ~]# mmm_control set_offline db1     # force node1 offline from the monitor
OK: State of 'db1' changed to ADMIN_OFFLINE. Now you can wait some time and check all roles!</pre> <pre class="brush:bash;toolbar:false">[root@node4 ~]# mmm_control show                # check again: db1 is offline, db2 now reads and writes, db3 only reads
  db1(172.16.2.12) master/ADMIN_OFFLINE. Roles: 
  db2(172.16.2.13) master/ONLINE. Roles: reader(172.16.2.24), writer(172.16.2.21)
  db3(172.16.2.14) slave/ONLINE. Roles: reader(172.16.2.22), reader(172.16.2.23)</pre> <p>10. At this point mysql-mmm is working correctly: when a master fails, its VIPs are moved to the other master, completing the failover, and the slave switches over to the surviving master to keep replicating.</p> <p>This post only covers the MMM configuration; if you run into problems setting up the MySQL master-master or master-slave replication, or configuring MMM itself, feel free to get in touch and we can work through it together.</p> <p>Related links:</p> <p>http://mysql-mmm.org/mmm2:guide</p> <p>Last modified: December 10, 2021, 10:53 AM. © Reproduction permitted with proper attribution.</p>
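<p>A follow-up to the failover test in step 9: a node taken down with set_offline ends up in the ADMIN_OFFLINE state, which auto_set_online does not clear, so it has to be brought back by hand. A minimal sketch, run on the monitor node; the final mysql check is an assumption, since it presumes the 'test' replication user from mmm_common.conf is allowed to log in over the network:</p>

```shell
# Bring db1 back; ADMIN_OFFLINE is an administrative state, so it must
# be cleared manually with set_online (auto_set_online does not apply).
mmm_control set_online db1

# Re-run the monitor's built-in health checks (ping, mysql, replication)
mmm_control checks all

# Confirm that the reader/writer roles have been redistributed
mmm_control show

# Optional sanity check: a query through the writer VIP should reach
# whichever master currently holds the writer role.  Assumes the 'test'
# user is permitted to connect remotely (not shown in this post).
mysql -h 172.16.2.21 -u test -ptest -e "SELECT @@hostname;"
```

These commands require a live MMM cluster, so there is no standalone output to show here; after set_online, db1 should reappear as master/ONLINE in the mmm_control show listing.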