Setting Up MySQL Master-Slave Replication, Read-Write Splitting, and a High-Availability Cluster
1. Service Overview
1.1 Keepalived
Keepalived, as its name suggests, keeps services alive. Its purpose is to eliminate single points of failure: when one server goes down or fails, traffic is automatically switched to another server. Keepalived is built on the VRRP protocol. VRRP (Virtual Router Redundancy Protocol) implements router redundancy by grouping two or more routers into a single virtual device that presents one or more virtual router IPs to the outside, also known as floating IPs (VIPs).
1.2 ProxySQL
ProxySQL is a high-performance, high-availability MySQL proxy service, used here to provide database proxying and read-write splitting.
1.3 MariaDB
MariaDB is a multi-user, multi-threaded SQL database server with a client/server (C/S) architecture. It ships the transactional Maria storage engine (since renamed Aria) and uses Percona's XtraDB, a variant of InnoDB, which delivers excellent performance. MariaDB is maintained by the open-source community, released under the GPL, and is fully compatible with MySQL.
1.4 MHA
MHA (Master HA) is an open-source high-availability program for MySQL that provides automated master failover for MySQL master-slave replication architectures. When MHA detects that the master node has failed, it promotes the slave holding the most recent data to be the new master; during this process MHA gathers additional information from the other slaves to avoid consistency problems. MHA also provides online master switchover, i.e., changing the master/slave roles on demand.
2. Architecture Requirements
2.1 Requirements
Implement MySQL master-slave replication, master high availability, read-write splitting, and ProxySQL high availability.
2.2 System Version
OS: CentOS 7.3
Kernel: 3.10.0-514.el7.x86_64
Architecture: x86_64
2.3 Deployment Environment

Host   IP              Role
node1  192.168.0.51    ProxySQL + Keepalived, MHA Manager
node2  192.168.0.52    ProxySQL + Keepalived, MHA Manager
node3  192.168.0.53    MariaDB master, MHA Node
node4  192.168.0.54    MariaDB slave, MHA Node
node5  192.168.0.55    MariaDB slave, MHA Node
node6  192.168.0.56    MariaDB slave, MHA Node
VIP    192.168.0.100   Floating IP fronting ProxySQL
3. Checking the Linux System Environment
3.1 Check server hardware information
dmidecode | grep "Product Name"
3.2 Check the CPU model
lscpu | grep "Model name"
3.3 Check the number of CPUs
lscpu | grep "^CPU(s)"
3.4 Check the memory size
free -h | grep Mem | awk '{print $2}'
4. System Initialization
4.1 Flush firewall rules
iptables -F
4.2 Stop the firewall or define proper rules (stopped here for convenience in this lab)
systemctl stop firewalld.service
systemctl disable firewalld.service
4.3 Disable SELinux
sed -i 's@SELINUX=enforcing@SELINUX=disabled@g' /etc/selinux/config
setenforce 0
4.4 Disable NetworkManager
systemctl stop NetworkManager
systemctl disable NetworkManager
4.5 Time synchronization
ntpdate 172.16.0.1
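ntpdate performs only a one-shot sync. To keep the clocks aligned for the duration of the lab, the sync can be repeated from cron; a minimal sketch, assuming 172.16.0.1 stays reachable as the lab NTP server:

# Re-sync against the lab NTP server every 5 minutes
echo '*/5 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null' >> /var/spool/cron/root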
4.6 Edit the hosts file (all nodes)
192.168.0.51 node1
192.168.0.52 node2
192.168.0.53 node3
192.168.0.54 node4
192.168.0.55 node5
192.168.0.56 node6
4.7 Set the hostname on each node
hostnamectl set-hostname node1    # on node1
hostnamectl set-hostname node2    # on node2
hostnamectl set-hostname node3    # on node3
hostnamectl set-hostname node4    # on node4
hostnamectl set-hostname node5    # on node5
hostnamectl set-hostname node6    # on node6
4.8 Distribute SSH keys (run on node3)
# Generate a key pair
[root@node3 ~]# ssh-keygen -t rsa -P ''
# Push the public key to every host
ssh-copy-id -i .ssh/id_rsa.pub root@node3
ssh-copy-id -i .ssh/id_rsa.pub root@node1
ssh-copy-id -i .ssh/id_rsa.pub root@node2
ssh-copy-id -i .ssh/id_rsa.pub root@node4
ssh-copy-id -i .ssh/id_rsa.pub root@node5
ssh-copy-id -i .ssh/id_rsa.pub root@node6
# Copy the private and public key to the other five nodes so every node can reach the others with key-based authentication
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node1:/root/.ssh/
id_rsa                100% 1675     1.6KB/s   00:00
id_rsa.pub            100%  392     0.4KB/s   00:00
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node2:/root/.ssh/
id_rsa                100% 1675     1.6KB/s   00:00
id_rsa.pub            100%  392     0.4KB/s   00:00
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node4:/root/.ssh/
id_rsa                100% 1675     1.6KB/s   00:00
id_rsa.pub            100%  392     0.4KB/s   00:00
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node5:/root/.ssh/
id_rsa                100% 1675     1.6KB/s   00:00
id_rsa.pub            100%  392     0.4KB/s   00:00
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node6:/root/.ssh/
id_rsa                100% 1675     1.6KB/s   00:00
id_rsa.pub            100%  392     0.4KB/s   00:00
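Before moving on, it is worth confirming that key-based login really works from node3 to every node; a quick sketch using the hostnames from the hosts file above:

# BatchMode makes ssh fail instead of prompting if a key is missing
for n in node1 node2 node3 node4 node5 node6; do
    ssh -o BatchMode=yes root@$n hostname
done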
5. Environment Deployment
5.1 Master-Slave Replication (node3, node4, node5, node6)
1. Install MariaDB (all 4 nodes)
yum install mariadb mariadb-server -y
2. Configure the MariaDB service on the master node (node3)
vim /etc/my.cnf.d/server.cnf

[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=1
log_bin=log-bin
3. Start the service
systemctl start mariadb.service
4. Create the administration and replication accounts on the master
GRANT ALL PRIVILEGES ON *.* TO 'michael'@'192.168.0.%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repuser'@'192.168.0.%' IDENTIFIED BY 'repass';
FLUSH PRIVILEGES;
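A simple way to confirm the replication account is usable is to log in with it from one of the future slaves, using the credentials created above:

mysql -urepuser -prepass -h 192.168.0.53 -e 'SELECT USER();'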
5. Configure the MariaDB service on the slave nodes (node4, node5, node6; node3 is the master)
# node4
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=11
relay_log=relay-log
read_only=ON

# node5
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=12
relay_log=relay-log
read_only=ON

# node6
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=13
relay_log=relay-log
read_only=ON
6. Start the service on all slave nodes
systemctl start mariadb.service
7. View the binary logs on the master
MariaDB [(none)]> SHOW MASTER LOGS;
+----------------+-----------+
| Log_name       | File_size |
+----------------+-----------+
| log-bin.000001 |     30331 |
| log-bin.000002 |   1038814 |
| log-bin.000003 |       899 |
+----------------+-----------+
3 rows in set (0.00 sec)
8. Establish the replication relationship
# On each slave, point it at the master:
CHANGE MASTER TO
    MASTER_HOST='192.168.0.53',
    MASTER_USER='repuser',
    MASTER_PASSWORD='repass',
    MASTER_LOG_FILE='log-bin.000003',
    MASTER_LOG_POS=899;

# Start the slave threads:
MariaDB [(none)]> START SLAVE;
Query OK, 0 rows affected (0.00 sec)

# Check the slave status to confirm replication is working (do this on every slave):
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.53
                  Master_User: repuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: log-bin.000003
          Read_Master_Log_Pos: 899
               Relay_Log_File: relay-log.000002
                Relay_Log_Pos: 527
        Relay_Master_Log_File: log-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 899
              Relay_Log_Space: 815
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
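Instead of reading the full status on every slave, the two replication threads can be spot-checked from node3 in one loop; a sketch that assumes the local root account can log in to MariaDB without a password, as it can right after this lab's installation:

for n in node4 node5 node6; do
    echo "== $n =="
    # Both threads must report Yes for replication to be healthy
    ssh root@$n "mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running:'"
done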
9. Test master-slave replication
# On the master, create a database named hellodb:
MariaDB [(none)]> CREATE DATABASE hellodb;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hellodb            |
| mysql              |
| performance_schema |
| test               |
+--------------------+

# On each slave, hellodb now shows up as well, so replication is configured correctly:
MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hellodb            |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)
5.2 Read-Write Splitting
1. Install Keepalived, ProxySQL, and the MariaDB client (node1, node2)
yum install keepalived mariadb -y
wget -c ftp://172.16.0.1/pub/Sources/7.x86_64/proxysql/proxysql-1.3.6-1-centos7.x86_64.rpm
yum install proxysql-1.3.6-1-centos7.x86_64.rpm -y
2. Configure ProxySQL
[root@node1 keepalived]# cat /etc/proxysql.cnf
datadir="/var/lib/proxysql"

# Admin interface section
admin_variables=
{
    admin_credentials="admin:admin"
    mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
}

# MySQL variables section
mysql_variables=
{
    threads=4
    max_connections=2048
    default_query_delay=0
    default_query_timeout=36000000
    have_compress=true
    poll_timeout=2000
    interfaces="0.0.0.0:3306;/tmp/proxysql.sock"
    default_schema="information_schema"
    stacksize=1048576
    server_version="5.5.30"
    connect_timeout_server=3000
    monitor_history=600000
    monitor_connect_interval=60000
    monitor_ping_interval=10000
    monitor_read_only_interval=1500
    monitor_read_only_timeout=500
    ping_interval_server=120000
    ping_timeout_server=500
    commands_stats=true
    sessions_sort=true
    connect_retries_on_failure=10
}

# MySQL backend servers section
mysql_servers =
(
    {
        address = "192.168.0.53"    # no default, required; if port is 0, address is interpreted as a Unix domain socket
        port = 3306
        hostgroup = 0               # master node
        status = "ONLINE"           # default: ONLINE
        weight = 1                  # default: 1
        compression = 0             # default: 0
    },
    {
        address = "192.168.0.54"
        port = 3306
        hostgroup = 1               # slave node
        status = "ONLINE"
        weight = 1
        compression = 0
    },
    {
        address = "192.168.0.55"
        port = 3306
        hostgroup = 1               # slave node
        status = "ONLINE"
        weight = 1
        compression = 0
    },
    {
        address = "192.168.0.56"
        port = 3306
        hostgroup = 1               # slave node
        status = "ONLINE"
        weight = 1
        compression = 0
    }
)

# MySQL users section
mysql_users:
(
    {
        username = "michael"        # no default, required
        password = "password"       # default: ''
        default_hostgroup = 0       # default: 0
        max_connections = 1000
        default_schema = "test"
        active = 1                  # default: 1
    }
)

mysql_query_rules:
(
)

scheduler=
(
)

# Replication hostgroups section: hostgroup 0 takes writes, hostgroup 1 takes reads
mysql_replication_hostgroups=
(
    {
        writer_hostgroup = 0
        reader_hostgroup = 1
    }
)
3. Start ProxySQL
[root@node1]# service proxysql start
Starting ProxySQL: DONE!
[root@node2 init.d]# ss -tnl
State      Recv-Q Send-Q   Local Address:Port   Peer Address:Port
LISTEN     0      128          127.0.0.1:6032              *:*
LISTEN     0      128                  *:22                *:*
LISTEN     0      100          127.0.0.1:25                *:*
LISTEN     0      1024                 *:3306              *:*
LISTEN     0      1024                 *:3306              *:*
LISTEN     0      1024                 *:3306              *:*
LISTEN     0      1024                 *:3306              *:*
LISTEN     0      128                 :::22               :::*
LISTEN     0      100                ::1:25               :::*
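With the service up, the backend list can be inspected over ProxySQL's admin interface (port 6032, credentials admin:admin as set in proxysql.cnf above); the four 3306 listeners in the ss output likely correspond to the four worker threads. A sketch:

mysql -uadmin -padmin -h 127.0.0.1 -P 6032 \
      -e 'SELECT hostgroup_id, hostname, port, status FROM mysql_servers;'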
4. Connection test from node1: the connection succeeds
[root@node1 ~]# mysql -umichael -ppassword -h 192.168.0.51
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 387
Server version: 5.5.52-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
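Because every backend was given a distinct server_id earlier (1 on the master, 11-13 on the slaves), asking for @@server_id through the proxy reveals which backend actually served the statement. With user michael's default_hostgroup set to 0 and no query rules defined, a plain SELECT like this is expected to land on the writer hostgroup:

mysql -umichael -ppassword -h 192.168.0.51 -e 'SELECT @@server_id;'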
The node2 configuration is identical to node1's.
5.3 ProxySQL High Availability
1. keepalived.conf on node1
[root@node1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1
    vrrp_mcast_group4 224.1.101.114
}

# Manual-maintenance check: if /etc/keepalived/down exists, the check fails,
# the priority drops, and the VIP moves to the other server; otherwise it passes.
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -10
    fall 1
    rise 1
}

# Health check: verify that the proxysql process is alive
vrrp_script chk_proxysql {
    script "killall -0 proxysql && exit 0 || exit 1"
    interval 1
    weight -10
    fall 1
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass fs3D4Gr
    }
    virtual_ipaddress {
        192.168.0.100/24 dev ens33 label ens33:0
    }
    # Invoke the check scripts
    track_script {
        chk_down
        chk_proxysql
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
Because a virtual IP (VIP) is defined here, the IPs inside proxysql.cnf must be updated to match; note that both node1 and node2 need this change.
mysql_ifaces="192.168.0.100:6032;/tmp/proxysql_admin.sock"
interfaces="192.168.0.100:3306;/tmp/proxysql.sock"
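One caveat when binding ProxySQL to the VIP: on the node that does not currently hold 192.168.0.100, the listener cannot bind to that address unless the kernel permits non-local binds. The usual sysctl toggle, applied on both node1 and node2:

echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
sysctl -p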
2. The notification script (notify.sh), used to send alerts when a failover happens.
[root@node1 keepalived]# cat notify.sh
#!/bin/bash
#
contact='root@localhost'

notify() {
    mailsubject="vrrp: $(hostname) to be $1"
    mailbody="$(hostname) to be $1, vrrp transition, $(date)."
    echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
    notify master
    service proxysql start
    ;;
backup)
    notify backup
    service proxysql start
    ;;
fault)
    notify fault
    service proxysql stop
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
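The script can be exercised by hand before keepalived ever calls it; for example:

# Should mail root@localhost and (re)start proxysql
bash /etc/keepalived/notify.sh master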
3. keepalived.conf on node2
[root@node2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node2
    vrrp_mcast_group4 224.1.101.114
}

# Manual-maintenance check: if /etc/keepalived/down exists, the check fails,
# the priority drops, and the VIP moves to the other server; otherwise it passes.
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -10
    fall 1
    rise 1
}

# Health check: verify that the proxysql process is alive
vrrp_script chk_proxysql {
    script "killall -0 proxysql && exit 0 || exit 1"
    interval 1
    weight -10
    fall 1
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass fs3D4Gr
    }
    virtual_ipaddress {
        192.168.0.100/24 dev ens33 label ens33:0
    }
    # Invoke the check scripts
    track_script {
        chk_down
        chk_proxysql
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
The notify.sh script is identical to the one above. High availability is now fully configured; you can verify it by stopping the service on one node and watching the failover, as sketched below.
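A minimal failover test, assuming node1 currently holds the VIP:

# On node1: stop the monitored service; chk_proxysql fails and the priority drops by 10
service proxysql stop
ip addr show ens33          # 192.168.0.100 should disappear from node1
# On node2:
ip addr show ens33          # the VIP should now be bound as ens33:0

Note that notify.sh restarts proxysql on the demoted node, so the VIP may fail back to node1 shortly afterwards; /var/log/messages records each VRRP transition.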
5.4 Master High Availability with MHA
MHA has two roles: MHA Manager (the management node) and MHA Node (the data node).
The MHA Manager is usually deployed on a separate machine and can manage multiple master/slave clusters; each master/slave cluster is called an application.
MHA Node runs on every MySQL server (master/slave/manager); it speeds up failover with scripts that monitor the server and can parse and purge logs.
MHA relies on SSH communication between the nodes, which we already set up during environment preparation.
1. Download the MHA packages (manager and node)
wget -c http://192.168.0.123/install_package/down/Sources/mha/mha4mysql-manager-0.56-0.el6.noarch.rpm
wget -c http://192.168.0.123/install_package/down/Sources/mha/mha4mysql-node-0.56-0.el6.noarch.rpm
2. node1 and node2 act as the MHA management nodes in a highly available pair, so install all of the MHA RPM packages on node1 and node2
yum install -y mha4mysql*
3. Install the MHA node RPM package on all other nodes
yum install mha4mysql-node-0.56-0.el6.noarch.rpm -y
For the MHA manager to manage the MHA data nodes, a configuration file defining what to manage is required. The global configuration file defaults to /etc/masterha_default.cnf and is optional. Since we have only a single application group, we can define our own configuration file.
This article uses the configuration file at the path below.
4. Run the following on both node1 and node2
mkdir /etc/masterha
vim /etc/masterha/app1.cnf
5. Contents of the custom management configuration file app1.cnf
[root@node1 ~]# cat /etc/masterha/app1.cnf
[server default]
user=michael
password=password
manager_workdir=/data/masterha/app1
manager_log=/data/masterha/app1/manager.log
remote_workdir=/data/masterha/app1
ssh_user=root
repl_user=repuser
repl_password=repass
ping_interval=1

[server1]
hostname=192.168.0.53
candidate_master=1

[server2]
hostname=192.168.0.54
candidate_master=1

[server3]
hostname=192.168.0.55
candidate_master=1

[server4]
hostname=192.168.0.56
candidate_master=1
6. Verify that SSH communication between the nodes works
masterha_check_ssh --conf=/etc/masterha/app1.cnf
# The last line of output should read as follows, which means the check passed:
[info] All SSH connection tests passed successfully.
7. Adjust the configuration of the master (node3) and the slaves (node4, 5, 6)
# Master configuration:
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=1
log_bin=log-bin
relay_log=relay-bin
8. Adjust the configuration required on all slave nodes
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=11            # adjust the id for each node (11/12/13 as configured earlier)
relay_log=relay-log
read_only=ON
log_bin=log-bin
relay_log_purge=0       # keep relay logs; MHA relies on them during failover
9. Check that replication across the cluster is healthy
masterha_check_repl --conf=/etc/masterha/app1.cnf
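If the replication cluster is healthy, the last line of output should look roughly like this (wording as in mha4mysql-manager 0.56):

MySQL Replication Health is OK.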
10. If the check passes, start MHA
mkdir -p /data/masterha/app1    # make sure the working/log directory exists first
nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &
11. Check the current status of the master node
[root@node1 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:75846) is running(0:PING_OK), master:192.168.0.53
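Section 1.4 also mentioned MHA's online master switchover. With monitoring stopped, it can be driven by hand; a sketch assuming 192.168.0.54 is the desired new master (flags as in mha4mysql-manager 0.56):

masterha_stop --conf=/etc/masterha/app1.cnf
masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive \
    --new_master_host=192.168.0.54 --orig_master_is_new_slave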
The configuration is now complete. At this point you can simulate a failure of the database master (node3) to verify that MHA failover works. Once the master fails for any reason, another node is automatically promoted to master, and you should immediately take a backup on the new master. The failed node, meanwhile, should be repaired and brought back into service as soon as possible. Once it is healthy again, import the database backup and change its MySQL configuration to that of a slave so that it rejoins the cluster as a slave. Master-slave configuration was covered above, so I won't repeat it here; a rough sketch of the recovery follows.
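An outline of that recovery, assuming node4 (192.168.0.54) was promoted and root can run mysql locally; the binlog file and position below are placeholders that must be read from the dump header:

# On the new master: take a consistent dump that records the binlog coordinates
mysqldump --all-databases --single-transaction --master-data=2 > /tmp/full.sql
# On the repaired node3 (already reconfigured as a slave): restore and re-attach
mysql < /tmp/full.sql
mysql -e "CHANGE MASTER TO MASTER_HOST='192.168.0.54', MASTER_USER='repuser',
          MASTER_PASSWORD='repass', MASTER_LOG_FILE='log-bin.000001', MASTER_LOG_POS=245;
          START SLAVE;"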