
MySQL Read/Write Splitting with Amoeba & mysql-proxy

Published: 2020-08-10 13:57:13  Source: ITPUB blog  Reads: 154  Author: 彩倫  Category: MySQL databases

---- Introduction to master-slave replication

Reference: https://www.cnblogs.com/lin3615/p/5684891.html

1. Read/write splitting approaches

Two approaches are covered here: mysql-proxy and Amoeba.

 

Amoeba

Pros: implements read/write splitting and load balancing directly, requires no application code changes, and offers flexible data-routing options.

Cons: accounts are managed by Amoeba itself, separately from back-end database privilege management, so privilege handling is not very flexible.

 

mysql-proxy

Pros: implements read/write splitting and load balancing directly, requires no application code changes; master and slave use the same account.

Cons: character-set issues, requires Lua scripting, still only an alpha release, and adds noticeable latency.

 

2. With read/write splitting, replication delay is a major concern

Run show slave status on the slave to check replication state:

Master_Log_File: name of the master binary log file the slave I/O thread is currently reading.

Read_Master_Log_Pos: position up to which the I/O thread has read in the current master binary log.

Relay_Log_File: name of the relay log file the SQL thread is currently reading and executing.

Relay_Log_Pos: position up to which the SQL thread has read and executed in the current relay log.

Relay_Master_Log_File: name of the master binary log file containing the most recent events executed by the SQL thread.

Slave_IO_Running: whether the I/O thread is started and successfully connected to the master.

Slave_SQL_Running: whether the SQL thread is started.

Seconds_Behind_Master: gap, in seconds, between the slave SQL thread and the slave I/O thread.

 

Signs of slave replication delay:

1. Seconds_Behind_Master is non-zero, and the value can be large.

2. The binlog numbers shown by Relay_Master_Log_File and Master_Log_File differ greatly, meaning the binlog is not being applied on the slave in time: the binlog most recently executed lags far behind the binlog the I/O thread is currently reading.

3. A large number of mysql-relay-log files accumulate in the slave's data directory. Relay logs are deleted automatically once applied, so a large backlog of them means replication is badly delayed.
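The checks above can be folded into a small script. Below is a minimal Python sketch (not from the original article) that classifies a slave's health from the SHOW SLAVE STATUS fields just described; the input format and return values are illustrative assumptions:

```python
# Minimal sketch: classify replication health from SHOW SLAVE STATUS fields.
# The field names match MySQL's output; the verdict strings are illustrative.

def check_slave(status):
    """status: dict mapping SHOW SLAVE STATUS column -> value (strings)."""
    problems = []
    if status.get("Slave_IO_Running") != "Yes":
        problems.append("I/O thread not running")
    if status.get("Slave_SQL_Running") != "Yes":
        problems.append("SQL thread not running")
    lag = status.get("Seconds_Behind_Master")
    if lag is None:
        problems.append("lag unknown (replication broken?)")
    elif int(lag) > 0:
        problems.append("behind master by %s s" % lag)
    # A large gap between the file the I/O thread reads and the file the
    # SQL thread executes also indicates apply-side delay (sign 2 above).
    if status.get("Master_Log_File") != status.get("Relay_Master_Log_File"):
        problems.append("SQL thread is on an older binlog than the I/O thread")
    return problems or ["ok"]
```

In practice the dict would be filled from the output of `SHOW SLAVE STATUS\G` over a client connection.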

 

3. How master-slave replication delay arises

How master-slave replication works

The master writes all changes sequentially to its binlog. The slave reads the "write-operation binlog" from the master with a single thread and replays it locally (random writes), which keeps the two logically consistent.

MySQL replication is single-threaded. The master writes one binlog for all DDL and DML, and since the binlog is written sequentially this is very efficient; the slave's I/O thread (Slave_IO_Running) fetching the log from the master is also efficient. The problem comes next: the slave's SQL thread (Slave_SQL_Running) must replay the master's DDL and DML on the slave, and that I/O is random rather than sequential, which costs far more. Other queries running on the slave may also take locks. Because the SQL thread is single-threaded, one long-running DDL blocks every statement queued behind it until it finishes, and delay results. The master can execute concurrently but the slave SQL thread cannot, so a DDL that takes a while on the master produces lag when the slave replays it.
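The single-threaded bottleneck can be shown with a toy queueing model (an illustration only, not MySQL internals): when each event takes longer to replay than the interval at which the concurrent master commits them, the lag grows without bound.

```python
# Illustrative model: a single-threaded SQL thread applying events that
# arrive from a concurrent master. Events arrive every `interval` seconds
# but each takes `apply_cost` seconds to replay one at a time.

def final_lag(n_events, interval, apply_cost):
    finish = 0.0            # time the SQL thread finishes the current event
    lag = 0.0
    for i in range(n_events):
        arrival = i * interval          # master commits concurrently
        start = max(arrival, finish)    # slave applies strictly serially
        finish = start + apply_cost
        lag = finish - arrival          # how far behind this event landed
    return lag

# With apply_cost <= interval the lag stays bounded; with apply_cost > interval
# it grows linearly with the number of events.
```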

 

Causes of replication delay

When the master's TPS is high, the volume of DML/DDL it produces exceeds what the slave's single SQL thread can absorb, and delay appears; lock waits caused by large queries on the slave can also contribute.

Primary cause: heavy read/write load on the database from the business side, with high CPU load, network load, and random disk I/O.

Secondary causes: the performance cost of reading and writing the binlog, and network transfer latency.

 

4. Mitigating replication delay

Architecture

1. Shard the persistence layer across databases so MySQL can scale out horizontally and spread the load.

2. Split reads and writes per database: one master, several slaves; the master takes writes, the slaves take reads.

3. Put a cache layer between the application and MySQL.

4. Put different business workloads on different MySQL machines.

5. Use better hardware for the slave than for the master.

In short: the lower the pressure on MySQL, the smaller the delay.

 

Hardware

Use better servers.

 

5. Speeding up replication on the slave

1. Set sync_binlog to 0 on the slave.

2. Leave --log-slave-updates off, so the slave does not write updates received from the master into its own binary log.

3. Disable the binlog on the slave entirely.

4. On the slave, if the storage engine is InnoDB, set innodb_flush_log_at_trx_commit = 2.

 

Optimizing at the filesystem level

On the master:

Disable the atime attribute on Linux/Unix filesystems. Every time a file is read, the OS writes the access time back to disk; for database files that are read constantly this is unnecessary and only burdens the disk subsystem and hurts I/O performance. You can stop the OS from writing atime via a mount option. On Linux:

Open /etc/fstab and add the noatime option:

/dev/sdb1 /data reiserfs noatime 1 2

Then remount the filesystem:

# mount -o remount /data

 

The master takes writes and needs stronger data safety, so settings such as sync_binlog=1 and innodb_flush_log_at_trx_commit = 1 are appropriate there.

The slave does not need that level of safety: you can set sync_binlog to 0 or disable the binlog altogether, and set innodb_flush_log_at_trx_commit to 0, to speed up SQL execution.

1. sync_binlog=1

MySQL provides the sync_binlog parameter to control how the binlog is flushed to disk.

By default sync_binlog=0, meaning MySQL does not force binlog flushes and leaves cache flushing to the filesystem. This gives the best performance but also the highest risk: if the system crashes, everything still in the binlog cache is lost.

With sync_binlog>0, MySQL calls the filesystem flush after every sync_binlog transaction commits. The safest value is sync_binlog=1: the binlog is flushed on every commit, which is the safest but also the most expensive setting. Even then, if the host OS is damaged or power is lost suddenly, the system can still lose up to one transaction.

Although the binlog is sequential I/O, sync_binlog=1 with many transactions committing concurrently still hurts MySQL's I/O badly.

Group-commit patches alleviate this, but flushing that frequently is still very costly. For highly concurrent transactional systems, write throughput with sync_binlog=0 versus sync_binlog=1 can differ by a factor of five or more.

So many MySQL DBAs set sync_binlog not to the safest value 1 but to 2 or 0, trading some consistency for higher concurrency and performance.

By default the binlog is not synced to disk on every write, so if the OS or the machine (not just the MySQL server) crashes, the last statements in the binlog may be lost. To prevent this, set the sync_binlog global variable (1 is the safest, and slowest, value) so the binlog is synced to disk after every N binlog writes. Even with sync_binlog=1, a crash can still leave the table contents and the binlog inconsistent.

 

2. innodb_flush_log_at_trx_commit (this one helps a lot)

Complaining that InnoDB is 100x slower than MyISAM? Then you probably forgot to tune this value. The default, 1, means every transaction commit (and every statement outside a transaction) flushes the log to disk, which is expensive, battery-backed write cache or not. Setting it to 2 is fine for many applications, particularly ones migrated from MyISAM tables: it writes to the OS cache instead of flushing to disk.

The log is still flushed to disk about once per second, so you generally lose no more than 1-2 seconds of updates. Setting it to 0 is a bit faster but less safe: transactions can be lost even if only mysqld crashes. With 2, data is lost only if the whole operating system goes down.
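Putting the two knobs together, the asymmetric master/slave settings described above might look like this in my.cnf (illustrative values; adjust to your own installation):

```ini
# Master: favour durability
[mysqld]
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1

# Slave (separate my.cnf on the slave host): favour replication speed
# [mysqld]
# sync_binlog = 0
# innodb_flush_log_at_trx_commit = 2
```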

 

3. The ls command can list a file's atime, ctime, and mtime

atime: access time, updated when the file is read or executed.

ctime: change time (not "create time"), updated along with the inode whenever the file content, owner, permissions, or link settings change.

mtime: modified time, updated when the file content is written.

ls -lc filename   lists the file's ctime

ls -lu filename   lists the file's atime

ls -l filename    lists the file's mtime

stat filename     lists atime, mtime, and ctime together

atime is not necessarily updated on every access:

on an ext3 filesystem mounted with the noatime option, atime is not updated at all.

All three timestamps live in the inode. If mtime or atime changes, the inode changes, and since the inode changed, ctime changes with it.

Using noatime as a mount option keeps the filesystem from doing that extra write, improving read performance.

 

4. Shard databases and tables to reduce the volume of data that replication must copy.

 

 

 

 

 

 

I. MySQL master-slave setup

 

1. Host definitions

Master    192.168.12.56   3306

Slave 1   192.168.12.56   3307

Slave 2   192.168.12.55   3306

 

 

2. Master configuration

====192.168.12.56

# vi /etc/my.cnf

[mysqld]

server-id       = 1

log-bin=mysql-bin

 

 

------ Restart MySQL

# /etc/init.d/mysqld restart

# netstat -nltpd |grep mysql

 

------ Verify the settings

# ls -lrth /app/mysql/data/|grep mysql-bin

-rw-rw---- 1 mysql mysql  107 Oct 29 22:35 mysql-bin.000001

-rw-rw---- 1 mysql mysql   19 Oct 29 22:35 mysql-bin.index

 

# mysql -uroot -p111111 -e 'show variables;'|egrep "log_bin|server_id"

log_bin ON

server_id       1

 

 

3. Slave configuration

3.1 Slave 1

====192.168.12.56

# vi /mysqldata/3307/my3307.cnf

[mysqld]

server-id       = 2

 

# mysqladmin -uroot -p1234567 -S /mysqldata/3307/mysql3307.sock shutdown

# mysqld_safe --defaults-file=/mysqldata/3307/my3307.cnf 2>&1 >/dev/null &

 

# mysql -uroot -p1234567 -S /mysqldata/3307/mysql3307.sock -e 'show variables like "server%"'

+---------------+-------+

| Variable_name | Value |

+---------------+-------+

| server_id     | 2     |

+---------------+-------+

 

 

 

 

3.2 Slave 2

====192.168.12.55

# vi /etc/my.cnf

[mysqld]

server-id       = 3


# /etc/init.d/mysqld restart

# netstat -nltpd |grep mysql

 

# mysql -uroot -p111111 -e 'show variables like "server%"'

+---------------+-------+

| Variable_name | Value |

+---------------+-------+

| server_id     | 3     |

+---------------+-------+

 

 

4. Create the replication account rep on the master

# mysql -uroot -p111111

 

mysql> grant replication slave on *.* to 'rep'@'192.168.12.%' identified by '123456';

mysql> flush privileges;

 

---- replication slave is the privilege required for MySQL replication; do not grant ALL here.

---- *.* means all databases and tables; you can also restrict replication to a specific database or table, e.g. shaw_gbk_db.test_tb.

---- 'rep'@'192.168.12.%': rep is the replication account, 192.168.12.% the authorized network range.

 

mysql> select user,host from mysql.user where user='rep';

+------+--------------+

| user | host         |

+------+--------------+

| rep  | 192.168.12.% |

+------+--------------+

1 row in set (0.04 sec)

mysql> show grants for rep@'192.168.12.%'\G

*************************** 1. row ***************************

Grants for rep@192.168.12.%: GRANT REPLICATION SLAVE ON *.* TO 'rep'@'192.168.12.%' IDENTIFIED BY PASSWORD '*6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9'

 

 

5. Building the slaves

5.1 ### Slave 1 ### (method 1)

5.1.1 Export data from the master

------ Lock the master. After locking, keep this session open; the lock is also subject to the following timeouts:

mysql> show variables like '%timeout';

| interactive_timeout        | 28800    |

| wait_timeout               | 28800    |

 

mysql> flush table with read lock;

Note: MySQL 5.1 and 5.5 write the lock statement differently:

5.1: flush tables with read lock

5.5: flush table with read lock

 

 

------ Check the current binlog position

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000001 |      334 |              |                  |

+------------------+----------+--------------+------------------+

 

------ In a new window, export the data

# mkdir /bak

# mysqldump -uroot -p111111 -A -B --events|gzip >/bak/mysql_bak_$(date +%F).sql.gz

 

# ll -lrht /bak/

total 144K

-rw-r--r-- 1 root root 141K Jan 10 07:58 mysql_bak_2018-01-10.sql.gz

 

------ After the export, check that the binlog position is unchanged, then unlock

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000001 |      334 |              |                  |

+------------------+----------+--------------+------------------+

mysql> unlock tables;

 

------- Create a database; after the slave is built, check whether it syncs over automatically

mysql> create database shaw_db;

Query OK, 1 row affected (0.02 sec)

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000001 |      423 |              |                  |

+------------------+----------+--------------+------------------+

mysql> show processlist;

+----+------+-------------+---------+-------------+------+-----------------------------------------------------------------------+------------------+

| Id | User | Host        | db      | Command     | Time | State                                                                 | Info             |

+----+------+-------------+---------+-------------+------+-----------------------------------------------------------------------+------------------+

|  4 | root | localhost   | shaw_db | Query       |    0 | NULL                                                                  | show processlist |

| 10 | rep  | xuan2:37165 | NULL    | Binlog Dump |  406 | Master has sent all binlog to slave; waiting for binlog to be updated | NULL             |

+----+------+-------------+---------+-------------+------+-----------------------------------------------------------------------+------------------+

 

 

5.1.2 Import the data on slave 1

# gzip -d /bak/mysql_bak_2018-01-10.sql.gz

# ls -lrht /bak/

total 516K

-rw-r--r-- 1 root root 516K Jan 10 07:58 mysql_bak_2018-01-10.sql

# mysql -uroot -p1234567 -S /mysqldata/3307/mysql3307.sock </bak/mysql_bak_2018-01-10.sql

mysql> show slave status\G

Empty set (0.00 sec)

 

5.1.3 Run change master on slave 1

# mysql -uroot -p1234567 -S /mysqldata/3307/mysql3307.sock<<EOF

  change master to

master_host='192.168.12.56',

master_port=3306,

master_user='rep',

master_password='123456',

master_log_file='mysql-bin.000001',

master_log_pos=334;

EOF

 

====> Slave 1 now has a master.info file

# ls -lrht /mysqldata/3307/data/

total 29M

drwx------ 2 mysql mysql 4.0K Oct 29 21:45 performance_schema

-rw-rw---- 1 mysql mysql 5.0M Oct 29 21:47 ib_logfile1

-rw-rw---- 1 mysql mysql  18M Oct 29 22:42 ibdata1

-rw-rw---- 1 mysql mysql 5.0M Oct 29 22:43 ib_logfile0

drwx------ 2 mysql root  4.0K Oct 29 23:42 mysql

-rw-rw---- 1 mysql mysql   78 Oct 29 23:45 master.info

-rw-rw---- 1 mysql mysql  107 Oct 29 23:45 mysql3307-relay-bin.000001

-rw-rw---- 1 mysql mysql   29 Oct 29 23:45 mysql3307-relay-bin.index

-rw-r----- 1 mysql root  6.0K Oct 29 23:45 xuan2.err

-rw-rw---- 1 mysql mysql   52 Oct 29 23:45 relay-log.info

 

# cat master.info

18

mysql-bin.000001

334

192.168.12.56

rep

123456

3306

60

0
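Since master.info is a plain one-field-per-line file, its leading fields can be read with a few lines of Python. This sketch assumes the MySQL 5.5-era field order shown in the file above; later fields (SSL settings and so on) are ignored:

```python
# Sketch: read the leading fields of a MySQL 5.5-style master.info file.
# The first line records the number of lines in the file.

FIELDS = ["lines_in_file", "master_log_file", "master_log_pos",
          "master_host", "master_user", "master_password",
          "master_port", "connect_retry"]

def parse_master_info(text):
    values = [line.strip() for line in text.splitlines() if line.strip()]
    return dict(zip(FIELDS, values))

info = parse_master_info("""18
mysql-bin.000001
334
192.168.12.56
rep
123456
3306
60
""")
```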

 

5.1.4 Start replication on slave 1

mysql> start slave;

 

mysql> show slave status\G

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

        Seconds_Behind_Master: 0   (seconds behind the master)

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.12.56

                  Master_User: rep

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: mysql-bin.000001

          Read_Master_Log_Pos: 423

               Relay_Log_File: orcl-relay-bin.000002

                Relay_Log_Pos: 342

        Relay_Master_Log_File: mysql-bin.000001

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

              Replicate_Do_DB:

          Replicate_Ignore_DB:

           Replicate_Do_Table:

       Replicate_Ignore_Table:

      Replicate_Wild_Do_Table:

  Replicate_Wild_Ignore_Table:

                   Last_Errno: 0

                   Last_Error:

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 423

              Relay_Log_Space: 497

              Until_Condition: None

               Until_Log_File:

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File:

           Master_SSL_CA_Path:

              Master_SSL_Cert:

            Master_SSL_Cipher:

               Master_SSL_Key:

        Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error:

               Last_SQL_Errno: 0

               Last_SQL_Error:

  Replicate_Ignore_Server_Ids:

             Master_Server_Id: 1

1 row in set (0.00 sec)

 

mysql> show processlist;

+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+

| Id | User        | Host      | db   | Command | Time | State                                                                       | Info             |

+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+

|  9 | root        | localhost | NULL | Query   |    0 | NULL                                                                        | show processlist |

| 10 | system user |           | NULL | Connect |  347 | Waiting for master to send event                                            | NULL             |

| 11 | system user |           | NULL | Connect |  289 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL             |

+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+

3 rows in set (0.00 sec)

 

---- Check: the database created earlier is there

mysql> show databases like 'shaw_db';

+--------------------+

| Database (shaw_db) |

+--------------------+

| shaw_db            |

+--------------------+

1 row in set (0.00 sec)

 

5.1.5 Test replication again

---- On the master

mysql> use shaw_db;

Database changed

mysql> create table t_zhong as select * from mysql.user;

Query OK, 3 rows affected (0.02 sec)

Records: 3  Duplicates: 0  Warnings: 0

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000001 |      537 |              |                  |

+------------------+----------+--------------+------------------+

1 row in set (0.00 sec)

 

---- On slave 1

mysql> select count(*) from shaw_db.t_zhong;

+----------+

| count(*) |

+----------+

|        3 |

+----------+

1 row in set (0.00 sec)

 

 

 

5.1.6 Log files produced on the master and slave

---- With binlog enabled, the master produces binlog files

# ls -lrt |grep bin

-rw-rw---- 1 mysql mysql   33 Jan 10 07:13 mysql-bin.index

-rw-rw---- 1 mysql mysql  827 Jan 10 10:57 mysql-bin.000001

 

# cat mysql-bin.index

/mysqldata/3309/mysql-bin.000001

 

# mysqlbinlog mysql-bin.000001|more

/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;

/*!40019 SET @@session.max_insert_delayed_threads=0*/;

/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;

DELIMITER /*!*/;

# at 4

#180110  7:13:05 server id 1  end_log_pos 107   Start: binlog v 4, server v 5.5.32-log created 1801

10  7:13:05 at startup

# Warning: this binlog is either in use or was not closed properly.

ROLLBACK/*!*/;

BINLOG '

...

 

 

---- The slave has its binlog disabled, but it does have relay logs, and master.info records the replication state

# ls -lrt |grep relay

-rw-rw---- 1 mysql mysql      155 Jan 10 08:20 orcl-relay-bin.000001

-rw-rw---- 1 mysql mysql       48 Jan 10 08:20 orcl-relay-bin.index

-rw-rw---- 1 mysql mysql      746 Jan 10 10:57 orcl-relay-bin.000002

-rw-rw---- 1 mysql mysql       49 Jan 10 10:57 relay-log.info

 

# cat  orcl-relay-bin.index   ## relay log index file

./orcl-relay-bin.000001

./orcl-relay-bin.000002

 

# cat relay-log.info   ## relay-log.info belongs to the SQL thread

./orcl-relay-bin.000002

746    ## Relay_Log_Pos: 746, the position in the relay log the slave SQL thread has applied up to

mysql-bin.000001   ## the master binlog file the slave has fetched from

827   ## Exec_Master_Log_Pos: 827, the position within that master binlog

 

# mysqlbinlog orcl-relay-bin.000002 |more

/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;

/*!40019 SET @@session.max_insert_delayed_threads=0*/;

/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;

DELIMITER /*!*/;

# at 4

#180110  8:20:25 server id 2  end_log_pos 107   Start: binlog v 4, server v 5.5.32 created 180110 

8:20:25

...

 

# cat master.info   ## master.info belongs to the I/O thread

18    ## number of lines in this file

mysql-bin.000001  ## current master binlog file

827   ## position within the master binlog

192.168.12.55   ## master address

rep      ## replication account

123456   ## replication password

3309  ## master port

60    ## retry interval (seconds) after a replication failure

0

 

 

 

 

 


 

 

5.2 ### Slave 2 ### (method 2, recommended)

5.2.1 Export data from the master

====192.168.12.56

---- Export with --master-data=1

With --master-data=1 in the dump, the change master to run on the slave afterwards does not need the following parameters; with --master-data=2 it does:

master_log_file='mysql-bin.000001',

master_log_pos=334;

 

## Export the data

# mysqldump -uroot -p111111 -A -B -F --master-data=1 --events --single-transaction|gzip > /bak/mysqld_$(date +%F).sql.gz
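To verify what --master-data recorded, you can pull the CHANGE MASTER TO statement out of the dump header. A small Python sketch (a hypothetical helper, not part of mysqldump):

```python
# Sketch: find the CHANGE MASTER TO statement that --master-data embeds
# near the top of a dump. With --master-data=2 the statement is written as
# a comment ("-- CHANGE MASTER TO ..."); this helper finds it either way.

def find_change_master(dump_text):
    for line in dump_text.splitlines():
        stripped = line.lstrip("- ").strip()
        if stripped.upper().startswith("CHANGE MASTER TO"):
            return stripped
    return None

sample = """-- MySQL dump
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=107;
CREATE DATABASE shaw_db;
"""
```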

 

===============================================================

## Parameter notes:

-B  dump the named databases, adding CREATE DATABASE and USE statements

--compact  strip comments; fine for debugging output, not for production

-A  dump all databases

-F  flush (rotate) the binlog before dumping

--master-data  record the binlog file name and position in the dump

-x, --lock-all-tables

   Locks all tables across all databases. This is achieved by taking a global read lock for the duration of the whole dump. Automatically turns --single-transaction and --lock-tables off.

-l, --lock-tables  Lock all tables for read

-d  dump table structure only

-t  dump data only

--single-transaction  suitable for backing up InnoDB transactional databases

When backing up InnoDB tables, --single-transaction is usually enabled to keep the backup consistent: it sets the session's isolation level to REPEATABLE READ so the dump does not see data committed by other sessions after it starts.

 

MyISAM backup command:

mysqldump -uroot -p111111 -A -B -F --master-data=2 -x --events|gzip > /opt/all.sql.gz

 

InnoDB backup command (recommended):

mysqldump -uroot -p111111 -A -B -F --master-data=2 --events --single-transaction |gzip > /opt/all.sql.gz

===============================================================

 

 

5.2.2 Import the data on slave 2

====192.168.12.55

# scp /bak/mysqld_2016-10-30.sql.gz root@192.168.12.55:/root

# gunzip mysqld_2016-10-30.sql.gz

# ls -lrht |grep mysqld

-rw-r--r--   1 root root  520K Feb 27 14:12 mysqld_2016-10-30.sql

 

# mysql -uroot -p111111 <mysqld_2016-10-30.sql

 

# mysql -uroot -p111111

mysql> show slave status\G

Empty set (0.00 sec)

 

5.2.3 Run change master on slave 2

# mysql -uroot -p111111 <<EOF

  change master to

master_host='192.168.12.56',

master_port=3306,

master_user='rep',

master_password='123456';

EOF

 

 

mysql> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State:

                  Master_Host: 192.168.12.55

                  Master_User: rep

                  Master_Port: 3309

                Connect_Retry: 60

              Master_Log_File:

          Read_Master_Log_Pos: 4

               Relay_Log_File: mysql3308-relay-bin.000001

                Relay_Log_Pos: 4

        Relay_Master_Log_File:

             Slave_IO_Running: No

            Slave_SQL_Running: No

              Replicate_Do_DB:

          Replicate_Ignore_DB:

           Replicate_Do_Table:

       Replicate_Ignore_Table:

      Replicate_Wild_Do_Table:

  Replicate_Wild_Ignore_Table:

                   Last_Errno: 0

                   Last_Error:

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 0

              Relay_Log_Space: 107

              Until_Condition: None

               Until_Log_File:

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File:

           Master_SSL_CA_Path:

              Master_SSL_Cert:

            Master_SSL_Cipher:

               Master_SSL_Key:

        Seconds_Behind_Master: NULL

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error:

               Last_SQL_Errno: 0

               Last_SQL_Error:

  Replicate_Ignore_Server_Ids:

             Master_Server_Id: 0

1 row in set (0.00 sec)

 

5.2.4 Start replication on slave 2

mysql> start slave;

 

mysql> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.12.55

                  Master_User: rep

                  Master_Port: 3309

                Connect_Retry: 60

              Master_Log_File: mysql-bin.000002

          Read_Master_Log_Pos: 107

               Relay_Log_File: mysql3308-relay-bin.000002

                Relay_Log_Pos: 480

        Relay_Master_Log_File: mysql-bin.000001

             Slave_IO_Running: Yes

            Slave_SQL_Running: No

              Replicate_Do_DB:

          Replicate_Ignore_DB:

           Replicate_Do_Table:

       Replicate_Ignore_Table:

      Replicate_Wild_Do_Table:

  Replicate_Wild_Ignore_Table:

                   Last_Errno: 1007

                   Last_Error: Error 'Can't create database 'shaw_db'; database exists' on query. Default database: 'shaw_db'. Query: 'create database shaw_db'

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 334

              Relay_Log_Space: 1642

              Until_Condition: None

               Until_Log_File:

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File:

           Master_SSL_CA_Path:

              Master_SSL_Cert:

            Master_SSL_Cipher:

               Master_SSL_Key:

        Seconds_Behind_Master: NULL

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error:

               Last_SQL_Errno: 1007

               Last_SQL_Error: Error 'Can't create database 'shaw_db'; database exists' on query. Default database: 'shaw_db'. Query: 'create database shaw_db'

  Replicate_Ignore_Server_Ids:

             Master_Server_Id: 1

1 row in set (0.00 sec)

 

Cause of the error: an object (database, table, ...) was first created on the slave, and then an object with the same name was created on the master; that produces the error above.

 

5.2.5 Fixing the replication error

---- There are two fixes. The first:

stop slave;

set global sql_slave_skip_counter = 1;

start slave;

 

---- The second, which requires restarting MySQL:

skip the given error numbers in my.cnf:

slave-skip-errors = 1032,1062,1007

1032: record not found

1062: duplicate key value, insert failed

1007: database already exists, create failed

1050: table already exists

 

Failures caused by duplicate inserts can generally be ignored. You can also ignore all errors with slave-skip-errors = all, but that is not recommended.
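The skip decision can be written down as a simple lookup (a sketch, using the error numbers listed above; which errors are safe to skip is a policy choice for your data, not a universal rule):

```python
# Sketch: a skip policy for slave SQL-thread errors, using the MySQL error
# numbers discussed above. This mapping is an illustration, not a rule.

SKIPPABLE = {
    1032: "record not found",
    1062: "duplicate entry on insert",
    1007: "database already exists",
    1050: "table already exists",
}

def should_skip(errno):
    """Return True if the error number is covered by the skip policy."""
    return errno in SKIPPABLE
```

An operator script could consult Last_SQL_Errno from SHOW SLAVE STATUS and, only when should_skip is true, issue the stop/skip/start sequence shown above.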

 

 

=== After the fix:

mysql> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.12.56

                  Master_User: rep

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: mysql-bin.000002

          Read_Master_Log_Pos: 107

               Relay_Log_File: orcl-relay-bin.000007

                Relay_Log_Pos: 253

        Relay_Master_Log_File: mysql-bin.000002

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

              Replicate_Do_DB:

          Replicate_Ignore_DB:

           Replicate_Do_Table:

       Replicate_Ignore_Table:

      Replicate_Wild_Do_Table:

  Replicate_Wild_Ignore_Table:

                   Last_Errno: 0

                   Last_Error:

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 107

              Relay_Log_Space: 554

              Until_Condition: None

               Until_Log_File:

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File:

           Master_SSL_CA_Path:

              Master_SSL_Cert:

            Master_SSL_Cipher:

               Master_SSL_Key:

        Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error:

               Last_SQL_Errno: 0

               Last_SQL_Error:

  Replicate_Ignore_Server_Ids:

             Master_Server_Id: 1

1 row in set (0.00 sec)

 

 

5.2.6 Test replication

---- On the master

mysql> use shaw_db;

Database changed

mysql> create table t_user as select * from mysql.user;

 

---- Check on slave 1

mysql> show slave status\G

mysql> select count(*) from shaw_db.t_user;

+----------+

| count(*) |

+----------+

|        3 |

+----------+

1 row in set (0.01 sec)

 

---- Check on slave 2

mysql> show slave status\G

mysql> select count(*) from shaw_db.t_user;

+----------+

| count(*) |

+----------+

|        3 |

+----------+

1 row in set (0.01 sec)

 

 

5.3 Create a user for applications to connect with

mysql> grant all privileges on *.* to user01@'192.168.12.%' identified by "111111";

Query OK, 0 rows affected (0.00 sec)

 

mysql> flush privileges;

Query OK, 0 rows affected (0.00 sec)

 

mysql> show grants for user01@'192.168.12.%'\G

*************************** 1. row ***************************

Grants for user01@192.168.12.%: GRANT ALL PRIVILEGES ON *.* TO 'user01'@'192.168.12.%' IDENTIFIED BY PASSWORD '*FD571203974BA9AFE270FE62151AE967ECA5E0AA'

1 row in set (0.00 sec)

 

###### Note:

========> When logging in from the local machine without -h, authentication matches against the localhost entry by default, so the login fails; specify the IP instead:

# mysql -uuser01 -p111111

ERROR 1045 (28000): Access denied for user 'user01'@'localhost' (using password: YES)

 

# mysql -uuser01 -p111111 -h 192.168.12.56 -P3306

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 20

Server version: 5.5.32-log Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit

Bye

 

=====> The fix used here is to also create the user for localhost

mysql> grant all privileges on *.* to user01@'localhost' identified by "111111";

Query OK, 0 rows affected (0.00 sec)

 

mysql> flush privileges;

 

# mysql -uuser01 -p111111

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 23

Server version: 5.5.32-log Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

 

 

mysql> select user,host from mysql.user;

+--------+--------------+

| user   | host         |

+--------+--------------+

| root   | 127.0.0.1    |

| rep    | 192.168.12.% |

| user01 | 192.168.12.% |

| root   | localhost    |

| user01 | localhost    |

+--------+--------------+

5 rows in set (0.00 sec)

 

 

5.4 Insert distinct rows on the master and each slave

---- so that later, during read/write splitting, you can tell which server a connection landed on

---- On the master: 192.168.12.56 3306

# mysql -uroot -p111111

mysql> use shaw_db;

mysql> create table t_usedb (id int,name varchar(20));

mysql> insert into t_usedb values(1,'master');

 

---- On slave 1: 192.168.12.55 3306

# mysql -uroot -p111111

mysql> use shaw_db;

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

+------+--------+

mysql> insert into t_usedb values(2,'slave1');

 

---- On slave 2: 192.168.12.56 3307

# mysql -uroot -p111111 -S /mysqldata/3307/mysql3307.sock

mysql> select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

+------+--------+

mysql> insert into shaw_db.t_usedb values(3,'slave2');

 

 

 

 

II. Amoeba-based MySQL read/write splitting

References:

http://blog.chinaunix.net/uid-20639775-id-154600.html

https://www.cnblogs.com/liuyisai/p/6009379.html

https://www.cnblogs.com/xyp-blog123/p/6684118.html

 

1. About Amoeba

"Amoeba" is the amoeba, the shape-shifting single-celled organism.

The main options for implementing MySQL read/write splitting today:

a. In application code: plenty of ready-made examples online, but it is complex, and adding a slave means changing code on many servers.

b. mysql-proxy: its read/write splitting is implemented in Lua scripts, Lua script development has not kept pace, and there is no polished ready-made script, so using it in production carries real risk; many also report that mysql-proxy's performance is not high.

c. Build your own middleware: a high barrier and high development cost that small companies typically cannot afford.

d. Alibaba's open-source Amoeba project: load balancing, high availability, SQL filtering, read/write splitting, and routing of queries to target databases, with very simple installation and configuration.

 

The Amoeba project focuses on distributed database proxy development. It sits between the client and the DB server(s) and is transparent to the client. It offers load balancing, high availability, SQL filtering, read/write splitting, routing of queries to target databases, and fan-out of a request to multiple databases with merged results.

It mainly addresses:

- reducing the complexity that data sharding brings to multi-database architectures

- providing sharding rules while reducing their impact on applications

- reducing the number of connections between clients and databases

- read/write splitting
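Amoeba's loadbalance=1 strategy (configured later in dbServers.xml) is plain round-robin over a read pool. Conceptually the routing looks like this sketch (an illustration in Python, not Amoeba code):

```python
# Conceptual sketch of read/write routing with a round-robin read pool,
# in the spirit of Amoeba's loadbalance=1 strategy. Not Amoeba code.
import itertools

class Router:
    def __init__(self, write_server, read_servers):
        self.write_server = write_server
        self._reads = itertools.cycle(read_servers)   # round-robin iterator

    def route(self, sql):
        # Plain SELECTs rotate across the read pool; writes (and anything
        # ambiguous) go to the master.
        if sql.lstrip().lower().startswith("select"):
            return next(self._reads)
        return self.write_server

r = Router("writedb", ["slave1", "slave2"])
```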

 


 

2. Deployment environment

Master    192.168.12.56   3306

Slave 1   192.168.12.56   3307

Slave 2   192.168.12.55   3306

Amoeba    192.168.12.55   8066

 

The Amoeba framework is built on JDK 1.5 and uses JDK 1.5 features, so a Java runtime is required; Java SE 1.5 or later is recommended.

 

 

3. Install the Java environment

Official download:

http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

http://download.oracle.com/otn-pub/java/jdk/8u161-b12/2f38c3b165be4555a1fa6e98c45e0808/jdk-8u161-linux-x64.rpm?AuthParam=1519720220_d473abf93bf78651f1ec927514473d86

 

A binary (tarball) installation is also possible; here we install the RPM package:

========192.168.12.55

# rpm -ivh jdk-8u161-linux-x64.rpm

Preparing...                ########################################### [100%]

   1:jdk1.8                 ########################################### [100%]

Unpacking JAR files...

        tools.jar...

        plugin.jar...

        javaws.jar...

        deploy.jar...

        rt.jar...

        jsse.jar...

        charsets.jar...

        localedata.jar...

 

# rpm -qa|grep jdk1.8

jdk1.8-1.8.0_161-fcs.x86_64

# rpm -ql jdk1.8-1.8.0_161-fcs.x86_64

/usr/java/jdk1.8.0_161/…….

 

## Configure the Java environment variables

# vim /etc/profile

#set java environment

export JAVA_HOME=/usr/java/jdk1.8.0_161

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

 

# source /etc/profile

 

# java -version

java version "1.8.0_161"

Java(TM) SE Runtime Environment (build 1.8.0_161-b12)

Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)

 

 

4. Install Amoeba

Official download:

https://sourceforge.net/projects/amoeba/

https://nchc.dl.sourceforge.net/project/amoeba/Amoeba%20for%20mysql/3.x/amoeba-mysql-3.0.5-RC-distribution.zip

 

Installing Amoeba is trivial: just unzip it. Here it is unpacked into /usr/local/amoeba, and installation is done.

 

========192.168.12.55

# mkdir /usr/local/amoeba/

# unzip -d /usr/local/amoeba/ amoeba-mysql-3.0.5-RC-distribution.zip

 

# cd /usr/local/amoeba/

# ls

benchmark  bin  conf  jvm.properties  lib

 

 

 

5. Configure Amoeba

5.1 The configuration files

Amoeba has seven configuration files:

#/*  Main configuration ($AMOEBA_HOME/conf/amoeba.xml): basic service parameters such as Amoeba's listen address, port, authentication method, client username/password, thread counts, timeouts, and the locations of the other files.

#/*  Database server configuration ($AMOEBA_HOME/conf/dbServers.xml): the back-end database servers Amoeba proxies: host IP, port, username, password, etc.

#/*  Sharding rules ($AMOEBA_HOME/conf/rule.xml).

#/*  Database function map ($AMOEBA_HOME/conf/functionMap.xml): how Amoeba parses database functions.

#/*  Rule function map ($AMOEBA_HOME/conf/ruleFunctionMap.xml): handlers for user-defined functions used in the sharding rules.

#/*  Access list ($AMOEBA_HOME/conf/access_list.conf): which client IPs may (or may not) connect to Amoeba.

#/*  Logging configuration ($AMOEBA_HOME/conf/log4j.xml): Amoeba's log levels and output.

 

 

5.2 The files we actually edit

In this environment Amoeba's configuration lives in /usr/local/amoeba/conf. There are many files, but for read/write splitting alone only two need editing: dbServers.xml and amoeba.xml. To add IP access control, also edit access_list.conf.

 

#### Note:

In these files "<!--   -->" marks a comment, so a block like the following is in fact commented out:

                <!--

                <property name="writePool">master</property>

                <property name="readPool">myslave</property>

                -->

 

 

 

 

 

 

First, dbServers.xml:

[root@bogon amoeba]# cat conf/dbServers.xml

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">

<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">

 

        <!--

            Each dbServer needs to be configured into a Pool,

            If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:

             add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig

             such as 'multiPool' dbServer  

        -->

       

    <dbServer name="abstractServer" abstractive="true">

        <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">

            <property name="connectionManager">${defaultManager}</property>

            <property name="sendBufferSize">64</property>

            <property name="receiveBufferSize">128</property>

               

            <!-- mysql port -->

            <property name="port">3306</property>  #Port of the back-end MySQL servers Amoeba connects to; default 3306

           

            <!-- mysql schema -->

            <property name="schema">testdb</property>  #Default schema. Through Amoeba, tables must be referenced explicitly as dbname.tablename; "use dbname" is not supported, because statements are dispatched across the back-end dbServers

           

            <!-- mysql user -->

            <property name="user">test1</property>  #Account and password Amoeba uses to connect to the back-end databases; create this user on every back end and authorize the Amoeba server to connect

           

            <property name="password">111111</property>

        </factoryConfig>

 

        <poolConfig class="com.meidusa.toolkit.common.poolable.PoolableObjectPool">

            <property name="maxActive">500</property>  #最大連接數,默認500

            <property name="maxIdle">500</property>    #最大空閑連接數

            <property name="minIdle">1</property>    #最小空閑連接數

            <property name="minEvictableIdleTimeMillis">600000</property>

            <property name="timeBetweenEvictionRunsMillis">600000</property>

            <property name="testOnBorrow">true</property>

            <property name="testOnReturn">true</property>

            <property name="testWhileIdle">true</property>

        </poolConfig>

    </dbServer>

 

    <dbServer name="writedb"  parent="abstractServer">  #設置一個后端可寫的dbServer,這里定義為writedb,這個名字可以任意命名,后面還會用到

        <factoryConfig>

            <!-- mysql ip -->

            <property name="ipAddress">192.168.2.204</property> #設置后端可寫dbserver

        </factoryConfig>

    </dbServer>

   

    <dbServer name="slave"  parent="abstractServer">  #設置后端可讀dbserver

        <factoryConfig>

            <!-- mysql ip -->

            <property name="ipAddress">192.168.2.205</property>

        </factoryConfig>

    </dbServer>

   

    <dbServer name="myslave" virtual="true">  #設置定義一個虛擬的dbserver,實際上相當于一個dbserver組,這里將可讀的數據庫ip統一放到一個組中,將這個組的名字命名為myslave

        <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">

            <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->

            <property name="loadbalance">1</property>  #選擇調度算法,1表示輪詢,2表示權重,3表示HA, 這里選擇1

           

            <!-- Separated by commas,such as: server1,server2,server1 -->

            <property name="poolNames">slave</property>  #myslave組成員

        </poolConfig>

    </dbServer>

       

</amoeba:dbServers>
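上面myslave虛擬池的loadbalance=1(輪詢)調度,概念上可以用下面的Python片段示意。這不是Amoeba的源碼,成員名僅作演示,實際成員由poolNames決定:

```python
# 輪詢(ROUNDROBIN)調度的概念性示意:依次從池成員中選出下一個只讀后端。
from itertools import cycle

class RoundRobinPool:
    def __init__(self, members):
        self._it = cycle(members)           # 無限循環遍歷成員

    def next_backend(self):
        return next(self._it)

pool = RoundRobinPool(["slave1", "slave2"])     # 對應 poolNames=slave1,slave2
print([pool.next_backend() for _ in range(4)])  # ['slave1', 'slave2', 'slave1', 'slave2']
```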

 

 

 

 

另一個配置文件amoeba.xml

[root@bogon amoeba]# cat conf/amoeba.xml

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">

<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">

 

    <proxy>

   

        <!-- service class must implements com.meidusa.amoeba.service.Service -->

        <service name="Amoeba for Mysql" class="com.meidusa.amoeba.mysql.server.MySQLService">

            <!-- port -->

            <property name="port">8066</property>    #設置amoeba監聽的端口,默認是8066

           

            <!-- bind ipAddress -->    #下面配置監聽的接口,如果不設置,默認監聽所有的IP

            <!--

            <property name="ipAddress">127.0.0.1</property>

             -->

           

            <property name="connectionFactory">

                <bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">

                    <property name="sendBufferSize">128</property>

                    <property name="receiveBufferSize">64</property>

                </bean>

            </property>

           

            <property name="authenticateProvider">

                <bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">

                   

 

# 提供客戶端連接amoeba時需要使用這里設定的賬號 (這里的賬號密碼和amoeba連接后端數據庫服務器的密碼無關)

 

                    <property name="user">root</property>    

 

                   

                    <property name="password">123456</property>

                   

                    <property name="filter">

                        <bean class="com.meidusa.toolkit.net.authenticate.server.IPAccessController">

                            <property name="ipFile">${amoeba.home}/conf/access_list.conf</property>

                        </bean>

                    </property>

                </bean>

            </property>

           

        </service>

       

        <runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">

           

            <!-- proxy server client process thread size -->

            <property name="executeThreadSize">128</property>

           

            <!-- per connection cache prepared statement size  -->

            <property name="statementCacheSize">500</property>

           

            <!-- default charset -->

            <property name="serverCharset">utf8</property>

           

            <!-- query timeout( default: 60 second , TimeUnit:second) -->

            <property name="queryTimeout">60</property>

        </runtime>

       

    </proxy>

   

    <!--

        Each ConnectionManager will start as thread

        manager responsible for the Connection IO read , Death Detection

    -->

    <connectionManagerList>

        <connectionManager name="defaultManager" class="com.meidusa.toolkit.net.MultiConnectionManagerWrapper">

            <property name="subManagerClassName">com.meidusa.toolkit.net.AuthingableConnectionManager</property>

        </connectionManager>

    </connectionManagerList>

   

        <!-- default using file loader -->

    <dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">

        <property name="configFile">${amoeba.home}/conf/dbServers.xml</property>

    </dbServerLoader>

   

    <queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">

        <property name="ruleLoader">

            <bean class="com.meidusa.amoeba.route.TableRuleFileLoader">

                <property name="ruleFile">${amoeba.home}/conf/rule.xml</property>

                <property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>

            </bean>

        </property>

        <property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>

        <property name="LRUMapSize">1500</property>

        <property name="defaultPool">writedb</property>  #設置amoeba默認的池,這里設置為writedb

       

       

        <property name="writePool">writedb</property>  #這兩個選項默認是注釋掉的,需要取消注釋,這里用來指定前面定義好的兩個讀寫池

        <property name="readPool">myslave</property>   #

       

        <property name="needParse">true</property>

    </queryRouter>

</amoeba:configuration>

 

 

 

5.3 配置腳本dbServers.xml

首先注意到abstractServer是一個抽象的父配置,供實際的server繼承使用。每個server都需要后端mysql的連接端口、數據庫名稱、帳號、密碼、連接池大小等參數,把它們寫在抽象父類中,便于統一管理;如果不用繼承的方式,也可以把這些參數復制到每一個具體的server中。

 

再創建兩個實際的server,都繼承相同的父類,唯一不同的是它們的IP。

 

再配置一個虛擬的server,相當于把若干實際server自由組合成一個組,下面的配置中命名為myslave,名稱可以自定義。

 

# cd /usr/local/amoeba/conf/

# vi dbServers.xml

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">

<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">

 

                <!--

                        Each dbServer needs to be configured into a Pool,

                        If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:

                         add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig

                         such as 'multiPool' dbServer  

                -->

 

        <dbServer name="abstractServer" abstractive="true">

                <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">

                        <property name="connectionManager">${defaultManager}</property>

                        <property name="sendBufferSize">64</property>

                        <property name="receiveBufferSize">128</property>

 

                        <!-- mysql port -->

                        <property name="port">3306</property>

 

                        <!-- mysql schema -->

                        <property name="schema">shaw_db</property>

 

                        <!-- mysql user -->

                        <property name="user">user01</property>

 

                        <property name="password">111111</property>

                </factoryConfig>

 

                <poolConfig class="com.meidusa.toolkit.common.poolable.PoolableObjectPool">

                        <property name="maxActive">500</property>

                        <property name="maxIdle">500</property>

                        <property name="minIdle">1</property>

                        <property name="minEvictableIdleTimeMillis">600000</property>

                        <property name="timeBetweenEvictionRunsMillis">600000</property>

                        <property name="testOnBorrow">true</property>

                        <property name="testOnReturn">true</property>

                        <property name="testWhileIdle">true</property>

                </poolConfig>

        </dbServer>

 

        <dbServer name="master"  parent="abstractServer">

                <factoryConfig>

                        <!-- mysql ip -->

                        <property name="ipAddress">192.168.12.56</property>

                </factoryConfig>

        </dbServer>

 

        <dbServer name="slave1"  parent="abstractServer">

                <factoryConfig>

                        <!-- mysql ip -->

                        <property name="ipAddress">192.168.12.55</property>

                </factoryConfig>

        </dbServer>

 

 

        <dbServer name="myslave" virtual="true">

                <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">

                        <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->

                        <property name="loadbalance">1</property>

 

                        <!-- Separated by commas,such as: server1,server2,server1 -->

                        <property name="poolNames">slave1</property>

                </poolConfig>

        </dbServer>

 

</amoeba:dbServers>

 

 

 

 

 

 

5.4 配置腳本amoeba.xml

配置amoeba的監聽端口,以及登錄amoeba所用的帳號密碼;它與dbServers.xml中的帳號密碼不同,dbServers.xml里配置的是amoeba連接后端mysql所用的帳號密碼。

 

設置默認池(defaultPool)為master,以及讀和寫策略;這里讀策略是從虛擬服務中輪詢。當一條sql語句解析不出、無法正確路由到writePool和readPool時,就會路由到defaultPool,所以defaultPool一般配置為主節點。
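這種路由策略可以用下面的Python片段粗略示意。Amoeba實際依賴完整的SQL解析器,這里僅用前綴匹配演示“讀走readPool、寫走writePool、解析不出落到defaultPool”的思路,并非其真實實現:

```python
# 讀寫路由策略的簡化示意:SELECT/SHOW 走 readPool,常見 DML/DDL 走 writePool,
# 無法識別的語句落到 defaultPool(因此 defaultPool 一般配置為主節點)。
READ_PREFIXES = ("select", "show")
WRITE_PREFIXES = ("insert", "update", "delete", "create", "alter", "drop", "replace")

def route(sql, write_pool="master", read_pool="myslave", default_pool="master"):
    word = sql.strip().split(None, 1)[0].lower() if sql.strip() else ""
    if word in READ_PREFIXES:
        return read_pool
    if word in WRITE_PREFIXES:
        return write_pool
    return default_pool  # 解析不出的語句走默認池,保證不出錯

print(route("SELECT * FROM t_usedb"))          # myslave
print(route("insert into t_usedb values(1)"))  # master
print(route("XA BEGIN 'x1'"))                  # master(落到 defaultPool)
```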

 

# cd /usr/local/amoeba/conf

# vi amoeba.xml

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">

<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">

 

        <proxy>

 

                <!-- service class must implements com.meidusa.amoeba.service.Service -->

                <service name="Amoeba for Mysql" class="com.meidusa.amoeba.mysql.server.MySQLService">

                        <!-- port -->

                        <property name="port">8066</property>

 

                        <!-- bind ipAddress -->

                        <property name="ipAddress">192.168.12.55</property>

 

                        <property name="connectionFactory">

                                <bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">

                                        <property name="sendBufferSize">128</property>

                                        <property name="receiveBufferSize">64</property>

                                </bean>

                        </property>

 

                        <property name="authenticateProvider">

                                <bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">

 

                                        <property name="user">amobeba</property>

 

                                        <property name="password">123456</property>

 

                                        <property name="filter">

                                                <bean class="com.meidusa.toolkit.net.authenticate.server.IPAccessController">

                                                        <property name="ipFile">${amoeba.home}/conf/access_list.conf</property>

                                                </bean>

                                        </property>

                                </bean>

                        </property>

 

                </service>

 

                <runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">

 

                        <!-- proxy server client process thread size -->

                        <property name="executeThreadSize">128</property>

 

                        <!-- per connection cache prepared statement size  -->

                        <property name="statementCacheSize">500</property>

 

                        <!-- default charset -->

                        <property name="serverCharset">utf8</property>

 

                        <!-- query timeout( default: 60 second , TimeUnit:second) -->

                        <property name="queryTimeout">60</property>

                </runtime>

 

        </proxy>

 

        <!--

                Each ConnectionManager will start as thread

                manager responsible for the Connection IO read , Death Detection

        -->

        <connectionManagerList>

                <connectionManager name="defaultManager" class="com.meidusa.toolkit.net.MultiConnectionManagerWrapper">

                        <property name="subManagerClassName">com.meidusa.toolkit.net.AuthingableConnectionManager</property>

                </connectionManager>

        </connectionManagerList>

 

                <!-- default using file loader -->

        <dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">

                <property name="configFile">${amoeba.home}/conf/dbServers.xml</property>

        </dbServerLoader>

 

        <queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">

                <property name="ruleLoader">

                        <bean class="com.meidusa.amoeba.route.TableRuleFileLoader">

                                <property name="ruleFile">${amoeba.home}/conf/rule.xml</property>

                                <property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>

                        </bean>

                </property>

                <property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>

                <property name="LRUMapSize">1500</property>

                <property name="defaultPool">master</property>


                <property name="writePool">master</property>

                <property name="readPool">myslave</property>

                <property name="needParse">true</property>

        </queryRouter>

</amoeba:configuration>

5.5 啟動Amoeba服務

5.5.1 啟動服務

#  /usr/local/amoeba/bin/launcher

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 

The stack size specified is too small, Specify at least 228k

Error: Could not create the Java Virtual Machine.

Error: A fatal exception has occurred. Program will exit.

 

5.5.2 處理報錯

從錯誤文字上看,應該是由于stack size太小,導致JVM啟動失敗,要如何修改呢?

其實Amoeba已經考慮到這個問題,并將JVM參數配置寫在屬性文件里。現在,讓我們通過該屬性文件修改JVM參數。

修改jvm.properties文件JVM_OPTIONS參數。

 

# vim /usr/local/amoeba/jvm.properties

改成:

JVM_OPTIONS="-server -Xms1024m -Xmx1024m -Xss256k -XX:PermSize=16m -XX:MaxPermSize=96m"

原為:

JVM_OPTIONS="-server -Xms256m -Xmx1024m -Xss196k -XX:PermSize=16m -XX:MaxPermSize=96m"

 

# /usr/local/amoeba/bin/launcher

2018-03-06 14:17:12,281 INFO  net.ServerableConnectionManager - Server listening on 0.0.0.0/0.0.0.0:8066.

 2018-03-06 15:19:46 [INFO] Project Name=Amoeba-MySQL, PID=34261 , System shutdown ....

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 2018-03-06 16:12:06 [INFO] Project Name=Amoeba-MySQL, PID=34592 , starting...

log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml

2018-03-06 16:12:06,852 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA

log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf

2018-03-06 16:12:07,142 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 2018-03-06 16:22:06 [INFO] Project Name=Amoeba-MySQL, PID=34592 , System shutdown ....

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 2018-03-06 16:22:54 [INFO] Project Name=Amoeba-MySQL, PID=34684 , starting...

log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml

2018-03-06 16:22:54,571 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA

log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf

2018-03-06 16:22:54,855 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 

 

5.5.3 檢查服務端口

# netstat -tlnp |grep java

tcp        0      0 :::8066                     :::*                        LISTEN      32534/java 

 

5.5.4 停止服務

# /usr/local/amoeba/bin/shutdown

kill -15 34592

 

 

 

6. 測試Amoeba主從讀寫

## 注意看登錄提示中的Server version,可以確認連接的是amoeba代理

# mysql -uamobeba -p123456 -h 192.168.12.55 -P8066

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 2012979869

Server version: 5.1.45-mysql-amoeba-proxy-3.0.4-BETA Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

 

## 我們之前在從庫1上插入了一條slave1的數據,這里可以確定連接的是從庫1

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

+------+--------+

 

## 插入一條數據

mysql> insert into t_usedb values (55,'amoeba');

Query OK, 1 row affected (0.10 sec)

 

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

|   55 | amoeba |

+------+--------+

 

### 此時我們登錄到主庫查看是否有新插入數據

# mysql -uroot -p111111

mysql> select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|   55 | amoeba |

+------+--------+

 

### 此時我們登錄到從庫2查看是否有新插入數據

# mysql -uroot -p111111 -S /mysqldata/3307/mysql3307.sock

mysql> select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    3 | slave2 |

|   55 | amoeba |

+------+--------+

 

#####  我們把主庫stop,然后再插入數據,看是否報錯

# /etc/init.d/mysqld stop

# mysql -uamobeba -p123456 -h 192.168.12.55 -P8066

mysql> insert into t_usedb values (100,'amoeba222');

ERROR 1044 (42000): Amoeba could not connect to MySQL server[192.168.12.56:3306],Connection refused

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

|   55 | amoeba |

+------+--------+

3 rows in set (0.00 sec)

 

 

 

7. 添加從庫2amoeba

7.1 配置腳本dbServers.xml

###  此時只需要配置腳本dbServers.xml

# cd /usr/local/amoeba/conf

# /usr/local/amoeba/bin/shutdown

 

# vi dbServers.xml

 

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">

<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">

 

                <!--

                        Each dbServer needs to be configured into a Pool,

                        If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:

                         add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig

                         such as 'multiPool' dbServer  

                -->

 

        <dbServer name="abstractServer" abstractive="true">

                <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">

                        <property name="connectionManager">${defaultManager}</property>

                        <property name="sendBufferSize">64</property>

                        <property name="receiveBufferSize">128</property>

 

                        <!-- mysql port -->

                        <property name="port">3306</property>

 

                        <!-- mysql schema -->

                        <property name="schema">shaw_db</property>

 

                        <!-- mysql user -->

                        <property name="user">user01</property>

 

                        <property name="password">111111</property>

                </factoryConfig>

 

                <poolConfig class="com.meidusa.toolkit.common.poolable.PoolableObjectPool">

                        <property name="maxActive">500</property>

                        <property name="maxIdle">500</property>

                        <property name="minIdle">1</property>

                        <property name="minEvictableIdleTimeMillis">600000</property>

                        <property name="timeBetweenEvictionRunsMillis">600000</property>

                        <property name="testOnBorrow">true</property>

                        <property name="testOnReturn">true</property>

                        <property name="testWhileIdle">true</property>

                </poolConfig>

        </dbServer>

 

        <dbServer name="master"  parent="abstractServer">

                <factoryConfig>

                        <!-- mysql ip -->

                        <property name="ipAddress">192.168.12.56</property>

                </factoryConfig>

        </dbServer>

 

        <dbServer name="slave1"  parent="abstractServer">

                <factoryConfig>

                        <!-- mysql ip -->

                        <property name="ipAddress">192.168.12.55</property>

                </factoryConfig>

        </dbServer>

 

        <dbServer name="slave2"  parent="abstractServer">

                <factoryConfig>

                        <property name="ipAddress">192.168.12.56</property>

                        <property name="port">3307</property>

                        <property name="user">user01</property>

                        <property name="password">111111</property>

                </factoryConfig>

        </dbServer>

 

 

        <dbServer name="myslave" virtual="true">

                <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">

                        <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->

                        <property name="loadbalance">1</property>

 

                        <!-- Separated by commas,such as: server1,server2,server1 -->

                        <property name="poolNames">slave1,slave2</property>

                </poolConfig>

        </dbServer>

 

</amoeba:dbServers>

 

 

7.2 測試主從

# /usr/local/amoeba/bin/launcher &

[1] 34792

[root@orcl conf]# 2018-03-06 16:12:07,142 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 2018-03-06 16:22:06 [INFO] Project Name=Amoeba-MySQL, PID=34592 , System shutdown ....

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 2018-03-06 16:22:54 [INFO] Project Name=Amoeba-MySQL, PID=34684 , starting...

log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml

2018-03-06 16:22:54,571 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA

log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf

2018-03-06 16:22:54,855 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 2018-03-06 16:35:49 [INFO] Project Name=Amoeba-MySQL, PID=34684 , System shutdown ....

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 2018-03-06 16:44:36 [INFO] Project Name=Amoeba-MySQL, PID=34797 , starting...

log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml

2018-03-06 16:44:37,450 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA

log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf

2018-03-06 16:44:37,735 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 

 

###  連接后多次查詢可以發現,一次連接的是從庫1,一次連接的是從庫2

# mysql -uamobeba -p123456 -h 192.168.12.55 -P8066

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

|   55 | amoeba |

+------+--------+

3 rows in set (0.00 sec)

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    3 | slave2 |

|   55 | amoeba |

+------+--------+

3 rows in set (0.01 sec)

 

## 創建一張表,由于之前測試把主庫停了,所以創建失敗,這里啟動主庫后再創建表

mysql> create table t_zhong as select * from mysql.user;

ERROR 1044 (42000): Amoeba could not connect to MySQL server[192.168.12.56:3306],Connection refused

 

# /etc/init.d/mysqld start

 

mysql> create table t_zhong as select * from mysql.user;

ERROR 1050 (42S01): Table 't_zhong' already exists

mysql> show tables;

+-------------------+

| Tables_in_shaw_db |

+-------------------+

| t_usedb           |

| t_zhong           |

+-------------------+

2 rows in set (0.00 sec)

mysql> create table t_zhong2 as select * from mysql.user;

Query OK, 5 rows affected (0.06 sec)

Records: 5  Duplicates: 0  Warnings: 0

 

## 從庫2登錄查看下

# mysql -uuser01 -p111111 -h 192.168.12.56 -P3307

mysql> use shaw_db;

Database changed

mysql> show tables;

+-------------------+

| Tables_in_shaw_db |

+-------------------+

| t_usedb           |

| t_zhong           |

| t_zhong2          |

+-------------------+

3 rows in set (0.00 sec)

 

 

8. 配置Amoeba 服務啟動腳本

/etc/init.d/amoeba   把腳本寫到此路徑

dos2unix amoeba   # 如果啟動時提示找不到文件之類的錯誤,可能是腳本為DOS換行格式,需要轉換
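如果系統沒有安裝dos2unix,也可以用下面這個小腳本把CRLF換行統一成LF(假設性的替代方案,對普通文本文件的效果與dos2unix基本一致):

```python
# dos2unix 的簡易替代:把文件中的 CRLF(\r\n)就地替換為 LF(\n)。
def dos2unix(path):
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))
```

用法:dos2unix('/etc/init.d/amoeba')。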

 

 

cd /etc/init.d

 chkconfig --add ./amoeba

 chkconfig amoeba on

 service amoeba start

 

 

### 腳本如下:

 

#!/bin/sh

# chkconfig: 12345 62 62

 # description: amoeba 3.05 AutoRun Services

 # /etc/init.d/amoeba

 #

 # Run-level startup script for the Amoeba proxy service

 export JAVA_HOME=/usr/java/jdk1.8.0_161

 export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

 PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

 NAME=Amoeba

 AMOEBA_BIN=/usr/local/amoeba/bin/launcher

 SHUTDOWN_BIN=/usr/local/amoeba/bin/shutdown

 PIDFILE=/usr/local/amoeba/Amoeba-MySQL.pid

 SCRIPTNAME=/etc/init.d/amoeba

 

 case "$1" in

start)

echo -n "Starting $NAME... "

$AMOEBA_BIN &

echo " done"

;;

 

stop)

echo -n "Stoping $NAME... "

$SHUTDOWN_BIN

echo " done"

;;

 

restart)

$SHUTDOWN_BIN

sleep 1

$AMOEBA_BIN &

;;

 

*)

echo "Usage: $SCRIPTNAME {start|stop|restart}"

exit 1

;;

 

 esac

 

 

三、基于mysql-proxyMySQL讀寫分離

reference    https://www.cnblogs.com/lin3615/p/5684891.html

http://www.mamicode.com/info-detail-1566167.html

http://blog.itpub.net/15480802/viewspace-1432659/

http://www.bubuko.com/infodetail-1523794.html

https://www.cnblogs.com/tae44/p/4701226.html

 


 

1. 部署環境前介紹

主庫         192.168.12.56   3306

從庫1       192.168.12.56   3307

從庫2       192.168.12.55   3306

MySQL-Proxy  192.168.12.55   4040

 

mysql-proxy是mysql官方提供的mysql中間件服務,上游可接入若干個mysql-client,后端可連接若干個mysql-server。它使用mysql協議,任何使用mysql-client的上游無需修改任何代碼,即可遷移至mysql-proxy上。

mysql-proxy最基本的用法,就是作為一個請求攔截,請求中轉的中間層,攔截查詢和修改結果,需要通過編寫Lua腳本來完成。

 

MySQL Proxy通過lua腳本來控制連接轉發,主要的函數都是配合MySQL Protocol各個過程的:

    * connect_server()     // 接收到Client的連接請求時調用

    * read_handshake()   // 讀取server發起的handshake信息時調用

    * read_auth()               // 讀取Client的認證信息時調用

    * read_auth_result() // 讀取認證結果時調用

    * read_query()            // 讀取Client的query請求時調用

  * read_query_result()   //讀取query結果時調用

 

 

具體功能:

   1.數據連接的故障轉移

   2.數據連接的負載均衡

   3.攔截查詢(取通信包,實現關鍵字替換)

   4.重寫查詢(例如,強制密碼度等規則)

   5.添加額外的查詢(附)

   6.刪除,修改或者添加返回到客戶端的 SQL結果集

 

配置文件:

mysql-proxy.cnf(權限設為660)

 [mysql-proxy]

    admin-username=root

    admin-password=123456

    admin-lua-script=/usr/local/lib/admin.lua

    proxy-read-only-backend-addresses=192.168.2.115

    proxy-backend-addresses=192.168.2.117

    proxy-lua-script=/usr/local/lib/rw-splitting.lua

    log-file=/var/log/mysql-proxy.log

    log-level=debug

    daemon=true

keepalive=true

 

proxy-lua-script,指定一個Lua腳本來控制mysql-proxy的運行和設置,這個腳本在每次新建連接和腳本發生修改的時候將被重新調用;

keepalive,額外建立一個進程專門監控mysql_proxy進程,當mysql_proxy crash予以重新啟動;
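keepalive的守護行為可以用下面的Python片段示意:父進程循環等待子進程退出,非正常退出則重新拉起。這只是行為模型,mysql-proxy內部由C實現的angel進程完成同樣的事(后文日志中的“[angel] we try to keep PID=... alive”即來自該進程):

```python
# keepalive 行為的簡化模型:監控子進程,異常退出(返回碼非0)時自動重啟。
import subprocess
import time

def keep_alive(cmd, max_restarts=3, backoff=0.1):
    restarts = 0
    while True:
        rc = subprocess.call(cmd)
        if rc == 0:
            return rc                  # 正常退出,不再重啟
        if restarts >= max_restarts:
            return rc                  # 連續失敗達到上限,放棄
        restarts += 1
        time.sleep(backoff)            # 避免crash后重啟過快
```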

 

啟動:

/usr/local/mysql-proxy/bin/mysql-proxy -P 192.168.2.112:3306 --defaults-file=/etc/mysql-proxy.cnf

 

 

讀寫分離:

proxy-lua-script指定為rw-splitting.lua時,mysql_proxy會對客戶端傳入的sql執行讀寫分離;

同一個事務中,DML被傳輸給backend(主庫),select則被傳到read-only-backend(從庫);

Lua腳本默認最小4個、最大8個以上的客戶端連接才會實現讀寫分離(這是因為mysql-proxy會檢測客戶端連接,當連接數沒有超過min_idle_connections預設值時,不會進行讀寫分離,即查詢操作會發送到Master上)
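rw-splitting.lua 的這個閾值行為可以用下面的Python片段示意(僅為行為模型,并非對Lua腳本的逐行翻譯):

```python
# rw-splitting.lua 行為的簡化模型:空閑連接數未達到 min_idle_connections 時,
# 讀請求也發往主庫;達到閾值后才把 SELECT 分發到只讀后端。
def dispatch(sql, idle_connections, min_idle_connections=4):
    is_read = sql.strip().lower().startswith("select")
    if idle_connections < min_idle_connections:
        return "master"            # 連接數不足閾值,讀寫都走主庫
    return "read-only-backend" if is_read else "master"

print(dispatch("select 1", idle_connections=1))  # master:未達閾值
print(dispatch("select 1", idle_connections=4))  # read-only-backend
```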

 

 

2. 安裝MySQL-Proxy

安裝需要的基礎組件,基本系統都可以滿足lua的組件版本要求

安裝lua(確定是否需要安裝)

    yum -y install gcc* gcc-c++* autoconf* automake* zlib* libxml* ncurses-devel* libmcrypt* libtool* flex* pkgconfig*

 

實現讀寫分離是有lua腳本實現的,現在mysql-proxy里面已經集成,無需再安裝。

download  https://downloads.mysql.com/archives/proxy/

 

# tar zxvf mysql-proxy-0.8.5-linux-rhel5-x86-64bit.tar.gz

# mv mysql-proxy-0.8.5-linux-rhel5-x86-64bit /usr/local/mysql-proxy

 

 

3. 配置MySQL-Proxy,創建主配置文件

配置主文件/etc/mysql-proxy.cnf,內容見下文;如果不配置管理接口4041,admin相關的配置項都可以省略。

 

 


 

# cd /usr/local/mysql-proxy

# mkdir lua

# mkdir logs

# cp share/doc/mysql-proxy/rw-splitting.lua ./lua/

# cp share/doc/mysql-proxy/admin-sql.lua ./lua/

# vi /etc/mysql-proxy.cnf

[mysql-proxy]

user=root

admin-username=user01

admin-password=111111

proxy-address=192.168.12.55:4040

proxy-read-only-backend-addresses=192.168.12.55

proxy-backend-addresses=192.168.12.56

proxy-lua-script=/usr/local/mysql-proxy/lua/rw-splitting.lua

admin-lua-script=/usr/local/mysql-proxy/lua/admin-sql.lua

log-file=/usr/local/mysql-proxy/logs/mysql-proxy.log

log-level=info

daemon=true

keepalive=true

 

# chmod 660 /etc/mysql-proxy.cnf

 

4. 修改讀寫分離配置文件

如前所述,mysql-proxy會檢測客戶端連接,當連接數沒有超過min_idle_connections預設值時,不會進行讀寫分離,即查詢操作會發送到Master上。

 

# vi /usr/local/mysql-proxy/lua/rw-splitting.lua

if not proxy.global.config.rwsplit then

        proxy.global.config.rwsplit = {

                min_idle_connections = 1,

                max_idle_connections = 1,

 

                is_debug = false

        }

end

 

 

5. 啟動mysql-proxy

# /usr/local/mysql-proxy/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.cnf

# netstat -nltp| grep mysql-proxy

tcp        0      0 192.168.12.55:4040          0.0.0.0:*                   LISTEN      31749/mysql-proxy 

 

# killall -9 mysql-proxy #關閉mysql-proxy使用

 

----查看日志

# tail -200f /usr/local/mysql-proxy/logs/mysql-proxy.log

2018-03-07 11:22:39: (message) chassis-unix-daemon.c:136: [angel] we try to keep PID=31749 alive

2018-03-07 11:22:39: (critical) plugin proxy 0.8.5 started

2018-03-07 11:22:39: (message) proxy listening on port 192.168.12.55:4040

2018-03-07 11:22:39: (message) added read/write backend: 192.168.12.56

2018-03-07 11:22:39: (message) added read-only backend: 192.168.12.55

 

 

6. 測試讀寫分離

# mysql -uuser01 -p111111 -h192.168.12.55 -P4040

 

## 連接的是主庫

mysql>  select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

+------+--------+

 

#### 多開兩個窗口,發現連接到從庫1

# mysql -uuser01 -p111111 -h192.168.12.55 -P4040

mysql>  select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

+------+--------+

 

## Insert test (the write is routed to the master)

mysql> insert into shaw_db.t_usedb values(55,'myproxy');

Query OK, 1 row affected (0.25 sec)

 

mysql>  select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

+------+---------+

 

## Connect to slave 2 to verify that the data was replicated

# mysql -uroot -p111111 -S /mysqldata/3307/mysql3307.sock

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    3 | slave2  |

|   55 | myproxy |

+------+---------+

 

7. Add slave 2 to the proxy configuration file

7.1 Edit the main configuration file

# killall -9 mysql-proxy   # stop mysql-proxy first

# vi /etc/mysql-proxy.cnf

[mysql-proxy]

user=root

admin-username=user01

admin-password=111111

proxy-address=192.168.12.55:4040

proxy-read-only-backend-addresses=192.168.12.55:3306,192.168.12.56:3307

proxy-backend-addresses=192.168.12.56:3306

proxy-lua-script=/usr/local/mysql-proxy/lua/rw-splitting.lua

admin-lua-script=/usr/local/mysql-proxy/lua/admin-sql.lua

log-file=/usr/local/mysql-proxy/logs/mysql-proxy.log

log-level=info

daemon=true

keepalive=true
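The backend address options take a comma-separated host:port list (the port defaults to 3306 when omitted, as in the first configuration). A quick illustration of how such a list breaks down; this is a sketch for clarity, not mysql-proxy's actual parser:

```python
def parse_backends(value: str, default_port: int = 3306):
    """Split a proxy-read-only-backend-addresses style value
    into (host, port) tuples; the port defaults to 3306."""
    backends = []
    for entry in value.split(","):
        host, _, port = entry.strip().partition(":")
        backends.append((host, int(port) if port else default_port))
    return backends

print(parse_backends("192.168.12.55:3306,192.168.12.56:3307"))
# [('192.168.12.55', 3306), ('192.168.12.56', 3307)]
print(parse_backends("192.168.12.55"))
# [('192.168.12.55', 3306)]
```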

 

7.2 Start mysql-proxy

# /usr/local/mysql-proxy/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.cnf

# tail -200f /usr/local/mysql-proxy/logs/mysql-proxy.log

2018-03-07 14:08:59: (critical) plugin proxy 0.8.5 started

2018-03-07 14:08:59: (message) proxy listening on port 192.168.12.55:4040

2018-03-07 14:08:59: (message) added read/write backend: 192.168.12.56:3306

2018-03-07 14:08:59: (message) added read-only backend: 192.168.12.55:3306

2018-03-07 14:08:59: (message) added read-only backend: 192.168.12.56:3307

 

 

# netstat -nltpd |grep mysql-proxy

tcp      0    0 192.168.12.55:4040    0.0.0.0:*   LISTEN   31871/mysql-proxy

 

7.3 Test the master and slaves

# mysql -uuser01 -p111111 -h192.168.12.55 -P4040

 

## This connection goes to the master

mysql>  select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|   55 | myproxy |

+------+---------+

 

## Open several more windows; all of them serve queries normally

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    3 | slave2  |

|   55 | myproxy |

+------+---------+

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |
+------+---------+
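Successive windows landing on different slaves can be modeled as a simple rotation over the read-only backends (an illustrative sketch; mysql-proxy's actual balancing also weighs per-backend connection counts):

```python
from itertools import cycle

# The two read-only backends from the configuration above.
read_backends = cycle(["192.168.12.55:3306", "192.168.12.56:3307"])

def assign_connection() -> str:
    """Hand each new client connection to the next read backend in turn."""
    return next(read_backends)

print([assign_connection() for _ in range(3)])
# ['192.168.12.55:3306', '192.168.12.56:3307', '192.168.12.55:3306']
```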

 

 

## Stop/start slave 2 (stop slave 2, insert a row through mysql-proxy, then start slave 2 again)

# mysqladmin -uroot -p111111 -S /mysqldata/3307/mysql3307.sock shutdown

# mysqld_safe --defaults-file=/mysqldata/3307/my3307.cnf >/dev/null 2>&1 &

 

## With slave 2 stopped, insert a row into t_usedb

# mysql -uuser01 -p111111 -h192.168.12.55 -P4040

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

+------+---------+

mysql> insert into shaw_db.t_usedb values(100,'zhong');

Query OK, 1 row affected (0.03 sec)

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

|  100 | zhong   |

+------+---------+

4 rows in set (0.00 sec)

 

## Now start slave 2 and query it to confirm the row was replicated

# mysql -uroot -p111111 -S /mysqldata/3307/mysql3307.sock

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    3 | slave2  |

|   55 | myproxy |

|  100 | zhong   |

+------+---------+

 

 

## 最后在測試下把主庫關閉后,注意報錯

# /etc/init.d/mysqld stop

# mysql -uuser01 -p111111 -h292.168.12.55 -P4040

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

|  100 | zhong   |

+------+---------+

mysql> insert into shaw_db.t_usedb values(222,'err');

ERROR 2013 (HY000): Lost connection to MySQL server during query

mysql> create table t as select * from mysql.user;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

Connection id:    5

Current database: *** NONE ***

ERROR 2013 (HY000): Lost connection to MySQL server during query

 

 

8. Configure a service startup script for mysql-proxy

Write the script below to /etc/init.d/mysqlproxy.

If running it gives "file not found"-style errors, the line endings may need converting: dos2unix mysqlproxy

 

 

# cd /etc/init.d

# chkconfig --add ./mysqlproxy

# chkconfig mysqlproxy on

# service mysqlproxy start

 

 

### The script:

#!/bin/bash

#chkconfig: - 99 23

#description: mysql_proxy

mysql_proxy_home='/usr/local/mysql-proxy'

case "$1" in

start)

        $mysql_proxy_home/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.cnf

        ;;

stop)

        killall -9 mysql-proxy &>/dev/null

        ;;

restart)

        killall -9 mysql-proxy &>/dev/null

        $mysql_proxy_home/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.cnf

        ;;

*)

        echo "Usage: $0 {start|stop|restart}"

        exit 1

esac

exit 0

 

 

9. Configure the mysql-proxy admin interface address

9.1 Edit the main configuration file

[mysql-proxy]

user=root

plugins=admin,proxy

admin-username=admin

admin-password=admin

admin-lua-script=/usr/local/mysql-proxy/lua/admin-sql.lua

admin-address=192.168.12.55:4041

proxy-backend-addresses=192.168.12.56:3306

proxy-read-only-backend-addresses=192.168.12.55:3306,192.168.12.56:3307

proxy-address=192.168.12.55:4040

proxy-lua-script=/usr/local/mysql-proxy/lua/rw-splitting.lua

log-file=/usr/local/mysql-proxy/logs/mysql-proxy.log

log-level=debug

daemon=true

keepalive=true

 

#### Note: this differs slightly from the earlier configuration; the extra admin options are explained in section 9.3 below.

 

 

9.2 配置管理接口腳本

# vi /usr/local/mysql-proxy/lua/admin.lua

function set_error(errmsg) 

    proxy.response = { 

        type = proxy.MYSQLD_PACKET_ERR, 

        errmsg = errmsg or "error" 

    } 

end

function read_query(packet)

    if packet:byte() ~= proxy.COM_QUERY then  

        set_error("[admin] we only handle text-based queries (COM_QUERY)") 

        return proxy.PROXY_SEND_RESULT 

    end

    local query = packet:sub(2)

    local rows = { }

    local fields = { }

    if query:lower() == "select * from backends" then

        fields = {  

            { name = "backend_ndx",  

              type = proxy.MYSQL_TYPE_LONG },

            { name = "address",

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "state", 

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "type", 

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "uuid", 

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "connected_clients",  

              type = proxy.MYSQL_TYPE_LONG }, 

        }

        for i = 1, #proxy.global.backends do

            local states = { 

                "unknown", 

                "up", 

                "down" 

            } 

            local types = { 

                "unknown", 

                "rw", 

                "ro" 

            } 

            local b = proxy.global.backends[i]

            rows[#rows + 1] = {

                i, 

                b.dst.name,          -- configured backend address 

                states[b.state + 1], -- the C-id is pushed down starting at 0 

                types[b.type + 1],   -- the C-id is pushed down starting at 0 

                b.uuid,              -- the MySQL Server's UUID if it is managed 

                b.connected_clients  -- currently connected clients 

            } 

        end 

    elseif query:lower() == "select * from help" then 

        fields = {  

            { name = "command",  

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "description",  

              type = proxy.MYSQL_TYPE_STRING }, 

        } 

        rows[#rows + 1] = { "SELECT * FROM help", "shows this help" } 

        rows[#rows + 1] = { "SELECT * FROM backends", "lists the backends and their state" } 

    else 

        set_error("use 'SELECT * FROM help' to see the supported commands") 

        return proxy.PROXY_SEND_RESULT 

    end

    proxy.response = {

        type = proxy.MYSQLD_PACKET_OK, 

        resultset = { 

            fields = fields, 

            rows = rows 

        } 

    } 

    return proxy.PROXY_SEND_RESULT 

end
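In the script above, b.state and b.type are small integer codes pushed down from C starting at 0, and the states and types arrays translate them (Lua tables are 1-indexed, hence the +1). The same decoding as a short sketch:

```python
# Mirrors the Lua arrays in admin.lua: code 0 = unknown, etc.
STATES = ["unknown", "up", "down"]
TYPES = ["unknown", "rw", "ro"]

def decode_backend(state_code: int, type_code: int) -> tuple:
    """Translate mysql-proxy's numeric backend codes to names."""
    return STATES[state_code], TYPES[type_code]

print(decode_backend(1, 1))  # ('up', 'rw')       e.g. a healthy master
print(decode_backend(0, 2))  # ('unknown', 'ro')  a slave with no traffic yet
```

This explains the "unknown" state in the backends table below: a backend stays at code 0 until the proxy has actually routed a connection to it.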

 

 

9.3 Restart mysql-proxy

Note: this time the following startup options must be present, because we load an extra plugin:

--plugins=admin : plugins to load when mysql-proxy starts;

--admin-username="admin" : the user allowed to connect to the admin interface;

--admin-password="admin" : that user's password;

--admin-lua-script=/usr/local/mysql-proxy/lua/admin-sql.lua : path to the Lua script used by the admin plugin.

 

## After restarting, check the log

# /etc/init.d/mysqlproxy restart

 

# netstat -nltpd |grep mysql-

tcp        0      0 192.168.12.55:4040    0.0.0.0:*     LISTEN  34596/mysql-proxy  

tcp        0      0 192.168.12.55:4041    0.0.0.0:*     LISTEN  34596/mysql-proxy

 

 

9.4 Log in to the admin interface

# mysql -uadmin -padmin -h192.168.12.55 -P4041

 

## The admin interface accepts only these two queries

mysql> select * from help;

+------------------------+------------------------------------+

| command                | description                        |

+------------------------+------------------------------------+

| SELECT * FROM help     | shows this help                    |

| SELECT * FROM backends | lists the backends and their state |

+------------------------+------------------------------------+

2 rows in set (0.00 sec)

 

## The following query shows whether each backend is up

mysql> SELECT * FROM backends;

+-------------+--------------------+---------+------+------+-------------------+

| backend_ndx | address            | state   | type | uuid | connected_clients |

+-------------+--------------------+---------+------+------+-------------------+

|           1 | 192.168.12.56:3306 | unknown | rw   | NULL |                 0 |

|           2 | 192.168.12.55:3306 | unknown | ro   | NULL |                 0 |

|           3 | 192.168.12.56:3307 | unknown | ro   | NULL |                 0 |

+-------------+--------------------+---------+------+------+-------------------+

3 rows in set (0.00 sec)

 

 

9.5 Master/slave login test

## Log in from several windows

# mysql -uuser01 -p111111 -h 192.168.12.55 -P4040

# mysql -urep -p123456 -h192.168.12.55 -P4040

 

 

## Now check the admin interface: the master and slave 1 are up

# mysql -uadmin -padmin -h192.168.12.55 -P4041

mysql> SELECT * FROM backends;

+-------------+--------------------+---------+------+------+-------------------+

| backend_ndx | address            | state   | type | uuid | connected_clients |

+-------------+--------------------+---------+------+------+-------------------+

|           1 | 192.168.12.56:3306 | up      | rw   | NULL |                 0 |

|           2 | 192.168.12.55:3306 | up      | ro   | NULL |                 0 |

|           3 | 192.168.12.56:3307 | unknown | ro   | NULL |                 0 |

+-------------+--------------------+---------+------+------+-------------------+

 

9.6 Additional notes on testing mysql-proxy

When the master host is shut down, data can only be read, not written, while a slave host going down leaves the data both readable and writable. Changing the write-failure behavior is not easy; if you know some Lua, modifying mysql-proxy's read/write-splitting script might make it possible.

 

That said, the master being down makes the service inaccessible (meaning the IP address mysql-proxy listens on can no longer be reached for writes), while a slave being down still leaves the data readable and writable; this needs no read/write-splitting configuration at all, since it is a basic capability of master/slave replication and the slave is only a standby. After a slave is stopped and started again, its I/O thread automatically resumes copying the master's binary log from the position where it was interrupted, and its SQL thread reads and executes the SQL in those logs.

In short, no special configuration is needed; this is a basic capability of master/slave replication.

 

 

When the master is shut down (again, meaning the IP address mysql-proxy listens on), existing connections are not affected immediately, but new connections may be. As soon as any connection runs a DML or DDL statement it gets an error and is dropped, after which new connections can no longer be established (SELECTs on existing connections keep working). This could perhaps be solved by adapting the proxy-lua-script=/usr/local/mysql-proxy/lua/rw-splitting.lua script, but without Lua experience it is hard to say whether that is feasible.
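On the client side, one common mitigation (independent of mysql-proxy) is to retry once on the "server has gone away" / "lost connection" errors shown below, client error codes 2006 and 2013. A minimal sketch, with the execute/reconnect functions left as illustrative stand-ins:

```python
# MySQL client error codes seen in the transcripts below.
LOST_CONNECTION = {2006, 2013}

class MySQLGone(Exception):
    """Stand-in for a driver error carrying a MySQL errno."""
    def __init__(self, errno):
        super().__init__(errno)
        self.errno = errno

def run_with_retry(execute, reconnect, sql, retries=1):
    """Run sql via execute(); on error 2006/2013, reconnect and retry."""
    for attempt in range(retries + 1):
        try:
            return execute(sql)
        except MySQLGone as exc:
            if attempt == retries or exc.errno not in LOST_CONNECTION:
                raise
            reconnect()  # e.g. re-open the connection to the proxy

# Demo with stand-in functions: first call fails with 2013, retry succeeds.
calls = {"n": 0}
def fake_execute(sql):
    calls["n"] += 1
    if calls["n"] == 1:
        raise MySQLGone(2013)
    return "ok"

print(run_with_retry(fake_execute, lambda: None, "select 1"))  # ok
```

This only papers over transient drops; it cannot help once all backends are down, as in the "all backends are down" error below.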

 

The errors observed in testing:

With the master down:

mysql> SELECT * FROM backends;

+-------------+--------------------+-------+------+------+-------------------+

| backend_ndx | address            | state | type | uuid | connected_clients |

+-------------+--------------------+-------+------+------+-------------------+

|           1 | 192.168.12.56:3306 | down  | rw   | NULL |                 0 |

|           2 | 192.168.12.55:3306 | up    | ro   | NULL |                 0 |

|           3 | 192.168.12.56:3307 | up    | ro   | NULL |                 0 |

+-------------+--------------------+-------+------+------+-------------------+

 

### session 1: run DDL statements

mysql> create table shaw_db.zhong as select * from mysql.zhong;

ERROR 2013 (HY000): Lost connection to MySQL server during query

mysql> create table shaw_db.zhong as select * from mysql.zhong;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

Connection id:    23

Current database: *** NONE ***

ERROR 2013 (HY000): Lost connection to MySQL server during query

mysql>  select * from shaw_db.t_usedb;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

ERROR 1105 (HY000): (proxy) all backends are down

ERROR:

Can't connect to the server

 

### session 2: try a new connection

[root@xuan2 ~]# mysql -uuser01 -p111111 -h 192.168.12.55 -P4040

ERROR 1105 (HY000): (proxy) all backends are down

 

### session 3: an existing connection keeps running queries

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

|  100 | zhong   |

+------+---------+

 

 

 

 
