
MySQL 5.7 Group Replication Error Summary (r11 Notes, Day 84)

Published: 2020-08-09 | Source: ITPUB Blog | Author: jeanron100 | Category: MySQL

Today I'll summarize a few issues I hit while working with MySQL 5.7 Group Replication; they are all fairly routine. I won't repeat the setup process here, since yesterday's article walks through a basic approach that is easy to reproduce in a test environment. Does the same approach work when the members are spread across multiple physical machines? Yes, it does; I have tried each scenario myself.

Since the official recommendation for this kind of deployment is single_primary mode (one node takes the writes and the others serve reads, i.e. read/write splitting), that is what the examples below use. Multi_primary is supported and theoretically workable as well, but it still has a few small issues.
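For context, a minimal single-primary member configuration might look roughly like the sketch below. All values (server_id, group name, addresses, ports) are illustrative and need to be adjusted per node; this is not meant to replace the full setup walkthrough.

[mysqld]
# illustrative values only -- each member needs its own server_id and local_address
server_id = 1
gtid_mode = ON
enforce_gtid_consistency = ON
binlog_format = ROW
log_bin = binlog
log_slave_updates = ON
binlog_checksum = NONE
master_info_repository = TABLE
relay_log_info_repository = TABLE
transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot = OFF
loose-group_replication_local_address = "127.0.0.1:24901"
loose-group_replication_group_seeds = "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
loose-group_replication_bootstrap_group = OFF
loose-group_replication_single_primary_mode = ON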

Problem 1:

When a read node joins the group, start group_replication throws the error below. If you run into this one, you're not far from a successful setup.

2017-02-20T07:56:30.064556Z 0 [ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: 89328c79-f730-11e6-ab63-782bcb377193:1-2 > Group transactions: 7c744904-f730-11e6-a72d-782bcb377193:1-4'
2017-02-20T07:56:30.064580Z 0 [ERROR] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'
2017-02-20T07:56:30.064587Z 0 [Note] Plugin group_replication reported: 'To force this member into the group you can use the group_replication_allow_local_disjoint_gtids_join option'
The log already spells out the fix: set the parameter that lets the member join despite its extra local GTIDs, i.e. group_replication_allow_local_disjoint_gtids_join, and then run start group_replication again.
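A minimal sketch of that fix, assuming the extra local transactions are safe to ignore (the option only relaxes the join check, it does not reconcile data):

mysql> set global group_replication_allow_local_disjoint_gtids_join=ON;
mysql> start group_replication;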


Problem 2:

If you hit this error, don't worry too much either; the log shows it is caused by incompatible settings, for example the primary configured as multi-primary while a read node is configured as single-primary. Make the setting consistent across the group and the error goes away.

2017-02-21T10:20:56.324890+08:00 0 [ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: 87b9c8fe-f352-11e6-bb33-0026b935eb76:1-5,
b79d42f4-f351-11e6-9891-0026b935eb76:1,
f7c7b9f8-f352-11e6-b1de-a4badb1b524e:1 > Group transactions: 87b9c8fe-f352-11e6-bb33-0026b935eb76:1-5,
b79d42f4-f351-11e6-9891-0026b935eb76:1'
2017-02-21T10:20:56.324971+08:00 0 [ERROR] Plugin group_replication reported: 'The member configuration is not compatible with the group configuration. Variables such as single_primary_mode or enforce_update_everywhere_checks must have the same value on every server in the group. (member configuration option: [], group configuration option: [group_replication_single_primary_mode]).'
2017-02-21T10:20:56.325052+08:00 19 [Note] Plugin group_replication reported: 'Going to wait for view modification'
2017-02-21T10:20:56.325594+08:00 0 [Note] Plugin group_replication reported: 'getstart group_id 53d187f2'
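Assuming the group is meant to run in single-primary mode and the joining node simply has the wrong setting, aligning it might look like this sketch (run on the joining node):

mysql> stop group_replication;
mysql> set global group_replication_single_primary_mode=ON;
mysql> set global group_replication_enforce_update_everywhere_checks=OFF;
mysql> start group_replication;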

Problem 3:

This one bothered me for a long time. It comes down to node configuration, specifically group_replication_group_name: the value must be the same on every node (node 1, 2, 3, and so on); it must not be set to each node's own UUID. I had been carefully copying each node's UUID into it every time, which turned out to be exactly the wrong thing to do.

2017-02-22T14:46:35.819072Z 0 [Warning] Plugin group_replication reported: 'read failed'
2017-02-22T14:46:35.851829Z 0 [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 24902'
2017-02-22T14:47:05.814080Z 30 [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
2017-02-22T14:47:05.814183Z 30 [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
2017-02-22T14:47:05.814213Z 30 [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
2017-02-22T14:47:05.814567Z 30 [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
2017-02-22T14:47:05.814583Z 30 [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
2017-02-22T14:47:05.814859Z 36 [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
2017-02-22T14:47:05.815720Z 33 [Note] Plugin group_replication reported: 'The group replication applier thread was killed'

Once the group name is consistent across the nodes, the startup is actually quite fast.

mysql> start group_replication;
Query OK, 0 rows affected (1.52 sec)


These are basically all the categories of problems you'll run into during setup. There are also hostname-related issues, and a few small bugs remain in that area; if a member needs to advertise a specific address, you can set report_host explicitly.
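For example, a member could advertise a fixed, resolvable address like this (the address and port below are purely illustrative):

[mysqld]
# make the member report a reachable address instead of an ambiguous hostname
report_host = 192.168.1.101
report_port = 3306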


Problem 4:

With the environment up, let's create an ordinary table. This is where good habits and standards really pay off.

Create the table test_tab:

create table test_tab (id int, name varchar(30));

Then insert a row. This looks like a perfectly normal operation, but in MGR it raises an error, because a basic requirement is that every table has a primary key.

mysql> insert into test_tab values(1,'a');
ERROR 3098 (HY000): The table does not comply with the requirements by an external plugin.

The fix is to add a primary key:

mysql> alter table test_tab add primary key(id);
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0
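To avoid the error altogether, declare the primary key when the table is created (Group Replication also requires the InnoDB storage engine); test_tab2 below is just an illustrative name:

mysql> create table test_tab2 (id int primary key, name varchar(30)) engine=innodb;
mysql> insert into test_tab2 values(1,'a');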

Problem 5 (simulating a failure):

We're currently running in single-primary mode. If the primary (write) node fails, how does the group handle it? It promotes the next node in line, S2, to be the new primary.


Testing this is easy: kill the mysqld process of node 1 directly and see where the primary ends up.

First, the baseline state of the group: there are five members, and we kill node 1, the one on port 24801.
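For reference, the member listing below comes from the replication_group_members view in performance_schema:

mysql> select * from performance_schema.replication_group_members;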

+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 52d26194-f90a-11e6-a247-782bcb377193 | grtest      |       24801 | ONLINE       |
| group_replication_applier | 5abaaf89-f90a-11e6-b4de-782bcb377193 | grtest      |       24802 | ONLINE       |
| group_replication_applier | 655248b9-f90a-11e6-86b4-782bcb377193 | grtest      |       24803 | ONLINE       |
| group_replication_applier | 6defc92c-f90a-11e6-990c-782bcb377193 | grtest      |       24804 | ONLINE       |
| group_replication_applier | 76bc07a1-f90a-11e6-ab0a-782bcb377193 | grtest      |       24805 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

Node 2 then prints the log below, which means it has officially taken over as the primary.

2017-02-22T14:59:45.157989Z 0 [Note] Plugin group_replication reported: 'getstart group_id 98e4de29'
2017-02-22T14:59:45.434062Z 0 [Note] Plugin group_replication reported: 'Unsetting super_read_only.'
2017-02-22T14:59:45.434130Z 40 [Note] Plugin group_replication reported: 'A new primary was elected, enabled conflict detection until the new primary applies all relay logs'

The group membership then looks like this; unsurprisingly, the first node has been removed.

+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 5abaaf89-f90a-11e6-b4de-782bcb377193 | grtest      |       24802 | ONLINE       |
| group_replication_applier | 655248b9-f90a-11e6-86b4-782bcb377193 | grtest      |       24803 | ONLINE       |
| group_replication_applier | 6defc92c-f90a-11e6-990c-782bcb377193 | grtest      |       24804 | ONLINE       |
| group_replication_applier | 76bc07a1-f90a-11e6-ab0a-782bcb377193 | grtest      |       24805 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

The log shows that the second node has been promoted to primary, which raises the next question.


Problem 6:

How do you tell which member of a replication group is the current primary? You can't rely on guessing or digging through the logs.

The following query filters it out:

mysql> select * from performance_schema.replication_group_members where member_id = (select variable_value from performance_schema.global_status where variable_name = 'group_replication_primary_member');
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 5abaaf89-f90a-11e6-b4de-782bcb377193 | grtest      |       24802 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)

