A MongoDB replica set is a group of mongod instances (processes) consisting of one Primary node and several Secondary nodes. The MongoDB driver (client) sends all writes to the Primary; the Secondaries replicate those writes from the Primary, so every member of the replica set stores the same data set, providing high availability of the data.
Keeps data safe
High availability of data (24x7)
Disaster recovery
Maintenance without downtime (such as backups, index rebuilds, compaction)
Distributed reads
A cluster of N nodes
Any node can act as the primary
All write operations go to the primary
Automatic failover
Automatic recovery
Replica set diagram
Experiment environment
OS: CentOS 7
MongoDB version: v3.6.7
Procedure
Add three instances
mkdir -p /data/mongodb/mongodb{2,3,4}        # first create the data directories
mkdir -p /data/mongodb/logs                  # log file directory
touch /data/mongodb/logs/mongodb{2,3,4}.log  # log files
chmod 777 /data/mongodb/logs/*.log           # make the log files writable
cd /data/mongodb/                            # remember to check the results
[root@cent mongodb]# ls
logs  mongodb2  mongodb3  mongodb4
[root@cent mongodb]# cd logs/
[root@cent logs]# ll
total 0
-rwxrwxrwx. 1 root root 0 Sep 12 09:48 mongodb2.log
-rwxrwxrwx. 1 root root 0 Sep 12 09:48 mongodb3.log
-rwxrwxrwx. 1 root root 0 Sep 12 09:48 mongodb4.log
Edit the config files for instances 2, 3 and 4
vim /etc/mongod2.conf
Edit as follows:
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/mongodb2.log   # for instances 3 and 4 use mongodb3.log, mongodb4.log
storage:
  dbPath: /data/mongodb/mongodb2          # for instances 3 and 4 use mongodb3, mongodb4
  journal:
    enabled: true
net:
  port: 27018                             # for instances 3 and 4 use 27019, 27020
  bindIp: 0.0.0.0
replication:
  replSetName: yang
Start and verify
Start the services:
[root@cent logs]# mongod -f /etc/mongod2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 83795
child process started successfully, parent exiting
[root@cent logs]# mongod -f /etc/mongod3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 83823
child process started successfully, parent exiting
[root@cent logs]# mongod -f /etc/mongod4.conf
about to fork child process, waiting until server is ready for connections.
forked process: 83851
child process started successfully, parent exiting
[root@cent logs]# netstat -ntap    # check the ports; 27017, 27018, 27019 and 27020 should all be listening
Login test
[root@cent logs]# mongo --port 27019    # log in on a specific port
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27019/
MongoDB server version: 3.6.7
>
Replica set operations
Define the replica set
[root@cent logs]# mongo    # port 27017
> cfg={"_id":"yang","members":[{"_id":0,"host":"192.168.137.11:27017"},{"_id":1,"host":"192.168.137.11:27018"},{"_id":2,"host":"192.168.137.11:27019"}]}    // define the replica set
{
	"_id" : "yang",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.137.11:27017"
		},
		{
			"_id" : 1,
			"host" : "192.168.137.11:27018"
		},
		{
			"_id" : 2,
			"host" : "192.168.137.11:27019"
		}
	]
}
Initiate it:
> rs.initiate(cfg)
{
	"ok" : 1,
	"operationTime" : Timestamp(1536848891, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1536848891, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
View replica set information
yang:OTHER> db.stats()
{
	"db" : "test",
	"collections" : 0,
	"views" : 0,
	"objects" : 0,
	"avgObjSize" : 0,
	"dataSize" : 0,
	"storageSize" : 0,
	"numExtents" : 0,
	"indexes" : 0,
	"indexSize" : 0,
	"fileSize" : 0,
	"fsUsedSize" : 0,
	"fsTotalSize" : 0,
	"ok" : 1,
	"operationTime" : Timestamp(1536741495, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1536741495, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
View replica set status
yang:SECONDARY> rs.status() { "set" : "yang", "date" : ISODate("2018-09-12T08:58:56.358Z"), "myState" : 1, "term" : NumberLong(1), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) } }, "members" : [{ "_id" : 0, "name" : "192.168.137.11:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 24741, "optime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-09-12T08:58:48Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "electionTime" : Timestamp(1536741506, 1), "electionDate" : ISODate("2018-09-12T08:38:26Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" }, { "_id" : 1, "name" : "192.168.137.11:27018", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 1240, "optime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-09-12T08:58:48Z"), "optimeDurableDate" : ISODate("2018-09-12T08:58:48Z"), "lastHeartbeat" : ISODate("2018-09-12T08:58:54.947Z"), "lastHeartbeatRecv" : ISODate("2018-09-12T08:58:55.699Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.137.11:27017", "syncSourceHost" : "192.168.137.11:27017", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 }, { "_id" : 2, "name" : "192.168.137.11:27019", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 1240, "optime" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1536742728, 1), "t" : NumberLong(1) 
}, "optimeDate" : ISODate("2018-09-12T08:58:48Z"), "optimeDurableDate" : ISODate("2018-09-12T08:58:48Z"), "lastHeartbeat" : ISODate("2018-09-12T08:58:54.947Z"), "lastHeartbeatRecv" : ISODate("2018-09-12T08:58:55.760Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.137.11:27017", "syncSourceHost" : "192.168.137.11:27017", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 }], "ok" : 1, "operationTime" : Timestamp(1536742728, 1), "$clusterTime" : { "clusterTime" : Timestamp(1536742728, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
Adding and removing members
A node being added should hold no data beforehand, otherwise data may be lost.
Add a node on port 27020
yang:PRIMARY> rs.add("192.168.137.11:27020")
{
"ok" : 1,
"operationTime" : Timestamp(1536849195, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1536849195, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Verify
yang:PRIMARY> rs.status()
{
"_id" : 3,
"name" : "192.168.137.11:27020",    // 27020 now appears at the end of the member list
Remove 27020
yang:PRIMARY> rs.remove("192.168.137.11:27020")
{
"ok" : 1,
"operationTime" : Timestamp(1536849620, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1536849620, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Failover
Note: each mongod instance corresponds to one process, so killing the process takes the node down; this is how we simulate a failure.
View the processes
[root@cent mongodb]# ps aux | grep mongod
root 74970 0.4 3.1 1580728 59392 ? Sl 21:47 0:15 mongod -f /etc/mongod.conf
root 75510 0.4 2.8 1465952 53984 ? Sl 22:16 0:07 mongod -f /etc/mongod2.conf
root 75538 0.4 2.9 1501348 54496 ? Sl 22:17 0:07 mongod -f /etc/mongod3.conf
root 75566 0.3 2.7 1444652 52144 ? Sl 22:17 0:06 mongod -f /etc/mongod4.conf
Kill the primary (27017)
[root@cent mongodb]# kill -9 74970
[root@cent mongodb]# ps aux | grep mongod
root 75510 0.4 2.9 1465952 55016 ? Sl 22:16 0:10 mongod -f /etc/mongod2.conf
root 75538 0.4 2.9 1493152 55340 ? Sl 22:17 0:10 mongod -f /etc/mongod3.conf
root 75566 0.3 2.7 1444652 52168 ? Sl 22:17 0:08 mongod -f /etc/mongod4.conf
Log in to 27018 to verify
yang:SECONDARY> rs.status()
"_id" : 0,
"name" : "192.168.137.11:27017",
"health" : 0,    // health is 0: the original primary is no longer available
"_id" : 2,
"name" : "192.168.137.11:27019",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",    // this node has taken over as primary
Manual switchover
Manual switchover must be performed on the primary, which is now 27019.
Prevent the node from seeking election for thirty seconds (note: rs.freeze() only works on a secondary; run on a primary it fails, as the output below shows)
[root@cent mongodb]# mongo --port 27019
yang:PRIMARY> rs.freeze(30)
{
"ok" : 0,
"errmsg" : "cannot freeze node when primary or running for election. state: Primary",
"code" : 95,
"codeName" : "NotSecondary",
"operationTime" : Timestamp(1536851239, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1536851239, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Step down from the primary role, staying a secondary for at least 60 seconds, and wait up to 30 seconds for a secondary's oplog to catch up with the primary.
yang:PRIMARY> rs.stepDown(60,30)
2018-09-13T23:07:48.655+0800 E QUERY [thread1] Error: error doing query: failed: network error while attempting to run command 'replSetStepDown' on host '127.0.0.1:27019' :
DB.prototype.runCommand@src/mongo/shell/db.js:168:1
DB.prototype.adminCommand@src/mongo/shell/db.js:186:16
rs.stepDown@src/mongo/shell/utils.js:1341:12
@(shell):1:1
2018-09-13T23:07:48.658+0800 I NETWORK [thread1] trying reconnect to 127.0.0.1:27019 (127.0.0.1) failed
2018-09-13T23:07:48.659+0800 I NETWORK [thread1] reconnect 127.0.0.1:27019 (127.0.0.1) ok
yang:SECONDARY>    // it has stepped down to a secondary immediately
Node elections
A replica set is initialized with the replSetInitiate command (or rs.initiate() in the mongo shell). After initialization the members exchange heartbeat messages and start a primary election; the node that wins votes from a majority of members becomes the Primary, and the rest become Secondaries.
Going back to the replica-set definition step above, tweak the statement slightly to add priority values and an arbiter node.
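The "majority" rule can be sketched in shell-style JavaScript (an illustration of the vote arithmetic only, not MongoDB's actual election protocol):

```javascript
// A candidate needs votes from more than half of all voting members.
function majorityNeeded(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

// A partition (or downed nodes) that leaves fewer than a majority
// of voting members reachable cannot elect a primary at all.
function canElectPrimary(votingMembers, reachableMembers) {
  return reachableMembers >= majorityNeeded(votingMembers);
}
```

With three members a majority is two, so the set survives one node failing, which matches the failover test below. With only two members, losing either one leaves 1 < 2 reachable and no primary can be elected; that is exactly the gap an arbiter fills.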
> cfg={"_id":"yang","members":[{"_id":0,"host":"192.168.137.11:27017","priority":100},{"_id":1,"host":"192.168.137.11:27018","priority":100},{"_id":2,"host":"192.168.137.11:27019","priority":0},{"_id":3,"host":"192.168.137.11:27020","arbiterOnly":true}]}
{
"_id" : "yang",
"members" : [
{
"_id" : 0,
"host" : "192.168.137.11:27017",
"priority" : 100 // priority
},
{
"_id" : 1,
"host" : "192.168.137.11:27018",
"priority" : 100 // priority
},
{
"_id" : 2,
"host" : "192.168.137.11:27019",
"priority" : 0 // priority
},
{
"_id" : 3,
"host" : "192.168.137.11:27020",
"arbiterOnly" : true
}
]
}
Initiate with cfg
> rs.initiate(cfg)
{
"ok" : 1,
"operationTime" : Timestamp(1536852325, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1536852325, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
View the member roles and the oplog
yang:OTHER> rs.isMaster()
{
"hosts" : [ // standard (voting, data-bearing) members
"192.168.137.11:27017",
"192.168.137.11:27018"
],
"passives" : [ // passive members
"192.168.137.11:27019"
],
"arbiters" : [ // arbiter members
"192.168.137.11:27020"
],
"setName" : "yang",
"setVersion" : 1,
"ismaster" : false,
"secondary" : true,
"me" : "192.168.137.11:27017",
"lastWrite" : {
"opTime" : {
"ts" : Timestamp(1536852325, 1),
"t" : NumberLong(-1)
},
"lastWriteDate" : ISODate("2018-09-13T15:25:25Z")
},
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 100000,
"localTime" : ISODate("2018-09-13T15:25:29.008Z"),
"logicalSessionTimeoutMinutes" : 30,
"minWireVersion" : 0,
"maxWireVersion" : 6,
"readOnly" : false,
"ok" : 1,
"operationTime" : Timestamp(1536852325, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1536852325, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Create a collection and run some basic operations so that the oplog gets new entries.
yang:SECONDARY> use mood
switched to db mood
yang:PRIMARY> db.info.insert({"id":1,"name":"mark"})
WriteResult({ "nInserted" : 1 })
yang:PRIMARY> db.info.find()
{ "_id" : ObjectId("5b9a8244b4360d88324a69fc"), "id" : 1, "name" : "mark" }
yang:PRIMARY> db.info.update({"id":1},{$set:{"name":"zhangsan"}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
yang:PRIMARY> db.info.find()
{ "_id" : ObjectId("5b9a8244b4360d88324a69fc"), "id" : 1, "name" : "zhangsan" }
View the oplog
yang:PRIMARY> use local
switched to db local
yang:PRIMARY> show collections
me
oplog.rs
replset.election
replset.minvalid
startup_log
system.replset
system.rollback.id
yang:PRIMARY> db.oplog.rs.find()    // returns many entries
Failover priority test
First shut down the primary
[root@cent ~]# mongod -f /etc/mongod.conf --shutdown
killing process with pid: 74970
Log in to the next node, 27018: it has become the primary. Shut down 27018 as well, so that both standard nodes are gone, to test whether the passive node then becomes primary.
Log in to the passive node
yang:SECONDARY> rs.status()
"_id" : 2,
"name" : "192.168.137.11:27019",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
It does not. Restart the two nodes that were shut down earlier.
[root@cent ~]# mongod -f /etc/mongod2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 77132
child process started successfully, parent exiting
[root@cent ~]# mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 77216
child process started successfully, parent exiting
27018 has become the primary again.
[root@cent ~]# mongo --port 27018
yang:PRIMARY>
So only the standard nodes contend for the primary role.
Data synchronization and read access
The Primary and Secondaries synchronize data through the oplog: after a write completes on the Primary, it records an oplog entry in the special local.oplog.rs collection, and the Secondaries continuously fetch new oplog entries from the Primary and apply them.
Because oplog data keeps growing, local.oplog.rs is a capped collection: when it reaches its configured size limit, the oldest entries are removed. Also, since an oplog entry may be applied more than once on a Secondary, oplog entries must be idempotent, i.e. applying one repeatedly yields the same result.
By default only the primary node allows queries; a secondary does not, and returns the following error.
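Idempotency can be illustrated with a toy update entry in mongo-shell-style JavaScript (a simplification, not the real oplog format): the oplog records the resulting field values (a $set of the final state) rather than the relative operation (such as $inc), so replaying an entry is harmless.

```javascript
// Toy illustration of oplog idempotency: the entry stores final values,
// so applying it once or many times yields the same document.
function applyOplogEntry(doc, entry) {
  // entry.o mimics the {$set: {...}} recorded for an update
  return Object.assign({}, doc, entry.o.$set);
}

// Modeled on the update performed earlier in this article.
const entry = { ns: 'mood.info', op: 'u', o: { $set: { name: 'zhangsan' } } };

let doc = { id: 1, name: 'mark' };
doc = applyOplogEntry(doc, entry);      // first application on a secondary
const once = JSON.stringify(doc);
doc = applyOplogEntry(doc, entry);      // the same entry replayed
const twice = JSON.stringify(doc);
// once === twice: replaying the entry does not change the result
```

Had the entry stored `{$inc: {count: 1}}` instead, replaying it would change the result, which is exactly why such operations are rewritten into their resulting state before being logged.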
yang:SECONDARY> show dbs
2018-09-13T23:58:34.112+0800 E QUERY [thread1] Error: listDatabases failed:{
"operationTime" : Timestamp(1536854312, 1),
"ok" : 0,
"errmsg" : "not master and slaveOk=false",
"code" : 13435,
"codeName" : "NotMasterNoSlaveOk",
"$clusterTime" : {
"clusterTime" : Timestamp(1536854312, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:849:19
shellHelper@src/mongo/shell/utils.js:739:15
@(shellhelp2):1:1
Use the following command to allow reads on a secondary:
yang:SECONDARY> rs.slaveOk()
yang:SECONDARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
mood 0.000GB
An arbiter node does not replicate data from the primary.
You can see that only two nodes carry replication data; the 27020 arbiter has none.
yang:SECONDARY> rs.printSlaveReplicationInfo()
source: 192.168.137.11:27017
syncedTo: Fri Sep 14 2018 00:03:52 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: 192.168.137.11:27019
syncedTo: Fri Sep 14 2018 00:03:52 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
yang:SECONDARY> rs.help()    // list the replica-set helper commands
rs.status() { replSetGetStatus : 1 } checks repl set status
rs.initiate() { replSetInitiate : null } initiates set with default settings
rs.initiate(cfg) { replSetInitiate : cfg } initiates set with configuration cfg
rs.conf() get the current configuration object from local.system.replset
rs.reconfig(cfg) updates the configuration of a running replica set with cfg (disconnects)
rs.add(hostportstr) add a new member to the set with default attributes (disconnects)
rs.add(membercfgobj) add a new member to the set with extra attributes (disconnects)
rs.addArb(hostportstr) add a new member which is arbiterOnly:true (disconnects)
rs.stepDown([stepdownSecs, catchUpSecs]) step down as primary (disconnects)
rs.syncFrom(hostportstr) make a secondary sync from the given member
rs.freeze(secs) make a node ineligible to become primary for the time specified
rs.remove(hostportstr) remove a host from the replica set (disconnects)
rs.slaveOk() allow queries on secondary nodes
rs.printReplicationInfo() check oplog size and time range
rs.printSlaveReplicationInfo() check replica set members and replication lag
db.isMaster() check who is primary
reconfiguration helpers disconnect from the database so the shell will display
an error, even if the command succeeds.
To summarize the node roles:
An Arbiter node only participates in voting; it cannot be elected Primary and does not replicate data from the Primary.
For example, in a two-node replica set with one Primary and one Secondary, if either node goes down the set can no longer serve (no Primary can be elected). Adding an Arbiter node means a Primary can still be elected even when one node is down.
An Arbiter stores no data and is a very lightweight service; when a replica set has an even number of members, it is best to add an Arbiter node to improve availability.
A Priority-0 node has election priority 0 and will never be elected Primary.
For example, if you deploy a replica set across datacenters A and B and want the Primary to always be in A, set the priority of the members in B to 0 so that the Primary must be a member in A. (Note: with this layout, the majority of nodes should be deployed in datacenter A, otherwise a network partition may make it impossible to elect a Primary.)
In MongoDB 3.0, a replica set may have up to 50 members, of which at most 7 can vote in Primary elections; the remaining members must have their votes attribute set to 0, i.e. they do not vote.
A Hidden node cannot become Primary (its priority is 0) and is invisible to the Driver.
Because a Hidden node receives no Driver requests, it can be used for tasks such as backups and offline computation without affecting the replica set's service.
A Delayed node must be a Hidden node, and its data lags behind the Primary by a configurable interval (for example, one hour).
Because a Delayed node's data trails the Primary, it can be used to restore the data to an earlier point in time when erroneous or invalid data has been written to the Primary.
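The datacenter-A/datacenter-B layout can be written in the same cfg style used earlier (the IP addresses here are made up for illustration):

```javascript
// Hypothetical cross-datacenter config: the three members in room A carry
// priority > 0; the room-B members carry priority 0 and can never win an
// election, so the primary always stays in room A.
const cfg = {
  _id: 'yang',
  members: [
    { _id: 0, host: '10.0.1.11:27017', priority: 100 }, // room A
    { _id: 1, host: '10.0.1.12:27017', priority: 100 }, // room A
    { _id: 2, host: '10.0.1.13:27017', priority: 100 }, // room A
    { _id: 3, host: '10.0.2.11:27017', priority: 0 },   // room B
    { _id: 4, host: '10.0.2.12:27017', priority: 0 }    // room B
  ]
};

// Room A holds 3 of the 5 voting members, so even if room B is cut off,
// room A alone still has a majority and can elect a primary.
const roomA = cfg.members.filter(m => m.host.startsWith('10.0.1.'));
const roomAHasMajority = roomA.length > cfg.members.length / 2;
```

Putting only two members in room A would satisfy the priority goal but violate the majority caveat above: losing room B would leave room A unable to elect a primary at all.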