This article walks through learning the Raft distributed consensus algorithm by building and testing a working implementation. The explanation aims to be simple and clear; follow along step by step.
Distributed storage systems usually tolerate faults by maintaining multiple replicas, which improves availability. This raises the core question of distributed storage: how do we keep those replicas consistent? Raft decomposes the problem into four subproblems: 1. leader election, 2. log replication, 3. safety, and 4. membership changes. Source code (gitee): https://gitee.com/ioly/learning.gooop Original post: https://my.oschina.net/ioly/blog/5011356
Goal: implement a highly available, strongly consistent distributed KV store on top of the Raft protocol.
It's finally time for "ignition" — it took some work to get here :)
Added extensive diagnostic logging
Fixed several detail bugs
Wrote unit-test code that:
starts multiple raft nodes
checks that leader election succeeds
writes some data to node 1
writes some data to node 2
reads the data back from node 3
kills the current leader node and verifies that re-election succeeds
tRaftKVServer_test.go starts four raft nodes locally for a functional test:
package server

import (
	"testing"
	"time"

	nrpc "net/rpc"

	"learning/gooop/etcd/raft/debug"
	"learning/gooop/etcd/raft/logger"
	"learning/gooop/etcd/raft/rpc"
)

func Test_RaftKVServer(t *testing.T) {
	fnAssertTrue := func(b bool, msg string) {
		if !b {
			t.Fatal(msg)
		}
	}

	// silence noisy heartbeat log entries
	logger.Exclude("RaftRPCServer.Ping")
	logger.Exclude("RaftRPCServer.Heartbeat")
	logger.Exclude("feLeaderHeartbeat")
	logger.Exclude(").Heartbeat")

	// start nodes 1 to 4
	_ = new(tRaftKVServer).BeginServeTCP("./node-01")
	_ = new(tRaftKVServer).BeginServeTCP("./node-02")
	_ = new(tRaftKVServer).BeginServeTCP("./node-03")
	_ = new(tRaftKVServer).BeginServeTCP("./node-04")

	// wait for startup and leader election
	time.Sleep(1 * time.Second)

	// tRaftLSMImplement(node-01,1).HandleStateChanged, state=2
	fnAssertTrue(logger.Count("HandleStateChanged, state=3") == 1, "expecting leader node")
	t.Logf("passing electing, leader=%v", debug.LeaderNodeID)

	// put into node-1
	c1, _ := nrpc.Dial("tcp", "localhost:3331")
	defer c1.Close()
	kcmd := new(rpc.KVCmd)
	kcmd.OPCode = rpc.KVPut
	kcmd.Key = []byte("key-01")
	kcmd.Content = []byte("content 01")
	kret := new(rpc.KVRet)
	err := c1.Call("KVStoreRPCServer.ExecuteKVCmd", kcmd, kret)
	fnAssertTrue(err == nil && kret.Code == rpc.KVOk, "expecting KVOk")
	t.Log("passing put into node-01")

	// put into node-2
	c2, _ := nrpc.Dial("tcp", "localhost:3332")
	defer c2.Close()
	kcmd.Key = []byte("key-02")
	kcmd.Content = []byte("content 02")
	err = c2.Call("KVStoreRPCServer.ExecuteKVCmd", kcmd, kret)
	fnAssertTrue(err == nil && kret.Code == rpc.KVOk, "expecting KVOk")
	t.Log("passing put into node-02")

	// get from node-3
	c3, _ := nrpc.Dial("tcp", "localhost:3333")
	defer c3.Close()
	kcmd.OPCode = rpc.KVGet
	kcmd.Key = []byte("key-02")
	kcmd.Content = nil
	kret.Content = nil
	kret.Key = nil
	err = c3.Call("KVStoreRPCServer.ExecuteKVCmd", kcmd, kret)
	fnAssertTrue(err == nil && kret.Code == rpc.KVOk, "expecting KVOk")
	fnAssertTrue(kret.Content != nil && string(kret.Content) == "content 02", "expecting content 02")
	t.Log("passing get from node-04")

	// kill the leader node and wait for re-election
	debug.KilledNodeID = debug.LeaderNodeID
	time.Sleep(2 * time.Second)
	fnAssertTrue(logger.Count("HandleStateChanged, state=3") == 2, "expecting reelecting leader node")
	t.Logf("passing reelecting, leader=%v", debug.LeaderNodeID)
	time.Sleep(2 * time.Second)
}
Five "passing" marks can be observed, so the test is OK; the re-election latency is also within the expected range, at roughly 700 ms.
API server listening at: [::]:46709
=== RUN   Test_RaftKVServer
16:51:09.329792609 tRaftKVServer.BeginServeTCP, starting node-01, port=3331
16:51:09.329864584 tBrokenState(from=node-01, to=node-01@localhost:3331).whenStartThenBeginDial
16:51:09.329888978 tBrokenState(from=node-01, to=node-02@localhost:3332).whenStartThenBeginDial
16:51:09.329903778 tBrokenState(from=node-01, to=node-03@localhost:3333).whenStartThenBeginDial
16:51:09.329912231 tBrokenState(from=node-01, to=node-04@localhost:3334).whenStartThenBeginDial
16:51:09.329920585 tFollowerState(node-01).init
16:51:09.329926372 tFollowerState(node-01).initEventHandlers
16:51:09.329941794 tFollowerState(node-01).Start
16:51:09.330218761 tRaftKVServer.BeginServeTCP, service ready at port=3331
16:51:09.330549519 tFollowerState(node-01).whenStartThenBeginWatchLeaderTimeout, begin
16:51:09.333852427 tRaftKVServer.BeginServeTCP, starting node-02, port=3332
16:51:09.333893483 tBrokenState(from=node-02, to=node-01@localhost:3331).whenStartThenBeginDial
16:51:09.333925018 tBrokenState(from=node-02, to=node-02@localhost:3332).whenStartThenBeginDial
16:51:09.333955573 tBrokenState(from=node-02, to=node-03@localhost:3333).whenStartThenBeginDial
16:51:09.33397762 tBrokenState(from=node-02, to=node-04@localhost:3334).whenStartThenBeginDial
16:51:09.333990318 tFollowerState(node-02).init
16:51:09.333997643 tFollowerState(node-02).initEventHandlers
16:51:09.334015293 tFollowerState(node-02).Start
16:51:09.334089713 tRaftKVServer.BeginServeTCP, service ready at port=3332
16:51:09.334290701 tFollowerState(node-02).whenStartThenBeginWatchLeaderTimeout, begin
16:51:09.337803901 tRaftKVServer.BeginServeTCP, starting node-03, port=3333
16:51:09.337842816 tBrokenState(from=node-03, to=node-01@localhost:3331).whenStartThenBeginDial
16:51:09.337866444 tBrokenState(from=node-03, to=node-02@localhost:3332).whenStartThenBeginDial
16:51:09.337880481 tBrokenState(from=node-03, to=node-03@localhost:3333).whenStartThenBeginDial
16:51:09.337893773 tBrokenState(from=node-03, to=node-04@localhost:3334).whenStartThenBeginDial
16:51:09.337905184 tFollowerState(node-03).init
16:51:09.337912795 tFollowerState(node-03).initEventHandlers
16:51:09.337945677 tFollowerState(node-03).Start
16:51:09.338027861 tRaftKVServer.BeginServeTCP, service ready at port=3333
16:51:09.338089164 tFollowerState(node-03).whenStartThenBeginWatchLeaderTimeout, begin
16:51:09.341594205 tRaftKVServer.BeginServeTCP, starting node-04, port=3334
16:51:09.34163547 tBrokenState(from=node-04, to=node-01@localhost:3331).whenStartThenBeginDial
16:51:09.341679869 tBrokenState(from=node-04, to=node-02@localhost:3332).whenStartThenBeginDial
16:51:09.341694419 tBrokenState(from=node-04, to=node-03@localhost:3333).whenStartThenBeginDial
16:51:09.3417269 tBrokenState(from=node-04, to=node-04@localhost:3334).whenStartThenBeginDial
16:51:09.341741739 tFollowerState(node-04).init
16:51:09.341770267 tFollowerState(node-04).initEventHandlers
16:51:09.341793763 tFollowerState(node-04).Start
16:51:09.34213956 tRaftKVServer.BeginServeTCP, service ready at port=3334
16:51:09.342361058 tFollowerState(node-04).whenStartThenBeginWatchLeaderTimeout, begin
16:51:09.481747744 tBrokenState(from=node-01, to=node-04@localhost:3334).whenDialOKThenSetConn
16:51:09.481770012 tBrokenState(from=node-01, to=node-01@localhost:3331).whenDialOKThenSetConn
16:51:09.481771692 tBrokenState(from=node-01, to=node-04@localhost:3334).whenDialOKThenSwitchToConnectedState
16:51:09.481791046 tBrokenState(from=node-01, to=node-04@localhost:3334).beDisposing
16:51:09.481781787 tBrokenState(from=node-01, to=node-01@localhost:3331).whenDialOKThenSwitchToConnectedState
16:51:09.481807689 tBrokenState(from=node-01, to=node-01@localhost:3331).beDisposing
16:51:09.481747893 tBrokenState(from=node-01, to=node-02@localhost:3332).whenDialOKThenSetConn
16:51:09.481933708 tBrokenState(from=node-01, to=node-02@localhost:3332).whenDialOKThenSwitchToConnectedState
16:51:09.481955515 tBrokenState(from=node-01, to=node-02@localhost:3332).beDisposing
16:51:09.481747742 tBrokenState(from=node-01, to=node-03@localhost:3333).whenDialOKThenSetConn
16:51:09.481973577 tBrokenState(from=node-01, to=node-03@localhost:3333).whenDialOKThenSwitchToConnectedState
16:51:09.481980127 tBrokenState(from=node-01, to=node-03@localhost:3333).beDisposing
16:51:09.485403927 tBrokenState(from=node-02, to=node-01@localhost:3331).whenDialOKThenSetConn
16:51:09.485692968 tBrokenState(from=node-02, to=node-01@localhost:3331).whenDialOKThenSwitchToConnectedState
16:51:09.485707781 tBrokenState(from=node-02, to=node-01@localhost:3331).beDisposing
16:51:09.485462572 tBrokenState(from=node-02, to=node-02@localhost:3332).whenDialOKThenSetConn
16:51:09.485520127 tBrokenState(from=node-02, to=node-03@localhost:3333).whenDialOKThenSetConn
16:51:09.485723854 tBrokenState(from=node-02, to=node-02@localhost:3332).whenDialOKThenSwitchToConnectedState
16:51:09.485733962 tBrokenState(from=node-02, to=node-02@localhost:3332).beDisposing
16:51:09.485733667 tBrokenState(from=node-02, to=node-03@localhost:3333).whenDialOKThenSwitchToConnectedState
16:51:09.485749968 tBrokenState(from=node-02, to=node-03@localhost:3333).beDisposing
16:51:09.485474638 tBrokenState(from=node-02, to=node-04@localhost:3334).whenDialOKThenSetConn
16:51:09.485780798 tBrokenState(from=node-02, to=node-04@localhost:3334).whenDialOKThenSwitchToConnectedState
16:51:09.485787997 tBrokenState(from=node-02, to=node-04@localhost:3334).beDisposing
16:51:09.489019463 tBrokenState(from=node-03, to=node-02@localhost:3332).whenDialOKThenSetConn
16:51:09.489141518 tBrokenState(from=node-03, to=node-02@localhost:3332).whenDialOKThenSwitchToConnectedState
16:51:09.489165663 tBrokenState(from=node-03, to=node-02@localhost:3332).beDisposing
16:51:09.489021724 tBrokenState(from=node-03, to=node-03@localhost:3333).whenDialOKThenSetConn
16:51:09.489191277 tBrokenState(from=node-03, to=node-03@localhost:3333).whenDialOKThenSwitchToConnectedState
16:51:09.489199495 tBrokenState(from=node-03, to=node-03@localhost:3333).beDisposing
16:51:09.489021727 tBrokenState(from=node-03, to=node-04@localhost:3334).whenDialOKThenSetConn
16:51:09.489019621 tBrokenState(from=node-03, to=node-01@localhost:3331).whenDialOKThenSetConn
16:51:09.489217044 tBrokenState(from=node-03, to=node-04@localhost:3334).whenDialOKThenSwitchToConnectedState
16:51:09.489222223 tBrokenState(from=node-03, to=node-01@localhost:3331).whenDialOKThenSwitchToConnectedState
16:51:09.489234054 tBrokenState(from=node-03, to=node-01@localhost:3331).beDisposing
16:51:09.489225544 tBrokenState(from=node-03, to=node-04@localhost:3334).beDisposing
16:51:09.492701804 tBrokenState(from=node-04, to=node-01@localhost:3331).whenDialOKThenSetConn
16:51:09.492720605 tBrokenState(from=node-04, to=node-01@localhost:3331).whenDialOKThenSwitchToConnectedState
16:51:09.492728029 tBrokenState(from=node-04, to=node-01@localhost:3331).beDisposing
16:51:09.492702391 tBrokenState(from=node-04, to=node-02@localhost:3332).whenDialOKThenSetConn
16:51:09.492764 tBrokenState(from=node-04, to=node-02@localhost:3332).whenDialOKThenSwitchToConnectedState
16:51:09.492771402 tBrokenState(from=node-04, to=node-02@localhost:3332).beDisposing
16:51:09.492778635 tBrokenState(from=node-04, to=node-04@localhost:3334).whenDialOKThenSetConn
16:51:09.492791174 tBrokenState(from=node-04, to=node-04@localhost:3334).whenDialOKThenSwitchToConnectedState
16:51:09.492799699 tBrokenState(from=node-04, to=node-04@localhost:3334).beDisposing
16:51:09.492844734 tBrokenState(from=node-04, to=node-03@localhost:3333).whenDialOKThenSetConn
16:51:09.492855638 tBrokenState(from=node-04, to=node-03@localhost:3333).whenDialOKThenSwitchToConnectedState
16:51:09.492863777 tBrokenState(from=node-04, to=node-03@localhost:3333).beDisposing
16:51:10.238765817 tFollowerState(node-01).whenLeaderHeartbeatTimeoutThenSwitchToCandidateState, term=0
16:51:10.238808459 tFollowerState(node-01).feDisposing, disposed=true
16:51:10.238885964 tRaftLSMImplement(node-01,1).HandleStateChanged, state=2
16:51:10.238892892 tRaftLSMImplement(node-01,1).meStateChanged, 2
16:51:10.238897706 tCandidateState(node-01).whenStartThenAskForVote
16:51:10.238902038 tCandidateState(node-01).ceAskingForVote, term=1
16:51:10.238907133 tCandidateState(node-01).ceAskingForVote, vote to myself
16:51:10.2389139 tCandidateState(node-01).ceAskingForVote, ticketCount=1
16:51:10.238920737 tCandidateState(node-01).whenAskingForVoteThenWatchElectionTimeout
16:51:10.239208777 tFollowerState(node-04).feCandidateRequestVote, reset last vote
16:51:10.239233375 tFollowerState(node-04).feVoteToCandidate, candidate=node-01, term=1
16:51:10.239261011 tFollowerState(node-02).feCandidateRequestVote, reset last vote
16:51:10.239273156 tFollowerState(node-02).feVoteToCandidate, candidate=node-01, term=1
16:51:10.239288823 tRaftLSMImplement(node-04,1).RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, err=<nil>
16:51:10.239303552 RaftRPCServer.RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, e=<nil>
16:51:10.239343533 tRaftLSMImplement(node-02,1).RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, err=<nil>
16:51:10.239390716 tFollowerState(node-03).feCandidateRequestVote, reset last vote
16:51:10.239431327 tFollowerState(node-03).feVoteToCandidate, candidate=node-01, term=1
16:51:10.239442927 tCandidateState(node-01).handleRequestVoteOK, peer=node-04, term=1
16:51:10.239455262 tCandidateState(node-01).ceReceiveTicket, mTicketCount=2
16:51:10.239463079 tCandidateState(node-01).whenReceiveTicketThenCheckTicketCount
16:51:10.239473836 tRaftLSMImplement(node-03,1).RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, err=<nil>
16:51:10.239488078 RaftRPCServer.RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, e=<nil>
16:51:10.239412948 RaftRPCServer.RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, e=<nil>
16:51:10.239578689 tCandidateState(node-01).handleRequestVoteOK, peer=node-03, term=1
16:51:10.239593183 tCandidateState(node-01).ceReceiveTicket, mTicketCount=3
16:51:10.239601334 tCandidateState(node-01).whenReceiveTicketThenCheckTicketCount
16:51:10.239629478 tCandidateState(node-01).whenWinningTheVoteThenSwitchToLeader
16:51:10.239639823 tCandidateState(node-01).ceDisposing, mTicketCount=0
16:51:10.239696198 tCandidateState(node-01).ceDisposing, mDisposedFlag=true
16:51:10.239752502 tRaftLSMImplement(node-01,2).HandleStateChanged, state=3
16:51:10.239764172 tRaftLSMImplement(node-01,2).meStateChanged, 3
    tRaftKVServer_test.go:34: passing electing, leader=node-01
16:51:10.366875446 tRaftLSMImplement(node-02,1).AppendLog, cmd=&{node-01 1 0xc0004961c0}, ret=&{0 1 0 0}, err=<nil>
16:51:10.366931566 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc0004961c0}, ret=&{0 1 0 0}, e=<nil>
16:51:10.370788589 tRaftLSMImplement(node-03,1).AppendLog, cmd=&{node-01 1 0xc00043c5c0}, ret=&{0 1 0 0}, err=<nil>
16:51:10.370829944 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc00043c5c0}, ret=&{0 1 0 0}, e=<nil>
16:51:10.374865684 tRaftLSMImplement(node-04,1).AppendLog, cmd=&{node-01 1 0xc000496580}, ret=&{0 1 0 0}, err=<nil>
16:51:10.374904568 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc000496580}, ret=&{0 1 0 0}, e=<nil>
16:51:10.375163435 tRaftLSMImplement(node-02,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.375176692 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.375444843 tRaftLSMImplement(node-03,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.375512284 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.375797446 tRaftLSMImplement(node-04,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.375859612 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.379551174 tRaftLSMImplement(node-01,3).ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 49] [99 111 110 116 101 110 116 32 48 49]}, ret=&{0 [] []}, err=<nil>
16:51:10.379577233 KVStoreRPCServer.ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 49] [99 111 110 116 101 110 116 32 48 49]}, ret=&{0 [] []}, e=<nil>
    tRaftKVServer_test.go:46: passing put into node-01
16:51:10.387761245 tRaftLSMImplement(node-02,1).AppendLog, cmd=&{node-01 1 0xc000496d80}, ret=&{0 1 0 0}, err=<nil>
16:51:10.387777654 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc000496d80}, ret=&{0 1 0 0}, e=<nil>
16:51:10.391348874 tRaftLSMImplement(node-03,1).AppendLog, cmd=&{node-01 1 0xc000496e40}, ret=&{0 1 0 0}, err=<nil>
16:51:10.391387707 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc000496e40}, ret=&{0 1 0 0}, e=<nil>
16:51:10.395137344 tRaftLSMImplement(node-04,1).AppendLog, cmd=&{node-01 1 0xc000496f00}, ret=&{0 1 0 0}, err=<nil>
16:51:10.395155304 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc000496f00}, ret=&{0 1 0 0}, e=<nil>
16:51:10.395343688 tRaftLSMImplement(node-02,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.395357145 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.395495604 tRaftLSMImplement(node-03,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.3955081 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.395667457 tRaftLSMImplement(node-04,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.395688067 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.399174064 tRaftLSMImplement(node-01,3).ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 50] [99 111 110 116 101 110 116 32 48 50]}, ret=&{0 [] []}, err=<nil>
16:51:10.399217896 KVStoreRPCServer.ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 50] [99 111 110 116 101 110 116 32 48 50]}, ret=&{0 [] []}, e=<nil>
16:51:10.399373787 tRaftLSMImplement(node-02,1).ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 50] [99 111 110 116 101 110 116 32 48 50]}, ret=&{0 [] []}, err=<nil>
16:51:10.399397275 KVStoreRPCServer.ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 50] [99 111 110 116 101 110 116 32 48 50]}, ret=&{0 [] []}, e=<nil>
    tRaftKVServer_test.go:55: passing put into node-02
16:51:10.400256236 tRaftLSMImplement(node-01,3).ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, err=<nil>
16:51:10.400298117 KVStoreRPCServer.ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, e=<nil>
16:51:10.400639059 tRaftLSMImplement(node-03,1).ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, err=<nil>
16:51:10.400663438 KVStoreRPCServer.ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, e=<nil>
    tRaftKVServer_test.go:68: passing get from node-04
16:51:10.431051964 tRaftKVServer.whenStartThenWatchDebugKill, killing node-01
2021/04/07 16:51:10 rpc.Serve: accept:accept tcp [::]:3331: use of closed network connection
16:51:11.19072568 tFollowerState(node-02).whenLeaderHeartbeatTimeoutThenSwitchToCandidateState, term=1
16:51:11.190755031 tFollowerState(node-02).feDisposing, disposed=true
16:51:11.190856259 tRaftLSMImplement(node-02,1).HandleStateChanged, state=2
16:51:11.190885201 tRaftLSMImplement(node-02,1).meStateChanged, 2
16:51:11.190898966 tCandidateState(node-02).whenStartThenAskForVote
16:51:11.190908485 tCandidateState(node-02).ceAskingForVote, term=2
16:51:11.1909172 tCandidateState(node-02).ceAskingForVote, vote to myself
16:51:11.19093098 tCandidateState(node-02).ceAskingForVote, ticketCount=1
16:51:11.190944035 tCandidateState(node-02).whenAskingForVoteThenWatchElectionTimeout
16:51:11.191694746 tFollowerState(node-03).feVoteToCandidate, candidate=node-02, term=2
16:51:11.191724769 tRaftLSMImplement(node-01,3).RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 0}, err=<nil>
16:51:11.192305012 RaftRPCServer.RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 0}, e=<nil>
16:51:11.192223342 tFollowerState(node-04).feCandidateRequestVote, reset last vote
16:51:11.192464666 tFollowerState(node-04).feVoteToCandidate, candidate=node-02, term=2
16:51:11.19253627 tRaftLSMImplement(node-04,1).RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 2}, err=<nil>
16:51:11.192208542 tRaftLSMImplement(node-03,1).RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 2}, err=<nil>
16:51:11.192613581 tCandidateState(node-02).handleRequestVoteOK, peer=node-01, term=2
16:51:11.192627483 tCandidateState(node-02).ceReceiveTicket, mTicketCount=2
16:51:11.192634994 tCandidateState(node-02).whenReceiveTicketThenCheckTicketCount
16:51:11.19260158 RaftRPCServer.RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 2}, e=<nil>
16:51:11.192764937 tCandidateState(node-02).handleRequestVoteOK, peer=node-03, term=2
16:51:11.192778197 tCandidateState(node-02).ceReceiveTicket, mTicketCount=3
16:51:11.192784986 tCandidateState(node-02).whenReceiveTicketThenCheckTicketCount
16:51:11.192806525 tCandidateState(node-02).whenWinningTheVoteThenSwitchToLeader
16:51:11.192815315 tCandidateState(node-02).ceDisposing, mTicketCount=0
16:51:11.192836837 tCandidateState(node-02).ceDisposing, mDisposedFlag=true
16:51:11.192853274 tRaftLSMImplement(node-02,2).HandleStateChanged, state=3
16:51:11.192863098 tRaftLSMImplement(node-02,2).meStateChanged, 3
16:51:11.193007386 tFollowerState(node-01).init
16:51:11.193017792 tFollowerState(node-01).initEventHandlers
16:51:11.193037127 tRaftLSMImplement(node-01,3).HandleStateChanged, state=1
16:51:11.193046674 tRaftLSMImplement(node-01,3).meStateChanged, 1
16:51:11.193053504 tFollowerState(node-01).Start
16:51:11.19313721 tFollowerState(node-01).whenStartThenBeginWatchLeaderTimeout, begin
16:51:11.192549822 RaftRPCServer.RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 2}, e=<nil>
    tRaftKVServer_test.go:74: passing reelecting, leader=node-02
--- PASS: Test_RaftKVServer (5.09s)
PASS
Debugger finished with exit code 0
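Notice that in both elections the candidate switches to leader exactly when mTicketCount reaches 3: in a four-node cluster, three votes is the smallest strict majority. The quorum arithmetic behind that threshold (a generic formula, not code from the repository) is simply:

```go
package main

import "fmt"

// quorum returns the minimum number of votes (including the candidate's own
// vote for itself) needed to win an election in a cluster of n nodes.
func quorum(n int) int {
	return n/2 + 1
}

func main() {
	// Four nodes, as in the test above: the candidate needs 3 tickets,
	// which is exactly when the log prints whenWinningTheVoteThenSwitchToLeader.
	fmt.Println(quorum(4)) // 3

	// After node-01 is killed, only 3 live nodes remain, but 3 >= quorum(4),
	// so re-election can still succeed.
	fmt.Println(quorum(4) <= 3) // true
}
```

The same threshold explains why a four-node cluster tolerates only one failure, the same as a three-node cluster; odd cluster sizes give better fault tolerance per node.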
Context variables to support unit testing (the debug package):
package debug

// KilledNodeID is used to signal that a node should stop working; written by unit-test code.
var KilledNodeID = ""

// LeaderNodeID holds the current leader node's ID; written by lsm/tLeaderState.
var LeaderNodeID = ""
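How a node might consume these variables can be sketched as follows. This is a hypothetical, self-contained illustration of the pattern (the two variables are redeclared locally, and shouldStop is not a function from the repository): each node periodically compares KilledNodeID against its own ID and shuts down its listener on a match, which is how the test simulates a leader crash without killing the process.

```go
package main

import "fmt"

// Local stand-ins for the two package-level variables in the debug package above.
var KilledNodeID = ""
var LeaderNodeID = ""

// shouldStop is a hypothetical check a node's serve loop could run on each
// iteration: when test code sets KilledNodeID to this node's ID, the node
// stops accepting connections, simulating a crash.
func shouldStop(nodeID string) bool {
	return KilledNodeID == nodeID
}

func main() {
	LeaderNodeID = "node-01"
	fmt.Println(shouldStop("node-01")) // false: no kill requested yet

	// The test kills whichever node is currently leader, as in the log above.
	KilledNodeID = LeaderNodeID
	fmt.Println(shouldStop("node-01")) // true: node-01 shuts down its listener
	fmt.Println(shouldStop("node-02")) // false: other nodes keep running
}
```

Using mutable package-level variables keeps the production code free of test hooks at the cost of a data race under concurrent access; it is acceptable for a learning project, while a production design would inject this signal through a channel or context instead.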