These notes walk through the main loop of CTDB's recovery daemon: the checks it performs on every iteration, and how it escalates into a recmaster election, a database recovery, or a public IP takeover run.
# main_loop

- `kill -0`: check that the main ctdbd daemon is still running (see the sketch after this list)
- ping the local daemon
- if an election is still running (election_timeout set), skip this round
- get the debug level
- get the relevant tunables
- get the runstate
- get the recovery lock file setting from the server
- get the nodemap
- check this node's flags:
  - if we have banned ourselves (self_ban)
  - if stopped or banned, make sure the node stays frozen
- retrieve capabilities from all connected nodes
- validate_recovery_master → [force_election](#force_election)
- verify public IPs: if `ips.pnn == self` but we don't hold the IP, or `ips.pnn != self` but we do hold it → tell the recmaster to do a takeover_run
- from here down, only the recmaster runs:
  - make sure the node flags are right
  - verify all active nodes agree that we are the recmaster, else → [force_election](#force_election); get the vnnmap
  - if recovery is needed → [do_recovery](#do_recovery)
  - verify no active node is in recovery mode, else → [do_recovery](#do_recovery)
  - check that we hold the recovery lock, else → [do_recovery](#do_recovery)
  - get the remote nodemaps; on mismatch → [do_recovery](#do_recovery)
  - count the lmasters (num_lmasters); if `vnnmap->size != num_lmasters` → [do_recovery](#do_recovery)
  - verify every node in the nodemap is also in the vnnmap, else → [do_recovery](#do_recovery)
  - verify all nodes have the same vnnmap
  - if need_takeover_run → [do_takeover_run](#do_takeover_run)
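The `kill -0` check at the top of the loop is plain POSIX: signal 0 delivers nothing and only runs the existence and permission checks for the target pid. A minimal sketch (the helper name is ours, not CTDB's):

```c
#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <sys/types.h>

/* "kill -0" liveness check: signal 0 is never delivered, it only
 * verifies that the pid exists and that we could signal it. */
static bool daemon_is_alive(pid_t ctdbd_pid)
{
    if (kill(ctdbd_pid, 0) == 0) {
        return true;            /* process exists, we may signal it */
    }
    return errno == EPERM;      /* exists, but owned by someone else */
}
```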
<span id="force_election"></span>
# election_handler

- rec = self, ctdb = rec->ctdb
- if the message's pnn == our own pnn → out (ignore our own election message)
- ctdb_election_win compares the two election states:
  - the longest-running recovery daemon wins
  - ties are broken by the biggest pnn
- if the sender wins: release the recovery lock file and let recmaster = that node
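As a rough illustration of the win rule above (longest running first, biggest pnn as tie-break), here is a sketch; the message layout is assumed for illustration and is not CTDB's actual wire format:

```c
#include <stdbool.h>
#include <stdint.h>
#include <sys/time.h>

/* Hypothetical election message: fields assumed for illustration. */
struct election_msg {
    uint32_t pnn;            /* node number of the sender */
    struct timeval started;  /* when its recovery daemon started */
};

/* Return true if `mine` beats `theirs`: the longest-running daemon
 * (earliest start time) wins, ties go to the biggest pnn. */
static bool election_win(const struct election_msg *mine,
                         const struct election_msg *theirs)
{
    if (timercmp(&mine->started, &theirs->started, <)) {
        return true;                 /* we have been running longer */
    }
    if (timercmp(&mine->started, &theirs->started, >)) {
        return false;
    }
    return mine->pnn > theirs->pnn;  /* tie-break: biggest pnn */
}
```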
<h2 id="do_recovery"></h2>

# do_recovery

- we are the recmaster
- need_recovery = true
- begin
- self_ban check
- take the recover_lock_file with fcntl F_SETLK, lock type F_WRLCK (see the sketch below)
- get the list of all databases (dbmap)
- create any missing local databases
- create any missing remote databases
- update so that all nodes use the same lock files
- [db_recovery_parallel](#db_recovery_parallel)
- [do_takeover_run](#do_takeover_run)
- send the "reconfigured" message
- need_recovery = false
- end
- wait rerecovery_timeout
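The recovery-lock step is ordinary POSIX advisory locking: only one recovery daemon in the cluster can hold an F_WRLCK on the shared file at a time. A minimal sketch, assuming an example path:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Try to take the cluster recovery lock. Returns the held fd, or -1
 * if another recmaster already holds the lock. The path is an
 * example, not a value CTDB ships with. */
static int take_recovery_lock(const char *path)
{
    struct flock lock;
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd == -1) {
        perror("open");
        return -1;
    }

    memset(&lock, 0, sizeof(lock));
    lock.l_type = F_WRLCK;    /* exclusive write lock */
    lock.l_whence = SEEK_SET;
    lock.l_start = 0;
    lock.l_len = 0;           /* lock the whole file */

    if (fcntl(fd, F_SETLK, &lock) == -1) {  /* non-blocking attempt */
        perror("fcntl(F_SETLK)");
        close(fd);
        return -1;            /* someone else holds the recovery lock */
    }
    return fd;                /* keep fd open to keep holding it */
}
```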
<span id='db_recovery_parallel'></span>
# db_recovery_parallel

- the environment variable CTDB_RECOVERY_HELPER can override the helper binary
- default directory: CTDB_HELPER_BINDIR = /usr/libexec/ctdb/
- helper binary: ctdb_recovery_helper
- the parent talks to the helper through a pipe; the helper talks to ctdbd through its unix socket
- args[0] = fd[1] (the pipe's write end, passed as the output fd)
- args[1] = daemon.name = CTDB_SOCKET = /var/run/ctdb/ctdbd.socket
- args[2] = a random generation number (must not be 1)
- exec /usr/libexec/ctdb/ctdb_recovery_helper `<log-fd> <output-fd> <ctdb-socket-path> <generation>`, for example: `1 1 /var/run/ctdb/ctdbd.socket 2` (see the sketch after this list)
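The pipe-plus-exec dance can be sketched with plain fork/exec: the fd number is passed as text in argv so the helper can pick up the pipe's write end. The argument values mirror the example above and are purely illustrative:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal sketch of the spawn pattern described above: create a
 * pipe, fork, and hand the write end's fd number to the helper as
 * an argv[] string. Paths and the generation value are examples. */
int main(void)
{
    int fd[2];
    char output_fd[16];
    pid_t pid;

    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    pid = fork();
    if (pid == 0) {
        /* child: becomes the helper process */
        close(fd[0]);                       /* keep only the write end */
        snprintf(output_fd, sizeof(output_fd), "%d", fd[1]);
        execl("/usr/libexec/ctdb/ctdb_recovery_helper",
              "ctdb_recovery_helper",
              "1",                           /* <log-fd> */
              output_fd,                     /* <output-fd> = fd[1] */
              "/var/run/ctdb/ctdbd.socket",  /* <ctdb-socket-path> */
              "2",                           /* <generation>, example */
              (char *)NULL);
        perror("execl");
        _exit(127);
    }

    /* parent: reads the helper's results from the pipe's read end */
    close(fd[1]);
    /* ... read status updates from fd[0] ... */
    waitpid(pid, NULL, 0);
    return 0;
}
```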
<span id='do_takeover_run'></span>
# do_takeover_run

- if a takeover run is already in progress → done
- begin
- srvid = 0, pnn = -1
- build the list of connected nodes
- disable takeover runs for 60s
- ctdb_takeover_run
- re-enable takeover runs
- ok
- end
<span id='ctdb_takeover_run'></span>
# ctdb_takeover_run

- allocate the ipalloc_state, including the per-node arrays
- fill in ipalloc_state's IP allocation algorithm
- fill in ipalloc_state's NoIPFailback locally: this is really a cluster-wide setting, but only the master uses the value
- fetch NoIPTakeover and NoIPHostOnAllDisabled from all connected nodes: these fetches are done separately so they can be faked in unit tests
- fill in ipalloc_state's NoIPTakeover
- fill in ipalloc_state's NoIPHost, derived from the node flags and NoIPHostOnAllDisabled
- retrieve and fill in ipalloc_state's lists of known and available IPs
- exit early if no IP addresses are available
- build the list of (known IPs, currently assigned node)
- fill in the list of nodes that force a rebalance: an internal structure with no way to fetch it at present; only the LCP2 algorithm uses it, for nodes that have had new IP addresses added
- run the IP allocation algorithm
- send RELEASE_IP to all nodes for the IPs they should not hold (see the sketch after this list)
- send TAKE_IP to all nodes for the IPs they should hold
- send IPREALLOCATED to all nodes (a backward-compatibility hack)
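To make the RELEASE_IP / TAKE_IP step concrete, here is a small sketch of deriving those messages from the difference between the current and the newly computed assignments; the struct is invented for illustration and is not CTDB's:

```c
#include <stdio.h>

/* Hypothetical allocation result for one public IP. */
struct ip_assignment {
    const char *addr;
    int cur_pnn;   /* node currently holding the IP (-1 = none) */
    int new_pnn;   /* node chosen by the allocation algorithm */
};

int main(void)
{
    struct ip_assignment ips[] = {
        {"10.0.0.1", 0, 0},   /* unchanged: no messages */
        {"10.0.0.2", 0, 1},   /* moves from node 0 to node 1 */
        {"10.0.0.3", -1, 2},  /* newly assigned: take only */
    };

    /* every IP whose holder changes yields a release on the old
     * node and a take on the new one */
    for (int i = 0; i < 3; i++) {
        if (ips[i].cur_pnn == ips[i].new_pnn) {
            continue;
        }
        if (ips[i].cur_pnn != -1) {
            printf("RELEASE_IP %s -> node %d\n",
                   ips[i].addr, ips[i].cur_pnn);
        }
        if (ips[i].new_pnn != -1) {
            printf("TAKE_IP    %s -> node %d\n",
                   ips[i].addr, ips[i].new_pnn);
        }
    }
    return 0;
}
```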
- ipalloc_state_init
- three algorithms; ipalloc_lcp2 is the default → [ipalloc_lcp2](#ipalloc_lcp2)
- ipalloc_deterministic → pnn = i % numnodes (toy sketch below)
- ipalloc_nondeterministic → take pnn = 0's IP count as the baseline min, then round-robin: a node already holding fewer IPs than min can receive an IP
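The deterministic policy is the simplest to show. A toy sketch (real code also has to respect the NoIPTakeover/NoIPHost constraints, which are skipped here):

```c
#include <stdio.h>

/* Deterministic allocation: IP number i simply goes to node
 * i % numnodes. */
int main(void)
{
    const char *ips[] = {"10.0.0.1", "10.0.0.2", "10.0.0.3",
                         "10.0.0.4", "10.0.0.5"};
    int numnodes = 3;

    for (int i = 0; i < 5; i++) {
        printf("%s -> pnn %d\n", ips[i], i % numnodes);
    }
    return 0;
}
```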
<span id='ipalloc_lcp2'></span>
# ipalloc_lcp2

- unassign_unsuitable_ips: any IP that no longer fits its node gets pnn = -1
- lcp2_init
- lcp2_allocate_unassigned: the distance between two IPs is computed with XOR, taken from the highest differing bit downwards; an IPv4 address is embedded in a 128-bit key (32 + 32 + distance + 32), so distances range from 0 to 128; a node's imbalance (dstdsum) is the sum of the squared distances from each of its IPs to every other IP on the same node; each unassigned IP goes to the rebalance candidate that yields the minimum imbalance (minnode / mindstdsum); see the sketch after this list
- lcp2_failback
- keep moving IPs until all lcp2_imbalances are evened out
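A simplified, IPv4-only sketch of the distance and imbalance ideas above: XOR the two addresses and take the highest differing bit position, so addresses sharing a long common prefix count as close, and score a node by the sum of squared pairwise distances. CTDB's real implementation works on 128-bit keys; the names here are ours, not CTDB's:

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

/* LCP2-style distance, simplified to IPv4: XOR the addresses, then
 * the distance is the position of the highest differing bit. */
static uint32_t ip_distance(uint32_t a, uint32_t b)
{
    uint32_t x = a ^ b;
    uint32_t d = 0;
    while (x != 0) {       /* count up to the highest set bit */
        x >>= 1;
        d++;
    }
    return d;
}

/* Imbalance of one node: sum of squared distances between every
 * pair of IPs hosted on it. LCP2 moves IPs to minimise this. */
static uint64_t node_imbalance(const uint32_t *ips, int n)
{
    uint64_t sum = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (i != j) {
                uint32_t d = ip_distance(ips[i], ips[j]);
                sum += (uint64_t)d * d;
            }
        }
    }
    return sum;
}

int main(void)
{
    uint32_t ips[3];
    ips[0] = ntohl(inet_addr("10.0.0.1"));
    ips[1] = ntohl(inet_addr("10.0.0.2"));
    ips[2] = ntohl(inet_addr("10.0.1.1"));  /* far from the others */
    printf("imbalance = %llu\n",
           (unsigned long long)node_imbalance(ips, 3));
    return 0;
}
```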