This article explains how the DaemonSet Controller is created and started, and walks through how it syncs and manages DaemonSet Pods.
The core structure of the DaemonSet Controller contains the following fields:
burstReplicas int: the upper bound on the number of Pods that can be created and deleted in each sync, hard-coded to 250.
queue workqueue.RateLimitingInterface: a delaying queue holding the keys (namespace/name) of DaemonSets waiting to be synced.
syncHandler func(dsKey string) error: syncs the DaemonSet objects taken from the queue, covering replica management, UpdateStrategy-driven upgrades and DaemonSet Status updates; this is the most important piece of logic in the DaemonSet Controller.
expectations controller.ControllerExpectationsInterface: a TTL cache that tracks, per DaemonSet, how many Pod creations/deletions each sync expects to observe.
suspendedDaemonPods map[string]sets.String: keyed by NodeName; the value is the set of DaemonSets that have a Pod on that Node in the 'wantToRun && !shouldSchedule' state, where:
- wantToRun: true when the controller's simulated scheduling (Predicates, mainly GeneralPredicates and PodToleratesNodeTaints) succeeds once the following resource-related PredicateFailureErrors are ignored; any other PredicateFailureError makes it false. If the DaemonSet Spec specifies a NodeName, wantToRun is determined by whether it matches node.Name.
  - ErrDiskConflict
  - ErrVolumeZoneConflict
  - ErrMaxVolumeCountExceeded
  - ErrNodeUnderMemoryPressure
  - ErrNodeUnderDiskPressure
  - InsufficientResourceError
- shouldSchedule:
  - If the DaemonSet Spec specifies a NodeName, shouldSchedule is determined by whether it matches node.Name.
  - If any kind of PredicateFailureError occurs during the Predicate checks, shouldSchedule is false.
  - If an InsufficientResourceError occurs, shouldSchedule is also false.
failedPodsBackoff *flowcontrol.Backoff: when syncDaemonSet handles the deletion of Failed DaemonSet Pods, it delays them according to a backoff sequence of 1s, 2s, 4s, 8s, ... up to 15min, which acts as flow control. Without it, a Pod rejected by the kubelet would be recreated and rejected again immediately, producing many useless loops. When the controller Runs, it also starts a goroutine that periodically performs a forced GC of the failedPods backoff records, dropping entries that have not been updated for 2*MaxDuration (2*15min).
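For reference, here is a trimmed sketch of the controller struct as just described; only the fields discussed above are shown, and the real struct in pkg/controller/daemon/daemon_controller.go carries additional clients, listers, recorders and informer-synced funcs:

package daemon

import (
    "k8s.io/apimachinery/pkg/util/sets"
    "k8s.io/client-go/util/flowcontrol"
    "k8s.io/client-go/util/workqueue"
    "k8s.io/kubernetes/pkg/controller"
)

// Trimmed sketch of DaemonSetsController: only the fields described above.
type DaemonSetsController struct {
    // upper bound on Pods created/deleted per sync, hard-coded to 250
    burstReplicas int
    // delaying queue of DaemonSet keys (namespace/name) waiting to be synced
    queue workqueue.RateLimitingInterface
    // core sync entry point, normally set to syncDaemonSet
    syncHandler func(dsKey string) error
    // per-DaemonSet TTL cache of expected Pod creations/deletions
    expectations controller.ControllerExpectationsInterface
    // nodeName -> DaemonSets whose Pod on that node is wantToRun && !shouldSchedule
    suspendedDaemonPods map[string]sets.String
    // 1s,2s,4s,... up to 15min backoff for deleting failed daemon pods
    failedPodsBackoff *flowcontrol.Backoff
}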
NewDaemonSetsController creates the controller; a large part of its job is registering the following Informer EventHandlers:
daemonSetInformer: AddFunc/DeleteFunc/UpdateFunc all ultimately boil down to enqueueing the DaemonSet;
historyInformer:
AddFunc: addHistory;
UpdateFunc: updateHistory;
DeleteFunc: deleteHistory;
podInformer:
AddFunc: addPod;
UpdateFunc: updatePod;
DeleteFunc: deletePod;
nodeInformer:
AddFunc: addNode;
UpdateFunc: updateNode;
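The registration follows the usual informer pattern; a condensed sketch is shown below (handler bodies are simplified — the real handlers also resolve tombstones, check owner references and manage expectations):

// Condensed sketch of the EventHandler registration inside NewDaemonSetsController.
daemonSetInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc:    func(obj interface{}) { dsc.enqueueDaemonSet(obj.(*apps.DaemonSet)) },
    UpdateFunc: func(old, cur interface{}) { dsc.enqueueDaemonSet(cur.(*apps.DaemonSet)) },
    DeleteFunc: func(obj interface{}) { /* resolve tombstone, then enqueue the DaemonSet */ },
})
historyInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc:    dsc.addHistory,
    UpdateFunc: dsc.updateHistory,
    DeleteFunc: dsc.deleteHistory,
})
podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc:    dsc.addPod,
    UpdateFunc: dsc.updatePod,
    DeleteFunc: dsc.deletePod,
})
nodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc:    dsc.addNode,
    UpdateFunc: dsc.updateNode,
})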
When the DaemonSet Controller Run starts, it does two main things (sketched right after this list):
- Start 2 worker goroutines; each worker pulls DaemonSet keys from the queue and syncs them.
- Start 1 failedPodsBackoff GC goroutine that runs every minute and cleans up the backoff records kept for Failed Pods of every DaemonSet/Node pair in the cluster.
Note that only deletePod calls requeueSuspendedDaemonPods -- why?
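A minimal sketch of Run, assuming the two-worker default used by kube-controller-manager (cache-sync waits and logging omitted):

// Sketch of DaemonSetsController.Run.
func (dsc *DaemonSetsController) Run(workers int, stopCh <-chan struct{}) {
    defer dsc.queue.ShutDown()

    // workers == 2 by default: each loops, popping a DaemonSet key off the queue
    // and calling dsc.syncHandler (i.e. syncDaemonSet) on it.
    for i := 0; i < workers; i++ {
        go wait.Until(dsc.runWorker, time.Second, stopCh)
    }

    // Periodic GC of the failed-pods backoff records; entries that have not been
    // updated for 2*MaxDuration are dropped on each pass.
    go wait.Until(dsc.failedPodsBackoff.GC, BackoffGCInterval, stopCh)

    <-stopCh
}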
A worker takes a DaemonSet key awaiting sync from the queue and calls syncDaemonSet, the core entry point of DaemonSet management.
pkg/controller/daemon/daemon_controller.go:1208

func (dsc *DaemonSetsController) syncDaemonSet(key string) error {
    ...
    ds, err := dsc.dsLister.DaemonSets(namespace).Get(name)
    if errors.IsNotFound(err) {
        klog.V(3).Infof("daemon set has been deleted %v", key)
        dsc.expectations.DeleteExpectations(key)
        return nil
    }
    if err != nil {
        return fmt.Errorf("unable to retrieve ds %v from store: %v", key, err)
    }

    everything := metav1.LabelSelector{}
    if reflect.DeepEqual(ds.Spec.Selector, &everything) {
        dsc.eventRecorder.Eventf(ds, v1.EventTypeWarning, SelectingAllReason, "This daemon set is selecting all pods. A non-empty selector is required.")
        return nil
    }

    // Don't process a daemon set until all its creations and deletions have been processed.
    // For example if daemon set foo asked for 3 new daemon pods in the previous call to manage,
    // then we do not want to call manage on foo until the daemon pods have been created.
    ...
    if ds.DeletionTimestamp != nil {
        return nil
    }

    // Construct histories of the DaemonSet, and get the hash of current history
    cur, old, err := dsc.constructHistory(ds)
    if err != nil {
        return fmt.Errorf("failed to construct revisions of DaemonSet: %v", err)
    }
    hash := cur.Labels[apps.DefaultDaemonSetUniqueLabelKey]

    if !dsc.expectations.SatisfiedExpectations(dsKey) {
        // Only update status. Don't raise observedGeneration since controller didn't process object of that generation.
        return dsc.updateDaemonSetStatus(ds, hash, false)
    }

    err = dsc.manage(ds, hash)
    if err != nil {
        return err
    }

    // Process rolling updates if we're ready.
    if dsc.expectations.SatisfiedExpectations(dsKey) {
        switch ds.Spec.UpdateStrategy.Type {
        case apps.OnDeleteDaemonSetStrategyType:
        case apps.RollingUpdateDaemonSetStrategyType:
            err = dsc.rollingUpdate(ds, hash)
        }
        if err != nil {
            return err
        }
    }

    err = dsc.cleanupHistory(ds, old)
    if err != nil {
        return fmt.Errorf("failed to clean up revisions of DaemonSet: %v", err)
    }

    return dsc.updateDaemonSetStatus(ds, hash, true)
}
The core flow is as follows:
- First check whether the DaemonSet object still exists in the local store; if it has been deleted, remove its entry from expectations.
- Check whether the DaemonSet's LabelSelector is empty; if it is, syncDaemonSet returns without syncing, so no Pods will be created for this DaemonSet.
- If DeletionTimestamp is non-nil, the user has triggered deletion; syncDaemonSet returns without syncing, and the DaemonSet's Pods are left to the GC Controller to delete.
- constructHistory then fetches the DaemonSet's current ControllerRevision and all old ControllerRevisions, makes sure every ControllerRevision carries the label "controller-revision-hash: ControllerRevision.Name", and sets the current ControllerRevision's Revision to maxRevision(old) + 1.
- Check whether the current expectations are satisfied; if not, only the DaemonSet Status is updated and this round of syncing ends. Expectations count as satisfied when any of the following holds:
  - both add and del in expectations are not greater than 0, meaning the controller's expectations have been realized;
  - the expectations have expired (the timeout is 5min and not configurable), which forces a sync;
  - expectations has no record for this DaemonSet yet, which also triggers a sync.
  The updateDaemonSetStatus call here updates the following fields of DaemonSet.Status; note that ObservedGeneration is not updated, since the controller did not process the object of that generation:
  - DesiredNumberScheduled: the number of DaemonSet Pods that should be scheduled, i.e. the Pods for which wantToRun is true.
  - CurrentNumberScheduled: of those, the number of Pods already running on their Nodes.
  - NumberMisscheduled: the number of Pods running on Nodes where they should not be scheduled (wantToRun is false), i.e. mis-scheduled Pods.
  - NumberReady: of CurrentNumberScheduled, the number of Pods whose Ready condition is true.
  - UpdatedNumberScheduled: of CurrentNumberScheduled, the number of Pods whose controller-revision-hash label matches the hash of the current ControllerRevision, i.e. Pods already running the updated Pod Template.
  - NumberAvailable: of CurrentNumberScheduled, the number of Pods whose Ready condition is true and that are Available (Ready for longer than minReadySeconds).
  - NumberUnavailable: desiredNumberScheduled - numberAvailable.
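A small sketch of the "expectations satisfied" decision described above; the counter names and the 5-minute expiry follow the description, the real logic lives in SatisfiedExpectations in pkg/controller/controller_utils.go:

// add/del are the outstanding create/delete counters recorded in expectations,
// ts is when they were last set, hasRecord says whether a record exists at all.
func expectationsSatisfied(add, del int64, ts time.Time, hasRecord bool) bool {
    switch {
    case !hasRecord:
        return true // nothing recorded for this DaemonSet yet -> trigger a sync
    case add <= 0 && del <= 0:
        return true // all expected creates/deletes have been observed -> sync
    case time.Since(ts) > 5*time.Minute:
        return true // expectations expired -> force a sync
    default:
        return false // still waiting; only update status this round
    }
}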
- Call manage to manage the DaemonSet Pods: it computes the lists of Pods to create and delete, then calls syncNodes to create and delete them in increasing batches (1, 2, 4, 8, ...). If Failed DaemonSet Pods were found on some Nodes before syncNodes, an error is returned after syncNodes. syncNodes drives the add/del counters in expectations down to zero or even negative; only then will a subsequent syncDaemonSet call manage again for Pod management.
- If manage returns an error, this syncDaemonSet round ends; otherwise the flow continues.
- Check again whether the expectations are satisfied; if so, trigger the DaemonSet update according to its UpdateStrategy:
  - OnDelete: wait for the user to delete Pods, which enqueues the DaemonSet; new Pods are then created from the latest Pod Template in syncNodes.
  - RollingUpdate: call rollingUpdate to roll the Pods; this is analyzed in detail below.
- If the DaemonSet update succeeds, clean up the oldest ControllerRevisions as needed, i.e. whenever the number of old ControllerRevisions exceeds Spec.RevisionHistoryLimit (default 10).
- Finally, updateDaemonSetStatus updates DaemonSet.Status again; unlike the earlier call, this time Status.ObservedGeneration is updated as well.
Before Kubernetes 1.12, the DaemonSet Controller scheduled daemon Pods by default: it set spec.nodeName on the Pod to be scheduled, the kubelet on that Node watched the event, and the DaemonSet Pod was created on that node. In Kubernetes 1.12+, the ScheduleDaemonSetPods FeatureGate is enabled by default and DaemonSet scheduling is handed over to the default scheduler.
When managing a DaemonSet, podsShouldBeOnNode is called per Node to compute the DaemonSet Pods that should be started on that Node (nodesNeedingDaemonPods), the DaemonSet Pods that should be deleted from that Node (podsToDelete), and the number of already Failed DaemonSet Pods on that Node. syncNodes then uses these three results to create and delete the corresponding Pods.
func (dsc *DaemonSetsController) manage(ds *apps.DaemonSet, hash string) error {
    // Find out the pods which are created for the nodes by DaemonSet.
    nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ds)
    ...
    for _, node := range nodeList {
        nodesNeedingDaemonPodsOnNode, podsToDeleteOnNode, failedPodsObservedOnNode, err := dsc.podsShouldBeOnNode(
            node, nodeToDaemonPods, ds)
        if err != nil {
            continue
        }

        nodesNeedingDaemonPods = append(nodesNeedingDaemonPods, nodesNeedingDaemonPodsOnNode...)
        podsToDelete = append(podsToDelete, podsToDeleteOnNode...)
        failedPodsObserved += failedPodsObservedOnNode
    }

    // Label new pods using the hash label value of the current history when creating them
    if err = dsc.syncNodes(ds, podsToDelete, nodesNeedingDaemonPods, hash); err != nil {
        return err
    }
    ...
    return nil
}
How does podsShouldBeOnNode compute nodesNeedingDaemonPods, podsToDelete and failedPodsObserved? It calls nodeShouldRunDaemonPod(node *v1.Node, ds *apps.DaemonSet) to compute three status values:
- wantToRun: true when the controller's simulated scheduling (Predicates, mainly GeneralPredicates and PodToleratesNodeTaints) succeeds once the following resource-related PredicateFailureErrors are ignored; any other PredicateFailureError makes it false. If the DaemonSet Spec specifies a NodeName, wantToRun is determined by whether it matches node.Name.
  - ErrDiskConflict
  - ErrVolumeZoneConflict
  - ErrMaxVolumeCountExceeded
  - ErrNodeUnderMemoryPressure
  - ErrNodeUnderDiskPressure
  - InsufficientResourceError
- shouldSchedule:
  - If the DaemonSet Spec specifies a NodeName, shouldSchedule is determined by whether it matches node.Name.
  - If any kind of PredicateFailureError occurs, shouldSchedule is false.
  - If an InsufficientResourceError occurs, shouldSchedule is also false.
- shouldContinueRunning: false when any of the following occurs, true otherwise:
  - ErrNodeSelectorNotMatch
  - ErrPodNotMatchHostName
  - ErrNodeLabelPresenceViolated
  - ErrPodNotFitsHostPorts
  - ErrTaintsTolerationsNotMatch, but only when the Pod does not tolerate the Node's NoExecute taints; mismatches on other taint effects leave shouldContinueRunning true, i.e. only NoExecute taints matter for this value.
  - ErrPodAffinityNotMatch
  - ErrServiceAffinityViolated
  - an unknown predicate failure reason
Failed DaemonSet Pods found here are handled (deleted for re-creation) through failedPodsBackoff *flowcontrol.Backoff with a backoff sequence of 1s, 2s, 4s, 8s, ..., which acts as flow control; the GC goroutine started in Run forcibly clears backoff records that have not been updated for 2*MaxDuration (2*15min).
Based on these three status values, podsShouldBeOnNode then derives nodesNeedingDaemonPods []string, podsToDelete []string and failedPodsObserved int.
// podsShouldBeOnNode figures out the DaemonSet pods to be created and deleted on the given node:
func (dsc *DaemonSetsController) podsShouldBeOnNode(
    node *v1.Node,
    nodeToDaemonPods map[string][]*v1.Pod,
    ds *apps.DaemonSet,
) (nodesNeedingDaemonPods, podsToDelete []string, failedPodsObserved int, err error) {
    wantToRun, shouldSchedule, shouldContinueRunning, err := dsc.nodeShouldRunDaemonPod(node, ds)
    if err != nil {
        return
    }

    daemonPods, exists := nodeToDaemonPods[node.Name]
    dsKey, _ := cache.MetaNamespaceKeyFunc(ds)
    dsc.removeSuspendedDaemonPods(node.Name, dsKey)

    switch {
    case wantToRun && !shouldSchedule:
        // If daemon pod is supposed to run, but can not be scheduled, add to suspended list.
        dsc.addSuspendedDaemonPods(node.Name, dsKey)
    case shouldSchedule && !exists:
        // If daemon pod is supposed to be running on node, but isn't, create daemon pod.
        nodesNeedingDaemonPods = append(nodesNeedingDaemonPods, node.Name)
    case shouldContinueRunning:
        // If a daemon pod failed, delete it
        // If there's non-daemon pods left on this node, we will create it in the next sync loop
        var daemonPodsRunning []*v1.Pod
        for _, pod := range daemonPods {
            if pod.DeletionTimestamp != nil {
                continue
            }
            if pod.Status.Phase == v1.PodFailed {
                failedPodsObserved++

                // This is a critical place where DS is often fighting with kubelet that rejects pods.
                // We need to avoid hot looping and backoff.
                backoffKey := failedPodsBackoffKey(ds, node.Name)

                now := dsc.failedPodsBackoff.Clock.Now()
                inBackoff := dsc.failedPodsBackoff.IsInBackOffSinceUpdate(backoffKey, now)
                if inBackoff {
                    delay := dsc.failedPodsBackoff.Get(backoffKey)
                    klog.V(4).Infof("Deleting failed pod %s/%s on node %s has been limited by backoff - %v remaining",
                        pod.Namespace, pod.Name, node.Name, delay)
                    dsc.enqueueDaemonSetAfter(ds, delay)
                    continue
                }

                dsc.failedPodsBackoff.Next(backoffKey, now)

                msg := fmt.Sprintf("Found failed daemon pod %s/%s on node %s, will try to kill it", pod.Namespace, pod.Name, node.Name)
                klog.V(2).Infof(msg)
                // Emit an event so that it's discoverable to users.
                dsc.eventRecorder.Eventf(ds, v1.EventTypeWarning, FailedDaemonPodReason, msg)
                podsToDelete = append(podsToDelete, pod.Name)
            } else {
                daemonPodsRunning = append(daemonPodsRunning, pod)
            }
        }
        // If daemon pod is supposed to be running on node, but more than 1 daemon pod is running, delete the excess daemon pods.
        // Sort the daemon pods by creation time, so the oldest is preserved.
        if len(daemonPodsRunning) > 1 {
            sort.Sort(podByCreationTimestampAndPhase(daemonPodsRunning))
            for i := 1; i < len(daemonPodsRunning); i++ {
                podsToDelete = append(podsToDelete, daemonPodsRunning[i].Name)
            }
        }
    case !shouldContinueRunning && exists:
        // If daemon pod isn't supposed to run on node, but it is, delete all daemon pods on node.
        for _, pod := range daemonPods {
            podsToDelete = append(podsToDelete, pod.Name)
        }
    }

    return nodesNeedingDaemonPods, podsToDelete, failedPodsObserved, nil
}

// nodeShouldRunDaemonPod checks a set of preconditions against a (node,daemonset) and returns a summary.
func (dsc *DaemonSetsController) nodeShouldRunDaemonPod(node *v1.Node, ds *apps.DaemonSet) (wantToRun, shouldSchedule, shouldContinueRunning bool, err error) {
    newPod := NewPod(ds, node.Name)

    // Because these bools require an && of all their required conditions, we start
    // with all bools set to true and set a bool to false if a condition is not met.
    // A bool should probably not be set to true after this line.
    wantToRun, shouldSchedule, shouldContinueRunning = true, true, true
    // If the daemon set specifies a node name, check that it matches with node.Name.
    if !(ds.Spec.Template.Spec.NodeName == "" || ds.Spec.Template.Spec.NodeName == node.Name) {
        return false, false, false, nil
    }

    reasons, nodeInfo, err := dsc.simulate(newPod, node, ds)
    if err != nil {
        klog.Warningf("DaemonSet Predicates failed on node %s for ds '%s/%s' due to unexpected error: %v", node.Name, ds.ObjectMeta.Namespace, ds.ObjectMeta.Name, err)
        return false, false, false, err
    }

    // TODO(k82cn): When 'ScheduleDaemonSetPods' upgrade to beta or GA, remove unnecessary check on failure reason,
    // e.g. InsufficientResourceError; and simplify "wantToRun, shouldSchedule, shouldContinueRunning"
    // into one result, e.g. selectedNode.
    var insufficientResourceErr error
    for _, r := range reasons {
        klog.V(4).Infof("DaemonSet Predicates failed on node %s for ds '%s/%s' for reason: %v", node.Name, ds.ObjectMeta.Namespace, ds.ObjectMeta.Name, r.GetReason())
        switch reason := r.(type) {
        case *predicates.InsufficientResourceError:
            insufficientResourceErr = reason
        case *predicates.PredicateFailureError:
            var emitEvent bool
            // we try to partition predicates into two partitions here: intentional on the part of the operator and not.
            switch reason {
            // intentional
            case
                predicates.ErrNodeSelectorNotMatch,
                predicates.ErrPodNotMatchHostName,
                predicates.ErrNodeLabelPresenceViolated,
                // this one is probably intentional since it's a workaround for not having
                // pod hard anti affinity.
                predicates.ErrPodNotFitsHostPorts:
                return false, false, false, nil
            case predicates.ErrTaintsTolerationsNotMatch:
                // DaemonSet is expected to respect taints and tolerations
                fitsNoExecute, _, err := predicates.PodToleratesNodeNoExecuteTaints(newPod, nil, nodeInfo)
                if err != nil {
                    return false, false, false, err
                }
                if !fitsNoExecute {
                    return false, false, false, nil
                }
                wantToRun, shouldSchedule = false, false
            // unintentional
            case
                predicates.ErrDiskConflict,
                predicates.ErrVolumeZoneConflict,
                predicates.ErrMaxVolumeCountExceeded,
                predicates.ErrNodeUnderMemoryPressure,
                predicates.ErrNodeUnderDiskPressure:
                // wantToRun and shouldContinueRunning are likely true here. They are
                // absolutely true at the time of writing the comment. See first comment
                // of this method.
                shouldSchedule = false
                emitEvent = true
            // unexpected
            case
                predicates.ErrPodAffinityNotMatch,
                predicates.ErrServiceAffinityViolated:
                klog.Warningf("unexpected predicate failure reason: %s", reason.GetReason())
                return false, false, false, fmt.Errorf("unexpected reason: DaemonSet Predicates should not return reason %s", reason.GetReason())
            default:
                klog.V(4).Infof("unknown predicate failure reason: %s", reason.GetReason())
                wantToRun, shouldSchedule, shouldContinueRunning = false, false, false
                emitEvent = true
            }
            if emitEvent {
                dsc.eventRecorder.Eventf(ds, v1.EventTypeWarning, FailedPlacementReason, "failed to place pod on %q: %s", node.ObjectMeta.Name, reason.GetReason())
            }
        }
    }

    // only emit this event if insufficient resource is the only thing
    // preventing the daemon pod from scheduling
    if shouldSchedule && insufficientResourceErr != nil {
        dsc.eventRecorder.Eventf(ds, v1.EventTypeWarning, FailedPlacementReason, "failed to place pod on %q: %s", node.ObjectMeta.Name, insufficientResourceErr.Error())
        shouldSchedule = false
    }
    return
}
- If shouldSchedule && !exists, the Node is added to nodesNeedingDaemonPods.
- If shouldContinueRunning && pod.DeletionTimestamp == nil && pod.Status.Phase == v1.PodFailed, the controller checks whether the Pod is still inside the flow-control (backoff) window, which is capped at 15min (hard-coded); once the backoff has elapsed the Pod is added to podsToDelete, otherwise the DaemonSet is re-enqueued after the remaining delay.
- If shouldContinueRunning && pod.DeletionTimestamp == nil && pod.Status.Phase != v1.PodFailed, the Pod is added to daemonPodsRunning, which records the non-Failed Pods of this DaemonSet running on the Node. If daemonPodsRunning contains more than one Pod, they are sorted by creation time and every Pod except the oldest one is added to podsToDelete.
Inside nodeShouldRunDaemonPod, simulate runs the scheduling predicates against the Pod and the Node, and the resulting algorithm.PredicateFailureReason values determine wantToRun, shouldSchedule and shouldContinueRunning. Let's look at the predicate logic used by simulate.
// Predicates checks if a DaemonSet's pod can be scheduled on a node using GeneralPredicates
// and PodToleratesNodeTaints predicate
func Predicates(pod *v1.Pod, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
    var predicateFails []algorithm.PredicateFailureReason

    // If ScheduleDaemonSetPods is enabled, only check nodeSelector, nodeAffinity and toleration/taint match.
    if utilfeature.DefaultFeatureGate.Enabled(features.ScheduleDaemonSetPods) {
        fit, reasons, err := checkNodeFitness(pod, nil, nodeInfo)
        if err != nil {
            return false, predicateFails, err
        }
        if !fit {
            predicateFails = append(predicateFails, reasons...)
        }

        return len(predicateFails) == 0, predicateFails, nil
    }

    critical := kubelettypes.IsCriticalPod(pod)

    fit, reasons, err := predicates.PodToleratesNodeTaints(pod, nil, nodeInfo)
    if err != nil {
        return false, predicateFails, err
    }
    if !fit {
        predicateFails = append(predicateFails, reasons...)
    }
    if critical {
        // If the pod is marked as critical and support for critical pod annotations is enabled,
        // check predicates for critical pods only.
        fit, reasons, err = predicates.EssentialPredicates(pod, nil, nodeInfo)
    } else {
        fit, reasons, err = predicates.GeneralPredicates(pod, nil, nodeInfo)
    }
    if err != nil {
        return false, predicateFails, err
    }
    if !fit {
        predicateFails = append(predicateFails, reasons...)
    }

    return len(predicateFails) == 0, predicateFails, nil
}
If the ScheduleDaemonSetPods FeatureGate is enabled, the Predicate logic is as follows. No real scheduling happens here; only three predicate checks are made (via checkNodeFitness, sketched below) and the actual scheduling is left to the default scheduler. How the default scheduler is made to bind each DaemonSet Pod to its designated Node is a question we will come back to later.
- PodFitsHost: when Pod.spec.nodeName is non-empty, check that it matches the Node name;
- PodMatchNodeSelector: check that the Pod's NodeSelector and NodeAffinity match the Node;
- PodToleratesNodeTaints: check that the Pod's tolerations cover the Node's NoExecute and NoSchedule taints.
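checkNodeFitness, called in the ScheduleDaemonSetPods branch above, is essentially the composition of those three predicates; a condensed sketch (predicate signatures follow the scheduler's predicates package of this Kubernetes generation):

// Condensed sketch of checkNodeFitness: only nodeName, nodeSelector/nodeAffinity
// and taints/tolerations are checked; resource fit is left to the default scheduler.
func checkNodeFitness(pod *v1.Pod, meta algorithm.PredicateMetadata, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
    var predicateFails []algorithm.PredicateFailureReason

    fit, reasons, err := predicates.PodFitsHost(pod, meta, nodeInfo)
    if err != nil {
        return false, predicateFails, err
    }
    if !fit {
        predicateFails = append(predicateFails, reasons...)
    }

    fit, reasons, err = predicates.PodMatchNodeSelector(pod, meta, nodeInfo)
    if err != nil {
        return false, predicateFails, err
    }
    if !fit {
        predicateFails = append(predicateFails, reasons...)
    }

    fit, reasons, err = predicates.PodToleratesNodeTaints(pod, nil, nodeInfo)
    if err != nil {
        return false, predicateFails, err
    }
    if !fit {
        predicateFails = append(predicateFails, reasons...)
    }
    return len(predicateFails) == 0, predicateFails, nil
}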
If the ScheduleDaemonSetPods FeatureGate is not enabled, the Predicate logic is as follows. Again, no real scheduling happens here; a few predicate checks are made and the actual scheduling is done by the DaemonSet Controller itself.
- PodToleratesNodeTaints: check that the Pod's tolerations cover the Node's NoExecute and NoSchedule taints.
- If it is a Critical DaemonSet Pod, EssentialPredicates is run, which includes:
  - PodFitsHost: when Pod.spec.nodeName is non-empty, check that it matches the Node name;
  - PodFitsHostPorts: check that the protocol/host ports requested by the DaemonSet Pod are not already in use;
  - PodMatchNodeSelector: check that the Pod's NodeSelector and NodeAffinity match the Node.
- If it is not a Critical DaemonSet Pod, GeneralPredicates is run instead, which includes:
  - PodFitsResources: check that the Node's remaining allocatable resources can satisfy the Pod's requests;
  - PodFitsHost: when Pod.spec.nodeName is non-empty, check that it matches the Node name;
  - PodFitsHostPorts: check that the protocol/host ports requested by the DaemonSet Pod are not already in use;
  - PodMatchNodeSelector: check that the Pod's NodeSelector and NodeAffinity match the Node.
podsShouldBeOnNode has now produced nodesNeedingDaemonPods []string, podsToDelete []string and failedPodsObserved int; the next step is to create and delete the corresponding Pods.
// syncNodes deletes given pods and creates new daemon set pods on the given nodes
// returns slice with erros if any
func (dsc *DaemonSetsController) syncNodes(ds *apps.DaemonSet, podsToDelete, nodesNeedingDaemonPods []string, hash string) error {
    // We need to set expectations before creating/deleting pods to avoid race conditions.
    dsKey, err := controller.KeyFunc(ds)
    if err != nil {
        return fmt.Errorf("couldn't get key for object %#v: %v", ds, err)
    }

    createDiff := len(nodesNeedingDaemonPods)
    deleteDiff := len(podsToDelete)

    if createDiff > dsc.burstReplicas {
        createDiff = dsc.burstReplicas
    }
    if deleteDiff > dsc.burstReplicas {
        deleteDiff = dsc.burstReplicas
    }

    dsc.expectations.SetExpectations(dsKey, createDiff, deleteDiff)

    // error channel to communicate back failures. make the buffer big enough to avoid any blocking
    errCh := make(chan error, createDiff+deleteDiff)

    klog.V(4).Infof("Nodes needing daemon pods for daemon set %s: %+v, creating %d", ds.Name, nodesNeedingDaemonPods, createDiff)
    createWait := sync.WaitGroup{}
    // If the returned error is not nil we have a parse error.
    // The controller handles this via the hash.
    generation, err := util.GetTemplateGeneration(ds)
    if err != nil {
        generation = nil
    }
    template := util.CreatePodTemplate(ds.Namespace, ds.Spec.Template, generation, hash)
    // Batch the pod creates. Batch sizes start at SlowStartInitialBatchSize
    // and double with each successful iteration in a kind of "slow start".
    // This handles attempts to start large numbers of pods that would
    // likely all fail with the same error. For example a project with a
    // low quota that attempts to create a large number of pods will be
    // prevented from spamming the API service with the pod create requests
    // after one of its pods fails. Conveniently, this also prevents the
    // event spam that those failures would generate.
    batchSize := integer.IntMin(createDiff, controller.SlowStartInitialBatchSize)
    for pos := 0; createDiff > pos; batchSize, pos = integer.IntMin(2*batchSize, createDiff-(pos+batchSize)), pos+batchSize {
        errorCount := len(errCh)
        createWait.Add(batchSize)
        for i := pos; i < pos+batchSize; i++ {
            go func(ix int) {
                defer createWait.Done()
                var err error

                podTemplate := &template

                if utilfeature.DefaultFeatureGate.Enabled(features.ScheduleDaemonSetPods) {
                    podTemplate = template.DeepCopy()
                    // The pod's NodeAffinity will be updated to make sure the Pod is bound
                    // to the target node by default scheduler. It is safe to do so because there
                    // should be no conflicting node affinity with the target node.
                    podTemplate.Spec.Affinity = util.ReplaceDaemonSetPodNodeNameNodeAffinity(
                        podTemplate.Spec.Affinity, nodesNeedingDaemonPods[ix])

                    err = dsc.podControl.CreatePodsWithControllerRef(ds.Namespace, podTemplate,
                        ds, metav1.NewControllerRef(ds, controllerKind))
                } else {
                    err = dsc.podControl.CreatePodsOnNode(nodesNeedingDaemonPods[ix], ds.Namespace, podTemplate,
                        ds, metav1.NewControllerRef(ds, controllerKind))
                }

                if err != nil && errors.IsTimeout(err) {
                    // Pod is created but its initialization has timed out.
                    // If the initialization is successful eventually, the
                    // controller will observe the creation via the informer.
                    // If the initialization fails, or if the pod keeps
                    // uninitialized for a long time, the informer will not
                    // receive any update, and the controller will create a new
                    // pod when the expectation expires.
                    return
                }
                if err != nil {
                    klog.V(2).Infof("Failed creation, decrementing expectations for set %q/%q", ds.Namespace, ds.Name)
                    dsc.expectations.CreationObserved(dsKey)
                    errCh <- err
                    utilruntime.HandleError(err)
                }
            }(i)
        }
        createWait.Wait()
        // any skipped pods that we never attempted to start shouldn't be expected.
        skippedPods := createDiff - batchSize
        if errorCount < len(errCh) && skippedPods > 0 {
            klog.V(2).Infof("Slow-start failure. Skipping creation of %d pods, decrementing expectations for set %q/%q", skippedPods, ds.Namespace, ds.Name)
            for i := 0; i < skippedPods; i++ {
                dsc.expectations.CreationObserved(dsKey)
            }
            // The skipped pods will be retried later. The next controller resync will
            // retry the slow start process.
            break
        }
    }

    klog.V(4).Infof("Pods to delete for daemon set %s: %+v, deleting %d", ds.Name, podsToDelete, deleteDiff)
    deleteWait := sync.WaitGroup{}
    deleteWait.Add(deleteDiff)
    for i := 0; i < deleteDiff; i++ {
        go func(ix int) {
            defer deleteWait.Done()
            if err := dsc.podControl.DeletePod(ds.Namespace, podsToDelete[ix], ds); err != nil {
                klog.V(2).Infof("Failed deletion, decrementing expectations for set %q/%q", ds.Namespace, ds.Name)
                dsc.expectations.DeletionObserved(dsKey)
                errCh <- err
                utilruntime.HandleError(err)
            }
        }(i)
    }
    deleteWait.Wait()

    // collect errors if any for proper reporting/retry logic in the controller
    errors := []error{}
    close(errCh)
    for err := range errCh {
        errors = append(errors, err)
    }
    return utilerrors.NewAggregate(errors)
}
Each sync creates at most 250 Pods and deletes at most 250 Pods (burstReplicas).
The Pod Template is built from the DaemonSet object, and the following Tolerations are added or updated on it:

| Key | Operator | Effect |
| --- | --- | --- |
| node.kubernetes.io/not-ready | Exists | NoExecute |
| node.kubernetes.io/unreachable | Exists | NoExecute |
| node.kubernetes.io/disk-pressure | Exists | NoSchedule |
| node.kubernetes.io/memory-pressure | Exists | NoSchedule |
| node.kubernetes.io/unschedulable | Exists | NoSchedule |
| node.kubernetes.io/network-unavailable | Exists | NoSchedule |

If it is a Critical Pod, the following Tolerations are added as well:

| Key | Operator | Effect |
| --- | --- | --- |
| node.kubernetes.io/out-of-disk | Exists | NoExecute |
| node.kubernetes.io/out-of-disk | Exists | NoSchedule |
The Pod is also given the Label controller-revision-hash=$DaemonSetControllerHash.
DaemonSet Pods are created in batches (batch sizes of 1, 2, 4, 8, ...) to avoid a one-shot creation of all DaemonSet Pods where every Pod fails with the same error. For every failed creation, the adds counter in expectations is decremented by one.
If the ScheduleDaemonSetPods FeatureGate is enabled, a NodeAffinity on metadata.name=$NodeName is added to (or updated in) the Pod Template; this is how the default scheduler is made to schedule DaemonSet Pods onto their designated Nodes (see the sketch below).
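A sketch of the node affinity that gets injected; this mirrors the described behavior of util.ReplaceDaemonSetPodNodeNameNodeAffinity, with a hypothetical function name used here for illustration:

// Sketch: pin the daemon pod to one node via a required node affinity on the
// metadata.name field, so the default scheduler can only bind it to that node.
func replaceNodeNameNodeAffinity(affinity *v1.Affinity, nodeName string) *v1.Affinity {
    nodeSelector := &v1.NodeSelector{
        NodeSelectorTerms: []v1.NodeSelectorTerm{{
            MatchFields: []v1.NodeSelectorRequirement{{
                Key:      "metadata.name", // the node-name field selector key
                Operator: v1.NodeSelectorOpIn,
                Values:   []string{nodeName},
            }},
        }},
    }
    if affinity == nil {
        affinity = &v1.Affinity{}
    }
    if affinity.NodeAffinity == nil {
        affinity.NodeAffinity = &v1.NodeAffinity{}
    }
    // Replace (not merge) any required terms: the controller owns this constraint.
    affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = nodeSelector
    return affinity
}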
All Pods in podsToDelete are deleted in one go.
The DaemonSet rolling update differs slightly from the Deployment rolling update: DaemonSet RollingUpdate exposes only the MaxUnavailable setting and has nothing like Deployment's MaxSurge.
// rollingUpdate deletes old daemon set pods making sure that no more than
// ds.Spec.UpdateStrategy.RollingUpdate.MaxUnavailable pods are unavailable
func (dsc *DaemonSetsController) rollingUpdate(ds *apps.DaemonSet, hash string) error {
    nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ds)
    if err != nil {
        return fmt.Errorf("couldn't get node to daemon pod mapping for daemon set %q: %v", ds.Name, err)
    }

    _, oldPods := dsc.getAllDaemonSetPods(ds, nodeToDaemonPods, hash)
    maxUnavailable, numUnavailable, err := dsc.getUnavailableNumbers(ds, nodeToDaemonPods)
    if err != nil {
        return fmt.Errorf("Couldn't get unavailable numbers: %v", err)
    }
    oldAvailablePods, oldUnavailablePods := util.SplitByAvailablePods(ds.Spec.MinReadySeconds, oldPods)

    // for oldPods delete all not running pods
    var oldPodsToDelete []string
    klog.V(4).Infof("Marking all unavailable old pods for deletion")
    for _, pod := range oldUnavailablePods {
        // Skip terminating pods. We won't delete them again
        if pod.DeletionTimestamp != nil {
            continue
        }
        klog.V(4).Infof("Marking pod %s/%s for deletion", ds.Name, pod.Name)
        oldPodsToDelete = append(oldPodsToDelete, pod.Name)
    }

    klog.V(4).Infof("Marking old pods for deletion")
    for _, pod := range oldAvailablePods {
        if numUnavailable >= maxUnavailable {
            klog.V(4).Infof("Number of unavailable DaemonSet pods: %d, is equal to or exceeds allowed maximum: %d", numUnavailable, maxUnavailable)
            break
        }
        klog.V(4).Infof("Marking pod %s/%s for deletion", ds.Name, pod.Name)
        oldPodsToDelete = append(oldPodsToDelete, pod.Name)
        numUnavailable++
    }
    return dsc.syncNodes(ds, oldPodsToDelete, []string{}, hash)
}
- Using the latest hash, select all old Pods (those not labeled with the current hash).
- Count the Pods that are !available plus those that should be scheduled but are not yet running; the sum is numUnavailable.
- Split the old Pods into oldAvailablePods and oldUnavailablePods, and add every oldUnavailablePod whose DeletionTimestamp is empty to the deletion list (oldPodsToDelete).
- Walk through oldAvailablePods and add them to oldPodsToDelete one by one until numUnavailable reaches maxUnavailable; at most (maxUnavailable - numUnavailable) Pods are added from oldAvailablePods.
- oldPodsToDelete therefore contains all oldUnavailablePods whose DeletionTimestamp is empty, plus at most (maxUnavailable - numUnavailable) oldAvailablePods.
- Finally syncNodes is called to delete the DaemonSet Pods in oldPodsToDelete.
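maxUnavailable itself comes from ds.Spec.UpdateStrategy.RollingUpdate.MaxUnavailable (an absolute number or a percentage), resolved against the number of nodes that should run the daemon pod; a minimal sketch of that resolution, with a hypothetical helper name (the real logic lives in getUnavailableNumbers):

// Sketch: resolve the MaxUnavailable intstr against desiredNumberScheduled.
// Assumes the RollingUpdate strategy struct has been defaulted (non-nil); API
// validation rejects a MaxUnavailable of 0, so the result is >= 1 in practice.
func resolveMaxUnavailable(ds *apps.DaemonSet, desiredNumberScheduled int) (int, error) {
    return intstr.GetValueFromIntOrPercent(
        ds.Spec.UpdateStrategy.RollingUpdate.MaxUnavailable,
        desiredNumberScheduled,
        true, // round percentages up
    )
}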
The Node Add event is simple: iterate over all DaemonSet objects and call nodeShouldRunDaemonPod to decide whether each DaemonSet should start a Pod on that Node. If it should, the DaemonSet is added to the queue and handled by syncDaemonSet.
For Node Update events, the controller inspects which fields changed and then decides whether to enqueue DaemonSets for syncDaemonSet.
func (dsc *DaemonSetsController) updateNode(old, cur interface{}) {
    oldNode := old.(*v1.Node)
    curNode := cur.(*v1.Node)

    if shouldIgnoreNodeUpdate(*oldNode, *curNode) {
        return
    }

    dsList, err := dsc.dsLister.List(labels.Everything())
    if err != nil {
        klog.V(4).Infof("Error listing daemon sets: %v", err)
        return
    }
    // TODO: it'd be nice to pass a hint with these enqueues, so that each ds would only examine the added node (unless it has other work to do, too).
    for _, ds := range dsList {
        _, oldShouldSchedule, oldShouldContinueRunning, err := dsc.nodeShouldRunDaemonPod(oldNode, ds)
        if err != nil {
            continue
        }
        _, currentShouldSchedule, currentShouldContinueRunning, err := dsc.nodeShouldRunDaemonPod(curNode, ds)
        if err != nil {
            continue
        }
        if (oldShouldSchedule != currentShouldSchedule) || (oldShouldContinueRunning != currentShouldContinueRunning) {
            dsc.enqueueDaemonSet(ds)
        }
    }
}
- If the Node Conditions have changed, the update event cannot be ignored.
- Even when the Conditions are unchanged, if the old and new Node objects differ in anything other than ResourceVersion and Conditions, the event cannot be ignored either.
- For every update that is not ignored, nodeShouldRunDaemonPod is evaluated for each DaemonSet against both oldNode and curNode; whenever shouldSchedule or shouldContinueRunning differs between the two, the DaemonSet object is enqueued and handled by syncDaemonSet.
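A sketch of the "should this node update be ignored" decision described above (compare shouldIgnoreNodeUpdate in daemon_controller.go; nodeInSameCondition is the package's helper that compares which conditions are True):

// Sketch: an update is ignored only when the node conditions are unchanged AND
// the rest of the object (apart from ResourceVersion) is semantically identical.
func shouldIgnoreNodeUpdate(oldNode, curNode v1.Node) bool {
    if !nodeInSameCondition(oldNode.Status.Conditions, curNode.Status.Conditions) {
        return false // conditions changed -> must not ignore
    }
    // Neutralize fields that are expected to differ on every update.
    oldNode.ResourceVersion = curNode.ResourceVersion
    oldNode.Status.Conditions = curNode.Status.Conditions
    return apiequality.Semantic.DeepEqual(oldNode, curNode)
}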