Pods created by a Deployment are stateless. If such a Pod has a Volume mounted and then dies, the controller simply runs a new Pod to keep the service available; but because the Pod is stateless, its relationship with the original Volume is severed the moment it dies, and the replacement Pod cannot find its predecessor's data. Users have no visibility into which underlying Pod died, yet once it is gone, the previously mounted disk can no longer be used.
Pod consistency: includes ordering (start/stop order) and network identity. This consistency is tied to the Pod itself, regardless of which node it is scheduled to.
Stable ordering: for a StatefulSet with N replicas, each Pod is assigned a unique ordinal in the range [0, N).
Stable network identity: the Pod's hostname follows the pattern (StatefulSet name)-(ordinal).
Stable storage: a PV is created for each Pod via volumeClaimTemplates. Deleting the StatefulSet or scaling it down does not delete the associated volumes.
template: Pods created from the template are identical in state (apart from their name, IP, and domain name).
In other words, any Pod can be deleted and replaced by a newly generated one.
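As a concrete sketch of what these stable identities look like (assuming a StatefulSet named statefulset-test with 3 replicas governed by a headless Service named headless-svc in the default namespace, the names used later in this article):
[root@master yaml]# kubectl get pod -l app=headless-pod
# statefulset-test-0, statefulset-test-1, statefulset-test-2   -- ordinal, stable Pod names
[root@master yaml]# kubectl exec statefulset-test-0 -- hostname
# statefulset-test-0                                           -- the hostname matches the Pod name
# Through the headless Service, each Pod also gets a stable DNS record, e.g.:
# statefulset-test-0.headless-svc.default.svc.cluster.local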
MySQL: a master-slave relationship is a typical stateful workload.
If stateless services are likened to cattle or sheep, livestock that can be "sent away" when the time comes, then stateful services are like pets: unlike livestock, pets are not "sent away"; people tend to care for them for life.
StorageClass: automatically creates PVs.
Still needed: automatic creation of PVCs.
Like ReplicaSet and Deployment, StatefulSet is implemented as a controller. Its management is carried out by three cooperating components: StatefulSetController, StatefulSetControl, and StatefulPodControl. StatefulSetController receives add/update/delete events from both the PodInformer and the StatefulSetInformer and pushes them onto a work queue.
In its Run method, StatefulSetController starts multiple goroutines; these workers pull pending StatefulSet resources off the queue and synchronize them. The hands-on example below shows what this synchronization produces in practice.
[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
  selector:
    app: headless-pod
  clusterIP: None          # no unified cluster IP (headless Service)
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myhttpd
        image: httpd
        ports:
        - containerPort: 80
Deployment: Deploy + ReplicaSet + a random string (as the Pod name). The Pods have no ordering and can be replaced arbitrarily.
1. headless-svc: a headless Service. Since it has no cluster IP, it does not provide load balancing. A StatefulSet requires Pod names to be ordered and each Pod to be irreplaceable; even after a Pod is rebuilt, its name stays the same. The headless Service provides a name for each backend Pod.
2. statefulSet: defines the application itself.
3. volumeClaimTemplates: automatically creates a PVC, giving each backend Pod its own dedicated storage.
[root@master yaml]# kubectl apply -f statefulset.yaml
[root@master yaml]# kubectl get svc
[root@master yaml]# kubectl get pod
// you can see the Pods are named and created in order
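Each Pod can also be resolved individually through the headless Service, which is what gives the replicas their stable network identity. A quick check (busybox:1.28 is just a convenient image that ships nslookup; cluster.local is the default cluster domain):
[root@master yaml]# kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup statefulset-test-0.headless-svc
# Should resolve to the Pod IP of statefulset-test-0
# (statefulset-test-0.headless-svc.default.svc.cluster.local), not a load-balanced Service IP.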
Install the packages required for NFS
[root@node02 ~]# yum -y install nfs-utils rpcbind
Create the shared directory
[root@master ~]# mkdir /nfsdata
Configure permissions for the shared directory
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
Start nfs and rpcbind
[root@master ~]# systemctl start nfs-server.service
[root@master ~]# systemctl start rpcbind
Test it
[root@master ~]# showmount -e
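If the export is active, showmount should list the shared directory, roughly like this (the hostname will differ):
Export list for master:
/nfsdata *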
[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default        # required field
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
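To confirm the RBAC objects were created, a quick check such as the following can be used (the names match the manifest above):
[root@master yaml]# kubectl get serviceaccount nfs-provisioner
[root@master yaml]# kubectl get clusterrole nfs-provisioner-runner
[root@master yaml]# kubectl get clusterrolebinding run-nfs-provisioner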
[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: bdqn
        - name: NFS_SERVER
          value: 192.168.1.21
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.21
          path: /nfsdata
[root@master yaml]# kubectl apply -f nfs-deployment.yaml
[root@master yaml]# kubectl get pod
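If the Pod is not Running, or if PVCs later stay Pending, the provisioner's logs are the first place to look (the Deployment name matches the manifest above):
[root@master yaml]# kubectl logs deployment/nfs-client-provisioner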
[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
provisioner: bdqn              # must match the PROVISIONER_NAME of the Deployment above
reclaimPolicy: Retain
[root@master yaml]# kubectl apply -f test-storageclass.yaml
[root@master yaml]# kubectl get sc
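Optionally, dynamic provisioning can be verified before wiring the StorageClass into a StatefulSet, using a throwaway PVC. A minimal sketch (the name test-claim is made up for this check and can be deleted afterwards):
[root@master yaml]# vim test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  annotations:                   # same StorageClass annotation style used later in this article
    volume.beta.kubernetes.io/storage-class: stateful-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
[root@master yaml]# kubectl apply -f test-pvc.yaml
[root@master yaml]# kubectl get pvc test-claim     # STATUS should become Bound once the provisioner creates a matching PV
[root@master yaml]# kubectl delete -f test-pvc.yaml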
[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /mnt
          name: test
  volumeClaimTemplates:              # automatically creates a PVC, giving each backend Pod its own dedicated storage
  - metadata:
      name: test
      annotations:                   # specify the StorageClass to use
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
In this example:
A Service object named headless-svc (metadata: name), itself labeled app: headless-svc, targets the backend Pods labeled app: headless-pod through its selector. It exposes port 80 under the name myweb, and, being headless, it controls the network domain and routes traffic to the Pods deployed by the StatefulSet.
A StatefulSet named statefulset-test creates three replicas (replicas: 3). Its Pod template (spec: template) labels the Pods app: headless-pod.
The Pod spec (template: spec) runs a single container, myhttpd, based on the httpd image pulled from the container image registry, and exposes container port 80 under the name httpd.
template: spec: volumeMounts specifies a mountPath named test; mountPath is the path inside the container where the volume is mounted. The volume itself comes from the volumeClaimTemplates entry, also named test.
[root@master yaml]# kubectl apply -f statefulset.yaml
[root@master yaml]# kubectl get pod
If the first Pod runs into a problem, the Pods after it will not be created; Pods are started strictly in order.
[root@master yaml]# kubectl get statefulsets
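Besides the ordered Pods, each replica gets its own PVC generated from the volumeClaimTemplates entry. PVC names follow the <template-name>-<pod-name> pattern, so the expected list is roughly test-statefulset-test-0 / -1 / -2, all Bound:
[root@master yaml]# kubectl get pvc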
[root@master yaml]# kubectl exec -it statefulset-test-0 /bin/sh
# cd /mnt
# touch testfile
# exit
[root@master yaml]# ls /nfsdata/default-test-statefulset-test-0-pvc-bf1ae1d0-f496-4d69-b33b-39e8aa0a6e8d/
testfile
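The key difference from a Deployment shows up when a Pod is deleted: the replacement comes back with the same name and re-attaches the same PVC, so the file written above survives. A quick sketch of that check:
[root@master yaml]# kubectl delete pod statefulset-test-0
[root@master yaml]# kubectl get pod                      # statefulset-test-0 is recreated with the same name
[root@master yaml]# kubectl exec statefulset-test-0 -- ls /mnt
testfile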
Exercise: create a namespace named after yourself and run all of the following resources in it. Use a StatefulSet to run an httpd web service with 3 Pods, each serving different homepage content and each with its own dedicated persistent storage. Then try deleting one of the Pods, inspect the newly generated Pod, and summarize how it differs from Pods controlled by a Deployment.
Note: the NFS service must be running.
[root@master yaml]# vim namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: xgp-lll            # name of the namespace
[root@master yaml]# kubectl apply -f namespace.yaml
[root@master yaml]# kubectl get namespaces
[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: xgp-lll
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: xgp-lll
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: xgp-lll
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: xgp-lll
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: xgp
        - name: NFS_SERVER
          value: 192.168.1.21
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.21
          path: /nfsdata
[root@master yaml]# kubectl apply -f nfs-deployment.yaml
[root@master yaml]# kubectl get pod -n xgp-lll
[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
  namespace: xgp-lll
provisioner: xgp               # must match the PROVISIONER_NAME of the Deployment above
reclaimPolicy: Retain
[root@master yaml]# kubectl apply -f test-storageclass.yaml
[root@master yaml]# kubectl get sc -n xgp-lll
[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: xgp-lll
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  namespace: xgp-lll
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /usr/local/apache2/htdocs
          name: test
  volumeClaimTemplates:              # automatically creates a PVC, giving each backend Pod its own dedicated storage
  - metadata:
      name: test
      annotations:                   # specify the StorageClass to use
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
[root@master yaml]# kubectl apply -f statefulset.yaml
[root@master yaml]# kubectl get pod -n xgp-lll
First Pod:
[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-0 /bin/bash
root@statefulset-test-0:/usr/local/apache2# echo 123 > /usr/local/apache2/htdocs/index.html
Second Pod:
[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-1 /bin/bash
root@statefulset-test-1:/usr/local/apache2# echo 456 > /usr/local/apache2/htdocs/index.html
Third Pod:
[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-2 /bin/bash
root@statefulset-test-2:/usr/local/apache2# echo 789 > /usr/local/apache2/htdocs/index.html
First Pod, checked on the NFS server:
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-0-pvc-ccaa02df-4721-4453-a6ec-4f2c928221d7/index.html
123
Second Pod:
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-1-pvc-88e60a58-97ea-4986-91d5-a3a6e907deac/index.html
456
Third Pod:
[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-2-pvc-4eb2bbe2-63d2-431a-ba3e-b7b8d7e068d3/index.html
789
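To finish the exercise, delete one of the Pods and compare the result with Deployment behavior. A sketch of that check; the point is that the name, the ordinal, and the data all come back unchanged, whereas a Deployment would create a Pod with a new random name and no claim on the old data:
[root@master yaml]# kubectl delete pod -n xgp-lll statefulset-test-1
[root@master yaml]# kubectl get pod -n xgp-lll           # statefulset-test-1 reappears under the same name
[root@master yaml]# kubectl exec -n xgp-lll statefulset-test-1 -- cat /usr/local/apache2/htdocs/index.html
456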