Background: after running helm upgrade to move the stable/sonatype-nexus chart from version 1.6 to 1.13, the PVC was deleted and a new one was created in its place. Fortunately the original PV's reclaim policy was Retain, so I looked into how to recover the data from a Retained PV.
Goal: after the PVC is deleted, the PV is left in Released status because of the Retain policy. Rebind the data held by that PV to a new PVC, mount it into a Pod, and thereby recover the data.
Environment:
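Judging from the manifests and transcripts below, the cluster provides a ceph-rbd StorageClass backed by the ceph.com/rbd dynamic provisioner (Ceph monitors at 192.168.105.92-94). A quick sanity check of my own before starting, not part of the original steps:

kubectl get storageclass ceph-rbd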
Prepare the YAML files:
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ceph-rbd-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: ceph-rbd-volume
        persistentVolumeClaim:
          claimName: pvc-test
Create the PVC and Deployment, write some data, then delete the PVC:
[root@lab1 test]# ll
total 8
-rw-r--r-- 1 root root 533 Oct 24 17:54 nginx.yaml
-rw-r--r-- 1 root root 187 Oct 24 17:55 pvc.yaml
[root@lab1 test]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc-test created
[root@lab1 test]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-test Bound pvc-069c4486-d773-11e8-bd12-000c2931d938 1Gi RWO ceph-rbd 7s
[root@lab1 test]# kubectl apply -f nginx.yaml
deployment.extensions/nginx-rbd created
[root@lab1 test]# kubectl get pod |grep nginx-rbd
nginx-rbd-7c6449886-thv25 1/1 Running 0 33s
[root@lab1 test]# kubectl exec -it nginx-rbd-7c6449886-thv25 -- /bin/bash -c 'echo ygqygq2 > /usr/share/nginx/html/ygqygq2.html'
[root@lab1 test]# kubectl exec -it nginx-rbd-7c6449886-thv25 -- cat /usr/share/nginx/html/ygqygq2.html
ygqygq2
[root@lab1 test]# kubectl delete -f nginx.yaml
deployment.extensions "nginx-rbd" deleted
[root@lab1 test]# kubectl get pvc pvc-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-test Bound pvc-069c4486-d773-11e8-bd12-000c2931d938 1Gi RWO ceph-rbd 4m10s
[root@lab1 test]# kubectl delete pvc pvc-test # delete the PVC
persistentvolumeclaim "pvc-test" deleted
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-069c4486-d773-11e8-bd12-000c2931d938 1Gi RWO Retain Released default/pvc-test ceph-rbd 4m33s
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938 -o yaml > /tmp/pvc-069c4486-d773-11e8-bd12-000c2931d938.yaml # save a copy for later
As you can see, after the PVC is deleted the PV switches to Released status.
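If you manage many volumes, it helps to list the PVs sitting in this state before deciding which one to recover; a simple check of my own, not from the original steps:

kubectl get pv | grep Released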
Create a PVC with the same name again and check whether it binds the original PV:
[root@lab1 test]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc-test created
[root@lab1 test]# kubectl get pvc # check the newly created PVC
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-test Bound pvc-f2df48ea-d773-11e8-b6c8-000c29ea3e30 1Gi RWO ceph-rbd 19s
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938 # check the original PV
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-069c4486-d773-11e8-bd12-000c2931d938 1Gi RWO Retain Released default/pvc-test ceph-rbd 7m18s
[root@lab1 test]#
As shown above, the PVC is bound to a brand-new PV, because the original PV is not in the Available state.
So how do we bring that PV back to Available? Let's look at the PV definition we saved earlier:
[root@lab1 test]# cat /tmp/pvc-069c4486-d773-11e8-bd12-000c2931d938.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/rbd
    rbdProvisionerIdentity: ceph.com/rbd
  creationTimestamp: 2018-10-24T09:56:06Z
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-069c4486-d773-11e8-bd12-000c2931d938
  resourceVersion: "11752758"
  selfLink: /api/v1/persistentvolumes/pvc-069c4486-d773-11e8-bd12-000c2931d938
  uid: 06b57ef7-d773-11e8-bd12-000c2931d938
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-test
    namespace: default
    resourceVersion: "11751559"
    uid: 069c4486-d773-11e8-bd12-000c2931d938
  persistentVolumeReclaimPolicy: Retain
  rbd:
    fsType: ext4
    image: kubernetes-dynamic-pvc-06a25bd3-d773-11e8-8c3e-0a580af400d5
    keyring: /etc/ceph/keyring
    monitors:
    - 192.168.105.92:6789
    - 192.168.105.93:6789
    - 192.168.105.94:6789
    pool: kube
    secretRef:
      name: ceph-secret
      namespace: kube-system
    user: kube
  storageClassName: ceph-rbd
status:
  phase: Released
As you can see, the spec.claimRef section still carries the binding information of the deleted PVC.
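You can also confirm this without dumping the whole object, for example with a jsonpath query (my own check, not part of the original walkthrough):

kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938 -o jsonpath='{.spec.claimRef}'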
Let's boldly delete the spec.claimRef section, then check the PV again:
kubectl edit pv pvc-069c4486-d773-11e8-bd12-000c2931d938
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-069c4486-d773-11e8-bd12-000c2931d938 1Gi RWO Retain Available ceph-rbd 10m
As shown above, the original PV pvc-069c4486-d773-11e8-bd12-000c2931d938 has changed to Available.
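If you prefer not to open an interactive editor, the same claimRef removal can be done non-interactively. A sketch using kubectl patch, equivalent in effect but not used in the original steps:

kubectl patch pv pvc-069c4486-d773-11e8-bd12-000c2931d938 --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'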
Create a PVC and Deployment again, and check the data:
new_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-new
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
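In this experiment the new claim lands on the recovered PV simply because it is the only Available PV with a matching size and StorageClass. If you want to pin the claim to that exact PV instead of relying on the match, spec.volumeName can be set; a variant of my own, not used in the steps below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-new
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-rbd
  volumeName: pvc-069c4486-d773-11e8-bd12-000c2931d938  # bind to this specific PV
  resources:
    requests:
      storage: 1Gi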
new_nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ceph-rbd-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: ceph-rbd-volume
        persistentVolumeClaim:
          claimName: pvc-test-new
Steps:
[root@lab1 test]# kubectl apply -f new_pvc.yaml
persistentvolumeclaim/pvc-test-new created
[root@lab1 test]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-test Bound pvc-f2df48ea-d773-11e8-b6c8-000c29ea3e30 1Gi RWO ceph-rbd 31m
pvc-test-new Bound pvc-069c4486-d773-11e8-bd12-000c2931d938 1Gi RWO ceph-rbd 27m
[root@lab1 test]# kubectl apply -f new_nginx.yaml
[root@lab1 test]# kubectl get pod|grep nginx-rbd
nginx-rbd-79bb766b6c-mv2h8 1/1 Running 0 20m
[root@lab1 test]# kubectl exec -it nginx-rbd-79bb766b6c-mv2h8 -- ls /usr/share/nginx/html
lost+found ygqygq2.html
[root@lab1 test]# kubectl exec -it nginx-rbd-79bb766b6c-mv2h8 -- cat /usr/share/nginx/html/ygqygq2.html
ygqygq2
As shown above, the new PVC is bound to the original PV pvc-069c4486-d773-11e8-bd12-000c2931d938, and the data is fully intact.
In the current Kubernetes version, storage size is the only resource that can be set or requested on a PVC. Since we did not change the requested size, once the PV is Available, a PVC requesting the same size is bound to it successfully.
The key to getting the PV back to Available is its spec.claimRef field, which records the binding information of the old PVC; deleting that binding information releases the PV and returns it to the Available state.
References:
[1] https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[2] https://kubernetes.io/docs/concepts/storage/storage-classes/
[3] http://dockone.io/article/2082
[4] http://dockone.io/article/2087