

How to Implement the Rook Ceph Trinity: Block, File, and Object Storage

Published: 2021-12-17 09:33:51 Source: Yisu Cloud Views: 141 Author: Xiaoxin Category: Cloud Computing

This article walks through deploying a Ceph cluster on Kubernetes with Rook and consuming it in three ways from the same cluster: as RBD block storage, as CephFS shared file storage, and as S3 object storage.

Quick start

Official site: https://rook.io/

Project repository: https://github.com/rook/rook

Installing the cluster

Preparing the OSD storage media

| Device | Size | Role |
| --- | --- | --- |
| sdb | 50GB | OSD data |
| sdc | 50GB | OSD data |
| sdd | 50GB | OSD data |
| sde | 50GB | OSD metadata |

> Before installing, run `lvm lvs`, `lvm vgs`, and `lvm pvs` to check whether these disks are already in use; if they are, remove the existing LVM volumes, and make sure the disks carry no partitions or filesystems.

Load the rbd kernel module and install lvm2:

modprobe rbd
yum install -y lvm2

Installing the operator

git clone --single-branch --branch release-1.2 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml

Installing the Ceph cluster

---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.5
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 3
    allowMultiplePerNode: true
  mgr:
    modules:
    - name: pg_autoscaler
      enabled: true
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  annotations:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: false
    useAllDevices: false
    config:
    nodes:
    - name: "minikube"
      devices:
      - name: "sdb"
      - name: "sdc"
      - name: "sdd"
      config:
        storeType: bluestore
        metadataDevice: "sde"
        databaseSizeMB: "1024"
        journalSizeMB: "1024"
        osdsPerDevice: "1"
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

Installing the command-line toolbox

kubectl create -f toolbox.yaml

In the toolbox pod, run `ceph -s` to check the cluster status.

> When reinstalling the Ceph cluster, clean the Rook data directory first (default: /var/lib/rook).

Adding an ingress route for the ceph-dashboard service

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_verify off;
spec:
  tls:
   - hosts:
     - rook-ceph.minikube.local
     secretName: rook-ceph.minikube.local
  rules:
  - host: rook-ceph.minikube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-mgr-dashboard
          servicePort: https-dashboard

Retrieve the admin password needed to access the dashboard:

kubectl get secret rook-ceph-dashboard-password -n rook-ceph -o jsonpath='{.data.password}'|base64 -d
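The trailing `base64 -d` is needed because Kubernetes stores Secret values base64-encoded; `jsonpath` returns the raw encoded string. A minimal Python sketch of the same decoding step (the password value is made up):

```python
import base64

# kubectl's jsonpath output is the base64-encoded value as stored in the
# Secret; decoding yields the actual password. The value here is hypothetical.
encoded = base64.b64encode(b"s3cretPassw0rd").decode()
password = base64.b64decode(encoded).decode()
print(password)  # → s3cretPassw0rd
```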

Add the domain rook-ceph.minikube.local to /etc/hosts, then open it in a browser:

https://rook-ceph.minikube.local/


Using RBD storage

Create an RBD pool

---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: osd
  replicated:
    size: 3

> Since there is only one node and three OSDs, osd is used as the failure domain.
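As a quick sanity check on what replicated `size: 3` costs in capacity here (disk sizes taken from the table above):

```python
# Three 50 GB data OSDs with 3-way replication: every object is stored on
# all three OSDs, so usable capacity ≈ raw capacity / replica count.
osd_sizes_gb = [50, 50, 50]
replica_count = 3

raw_gb = sum(osd_sizes_gb)
usable_gb = raw_gb / replica_count
print(raw_gb, usable_gb)  # → 150 50.0
```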

創建完成后在rook-ceph-tools中使用指令ceph osd pool ls可以看到新建了以下存儲池

  • replicapool

Create a StorageClass backed by RBD

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete

Test mounting RBD storage through the StorageClass with a StatefulSet

---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: storageclass-rbd-test
  namespace: default
  labels:
    app: storageclass-rbd-test
spec:
  serviceName: storageclass-rbd-test  # required field for a StatefulSet
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-rbd-test
  template:
    metadata:
      labels:
        app: storageclass-rbd-test
    spec:
      restartPolicy: Always
      containers:
        - name: storageclass-rbd-test
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: data
              mountPath: /data
          image: 'centos:7'
          args:
            - 'sh'
            - '-c'
            - 'sleep 3600'
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: rook-ceph-block
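The StorageClass can also be consumed outside a volumeClaimTemplate. A sketch of a standalone PVC (the PVC name is illustrative):

```yaml
# Hypothetical standalone PVC bound to the rook-ceph-block StorageClass;
# any single pod can then mount it like an ordinary volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-example
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
```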

Using CephFS storage

Create the MDS service and a CephFS filesystem

---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPools:
    - failureDomain: osd
      replicated:
        size: 3
  preservePoolsOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
    annotations:
    resources:

創建完成后在rook-ceph-tools中使用指令ceph osd pool ls可以看到新建了以下存儲池

  • myfs-metadata

  • myfs-data0

Create a StorageClass backed by CephFS

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
mountOptions:

Test mounting CephFS shared storage through the StorageClass with a Deployment

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs
  volumeMode: Filesystem
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-cephfs-test
  template:
    metadata:
      labels:
        app: storageclass-cephfs-test
    spec:
      restartPolicy: Always
      containers:
        - name: storageclass-cephfs-test
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: data
              mountPath: /data
          image: 'centos:7'
          args:
            - 'sh'
            - '-c'
            - 'sleep 3600'
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data-storageclass-cephfs-test

Using S3 storage

Create an object storage gateway

---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPool:
    failureDomain: osd
    replicated:
      size: 3
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1
    placement:
    annotations:
    resources:

創建完成后在rook-ceph-tools中使用指令ceph osd pool ls可以看到新建了以下存儲池

  • .rgw.root

  • my-store.rgw.buckets.data

  • my-store.rgw.buckets.index

  • my-store.rgw.buckets.non-ec

  • my-store.rgw.control

  • my-store.rgw.log

  • my-store.rgw.meta

Adding an ingress route for the ceph-rgw service

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-rgw
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
   - hosts:
     - rook-ceph-rgw.minikube.local
     secretName: rook-ceph-rgw.minikube.local
  rules:
  - host: rook-ceph-rgw.minikube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-rgw-my-store
          servicePort: http

Add the domain rook-ceph-rgw.minikube.local to /etc/hosts, then open it in a browser:

https://rook-ceph-rgw.minikube.local/


Using an S3 user

Add an object storage user

---
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "my display name"

Creating an object storage user also generates a secret named after the pattern {{.metadata.namespace}}-object-user-{{.spec.store}}-{{.metadata.name}}, which holds that S3 user's AccessKey and SecretKey.
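The naming rule can be spelled out as a small helper (a sketch; the function name is mine):

```python
# Build the generated Secret's name from the CephObjectStoreUser fields,
# following the template {{.metadata.namespace}}-object-user-{{.spec.store}}-{{.metadata.name}}.
def object_user_secret_name(namespace: str, store: str, user: str) -> str:
    return f"{namespace}-object-user-{store}-{user}"

# The my-store / my-user example from this article:
print(object_user_secret_name("rook-ceph", "my-store", "my-user"))
# → rook-ceph-object-user-my-store-my-user
```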

Get the AccessKey:

kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.AccessKey}'|base64 -d

Get the SecretKey:

kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.SecretKey}'|base64 -d

With the endpoint and keys obtained above, any S3 client can connect as this user.


Using S3 buckets

Create a StorageClass backed by S3

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-delete-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  region: default

> PVCs cannot currently be provisioned from S3 storage; this StorageClass can only be used to create buckets.

Create an ObjectBucketClaim against the StorageClass

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket

存儲桶創建后會生成與桶資源申請同名的secret,其中保存著用于連接該存儲桶的AccessKey和SecretKey

Get the AccessKey (the secret is created in the claim's namespace, default here, not rook-ceph):

kubectl get secret ceph-delete-bucket -n default -o jsonpath='{.data.AWS_ACCESS_KEY_ID}'|base64 -d

Get the SecretKey:

kubectl get secret ceph-delete-bucket -n default -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}'|base64 -d

> An S3 user obtained this way is quota-limited to a single bucket.
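Alongside the secret, Rook's bucket provisioner also creates a ConfigMap with the same name as the claim, carrying the bucket coordinates (keys such as BUCKET_HOST, BUCKET_NAME, BUCKET_PORT). A hedged sketch of a Pod consuming both through environment variables (pod name is illustrative):

```yaml
# Sketch: expose the bucket endpoint and credentials to an application pod
# via envFrom, using the ConfigMap and Secret generated for the claim.
apiVersion: v1
kind: Pod
metadata:
  name: bucket-consumer-example
  namespace: default
spec:
  containers:
    - name: app
      image: centos:7
      args: ['sh', '-c', 'env | grep -E "BUCKET|AWS"; sleep 3600']
      envFrom:
        - configMapRef:
            name: ceph-delete-bucket   # BUCKET_HOST, BUCKET_NAME, BUCKET_PORT
        - secretRef:
            name: ceph-delete-bucket   # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
```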

That covers the Rook Ceph "trinity": one cluster serving block, file, and object storage. Thanks for reading!
