This article walks through how to deploy a Nebula Graph database cluster on Kubernetes.
Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.
Kubernetes defines a set of building blocks that together provide a mechanism for deploying, maintaining, and scaling applications. Its components are designed to be loosely coupled and extensible so that they can support many different workloads. Extensibility is largely provided by the Kubernetes API, which is used by internal components as well as by extensions and containers running on Kubernetes.
Kubernetes consists of the following core components:
- etcd: stores the state of the entire cluster
- apiserver: the single entry point for resource operations, providing authentication, authorization, access control, API registration, and discovery
- controller manager: maintains the cluster state, handling fault detection, auto scaling, rolling updates, and so on
- scheduler: schedules resources, placing Pods onto the appropriate machines according to the configured scheduling policies
- kubelet: manages the container lifecycle, as well as volumes and networking
- Container runtime: manages images and actually runs Pods and containers (CRI)
- kube-proxy: provides in-cluster service discovery and load balancing for Services
Besides the core components, there are several recommended add-ons:
- kube-dns: provides DNS for the whole cluster
- Ingress Controller: provides external access to services
- Heapster: provides resource monitoring
- Dashboard: provides a GUI
- Federation: provides clusters spanning availability zones
- Fluentd-elasticsearch: provides cluster log collection, storage, and querying
Database containerization has been a hot topic recently. What can Kubernetes offer databases?
- Fault recovery: Kubernetes provides fault recovery. If a database application goes down, Kubernetes can restart it automatically or migrate the database instance to another node in the cluster.
- Storage management: Kubernetes offers a rich set of storage options, so database applications can consume different kinds of storage systems transparently.
- Load balancing: Kubernetes Services provide load balancing, spreading external traffic across the database instance replicas.
- Horizontal scaling: Kubernetes can scale the number of replicas based on the current resource utilization of the database cluster, improving resource utilization.
Many databases, such as MySQL, MongoDB, and TiDB, already run well on Kubernetes clusters.
Nebula Graph is a distributed, open-source graph database. Its main components are graphd (the query engine), storaged (data storage), and metad (metadata storage). Running Nebula Graph on Kubernetes brings the following benefits:
- Kubernetes balances the load across the replicas of nebula graphd, metad, and storaged. graphd, metad, and storaged can discover each other automatically through Kubernetes DNS.
- StorageClass, PVC, and PV hide the details of the underlying storage; whether local volumes or cloud disks are used, Kubernetes abstracts the details away.
- Kubernetes can deploy a complete Nebula cluster within seconds and can upgrade a Nebula cluster without disruption.
- A Nebula cluster on Kubernetes is self-healing: if a single replica crashes, Kubernetes restarts it without operator intervention.
- Kubernetes can scale the Nebula cluster horizontally based on its current resource utilization, improving cluster performance.
The rest of this article walks through the deployment in detail.
Below are the machines and operating system parameters used for this deployment:
- Operating system: CentOS-7.6.1810 x86_64
- Virtual machine configuration:
  - 4 CPUs
  - 8 GB memory
  - 50 GB system disk
  - 50 GB data disk A
  - 50 GB data disk B
- Kubernetes cluster version: v1.16
- Nebula version: v1.0.0-rc3
- Local PVs are used as data storage
The cluster inventory is as follows:
| Server IP | nebula instances | Role |
| --- | --- | --- |
| 192.168.0.1 | | k8s-master |
| 192.168.0.2 | graphd, metad-0, storaged-0 | k8s-slave |
| 192.168.0.3 | graphd, metad-1, storaged-1 | k8s-slave |
| 192.168.0.4 | graphd, metad-2, storaged-2 | k8s-slave |
The deployment consists of the following steps, each covered below:

- Install Helm
- Prepare the local disks and install the local volume plugin
- Install the Nebula cluster
- Install the ingress-controller
Helm is the package manager for Kubernetes clusters, similar to yum on CentOS or apt-get on Ubuntu. Helm greatly lowers the barrier to deploying applications on Kubernetes. This article does not cover Helm in detail; interested readers can refer to the 《Helm 入門指南》 (Helm getting-started guide).
Run the following commands in a terminal to install Helm:
```bash
[root@nebula ~]# wget https://get.helm.sh/helm-v3.0.1-linux-amd64.tar.gz
[root@nebula ~]# tar -zxvf helm-v3.0.1-linux-amd64.tar.gz
[root@nebula ~]# mv linux-amd64/helm /usr/bin/helm
[root@nebula ~]# chmod +x /usr/bin/helm
```
Run the `helm version` command to check the installed Helm version. For the setup used in this article, the output is:
```
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}
```
Apply the following configuration on every machine:
```bash
[root@nebula ~]# sudo mkdir -p /mnt/disks
[root@nebula ~]# sudo mkfs.ext4 /dev/diskA
[root@nebula ~]# sudo mkfs.ext4 /dev/diskB
[root@nebula ~]# DISKA_UUID=$(blkid -s UUID -o value /dev/diskA)
[root@nebula ~]# DISKB_UUID=$(blkid -s UUID -o value /dev/diskB)
[root@nebula ~]# sudo mkdir /mnt/disks/$DISKA_UUID
[root@nebula ~]# sudo mkdir /mnt/disks/$DISKB_UUID
[root@nebula ~]# sudo mount -t ext4 /dev/diskA /mnt/disks/$DISKA_UUID
[root@nebula ~]# sudo mount -t ext4 /dev/diskB /mnt/disks/$DISKB_UUID
[root@nebula ~]# echo UUID=`sudo blkid -s UUID -o value /dev/diskA` /mnt/disks/$DISKA_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
[root@nebula ~]# echo UUID=`sudo blkid -s UUID -o value /dev/diskB` /mnt/disks/$DISKB_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
```
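To confirm the disks are mounted and will persist across reboots, a quick optional check:

```bash
# Both data disks should appear mounted under /mnt/disks and be listed in fstab
[root@nebula ~]# df -h | grep /mnt/disks
[root@nebula ~]# grep /mnt/disks /etc/fstab
```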
Download and unpack the local volume static provisioner:

```bash
[root@nebula ~]# curl -L -o v2.3.3.zip https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/archive/v2.3.3.zip
[root@nebula ~]# unzip v2.3.3.zip
```
Modify `v2.3.3/helm/provisioner/values.yaml`:
```yaml
#
# Common options.
#
common:
  #
  # Defines whether to generate service account and role bindings.
  #
  rbac: true
  #
  # Defines the namespace where provisioner runs
  #
  namespace: default
  #
  # Defines whether to create provisioner namespace
  #
  createNamespace: false
  #
  # Beta PV.NodeAffinity field is used by default. If running against pre-1.10
  # k8s version, the `useAlphaAPI` flag must be enabled in the configMap.
  #
  useAlphaAPI: false
  #
  # Indicates if PVs should be dependents of the owner Node.
  #
  setPVOwnerRef: false
  #
  # Provisioner clean volumes in process by default. If set to true, provisioner
  # will use Jobs to clean.
  #
  useJobForCleaning: false
  #
  # Provisioner name contains Node.UID by default. If set to true, the provisioner
  # name will only use Node.Name.
  #
  useNodeNameOnly: false
  #
  # Resync period in reflectors will be random between minResyncPeriod and
  # 2*minResyncPeriod. Default: 5m0s.
  #
  #minResyncPeriod: 5m0s
  #
  # Defines the name of configmap used by Provisioner
  #
  configMapName: "local-provisioner-config"
  #
  # Enables or disables Pod Security Policy creation and binding
  #
  podSecurityPolicy: false
#
# Configure storage classes.
#
classes:
- name: fast-disks # Defines name of storage classe.
  # Path on the host where local volumes of this storage class are mounted
  # under.
  hostDir: /mnt/fast-disks
  # Optionally specify mount path of local volumes. By default, we use same
  # path as hostDir in container.
  # mountDir: /mnt/fast-disks
  # The volume mode of created PersistentVolume object. Default to Filesystem
  # if not specified.
  volumeMode: Filesystem
  # Filesystem type to mount.
  # It applies only when the source path is a block device,
  # and desire volume mode is Filesystem.
  # Must be a filesystem type supported by the host operating system.
  fsType: ext4
  blockCleanerCommand:
  #  Do a quick reset of the block device during its cleanup.
  #  - "/scripts/quick_reset.sh"
  #  or use dd to zero out block dev in two iterations by uncommenting these lines
  #  - "/scripts/dd_zero.sh"
  #  - "2"
  #  or run shred utility for 2 iteration.s
  - "/scripts/shred.sh"
  - "2"
  #  or blkdiscard utility by uncommenting the line below.
  #  - "/scripts/blkdiscard.sh"
  # Uncomment to create storage class object with default configuration.
  # storageClass: true
  # Uncomment to create storage class object and configure it.
  # storageClass:
  #   reclaimPolicy: Delete # Available reclaim policies: Delete/Retain, defaults: Delete.
  #   isDefaultClass: true # set as default class
#
# Configure DaemonSet for provisioner.
#
daemonset:
  #
  # Defines the name of a Provisioner
  #
  name: "local-volume-provisioner"
  #
  # Defines Provisioner's image name including container registry.
  #
  image: quay.io/external_storage/local-volume-provisioner:v2.3.3
  #
  # Defines Image download policy, see kubernetes documentation for available values.
  #
  #imagePullPolicy: Always
  #
  # Defines a name of the service account which Provisioner will use to communicate with API server.
  #
  serviceAccount: local-storage-admin
  #
  # Defines a name of the Pod Priority Class to use with the Provisioner DaemonSet
  #
  # Note that if you want to make it critical, specify "system-cluster-critical"
  # or "system-node-critical" and deploy in kube-system namespace.
  # Ref: https://k8s.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical
  #
  #priorityClassName: system-node-critical
  # If configured, nodeSelector will add a nodeSelector field to the DaemonSet PodSpec.
  #
  # NodeSelector constraint for local-volume-provisioner scheduling to nodes.
  # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  nodeSelector: {}
  #
  # If configured KubeConfigEnv will (optionally) specify the location of kubeconfig file on the node.
  #
  kubeConfigEnv: KUBECONFIG
  #
  # List of node labels to be copied to the PVs created by the provisioner in a format:
  #
  #  nodeLabels:
  #    - failure-domain.beta.kubernetes.io/zone
  #    - failure-domain.beta.kubernetes.io/region
  #
  # If configured, tolerations will add a toleration field to the DaemonSet PodSpec.
  #
  # Node tolerations for local-volume-provisioner scheduling to nodes with taints.
  # Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  tolerations: []
  #
  # If configured, resources will set the requests/limits field to the Daemonset PodSpec.
  # Ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
  resources: {}
#
# Configure Prometheus monitoring
#
prometheus:
  operator:
    ## Are you using Prometheus Operator?
    enabled: false

    serviceMonitor:
      ## Interval at which Prometheus scrapes the provisioner
      interval: 10s
      # Namespace Prometheus is installed in
      namespace: monitoring
      ## Defaults to whats used if you follow CoreOS [Prometheus Install Instructions](https://github.com/coreos/prometheus-operator/tree/master/helm#tldr)
      ## [Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/templates/prometheus.yaml#L65)
      ## [Kube Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/kube-prometheus/values.yaml#L298)
      selector:
        prometheus: kube-prometheus
```
Change `hostDir: /mnt/fast-disks` to `hostDir: /mnt/disks`, and uncomment `# storageClass: true` so that it reads `storageClass: true`.
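If you prefer to script these two edits, here is a minimal sketch with sed, assuming the default upstream layout of values.yaml where each of these settings appears exactly once in the form shown above:

```bash
# Point the provisioner at the mount directory prepared earlier
sed -i 's|hostDir: /mnt/fast-disks|hostDir: /mnt/disks|' v2.3.3/helm/provisioner/values.yaml
# Uncomment storageClass so the chart creates the StorageClass object
sed -i 's|# storageClass: true|storageClass: true|' v2.3.3/helm/provisioner/values.yaml
```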
Then run:
```bash
# Install
[root@nebula ~]# helm install local-static-provisioner v2.3.3/helm/provisioner
# Check the local-static-provisioner deployment
[root@nebula ~]# helm list
```
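Once the provisioner is running, it should discover the disks mounted under /mnt/disks and create one local PV per disk. A quick way to verify this (the pod and StorageClass names below follow the chart defaults shown in values.yaml above):

```bash
# The provisioner runs as a DaemonSet, one pod per node
[root@nebula ~]# kubectl get pod | grep local-volume-provisioner
# The fast-disks StorageClass and the discovered local PVs should be listed
[root@nebula ~]# kubectl get storageclass
[root@nebula ~]# kubectl get pv
```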
Next, download the Nebula repository, which contains the helm chart:

```bash
# Download nebula
[root@nebula ~]# wget https://github.com/vesoft-inc/nebula/archive/master.zip
# Unpack
[root@nebula ~]# unzip master.zip
```
The table below lists the Kubernetes nodes. We need to add a scheduling label to the slave (worker) nodes, labeling 192.168.0.2, 192.168.0.3, and 192.168.0.4 with nebula: "yes".
| Server IP | Kubernetes role | nodeName |
| --- | --- | --- |
| 192.168.0.1 | master | 192.168.0.1 |
| 192.168.0.2 | worker | 192.168.0.2 |
| 192.168.0.3 | worker | 192.168.0.3 |
| 192.168.0.4 | worker | 192.168.0.4 |
The commands are as follows:
```bash
[root@nebula ~]# kubectl label node 192.168.0.2 nebula="yes" --overwrite
[root@nebula ~]# kubectl label node 192.168.0.3 nebula="yes" --overwrite
[root@nebula ~]# kubectl label node 192.168.0.4 nebula="yes" --overwrite
```
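To double-check that the label landed on the right nodes, an optional verification:

```bash
# The three worker nodes should be listed
[root@nebula ~]# kubectl get node -l nebula=yes
```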
The nebula helm chart directory layout is as follows:
```
master/kubernetes/
└── helm
    ├── Chart.yaml
    ├── templates
    │   ├── configmap.yaml
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress-configmap.yaml
    │   ├── NOTES.txt
    │   ├── pdb.yaml
    │   ├── service.yaml
    │   └── statefulset.yaml
    └── values.yaml

2 directories, 10 files
```
We need to adjust the MetadHosts value in `master/kubernetes/values.yaml`, replacing the IP list with the IPs of the three k8s workers in this environment:
```yaml
MetadHosts:
  - 192.168.0.2:44500
  - 192.168.0.3:44500
  - 192.168.0.4:44500
```
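Before installing, you can render the chart locally to confirm the MetadHosts change took effect; a quick sketch (`helm template` is standard Helm 3, and the grep pattern here is only illustrative):

```bash
# Render the manifests without installing and inspect the metad host configuration
[root@nebula ~]# helm template nebula master/kubernetes/helm | grep -A 3 "44500"
```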
Install the chart and check the deployment:

```bash
# Install
[root@nebula ~]# helm install nebula master/kubernetes/helm
# Check the release
[root@nebula ~]# helm status nebula
# Check the nebula pods on the k8s cluster
[root@nebula ~]# kubectl get pod | grep nebula
nebula-graphd-579d89c958-g2j2c   1/1     Running   0          1m
nebula-graphd-579d89c958-p7829   1/1     Running   0          1m
nebula-graphd-579d89c958-q74zx   1/1     Running   0          1m
nebula-metad-0                   1/1     Running   0          1m
nebula-metad-1                   1/1     Running   0          1m
nebula-metad-2                   1/1     Running   0          1m
nebula-storaged-0                1/1     Running   0          1m
nebula-storaged-1                1/1     Running   0          1m
nebula-storaged-2                1/1     Running   0          1m
```
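The chart also creates Services for the components; they can be listed in the same way (the exact names follow whatever the chart's service.yaml and statefulset.yaml define):

```bash
[root@nebula ~]# kubectl get svc | grep nebula
[root@nebula ~]# kubectl get statefulset | grep nebula
```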
Ingress-controller is one of the Kubernetes add-ons. Kubernetes uses an ingress-controller to expose services deployed inside the cluster to external users. The ingress-controller also provides load balancing, spreading external traffic across the different replicas of an application in the cluster.
Pick a node to run the ingress-controller and label it with ingress=yes:

```bash
[root@nebula ~]# kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
192.168.0.1   Ready    master   82d   v1.16.1
192.168.0.2   Ready    <none>   82d   v1.16.1
192.168.0.3   Ready    <none>   82d   v1.16.1
192.168.0.4   Ready    <none>   82d   v1.16.1
[root@nebula ~]# kubectl label node 192.168.0.4 ingress=yes
```
Create the ingress-nginx.yaml deployment manifest:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                      - ingress-nginx
              topologyKey: "ingress-nginx.kubernetes.io/master"
      nodeSelector:
        ingress: "yes"
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=default/graphd-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --http-port=8000
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
```
Deploy ingress-nginx:
```bash
# Deploy
[root@nebula ~]# kubectl create -f ingress-nginx.yaml
# Check the deployment
[root@nebula ~]# kubectl get pod -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-mmms7   1/1     Running   2          1m
```
Check which node ingress-nginx is running on:
```bash
[root@nebula ~]# kubectl get node -l ingress=yes -owide
NAME          STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION        CONTAINER-RUNTIME
192.168.0.4   Ready    <none>   1d    v1.16.1   192.168.0.4   <none>        CentOS Linux 7 (Core)   7.6.1810.el7.x86_64   docker://19.3.3
```
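Because the controller runs with hostNetwork and a tcp-services ConfigMap that maps graphd's port, graphd should now be reachable on the ingress node at the graph port used below (3699). A quick reachability check, assuming nc is installed:

```bash
[root@nebula ~]# nc -z -v 192.168.0.4 3699
```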
Access the nebula cluster:
```bash
[root@nebula ~]# docker run --rm -ti --net=host vesoft/nebula-console:nightly --addr=192.168.0.4 --port=3699
```
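Once the console connects, a statement such as SHOW HOSTS can be used to confirm that the storaged replicas have registered with metad; the prompt below is illustrative, and the output should list the three storaged instances as online:

```
nebula> SHOW HOSTS;
```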
How do I adjust the deployment parameters of the nebula cluster?
When running helm install, use --set to override the variables in the chart's values.yaml.
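For example, the MetadHosts list edited earlier could be supplied on the command line instead of editing values.yaml; a sketch using Helm's list syntax for --set, with the key name taken from the chart's values.yaml shown above:

```bash
[root@nebula ~]# helm install nebula master/kubernetes/helm \
    --set MetadHosts="{192.168.0.2:44500,192.168.0.3:44500,192.168.0.4:44500}"
```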
How do I check the status of the nebula cluster?
Use the `kubectl get pod | grep nebula` command, or check the nebula cluster's status directly on the Kubernetes dashboard.