

Kubernetes Monitoring with Prometheus Operator (Part 19)

Published: 2020-08-03 12:05:38  Source: web  Views: 2714  Author: wzlinux  Category: Cloud Computing

1. Prometheus Overview

Prometheus Operator is a Kubernetes monitoring solution built on Prometheus and developed by CoreOS; it is arguably the most feature-complete open-source option available. It visualizes monitoring data through Grafana and ships with a set of predefined dashboards.

1.1 Prometheus Architecture

Prometheus is an excellent monitoring tool, or more precisely, a complete monitoring solution: it provides data collection, storage, processing, visualization, and alerting in one stack. Its architecture is shown in the figure below:
(figure: Prometheus architecture diagram)

Prometheus Server
Prometheus Server pulls monitoring data from Exporters, stores it, and provides a flexible query language (PromQL) for users.

Exporter
An Exporter collects performance data from a target (host, container, ...) and exposes it over HTTP for Prometheus Server to scrape.

Visualization
Visualizing monitoring data is essential for any monitoring solution. Prometheus once developed its own visualization tool, but it was deprecated once the open-source community produced a superior product, Grafana, which integrates seamlessly with Prometheus and provides excellent data presentation.

Alertmanager
Users define alerting rules over the monitoring data; when a rule fires, an alert is raised. Once Alertmanager receives an alert, it sends notifications through preconfigured channels, including Email, PagerDuty, and webhooks.
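As a sketch of what that notification routing looks like, here is a minimal Alertmanager configuration file; the SMTP relay, addresses, and webhook URL are placeholders, not values from this deployment:

```yaml
# Minimal Alertmanager config (alertmanager.yml); all endpoints are hypothetical.
global:
  smtp_smarthost: 'smtp.example.com:587'   # placeholder SMTP relay
  smtp_from: 'alerts@example.com'
route:
  receiver: team-email                     # default receiver
  group_by: ['alertname', 'namespace']
  routes:
    - match:
        severity: critical
      receiver: team-webhook               # critical alerts also go to a webhook
receivers:
  - name: team-email
    email_configs:
      - to: 'oncall@example.com'
  - name: team-webhook
    webhook_configs:
      - url: 'http://hooks.example.com/alert'   # placeholder endpoint
```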

1.2 Prometheus Operator Architecture

Prometheus Operator is currently among the most feature-complete open-source monitoring solutions. It can monitor nodes and pods, and it supports the cluster's management components such as the API Server, Scheduler, and Controller Manager.

The goal of Prometheus Operator is to make deploying and maintaining Prometheus on Kubernetes as simple as possible. Its architecture is shown below:
(figure: Prometheus Operator architecture diagram)

Every object in the diagram is a resource running in Kubernetes.

Operator
The Operator is Prometheus Operator itself, running as a Deployment in Kubernetes. Its job is to deploy and manage Prometheus Server, dynamically updating Prometheus Server's scrape targets according to ServiceMonitor objects.

Prometheus Server
Prometheus Server is deployed into the cluster as a Kubernetes application. To manage Prometheus more naturally in Kubernetes, the CoreOS developers defined a custom resource type named Prometheus. You can think of a Prometheus resource as a special kind of Deployment whose sole purpose is to deploy a Prometheus Server.
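A minimal Prometheus custom resource might look like the following sketch; the replica count, resource request, and label selector are illustrative assumptions, not values from this deployment:

```yaml
# Hypothetical minimal Prometheus custom resource.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 1
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      app: prometheus        # only pick up ServiceMonitors with this label
  resources:
    requests:
      memory: 400Mi
```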

Service
The Service here is an ordinary Kubernetes Service resource, and it is also what Prometheus monitors, called a Target in Prometheus terminology. Each monitored object has a corresponding Service. For example, to monitor the Kubernetes Scheduler there must be a Service pointing at the Scheduler. A Kubernetes cluster does not ship with such a Service by default; Prometheus Operator takes care of creating it.
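Such a Service for the Scheduler could be sketched as follows; the label selector assumes a kubeadm-style control plane, and the port reflects the Scheduler's metrics port in this era (10251):

```yaml
# Hypothetical headless Service exposing kube-scheduler's metrics port.
apiVersion: v1
kind: Service
metadata:
  name: kube-scheduler
  namespace: kube-system
  labels:
    k8s-app: kube-scheduler      # label a ServiceMonitor can select on
spec:
  clusterIP: None                # headless: resolves directly to the pods
  selector:
    component: kube-scheduler    # matches kubeadm's static-pod labels
  ports:
    - name: http-metrics
      port: 10251
      targetPort: 10251
```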

ServiceMonitor
The Operator dynamically updates Prometheus's Target list, and ServiceMonitor is the abstraction for a Target. To monitor the Kubernetes Scheduler, for example, a user creates a ServiceMonitor object mapped to the Scheduler's Service; the Operator discovers the new ServiceMonitor and adds the Scheduler Target to Prometheus's scrape list.

ServiceMonitor is another Kubernetes custom resource type defined by Prometheus Operator.
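Continuing the Scheduler example, a ServiceMonitor mapped onto a Service labeled `k8s-app: kube-scheduler` could look like this sketch (the labels, port name, and interval are illustrative assumptions):

```yaml
# Hypothetical ServiceMonitor selecting a kube-scheduler Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-scheduler
  namespace: monitoring
  labels:
    app: prometheus              # so a Prometheus whose serviceMonitorSelector
                                 # matches this label picks it up
spec:
  namespaceSelector:
    matchNames:
      - kube-system
  selector:
    matchLabels:
      k8s-app: kube-scheduler    # must match the Service's labels
  endpoints:
    - port: http-metrics         # the named port on the Service
      interval: 30s
```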

Alertmanager
Besides Prometheus and ServiceMonitor, Alertmanager is the third custom resource type defined by the Operator. Like the Prometheus resource, an Alertmanager resource can be thought of as a special Deployment whose sole purpose is to deploy the Alertmanager component.
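A minimal Alertmanager custom resource might be sketched as follows (the name is illustrative):

```yaml
# Hypothetical minimal Alertmanager custom resource; the Operator
# mounts its configuration from a Secret named alertmanager-<name>.
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  replicas: 1
```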

2. Installing with Helm

Helm has two key concepts: charts and releases.
A chart is the bundle of information needed to create an application: configuration templates for the various Kubernetes objects, parameter definitions, dependencies, documentation, and so on. A chart is a self-contained logical unit of deployment; think of it as a software package in apt or yum.
A release is a running instance of a chart, i.e. a deployed application. Installing a chart into a Kubernetes cluster produces a release; the same chart can be installed into one cluster multiple times, each install producing a separate release.
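As an illustration of what a chart bundles, a minimal chart directory and its Chart.yaml (Helm v2 layout, hypothetical names) might look like this:

```yaml
# Hypothetical layout of a minimal chart:
#   mychart/
#   ├── Chart.yaml        # chart metadata (below)
#   ├── values.yaml       # default parameter values
#   └── templates/        # Kubernetes object templates
#       └── deployment.yaml
#
# Chart.yaml:
name: mychart
version: 0.1.0
description: A minimal example chart
```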

2.1 Installing the Helm Client

Download the latest release from https://github.com/helm/helm/releases.

[root@master ~]# tar xf helm-v2.12.1-linux-amd64.tar.gz
[root@master ~]# cp linux-amd64/helm /usr/local/bin/
[root@master ~]# helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Error: could not find tiller

At this point only the client version is visible; the server side has not been installed yet.

2.2 Installing the Tiller Server

For clusters with RBAC enabled, first create the authorization objects, following https://github.com/helm/helm/blob/master/docs/rbac.md. Create rbac-config.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Then apply it.

[root@master ~]# kubectl apply -f rbac-config.yaml 
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

Installing the Tiller server is as simple as running helm init. Keep the server version the same as the client's. Because Google's image registry is not reachable from mainland China, we use Alibaba Cloud's mirror instead:

[root@master ~]# helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.12.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Check the installation result.

[root@master ~]# helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
[root@master ~]# helm repo list
NAME    URL                                                   
stable  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local   http://127.0.0.1:8879/charts

3. Deploying Prometheus Operator

This walkthrough uses Prometheus Operator v0.26.0. The project iterates quickly and the deployment method may change, so consult the official documentation when necessary.

[root@master ~]# git clone https://github.com/coreos/prometheus-operator.git
[root@master ~]# cd prometheus-operator/

For easier management, create a dedicated Namespace, monitoring; all Prometheus Operator components will be deployed into it.

[root@master prometheus-operator]# kubectl create namespace monitoring
namespace/monitoring created

3.1 Installing the Prometheus Operator Deployment

First update the repo index.

helm repo update

Then install with helm. Several hundred megabytes of images must be downloaded, so this can be slow; you can also pull the images in advance, preferably from Alibaba Cloud mirrors and then retag them, which is much faster.

[root@master prometheus-operator]# helm install --name prometheus-operator --set rbacEnable=true --namespace=monitoring helm/prometheus-operator
NAME:   prometheus-operator
LAST DEPLOYED: Tue Dec 25 22:09:31 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/PodSecurityPolicy
NAME                 PRIV   CAPS      SELINUX   RUNASUSER  FSGROUP    SUPGROUP  READONLYROOTFS  VOLUMES
prometheus-operator  false  RunAsAny  RunAsAny  MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

==> v1/ConfigMap
NAME                 DATA  AGE
prometheus-operator  1     3s

==> v1/ServiceAccount
NAME                 SECRETS  AGE
prometheus-operator  1        3s

==> v1beta1/ClusterRole
NAME                     AGE
prometheus-operator      3s
psp-prometheus-operator  3s

==> v1beta1/ClusterRoleBinding
NAME                     AGE
prometheus-operator      3s
psp-prometheus-operator  3s

==> v1beta1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
prometheus-operator  1        1        1           1          3s

==> v1/Pod(related)
NAME                                  READY  STATUS   RESTARTS  AGE
prometheus-operator-867bbfddbd-vpsjd  1/1    Running  0         3s

NOTES:
The Prometheus Operator has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "app=prometheus-operator,release=prometheus-operator"

Visit https://github.com/coreos/prometheus-operator for instructions on how
to create & configure Alertmanager and Prometheus instances using the Operator.

Inspect the created resources.

[root@master prometheus-operator]# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-867bbfddbd-vpsjd   1/1     Running   0          95s
[root@master prometheus-operator]# kubectl get deploy -n monitoring
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
prometheus-operator   1/1     1            1           103s
[root@master prometheus-operator]# helm list
NAME                REVISION    UPDATED                     STATUS      CHART                       APP VERSION NAMESPACE 
prometheus-operator 1           Tue Dec 25 22:09:31 2018    DEPLOYED    prometheus-operator-0.0.29  0.20.0      monitoring

3.2 Installing Prometheus

[root@master prometheus-operator]# helm install --name prometheus --set serviceMonitorsSelector.app=prometheus --set ruleSelector.app=prometheus --namespace=monitoring helm/prometheus
NAME:   prometheus
LAST DEPLOYED: Tue Dec 25 22:17:06 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1/Prometheus
NAME        AGE
prometheus  0s

==> v1/PrometheusRule
NAME              AGE
prometheus-rules  0s

==> v1/ServiceMonitor
NAME        AGE
prometheus  0s

==> v1beta1/PodSecurityPolicy
NAME        PRIV   CAPS      SELINUX   RUNASUSER  FSGROUP    SUPGROUP  READONLYROOTFS  VOLUMES
prometheus  false  RunAsAny  RunAsAny  MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

==> v1/ServiceAccount
NAME        SECRETS  AGE
prometheus  1        0s

==> v1beta1/ClusterRole
NAME            AGE
prometheus      0s
psp-prometheus  0s

==> v1beta1/ClusterRoleBinding
NAME            AGE
prometheus      0s
psp-prometheus  0s

==> v1/Service
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
prometheus  ClusterIP  10.109.224.217  <none>       9090/TCP  0s

NOTES:
A new Prometheus instance has been created.

DEPRECATION NOTICE:

- additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels

Check what was created.

[root@master prometheus-operator]# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-867bbfddbd-vpsjd   1/1     Running   0          16m
prometheus-prometheus-0                3/3     Running   1          9m10s
[root@master prometheus-operator]# kubectl get svc -n monitoring
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
prometheus                            ClusterIP   10.111.72.32     <none>        9090/TCP            22m
prometheus-operated                   ClusterIP   None             <none>        9090/TCP            19m
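The `--set ruleSelector.app=prometheus` flag used above means this Prometheus instance only loads PrometheusRule objects carrying that label. A minimal rule might be sketched like this (the alert name and expression are illustrative):

```yaml
# Hypothetical PrometheusRule picked up via the app=prometheus selector.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules
  namespace: monitoring
  labels:
    app: prometheus            # must match the Prometheus ruleSelector
spec:
  groups:
    - name: example.rules
      rules:
        - alert: TargetDown
          expr: up == 0        # fires when any scrape target is down
          for: 5m
          labels:
            severity: warning
```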

3.3 Installing Alertmanager

[root@master prometheus-operator]# helm install --name alertmanager --namespace=monitoring helm/alertmanager
NAME:   alertmanager
LAST DEPLOYED: Tue Dec 25 22:30:11 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1/PrometheusRule
NAME          AGE
alertmanager  0s

==> v1/ServiceMonitor
NAME          AGE
alertmanager  0s

==> v1beta1/PodSecurityPolicy
NAME          PRIV   CAPS      SELINUX   RUNASUSER  FSGROUP    SUPGROUP  READONLYROOTFS  VOLUMES
alertmanager  false  RunAsAny  RunAsAny  MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

==> v1/Secret
NAME                       TYPE    DATA  AGE
alertmanager-alertmanager  Opaque  1     0s

==> v1beta1/ClusterRole
NAME              AGE
psp-alertmanager  0s

==> v1beta1/ClusterRoleBinding
NAME              AGE
psp-alertmanager  0s

==> v1/Service
NAME          TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
alertmanager  ClusterIP  10.102.68.166  <none>       9093/TCP  0s

==> v1/Alertmanager
NAME          AGE
alertmanager  0s

NOTES:
A new Alertmanager instance has been created.

DEPRECATION NOTICE:

- additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels

Check the installation result.

[root@master prometheus-operator]# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-alertmanager-0            2/2     Running   0          13m
prometheus-operator-867bbfddbd-vpsjd   1/1     Running   0          33m
prometheus-prometheus-0                3/3     Running   1          26m

3.4 Installing kube-prometheus

kube-prometheus is a Helm chart that bundles all the Exporters and ServiceMonitors needed to monitor Kubernetes, and creates several Services.
https://github.com/coreos/prometheus-operator/blob/master/helm/README.md
As described at the link above, the installation method has changed; the process is as follows:

[root@master prometheus-operator]# mkdir -p helm/kube-prometheus/charts
[root@master prometheus-operator]# helm package -d helm/kube-prometheus/charts helm/alertmanager helm/grafana helm/prometheus  helm/exporter-kube-dns \
> helm/exporter-kube-scheduler helm/exporter-kubelets helm/exporter-node helm/exporter-kube-controller-manager \
> helm/exporter-kube-etcd helm/exporter-kube-state helm/exporter-coredns helm/exporter-kubernetes
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/alertmanager-0.1.7.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/grafana-0.0.37.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/prometheus-0.0.51.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-dns-0.1.7.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-scheduler-0.1.9.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kubelets-0.2.11.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-node-0.4.6.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-controller-manager-0.1.10.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-etcd-0.1.15.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kube-state-0.2.6.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-coredns-0.0.3.tgz
Successfully packaged chart and saved it to: helm/kube-prometheus/charts/exporter-kubernetes-0.1.10.tgz
[root@master prometheus-operator]# helm install helm/kube-prometheus --name kube-prometheus --namespace monitoring
NAME:   kube-prometheus
LAST DEPLOYED: Tue Dec 25 23:02:25 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                          TYPE    DATA  AGE
alertmanager-kube-prometheus  Opaque  1     1s
kube-prometheus-grafana       Opaque  2     1s

==> v1beta1/RoleBinding
NAME                                 AGE
kube-prometheus-exporter-kube-state  1s

==> v1/Service
NAME                                              TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)              AGE
kube-prometheus-alertmanager                      ClusterIP  10.109.99.33    <none>       9093/TCP             1s
kube-prometheus-exporter-kube-controller-manager  ClusterIP  None            <none>       10252/TCP            1s
kube-prometheus-exporter-kube-dns                 ClusterIP  None            <none>       10054/TCP,10055/TCP  1s
kube-prometheus-exporter-kube-etcd                ClusterIP  None            <none>       4001/TCP             1s
kube-prometheus-exporter-kube-scheduler           ClusterIP  None            <none>       10251/TCP            1s
kube-prometheus-exporter-kube-state               ClusterIP  10.106.111.57   <none>       80/TCP               1s
kube-prometheus-exporter-node                     ClusterIP  10.107.178.109  <none>       9100/TCP             1s
kube-prometheus-grafana                           ClusterIP  10.110.171.226  <none>       80/TCP               1s
kube-prometheus                                   ClusterIP  10.102.19.97    <none>       9090/TCP             1s

==> v1beta1/Deployment
NAME                                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
kube-prometheus-exporter-kube-state  1        1        1           0          1s
kube-prometheus-grafana              1        1        1           0          1s

==> v1/Pod(related)
NAME                                                  READY  STATUS             RESTARTS  AGE
kube-prometheus-exporter-node-8cclq                   0/1    ContainerCreating  0         1s
kube-prometheus-exporter-node-xsqvj                   0/1    ContainerCreating  0         1s
kube-prometheus-exporter-node-zcjfj                   0/1    ContainerCreating  0         1s
kube-prometheus-exporter-kube-state-7bb8cf75d9-czp24  0/2    ContainerCreating  0         1s
kube-prometheus-grafana-6f4bb75c95-jvfzn              0/2    ContainerCreating  0         1s

==> v1beta1/ClusterRole
NAME                                     AGE
psp-kube-prometheus-alertmanager         1s
kube-prometheus-exporter-kube-state      1s
psp-kube-prometheus-exporter-kube-state  1s
psp-kube-prometheus-exporter-node        1s
psp-kube-prometheus-grafana              1s
kube-prometheus                          1s
psp-kube-prometheus                      1s

==> v1beta1/DaemonSet
NAME                           DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
kube-prometheus-exporter-node  3        3        0      3           0          <none>         1s

==> v1beta1/PodSecurityPolicy
NAME                                 PRIV   CAPS      SELINUX   RUNASUSER  FSGROUP    SUPGROUP  READONLYROOTFS  VOLUMES
kube-prometheus-alertmanager         false  RunAsAny  RunAsAny  MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
kube-prometheus-exporter-kube-state  false  RunAsAny  RunAsAny  MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
kube-prometheus-exporter-node        false  RunAsAny  RunAsAny  MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim,hostPath
kube-prometheus-grafana              false  RunAsAny  RunAsAny  MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim,hostPath
kube-prometheus                      false  RunAsAny  RunAsAny  MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

==> v1/ServiceAccount
NAME                                 SECRETS  AGE
kube-prometheus-exporter-kube-state  1        1s
kube-prometheus-exporter-node        1        1s
kube-prometheus-grafana              1        1s
kube-prometheus                      1        1s

==> v1beta1/ClusterRoleBinding
NAME                                     AGE
psp-kube-prometheus-alertmanager         1s
kube-prometheus-exporter-kube-state      1s
psp-kube-prometheus-exporter-kube-state  1s
psp-kube-prometheus-exporter-node        1s
psp-kube-prometheus-grafana              1s
kube-prometheus                          1s
psp-kube-prometheus                      1s

==> v1beta1/Role
NAME                                 AGE
kube-prometheus-exporter-kube-state  1s

==> v1/Prometheus
NAME             AGE
kube-prometheus  1s

==> v1/ServiceMonitor
NAME                                              AGE
kube-prometheus-alertmanager                      0s
kube-prometheus-exporter-kube-controller-manager  0s
kube-prometheus-exporter-kube-dns                 0s
kube-prometheus-exporter-kube-etcd                0s
kube-prometheus-exporter-kube-scheduler           0s
kube-prometheus-exporter-kube-state               0s
kube-prometheus-exporter-kubelets                 0s
kube-prometheus-exporter-kubernetes               0s
kube-prometheus-exporter-node                     0s
kube-prometheus-grafana                           0s
kube-prometheus                                   0s

==> v1/ConfigMap
NAME                     DATA  AGE
kube-prometheus-grafana  10    1s

==> v1/Alertmanager
NAME             AGE
kube-prometheus  1s

==> v1/PrometheusRule
NAME                                              AGE
kube-prometheus-alertmanager                      1s
kube-prometheus-exporter-kube-controller-manager  1s
kube-prometheus-exporter-kube-etcd                1s
kube-prometheus-exporter-kube-scheduler           1s
kube-prometheus-exporter-kube-state               1s
kube-prometheus-exporter-kubelets                 1s
kube-prometheus-exporter-kubernetes               1s
kube-prometheus-exporter-node                     1s
kube-prometheus-rules                             1s
kube-prometheus                                   0s

NOTES:
DEPRECATION NOTICE:

- alertmanager.ingress.fqdn is not used anymore, use alertmanager.ingress.hosts []
- prometheus.ingress.fqdn is not used anymore, use prometheus.ingress.hosts []
- grafana.ingress.fqdn is not used anymore, use prometheus.grafana.hosts []

- additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- prometheus.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- alertmanager.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kube-controller-manager.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kube-etcd.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kube-scheduler.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kubelets.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels
- exporter-kubernetes.additionalRulesConfigMapLabels is not used anymore, use additionalRulesLabels

Wait for the required images to finish downloading (some are unreachable from mainland China, so pulling them indirectly via Alibaba Cloud mirrors is recommended), then check the result.
Each Exporter has a corresponding Service that supplies Prometheus with a particular class of Kubernetes cluster monitoring data.

[root@master prometheus-operator]# kubectl get svc -n monitoring
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
alertmanager                          ClusterIP   10.102.68.166    <none>        9093/TCP            52m
alertmanager-operated                 ClusterIP   None             <none>        9093/TCP,6783/TCP   52m
kube-prometheus                       ClusterIP   10.109.193.133   <none>        9090/TCP            3m42s
kube-prometheus-alertmanager          ClusterIP   10.110.67.174    <none>        9093/TCP            3m42s
kube-prometheus-exporter-kube-state   ClusterIP   10.101.225.77    <none>        80/TCP              3m42s
kube-prometheus-exporter-node         ClusterIP   10.103.162.196   <none>        9100/TCP            3m42s
kube-prometheus-grafana               ClusterIP   10.101.81.167    <none>        80/TCP              3m42s
prometheus                            ClusterIP   10.109.224.217   <none>        9090/TCP            65m
prometheus-operated                   ClusterIP   None             <none>        9090/TCP            65m

Each Service has a corresponding ServiceMonitor; together they make up Prometheus's Target list.

[root@master prometheus-operator]# kubectl get servicemonitor -n monitoring
NAME                                               AGE
alertmanager                                       14m
kube-prometheus                                    22m
kube-prometheus-alertmanager                       22m
kube-prometheus-exporter-kube-controller-manager   22m
kube-prometheus-exporter-kube-dns                  22m
kube-prometheus-exporter-kube-etcd                 22m
kube-prometheus-exporter-kube-scheduler            22m
kube-prometheus-exporter-kube-state                22m
kube-prometheus-exporter-kubelets                  22m
kube-prometheus-exporter-kubernetes                22m
kube-prometheus-exporter-node                      22m
kube-prometheus-grafana                            22m
prometheus                                         16m
prometheus-operator                                2h

Below are all the Pods related to Prometheus Operator. Note that some Exporters run no Pods at all: Kubernetes components such as the API Server, Scheduler, and Kubelet support Prometheus natively, so defining a Service is enough to scrape their metrics from the predefined ports.

[root@master prometheus-operator]# kubectl get pod -n monitoring
NAME                                                  READY   STATUS    RESTARTS   AGE
alertmanager-alertmanager-0                           2/2     Running   0          15m
alertmanager-kube-prometheus-0                        2/2     Running   0          15m
kube-prometheus-exporter-kube-state-dc6966bb5-r5kng   2/2     Running   0          25m
kube-prometheus-exporter-node-grtvz                   1/1     Running   0          25m
kube-prometheus-exporter-node-jfq79                   1/1     Running   0          25m
kube-prometheus-exporter-node-n79vq                   1/1     Running   0          25m
kube-prometheus-grafana-6f4bb75c95-bw72r              2/2     Running   0          25m
prometheus-kube-prometheus-0                          3/3     Running   1          15m
prometheus-operator-867bbfddbd-rxj6s                  1/1     Running   0          15m
prometheus-prometheus-0                               3/3     Running   1          15m

To make kube-prometheus-grafana easier to reach, change the Service type to NodePort.

[root@master prometheus-operator]# kubectl patch svc kube-prometheus-grafana -p '{"spec":{"type":"NodePort"}}' -n monitoring
service/kube-prometheus-grafana patched
[root@master prometheus-operator]# kubectl patch svc kube-prometheus -p '{"spec":{"type":"NodePort"}}' -n monitoring
service/kube-prometheus patched
[root@master prometheus-operator]# kubectl get svc -n monitoring
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
alertmanager                          ClusterIP   10.110.11.161    <none>        9093/TCP            21m
alertmanager-operated                 ClusterIP   None             <none>        9093/TCP,6783/TCP   19m
kube-prometheus                       NodePort    10.102.23.109    <none>        9090:31679/TCP      29m
kube-prometheus-alertmanager          NodePort    10.97.58.177     <none>        9093:31627/TCP      29m
kube-prometheus-exporter-kube-state   ClusterIP   10.110.185.195   <none>        80/TCP              29m
kube-prometheus-exporter-node         ClusterIP   10.111.98.237    <none>        9100/TCP            29m
kube-prometheus-grafana               NodePort    10.105.188.204   <none>        80:30357/TCP        29m
prometheus                            ClusterIP   10.111.72.32     <none>        9090/TCP            22m
prometheus-operated                   ClusterIP   None             <none>        9090/TCP            19m

4. Dashboards

4.1 kube-prometheus

Browse to MASTER_IP:31679, which looks like this:

(screenshot: Prometheus web UI)

4.2 kube-prometheus-alertmanager

Browse to MASTER_IP:31627, which looks like this:

(screenshot: Alertmanager web UI)

4.3 kube-prometheus-grafana

Browse to MASTER_IP:30357/login and log in; the default username and password are both admin.
(screenshot: Grafana login page)

You can monitor the overall health of the Kubernetes cluster:
(screenshot)

Resource usage across the whole cluster:
(screenshot)

The status of each Kubernetes management component:
(screenshots)

Node resource usage:
(screenshot)

Deployment status:
(screenshot)

Pod status:
(screenshot)

StatefulSet status:
(screenshot)

These dashboards cover everything from the cluster down to individual Pods and help users operate Kubernetes more effectively. Prometheus Operator also iterates very quickly and will keep gaining features, so it is well worth the time to learn and practice.

Official documentation: https://github.com/coreos/prometheus-operator/tree/master/helm
