
Production Kubernetes Cluster Installation and Deployment - 1.15.3

Published: 2020-06-20 20:26:23 · Source: Web · Author: 無鋒劍 · Column: Cloud Computing

Version overview


NAME                VERSION   INTERNAL-IP      OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
cnvs-kubm-101-103   v1.15.3   172.20.101.103   CentOS Linux 7 (Core)   5.2.9-1.el7.elrepo.x86_64   docker://18.6.1

Project URL:

https://gitlab.com/PtmindDev/devops/kub-deploy/tree/cn-k8s-prod

Branch:
cn-k8s-prod

Cluster overview

#master
[kub-m]
172.20.101.103 name=cnvs-kubm-101-103
172.20.101.104 name=cnvs-kubm-101-104
172.20.101.105 name=cnvs-kubm-101-105

#node
[kub-n]
172.20.101.106 name=cnvs-kubnode-101-106 
172.20.101.107 name=cnvs-kubnode-101-107
172.20.101.108 name=cnvs-kubnode-101-108
172.20.101.118 name=cnvs-kubnode-101-118 
172.20.101.120 name=cnvs-kubnode-101-120
172.20.101.122 name=cnvs-kubnode-101-122
172.20.101.123 name=cnvs-kubnode-101-123 
172.20.101.124 name=cnvs-kubnode-101-124

Ansible installation environment:

cd /workspace/kub-deploy/roles

1: Upgrade the kernel (as needed)

ansible-playbook  1-kernelup.yaml  
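The playbook itself lives in the repo; a minimal sketch of what an ELRepo-based kernel upgrade play could look like (hypothetical tasks, not necessarily the actual 1-kernelup.yaml):

# Hypothetical sketch of an ELRepo kernel upgrade play
- hosts: kub-all
  become: yes
  tasks:
    - name: Install the ELRepo release package
      yum:
        name: https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
        state: present
    - name: Install the mainline kernel from elrepo-kernel
      yum:
        name: kernel-ml
        enablerepo: elrepo-kernel
        state: present
    - name: Make the new kernel the default boot entry
      command: grub2-set-default 0
    - name: Reboot into the new kernel
      reboot: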

Verify the result

ansible kub-all -a "uname -a"

Linux kubm-01 5.2.9-1.el7.elrepo.x86_64 #1 SMP Fri Aug 16 08:17:55 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux

System initialization

ansible-playbook 2-basic.yml

# Run against a single host only:
ansible-playbook -i /etc/ansible/hosts 2-basic.yml --limit 172.20.101.103
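2-basic.yml is also in the repo; the standard kubeadm prerequisites it would cover typically look like this on each node (a sketch of the usual prep steps, not the playbook's actual contents):

# Typical kubeadm node prep (sketch)
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab    # kubelet requires swap off
setenforce 0                                         # SELinux to permissive
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl disable --now firewalld
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system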

Install nginx

ansible-playbook 3-nginx.yaml

Verify

# Version
[root@kubm-01 roles]# ansible kub-m -a "nginx -v"     

172.20.101.103 | CHANGED | rc=0 >>
nginx version: nginx/1.16.1
....

# Port
ansible kub-m -m shell -a  "lsof -n -i:16443"

172.20.101.103 | CHANGED | rc=0 >>
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
nginx   21392  root    5u  IPv4 434526      0t0  TCP *:16443 (LISTEN)
....

Install keepalived

ansible-playbook 4-keepalived.yml 

Output


********
ok: [172.20.101.103] => {
    "output.stdout_lines": [
        "    inet 172.20.101.253/32 scope global eth0"
    ]
.......
ok: [172.20.101.105] => {
    "output.stdout_lines": []
}

Check the VIP

[root@kubm-01 roles]# ping 172.20.101.253
PING 172.20.101.253 (172.20.101.253) 56(84) bytes of data.
64 bytes from 172.20.101.253: icmp_seq=1 ttl=64 time=0.059 ms
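The keepalived configuration is rendered by the playbook; a minimal sketch of such a VRRP config, assuming interface eth0 and the VIP above (virtual_router_id, priority, and password are illustrative):

# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other two masters
    interface eth0
    virtual_router_id 51      # must match on all three masters
    priority 100              # lower on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s_vip
    }
    virtual_ipaddress {
        172.20.101.253/32
    }
}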

Create the installation and deployment directory

mkdir -p /etc/kubeinstall
cd /etc/kubeinstall

Create the kubeadm init configuration file (run on kubm-01)

The flannel network plugin used here requires the pod network parameter --pod-network-cidr=10.244.0.0/16 (set as podSubnet in the config below).

cat <<EOF > /etc/kubeinstall/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.20.101.103
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: cnvs-kubm-101-103
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: cn-k8s-prod
controlPlaneEndpoint: "172.20.101.253:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.245.0.0/16
  podSubnet: "10.244.0.0/16"
scheduler: {}
EOF
Note that nginx is used as the proxy here:
every master runs nginx as a reverse proxy for the API Server;
172.20.101.253 is the VIP of the master nodes;
the nginx proxy listens on port 16443;
the API Server itself uses port 6443.
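For reference, a TCP (stream) proxy in front of the three API Servers might look like the sketch below; the actual template is in the 3-nginx role, so the upstream name and layout here are illustrative:

# /etc/nginx/nginx.conf fragment (sketch)
stream {
    upstream kube_apiserver {
        least_conn;
        server 172.20.101.103:6443;
        server 172.20.101.104:6443;
        server 172.20.101.105:6443;
    }
    server {
        listen 16443;
        proxy_pass kube_apiserver;
    }
}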

Initialize the cluster using the config file.

kubeadm init \
--config=/etc/kubeinstall/kubeadm-config.yaml \
--upload-certs 

Master nodes:

[kub-m]
172.20.101.103 name=cnvs-kubm-101-103  
172.20.101.104 name=cnvs-kubm-101-104  
172.20.101.105 name=cnvs-kubm-101-105  

Initialization output from the first master node

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
    --discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf \
    --control-plane --certificate-key 1c20a3656bbcc9be4b5a16bcb4c4bab5445d221d4721900bf31b5b196b733cec

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
    --discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf 
On the node where init was executed, run the following to set up the kubectl environment.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Node verification:

[root@cnvs-kubm-101-103 kubeinstall]# 

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Node status
[root@cnvs-kubm-101-103 kubeinstall]# kubectl get nodes
NAME                STATUS     ROLES    AGE     VERSION
cnvs-kubm-101-103   NotReady   master   3m35s   v1.15.3    <=== NotReady until the network plugin is installed

# Component status
[root@cnvs-kubm-101-103 kubeinstall]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

Deploy the flannel network

Install the CNI plugin with a pod CIDR that matches the podSubnet configured above; adjust to your actual environment.

Kubernetes versions move quickly; read the relevant documentation before deploying and use a network plugin version that matches your cluster:
https://github.com/coreos/flannel#flannel

# Apply the flannel manifest (from the flannel README linked above)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Verify node status:

[root@cnvs-kubm-101-103 kubeinstall]# kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
cnvs-kubm-101-103   Ready    master   4m51s   v1.15.3     <=== Ready

# All kube-system pods report STATUS Running
[root@cnvs-kubm-101-103 kubeinstall]# kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-kl66m                    1/1     Running   0          83s
coredns-5c98db65d4-xjlkl                    0/1     Running   0          83s
etcd-cnvs-kubm-101-103                      1/1     Running   0          40s
kube-apiserver-cnvs-kubm-101-103            1/1     Running   0          25s
kube-controller-manager-cnvs-kubm-101-103   1/1     Running   0          27s
kube-flannel-ds-amd64-jln7d                 1/1     Running   0          17s
kube-proxy-g2b2p                            1/1     Running   0          83s
kube-scheduler-cnvs-kubm-101-103            1/1     Running   0          35s
To add the 2nd and 3rd master nodes, run the following on each:
  kubeadm join 172.20.101.253:16443 --token m1n5s7.ktdbt3ce3yj4czm1 \
    --discovery-token-ca-cert-hash sha256:0eca032dcb2354f8c9e4f3ecfd2a19941b8a7b0c6cc4cc0764dc61a3a8e5ff68 \
    --control-plane --certificate-key e5b5fe5b9576a604b7107bbe12a8aa09d4ddc309c9d9447bc5552fdd481df627   
Then on each new master, set up the kubectl environment:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify

All master nodes are Ready

[root@cnvs-kubm-101-105 ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
cnvs-kubm-101-103   Ready    master   4m35s   v1.15.3
cnvs-kubm-101-104   Ready    master   96s     v1.15.3
cnvs-kubm-101-105   Ready    master   22s     v1.15.3

Run the following on all worker nodes

[kub-n]
172.20.101.106
172.20.101.107
172.20.101.108
172.20.101.118
172.20.101.120
172.20.101.122
172.20.101.123
172.20.101.124

Single-node join

kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
    --discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf 
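The bootstrap token has a 24h TTL (see ttl in the kubeadm config above); if it has expired, a fresh worker join command can be printed on any master:

kubeadm token create --print-join-command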

Join with ansible

ansible kub-n -m shell -a "kubeadm join 172.20.101.253:16443 --token hgep1g.fwo8y7rt8o8xqjml \
    --discovery-token-ca-cert-hash sha256:08462cf2017a1e3292ea355a7fc56c49ac713b84d5af45b649d7c8be539b97cf"

Output

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Verify:

[root@cnvs-kubm-101-104 ~]# kubectl get node
NAME                   STATUS   ROLES    AGE     VERSION
cnvs-kubm-101-103      Ready    master   8m32s   v1.15.3
cnvs-kubm-101-104      Ready    master   5m33s   v1.15.3
cnvs-kubm-101-105      Ready    master   4m19s   v1.15.3
cnvs-kubnode-101-106   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-107   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-108   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-118   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-120   Ready    <none>   28s     v1.15.3
cnvs-kubnode-101-122   Ready    <none>   13s     v1.15.3
cnvs-kubnode-101-123   Ready    <none>   13s     v1.15.3
cnvs-kubnode-101-124   Ready    <none>   2m31s   v1.15.3

Add labels

In preparation for deploying traefik

kubectl label nodes {cnvs-kubnode-101-106,cnvs-kubnode-101-107} traefik=traefik-outer --overwrite

kubectl label nodes {cnvs-kubnode-101-123,cnvs-kubnode-101-124} traefik=traefik-inner --overwrite
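These labels are matched later by the traefik workloads through a nodeSelector; a hypothetical manifest fragment:

# Hypothetical fragment of a traefik DaemonSet/Deployment spec
spec:
  template:
    spec:
      nodeSelector:
        traefik: traefik-outer    # pin to the outer traefik nodes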

Verify

[root@cnvs-kubm-101-103 kub-deploy]# kubectl get node  -l "traefik=traefik-outer"
NAME                   STATUS   ROLES    AGE     VERSION
cnvs-kubnode-101-106   Ready    <none>   5m25s   v1.15.3
cnvs-kubnode-101-107   Ready    <none>   5m25s   v1.15.3

[root@cnvs-kubm-101-103 kub-deploy]# kubectl get node  -l "traefik=traefik-inner"
NAME                   STATUS   ROLES    AGE     VERSION
cnvs-kubnode-101-123   Ready    <none>   5m18s   v1.15.3
cnvs-kubnode-101-124   Ready    <none>   7m36s   v1.15.3

Overall cluster verification

# All kube-system pods are Running
[root@cnvs-kubm-101-103 kub-deploy]# kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-kl66m                    1/1     Running   0          13m
coredns-5c98db65d4-xjlkl                    1/1     Running   0          13m
etcd-cnvs-kubm-101-103                      1/1     Running   0          13m
etcd-cnvs-kubm-101-104                      1/1     Running   0          7m57s
etcd-cnvs-kubm-101-105                      1/1     Running   0          5m26s
kube-apiserver-cnvs-kubm-101-103            1/1     Running   0          13m
kube-apiserver-cnvs-kubm-101-104            1/1     Running   1          7m47s
kube-apiserver-cnvs-kubm-101-105            1/1     Running   0          4m8s
kube-controller-manager-cnvs-kubm-101-103   1/1     Running   1          13m
kube-controller-manager-cnvs-kubm-101-104   1/1     Running   0          6m38s
kube-controller-manager-cnvs-kubm-101-105   1/1     Running   0          4m11s
kube-flannel-ds-amd64-2nfbb                 1/1     Running   2          88s
kube-flannel-ds-amd64-2pbqs                 1/1     Running   1          104s
kube-flannel-ds-amd64-4w7cb                 1/1     Running   2          92s
kube-flannel-ds-amd64-gxzhw                 1/1     Running   1          3m58s
kube-flannel-ds-amd64-jln7d                 1/1     Running   0          12m
kube-flannel-ds-amd64-lj9t4                 1/1     Running   2          92s
kube-flannel-ds-amd64-mbp8k                 1/1     Running   2          91s
kube-flannel-ds-amd64-r8t9c                 1/1     Running   1          7m57s
kube-flannel-ds-amd64-rdsfm                 1/1     Running   0          3m5s
kube-flannel-ds-amd64-w8gww                 1/1     Running   1          5m26s
kube-flannel-ds-amd64-x7rh7                 1/1     Running   2          92s
kube-proxy-4kxjv                            1/1     Running   0          5m26s
kube-proxy-4vqpf                            1/1     Running   0          92s
kube-proxy-677lf                            1/1     Running   0          92s
kube-proxy-b9kr2                            1/1     Running   0          104s
kube-proxy-dm9kd                            1/1     Running   0          3m5s
kube-proxy-g2b2p                            1/1     Running   0          13m
kube-proxy-m79jv                            1/1     Running   0          3m58s
kube-proxy-snqhr                            1/1     Running   0          92s
kube-proxy-t7mkx                            1/1     Running   0          91s
kube-proxy-z2f67                            1/1     Running   0          7m57s
kube-proxy-zjpwn                            1/1     Running   0          88s
kube-scheduler-cnvs-kubm-101-103            1/1     Running   1          13m
kube-scheduler-cnvs-kubm-101-104            1/1     Running   0          7m4s
kube-scheduler-cnvs-kubm-101-105            1/1     Running   0          4m32s

# All nodes are Ready
[root@cnvs-kubm-101-103 kub-deploy]# kubectl get nodes
NAME                   STATUS   ROLES    AGE     VERSION
cnvs-kubm-101-103      Ready    master   15m     v1.15.3
cnvs-kubm-101-104      Ready    master   9m32s   v1.15.3
cnvs-kubm-101-105      Ready    master   7m1s    v1.15.3
cnvs-kubnode-101-106   Ready    <none>   3m6s    v1.15.3
cnvs-kubnode-101-107   Ready    <none>   3m19s   v1.15.3
cnvs-kubnode-101-108   Ready    <none>   3m7s    v1.15.3
cnvs-kubnode-101-118   Ready    <none>   3m7s    v1.15.3
cnvs-kubnode-101-120   Ready    <none>   3m7s    v1.15.3
cnvs-kubnode-101-122   Ready    <none>   3m3s    v1.15.3
cnvs-kubnode-101-123   Ready    <none>   4m40s   v1.15.3
cnvs-kubnode-101-124   Ready    <none>   5m33s   v1.15.3
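As an end-to-end check of the VIP and nginx proxy path, the API server health endpoint can be queried through port 16443 (with default RBAC, /healthz is readable without authentication):

curl -k https://172.20.101.253:16443/healthz
# expected response: ok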

Bulk cluster cleanup

kubectl delete node --all
ansible kub-all -m shell -a "kubeadm reset -f"
ansible kub-all -m shell -a "rm -rf /etc/kubernetes && rm -rf /var/lib/etcd && rm -rf /var/lib/kubelet && rm -rf /var/lib/kubelet && rm -rf $HOME/.kube/config "
ansible kub-all -m shell -a "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X"
ansible kub-all -m shell -a "systemctl restart docker && systemctl enable kubelet"
ansible kub-all -m shell -a "ip link del flannel.1 && ip a|grep flannel "

Recommended environment cleanup

If Kubernetes was configured on these machines before, or the first attempt did not succeed, it is recommended to clean the environment on every node.

systemctl stop kubelet
docker rm -f -v $(docker ps -a -q)

rm -rf /etc/kubernetes
rm -rf /var/lib/etcd
rm -rf /var/lib/kubelet
rm -rf $HOME/.kube/config
ip link del flannel.1 
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

yum reinstall -y kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet

References

https://www.cnblogs.com/net2817/p/10513369.html
https://k8smeetup.github.io/docs/reference/setup-tools/kubeadm/kubeadm-config/
