This article walks through deploying a highly available (HA) architecture for a K8s cluster: adding a second master, putting an Nginx layer-4 load balancer in front of the apiservers, and using Keepalived to provide a floating VIP for failover.
Environment
OS         Role             IP
centos7.4  master-1         10.10.25.149
centos7.4  master-2         10.10.25.112
centos7.4  node-1           10.10.25.150
centos7.4  node-2           10.10.25.151
centos7.4  lb-1 (backup)    10.10.25.111
centos7.4  lb-2 (master)    10.10.25.110
           VIP              10.10.25.113
Deploy the master02 node
Copy the /opt/kubernetes/ directory from master01:

scp -r /opt/kubernetes/ root@10.10.25.112:/opt

Copy the related systemd service units from master01:

scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service root@10.10.25.112:/usr/lib/systemd/system

Edit the apiserver unit so it binds to master02's own address:

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=10.10.25.112 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.10.25.149:2379,https://10.10.25.150:2379,https://10.10.25.151:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the apiserver and the other control-plane services:

systemctl start kube-apiserver
ps -aux | grep kube
systemctl start kube-scheduler kube-controller-manager

Add the binaries to the system PATH. Edit /root/.bash_profile and append:

PATH=$PATH:$HOME/bin:/opt/kubernetes/bin

Then reload it:

source ~/.bash_profile
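The PATH change above can also be applied from a script without risking duplicate entries. A minimal sketch, assuming a helper named `append_path_once` (the name is invented for illustration; it is not part of any k8s tooling):

```shell
# Sketch: idempotently add the kubernetes bin dir to a shell profile file.
append_path_once() {
  profile="$1"
  line='PATH=$PATH:$HOME/bin:/opt/kubernetes/bin'
  # -qxF: quiet, whole-line, fixed-string match; append only when absent
  grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
}
# Usage on master02:
#   append_path_once /root/.bash_profile && source /root/.bash_profile
```

Running it twice leaves only one copy of the line, so it is safe to include in a provisioning script.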
Check component status
# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

This confirms master02 can reach the etcd cluster.
Check node status
# /opt/kubernetes/bin/kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
10.10.25.150   NotReady   <none>   14d   v1.10.3
10.10.25.151   NotReady   <none>   14d   v1.10.3

NotReady here means master02 cannot yet communicate with the nodes.
Configure a single-node LB (load balancer)
Note: when building an HA cluster, the system clocks of all nodes must be synchronized.
lb02 node configuration
Configure the nginx yum repository; nginx will act as a layer-4 (stream) proxy.
vim /etc/yum.repos.d/nginx.repo

[nginx]
name=nginx repo
baseurl=https://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

yum install -y nginx
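Instead of editing the repo file interactively, it can be written with a quoted heredoc (so `$basearch` stays literal for yum to expand). A minimal sketch, with a made-up helper name and the target path passed in explicitly:

```shell
# Sketch: write the nginx yum repo definition non-interactively.
# write_nginx_repo is a hypothetical helper; pass the target path as $1.
write_nginx_repo() {
  # the quoted 'EOF' keeps $basearch literal in the written file
  cat > "$1" <<'EOF'
[nginx]
name=nginx repo
baseurl=https://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
EOF
}
# Usage on the LB node:
#   write_nginx_repo /etc/yum.repos.d/nginx.repo && yum install -y nginx
```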
Modify the Nginx configuration file
vim /etc/nginx/nginx.conf

stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 10.10.25.149:6443;
        server 10.10.25.112:6443;
    }
    server {
        listen 10.10.25.110:6443;
        proxy_pass k8s-apiserver;
    }
}

(Note the `$` on $remote_addr in log_format; without it nginx logs the literal text "remote_addr".)
Reconfigure the nodes
cd /opt/kubernetes/cfg/

vim bootstrap.kubeconfig
  change  server: https://10.10.25.149:6443
  to      server: https://10.10.25.110:6443

vim kubelet.kubeconfig
  change  server: https://10.10.25.149:6443
  to      server: https://10.10.25.110:6443

vim kube-proxy.kubeconfig
  change  server: https://10.10.25.149:6443
  to      server: https://10.10.25.110:6443

systemctl restart kubelet
systemctl restart kube-proxy
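The three kubeconfig edits above are the same substitution three times, so they can be done in one pass with sed. A minimal sketch; the function name is invented for illustration:

```shell
# Sketch: repoint every kubeconfig in a directory at a new apiserver endpoint.
repoint_kubeconfigs() {
  dir="$1"; old="$2"; new="$3"
  for f in "$dir"/bootstrap.kubeconfig "$dir"/kubelet.kubeconfig "$dir"/kube-proxy.kubeconfig; do
    if [ -f "$f" ]; then
      # '|' as the sed delimiter avoids escaping the slashes in the URL
      sed -i "s|server: $old|server: $new|" "$f"
    fi
  done
}
# Usage on each node:
#   repoint_kubeconfigs /opt/kubernetes/cfg https://10.10.25.149:6443 https://10.10.25.110:6443
#   systemctl restart kubelet kube-proxy
```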
After restarting, neither master01 nor master02 can communicate with the nodes. The node logs show a certificate error: the apiserver (kubernetes) certificate was issued only for the master01 address, not for the LB addresses, so TLS verification fails when the nodes connect through the LB. We therefore need to regenerate the certificate with the LB and VIP addresses included.
Regenerate the api-server certificate on master01
Edit the certificate JSON file:

[root@master ssl]# cat kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.10.25.149",
    "10.10.25.112",
    "10.10.25.110",
    "10.10.25.111",
    "10.10.25.113",
    "10.1.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Note: the IPs in the JSON include both master IPs, all LB node IPs, and the VIP, because the end goal is an Nginx + Keepalived load-balancing architecture with no single point of failure.

Generate the certificate:

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Copy it to the relevant nodes:

cp kubernetes*.pem /opt/kubernetes/ssl/
scp kubernetes*.pem 10.10.25.112:/opt/kubernetes/ssl/
scp kubernetes*.pem 10.10.25.150:/opt/kubernetes/ssl/
scp kubernetes*.pem 10.10.25.151:/opt/kubernetes/ssl/
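Before copying the regenerated certificate around, it is worth confirming its Subject Alternative Names really contain the LB and VIP addresses. A minimal sketch using openssl; `cert_has_san_ip` is a hypothetical helper, not part of cfssl:

```shell
# Sketch: check that a certificate's SANs include a given IP address.
cert_has_san_ip() {
  cert="$1"; ip="$2"
  # -text dumps the cert; the SAN entries appear on the line after the header
  openssl x509 -in "$cert" -noout -text \
    | grep -A1 'Subject Alternative Name' \
    | grep -q "IP Address:$ip"
}
# Usage after cfssl gencert:
#   cert_has_san_ip kubernetes.pem 10.10.25.113 && echo "VIP present in SANs"
```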
Restart the services on the master nodes
systemctl restart kube-scheduler kube-controller-manager kube-apiserver
Restart the services on the nodes
systemctl restart kube-proxy kubelet
Verify
# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
10.10.25.150   Ready    <none>   15d   v1.10.3
10.10.25.151   Ready    <none>   15d   v1.10.3

The single-node load balancer now works. One caveat: even with everything configured correctly, nodes may still show NotReady. The node logs in that case show the nodes failing to register, because their certificate signing requests were never approved. Approve the pending CSRs manually on master01:

kubectl get csr | grep 'Pending' | awk 'NR>0{print $1}' | xargs kubectl certificate approve
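The approval one-liner simply takes the first column of every Pending row of `kubectl get csr` and feeds the names to `kubectl certificate approve`. The text-processing half can be seen in isolation against fabricated output (the CSR names below are made up):

```shell
# Sketch: the column-extraction part of the CSR-approval one-liner,
# run against fabricated `kubectl get csr` output instead of a live cluster.
sample='NAME        AGE  REQUESTOR           CONDITION
csr-aaaaa   1m   kubelet-bootstrap   Pending
csr-bbbbb   5m   kubelet-bootstrap   Approved,Issued
csr-ccccc   2m   kubelet-bootstrap   Pending'
pending=$(printf '%s\n' "$sample" | grep 'Pending' | awk 'NR>0{print $1}')
printf '%s\n' "$pending"   # prints csr-aaaaa and csr-ccccc, one per line
# On a real master this list is piped to: xargs kubectl certificate approve
```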
lb01 node configuration
Install nginx the same way as on lb02 (not repeated here); the configuration is identical except for the bound IP.
vim /etc/nginx/nginx.conf

stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 10.10.25.149:6443;
        server 10.10.25.112:6443;
    }
    server {
        listen 10.10.25.111:6443;
        proxy_pass k8s-apiserver;
    }
}
Use Keepalived to make the LB nodes highly available
Install keepalived (required on both LB nodes)
yum install keepalived -y
Make lb02 the keepalived MASTER node
Modify the keepalived configuration on lb02
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks nginx status
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.25.113/24
    }
    track_script {
        check_nginx
    }
}
Write the nginx health-check script
cat /etc/keepalived/check_nginx.sh

#!/bin/sh
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")   # count nginx processes
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

When nginx dies, the script stops keepalived, which makes the VIP fail over to the backup node. Make the script executable:

chmod +x /etc/keepalived/check_nginx.sh
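The `egrep -cv "grep|$$"` in the script counts the `ps` lines that are neither the grep itself nor the script's own process. That counting logic can be checked against canned `ps -ef` output; in this sketch the self-PID is passed as a parameter (in the real script it is `$$`), and the sample lines are fabricated:

```shell
# Sketch: the process-counting logic from check_nginx.sh, applied to
# fabricated `ps -ef` lines instead of the live process table.
count_nginx() {
  self="$1"   # PID to exclude (the real script uses its own $$)
  # exclude the grep line and our own process, count the rest
  grep nginx | grep -Ecv "grep|$self"
}
sample='root      100    1  0 10:00 ?  00:00:00 nginx: master process
nginx     101  100  0 10:00 ?  00:00:00 nginx: worker process
root      102   99  0 10:01 ?  00:00:00 grep nginx'
printf '%s\n' "$sample" | count_nginx 99999   # prints 2
```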
Start keepalived
systemctl start keepalived
Check whether the VIP is active
ip addr

2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:0c:29:2e:86:82 brd ff:ff:ff:ff:ff:ff
    inet 10.10.25.110/24 brd 10.10.25.255 scope global dynamic ens192
       valid_lft 71256sec preferred_lft 71256sec
    inet 10.10.25.113/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::58b8:49be:54a7:4c43/64 scope link
       valid_lft forever preferred_lft forever
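Rather than eyeballing the `ip addr` output, the VIP check can be scripted. A minimal sketch that reads `ip addr`-style text on stdin so it can be tried without the real interface (`holds_vip` is an invented helper name):

```shell
# Sketch: check whether a given VIP appears in `ip addr` output read on stdin.
holds_vip() {
  # match "inet <vip>/" so 10.10.25.11 does not match 10.10.25.110
  grep -q "inet $1/"
}
# Usage on an LB node:
#   ip addr show ens192 | holds_vip 10.10.25.113 && echo "this node holds the VIP"
sample='2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.10.25.110/24 brd 10.10.25.255 scope global dynamic ens192
    inet 10.10.25.113/32 scope global ens192'
printf '%s\n' "$sample" | holds_vip 10.10.25.113 && echo "VIP present"   # prints "VIP present"
```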
Configure keepalived on lb01
Modify the keepalived configuration for the BACKUP role
cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.25.113
    }
    track_script {
        check_nginx
    }
}
Write the nginx health-check script

Create the same /etc/keepalived/check_nginx.sh script as on lb02 and make it executable:

chmod +x /etc/keepalived/check_nginx.sh
Start keepalived on lb01
systemctl start keepalived
Keepalived failover test
To test keepalived failover:

1. Open a window and continuously ping the VIP.
2. Kill nginx on the master LB node.
3. Watch whether the VIP migrates to the backup node and whether any pings are lost.
4. Start nginx and keepalived on the master LB node again.
5. Watch whether the VIP floats back to the master node.
Connect the K8s cluster to the VIP
Point the nodes at the VIP
cd /opt/kubernetes/cfg/

vim bootstrap.kubeconfig
  change  server: https://10.10.25.110:6443
  to      server: https://10.10.25.113:6443

vim kubelet.kubeconfig
  change  server: https://10.10.25.110:6443
  to      server: https://10.10.25.113:6443

vim kube-proxy.kubeconfig
  change  server: https://10.10.25.110:6443
  to      server: https://10.10.25.113:6443
Restart the services
systemctl restart kubelet systemctl restart kube-proxy
Modify the nginx configuration file (required on both LB nodes). Bind the stream listener to 0.0.0.0:6443 instead of a fixed address, so nginx can start no matter which node currently holds the VIP.
cat /etc/nginx/nginx.conf

user nginx;
worker_processes 2;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 10.10.25.149:6443;
        server 10.10.25.112:6443;
    }
    server {
        listen 0.0.0.0:6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
Restart Nginx
systemctl restart nginx
Verify access through the VIP
kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
10.10.25.150   Ready    <none>   15d   v1.10.3
10.10.25.151   Ready    <none>   15d   v1.10.3

Both nodes are Ready, so the cluster is now successfully reached through the VIP.