This post walks through installing a Kubernetes 1.13 high-availability cluster with kubeadm. First, add the cluster hostnames to /etc/hosts on every node:
vim /etc/hosts
192.168.3.147 test-master01
192.168.3.148 test-master02
192.168.3.149 test-master03
192.168.3.150 test-work01
Set up passwordless SSH to the other nodes:
ssh-keygen
ssh-copy-id test-master01
ssh-copy-id test-master02
ssh-copy-id test-master03
ssh-copy-id test-work01
Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Disable swap
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
The sed command above comments out the swap entry in /etc/fstab so swap is not remounted on reboot. Use free -m to confirm swap is off.
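For example, a quick check after swapoff should report a swap total of 0:
free -m | grep -i swap
# Expected: Swap:  0  0  0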
Disable SELinux
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
Configure forwarding-related kernel parameters, otherwise errors may occur later:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl --system
Run the commands above on every Kubernetes node so the changes take effect.
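One note: the net.bridge.bridge-nf-call-* keys only exist when the br_netfilter module is loaded. If sysctl --system complains that those keys cannot be found, load the module first and make it persistent, roughly:
modprobe br_netfilter
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
sysctl --system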
Enable IPVS for kube-proxy
Run the following on all worker nodes:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules so the required modules are loaded automatically after a reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the kernel modules loaded correctly.
Next, make sure the ipset package is installed on every node. To make it easier to inspect the ipvs proxy rules, it is also worth installing the management tool ipvsadm:
yum install ipset -y
yum install ipvsadm -y
If these prerequisites are not met, kube-proxy falls back to iptables mode even when its configuration enables ipvs.
System tuning parameters
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
Install the base packages and Docker on all nodes:
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl telnet rsync bind-utils
yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-18.06.1.ce-3.el7.x86_64.rpm
Edit /etc/docker/daemon.json and add:
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
Restart Docker:
systemctl daemon-reload
systemctl enable docker
systemctl start docker
Note: if you use the overlay2 storage driver, daemon.json looks like this:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "10"
  },
  "registry-mirrors": ["https://pqbap4ya.mirror.aliyuncs.com"],
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
overlay2 requires an ext4 filesystem, or an xfs filesystem formatted with ftype=1 (mkfs.xfs -n ftype=1 /path/to/your/device).
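If a node already uses xfs, you can check whether it was created with ftype=1; the mount point below is only an example, adjust it to wherever Docker's data lives:
xfs_info / | grep ftype
# ftype=1 means overlay2 is supported; ftype=0 means the filesystem must be recreated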
haproxy + keepalived on the three master nodes
(installation steps omitted)
VIP : 192.168.3.80
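Since the haproxy + keepalived installation itself is omitted above, the following is only a minimal sketch of what the configuration could look like: haproxy listens on the VIP's port 8443 and forwards to the three apiservers on 6443, while keepalived floats the VIP 192.168.3.80 across the masters. The interface name, priority, and router id are assumptions and must be adapted to your environment.
yum install -y haproxy keepalived
cat <<EOF >> /etc/haproxy/haproxy.cfg
# Forward the control-plane endpoint port 8443 to the three apiservers
frontend k8s-apiserver
    bind *:8443
    mode tcp
    default_backend k8s-masters
backend k8s-masters
    mode tcp
    balance roundrobin
    server test-master01 192.168.3.147:6443 check
    server test-master02 192.168.3.148:6443 check
    server test-master03 192.168.3.149:6443 check
EOF
cat <<EOF > /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other two masters
    interface eth0            # assumed NIC name, adjust to your environment
    virtual_router_id 51
    priority 100              # use a lower priority on the other masters
    advert_int 1
    virtual_ipaddress {
        192.168.3.80
    }
}
EOF
systemctl enable haproxy keepalived
systemctl start haproxy keepalived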
Run on all nodes:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
systemctl enable kubelet.service
The kubeadm configuration below lets kubeadm install etcd itself (stacked etcd):
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
apiServer:
  certSANs:
  - "192.168.3.80"
controlPlaneEndpoint: "192.168.3.80:8443"
networking:
  podSubnet: "10.50.0.0/16"
imageRepository: "harbor.oneitfarm.com/k8s-cluster-images"
EOF
Calico is used as the CNI, so podSubnet is set to "10.50.0.0/16".
192.168.3.80 is the VIP of the haproxy + keepalived setup installed earlier.
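Since imageRepository points at a private registry, it can be worth verifying and pre-pulling the control-plane images before running kubeadm init; kubeadm provides subcommands for this:
kubeadm config images list --config kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml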
Initialize the first master and set up kubectl:
kubeadm init --config kubeadm-config.yaml
...
[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Install Calico following the official documentation:
Installing with the Kubernetes API datastore—50 nodes or less:
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
It is better to wget these manifests first and adjust the configuration for your own network, in particular:
- name: CALICO_IPV4POOL_CIDR
  value: "10.50.0.0/16"
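For example, the manifests can be downloaded and the pool CIDR adjusted before applying; the sed pattern below assumes the manifest's default value is 192.168.0.0/16, so check the file first:
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# Replace the default pool CIDR with the podSubnet from kubeadm-config.yaml
sed -i 's#192.168.0.0/16#10.50.0.0/16#g' calico.yaml
kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml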
Copy the certificates and admin.conf from the first master to the other control-plane nodes, e.g. master02:
ssh root@master02 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@master02:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master02:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@master02:/etc/kubernetes/pki/etcd
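The commands above only cover master02; the same files have to reach every additional control-plane node. A small loop (hostnames taken from the /etc/hosts entries earlier, adjust if yours differ) saves some typing:
for host in test-master02 test-master03; do
  ssh root@${host} "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes
  scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@${host}:/etc/kubernetes/pki
  scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd
done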
Run the following command on the other master nodes to join them to the cluster (the --experimental-control-plane flag joins the node as an additional control-plane node):
kubeadm join 192.168.3.80:8443 --token pv2a9n.uh3yx1082ffpdf7n --discovery-token-ca-cert-hash sha256:872cac35b0bfec28fab8f626a727afa6529e2a63e3b7b75a3397e6412c06ebc5 --experimental-control-plane
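Worker nodes join with the same token and CA hash but without --experimental-control-plane, roughly:
kubeadm join 192.168.3.80:8443 --token pv2a9n.uh3yx1082ffpdf7n --discovery-token-ca-cert-hash sha256:872cac35b0bfec28fab8f626a727afa6529e2a63e3b7b75a3397e6412c06ebc5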
Enable ipvs: edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs", then recreate the kube-proxy pods:
kubectl edit configmap kube-proxy -n kube-system
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" --grace-period=0 --force -n kube-system")}'
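Once the kube-proxy pods have been recreated, you can confirm that ipvs mode is active, for example by checking a pod's log for the "Using ipvs Proxier" message or by listing the ipvs rules with ipvsadm:
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier
ipvsadm -Ln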
Check the cluster status:
kubectl get nodes -o wide
kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
In this setup etcd is installed automatically by kubeadm, i.e. it runs in containers; you can exec into the etcd container to inspect it:
kubectl exec -ti -n kube-system etcd-an-master01 sh
/ # export ETCDCTL_API=3
/ # etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member list
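The same flags work for other etcdctl v3 subcommands, for example a health check of the local member:
/ # etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key endpoint health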
If cluster initialization runs into problems, the following commands clean things up:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
In a kubeadm-initialized cluster, Pods are not scheduled onto master nodes for security reasons, i.e. the masters do not take workloads. This is because the master nodes carry the node-role.kubernetes.io/master:NoSchedule taint:
kubectl describe node master01 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
Check the roles of the masters and workers that joined the cluster; if the scheduling behaviour is not what you want, adjust it as follows:
[root@an-master01 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
an-master01   Ready    master   4h49m   v1.13.1
an-master02   Ready    <none>   4h42m   v1.13.1
an-master03   Ready    master   86m     v1.13.1
an-work01     Ready    <none>   85m     v1.13.1
Check the current state:
kubectl describe nodes/an-master02 | grep -E '(Roles|Taints)'
Roles:              <none>
Taints:             <none>
Mark the node as a master and make it unschedulable:
kubectl label node an-master02 node-role.kubernetes.io/master=
kubectl taint nodes an-master02 node-role.kubernetes.io/master=:NoSchedule
To remove the restriction:
kubectl taint nodes an-master03 node-role.kubernetes.io/master-
Worker node settings:
kubectl label node an-work01 node-role.kubernetes.io/work=
kubectl describe nodes/an-work01 | grep -E '(Roles|Taints)'
Roles:              work
Taints:             <none>