Tags: installing Kubernetes 1.12.1 on CentOS 7
System: CentOS 7, kernel 4.19.0-1.el7.elrepo.x86_64
Kubernetes: 1.12.1
Architecture: one master, one node
First, check the current value with the command:
cat /proc/sys/net/bridge/bridge-nf-call-iptables
If the value is 1, the following steps are unnecessary; otherwise continue below to enable the feature.
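The check above can be sketched as a small helper (check_flag is a hypothetical name; the path is a parameter so the snippet can be tried against any file):

```shell
# check_flag is a hypothetical helper: prints "enabled" when the flag file
# contains 1, and "needs-enabling" otherwise.
check_flag() {
  if [ "$(cat "$1" 2>/dev/null)" = "1" ]; then
    echo "enabled"
  else
    echo "needs-enabling"
  fi
}
# On a real host: check_flag /proc/sys/net/bridge/bridge-nf-call-iptables
```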
1.1 Modify the file
sed -i '7,9s/0/1/g' /usr/lib/sysctl.d/00-system.conf
1.2 Load the netfilter module (you can check whether it is loaded with lsmod | grep br_netfilter)
modprobe br_netfilter
1.3 Apply the changes
sysctl -p /usr/lib/sysctl.d/00-system.conf
2.1 Modify the file
echo 'vm.swappiness = 0' >> /usr/lib/sysctl.d/00-system.conf
2.2 Apply the changes
sysctl -p /usr/lib/sysctl.d/00-system.conf
2.3 Turn off swap
swapoff -a
2.4 Comment out the swap entry in /etc/fstab (disables mounting swap at boot)
Before:
/dev/mapper/cl-swap swap swap defaults 0 0
After:
#/dev/mapper/cl-swap swap swap defaults 0 0
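The fstab edit can also be done with sed; a sketch, demonstrated on a throwaway file so the real /etc/fstab is untouched:

```shell
# Demonstrated on a temp file; replace "$tmp" with /etc/fstab on a real host.
tmp=$(mktemp)
printf '%s\n' '/dev/mapper/cl-root /    xfs  defaults 0 0' \
              '/dev/mapper/cl-swap swap swap defaults 0 0' > "$tmp"
# Prefix uncommented lines whose filesystem type is swap with '#'
sed -i '/^#/!{/[[:space:]]swap[[:space:]]/s/^/#/}' "$tmp"
cat "$tmp"
```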
Add host entries for all three nodes:
echo -e '192.168.2.168 node1.ztpt.com\n192.168.2.162 node2.ztpt.com\n192.168.2.170 node3.ztpt.com' >> /etc/hosts
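If this command may be run more than once, an idempotent variant avoids duplicate entries (add_host is a hypothetical helper, demonstrated on a temporary file; point it at /etc/hosts on a real host):

```shell
# add_host appends "ip name" only when the name is not already present,
# so re-running it is harmless.
hostsfile=$(mktemp)
add_host() { grep -qw "$3" "$2" || echo "$1 $3" >> "$2"; }
add_host 192.168.2.168 "$hostsfile" node1.ztpt.com
add_host 192.168.2.168 "$hostsfile" node1.ztpt.com   # second call is a no-op
cat "$hostsfile"
```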
Confirm that SELinux and the firewall are disabled:
[root@node1 ~]# getenforce
Disabled
[root@node1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
[root@node1 ~]# systemctl status iptables
● iptables.service - IPv4 firewall with iptables
Loaded: loaded (/usr/lib/systemd/system/iptables.service; disabled; vendor preset: disabled)
Active: inactive (dead)
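If getenforce does not report Disabled, the usual change is SELINUX=disabled in /etc/selinux/config plus setenforce 0 for the running system; a sketch of the file edit, demonstrated on a temporary copy:

```shell
# Demonstrated on a temp copy so the real /etc/selinux/config is untouched.
cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
cat "$cfg"
# On a real host, also run:
#   setenforce 0
#   systemctl stop firewalld && systemctl disable firewalld
```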
Add the Docker CE yum repository:
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install the dependencies
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Install docker-ce
yum install -y docker-ce
Enable the docker service at boot
systemctl enable docker.service
Configure the Docker registry mirror (Aliyun provides a free mirror service)
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://kzflpq4b.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
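Before restarting docker it is worth confirming that daemon.json is valid JSON; a sketch, assuming python3 is available and using a temporary file:

```shell
# Validates the JSON before it goes to /etc/docker/daemon.json.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "registry-mirrors": ["https://kzflpq4b.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json OK"
```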
tee /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Build the yum metadata cache
sudo yum makecache
Install kubelet, kubectl, and kubeadm (pin the versions so they match the v1.12.1 images pulled below)
Note: if you get a GPG key error, import the key manually with rpm --import, or disable gpgcheck.
yum install -y kubelet-1.12.1 kubectl-1.12.1 kubeadm-1.12.1
Enable kubelet at boot
systemctl enable kubelet.service
#!/bin/sh
# Pull the images from mirror registries
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.1
docker pull mirrorgooglecontainers/kube-proxy:v1.12.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2
# Re-tag them with the k8s.gcr.io names kubeadm expects
docker tag mirrorgooglecontainers/kube-proxy:v1.12.1 k8s.gcr.io/kube-proxy:v1.12.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.12.1 k8s.gcr.io/kube-scheduler:v1.12.1
docker tag mirrorgooglecontainers/kube-apiserver:v1.12.1 k8s.gcr.io/kube-apiserver:v1.12.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.12.1 k8s.gcr.io/kube-controller-manager:v1.12.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
# Remove the now-unneeded mirror tags
docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.12.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.2
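The pull/tag/rmi sequence above can also be driven by a loop; a dry-run sketch that only prints the commands (drop echo to execute them):

```shell
# Dry run: echo prints each command instead of executing it.
MIRROR=mirrorgooglecontainers
TARGET=k8s.gcr.io
for img in kube-apiserver:v1.12.1 kube-controller-manager:v1.12.1 \
           kube-scheduler:v1.12.1 kube-proxy:v1.12.1 pause:3.1 etcd:3.2.24; do
  echo docker pull "$MIRROR/$img"
  echo docker tag "$MIRROR/$img" "$TARGET/$img"
  echo docker rmi "$MIRROR/$img"
done
# coredns lives in its own repository:
echo docker pull coredns/coredns:1.2.2
echo docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
echo docker rmi coredns/coredns:1.2.2
```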
Initialize the master node (the version is pinned to v1.12.1 to match the images pulled above):
kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
The output looks like the following (save it, you will need it later):
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.2.168:6443 --token j1v9o1.wxd0xz5mv1qgo6b1 --discovery-token-ca-cert-hash sha256:6ae6c734198b0a69e73c8d7b576e8692514e3aa642f9431d21234e86f35b316f
Run the commands shown in the prompt:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install flannel (this step automatically downloads images and starts pods; depending on your network it may be slow, so be patient!)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Check the node status:
[root@node1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
node1.ztpt.com NotReady master 6m45s v1.12.1
List the namespaces:
[root@node2 ~]# kubectl get namespace
NAME STATUS AGE
default Active 12h
kube-public Active 12h
kube-system Active 12h
Check the pods (watch the READY and STATUS columns; if something is wrong, inspect the pod and kubelet logs with the commands later in this article, and as a last resort reset and re-initialize, also described later):
[root@node1 ~]# kubectl get pods --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-576cbf47c7-5tgnm 1/1 Running 1 18h 10.244.0.47 node1.ztpt.com <none>
coredns-576cbf47c7-r9fr6 1/1 Running 1 18h 10.244.0.46 node1.ztpt.com <none>
etcd-node1.ztpt.com 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-apiserver-node1.ztpt.com 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-controller-manager-node1.ztpt.com 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-flannel-ds-amd64-rx9jw 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-proxy-nnmpj 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
kube-scheduler-node1.ztpt.com 1/1 Running 1 18h 192.168.2.168 node1.ztpt.com <none>
First make sure the prerequisites above have been completed on the node, and that all pods on the master are in a healthy state.
Run the join command
Because the registries are blocked, export these 3 images from the master and load them on the node:
k8s.gcr.io/kube-proxy v1.12.1
quay.io/coreos/flannel v0.10.0-amd64
k8s.gcr.io/pause 3.1
Export: docker save <image> > <image>.tar
Import: docker load < <image>.tar
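A sketch that derives a tar filename from each image name and prints the save commands (dry run; drop echo to execute on the master):

```shell
# Dry run: builds a filesystem-safe filename for each image, then prints
# the docker save command instead of executing it.
for img in k8s.gcr.io/kube-proxy:v1.12.1 \
           quay.io/coreos/flannel:v0.10.0-amd64 \
           k8s.gcr.io/pause:3.1; do
  f="$(echo "$img" | tr '/:' '__').tar"
  echo docker save "$img" -o "$f"
done
```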
My join failed with an error about a missing IPVS support module; it succeeded after loading the ip_vs, ip_vs_rr, ip_vs_wrr and ip_vs_sh modules with modprobe.
kubeadm join 192.168.2.168:6443 --token 1evrs8.iz8bl6l77jtal4na --discovery-token-ca-cert-hash sha256:fd509be1a3362afbff39ed807b5c25ef7a5034feb6876df1b76c0a0d8eb637db
# First drain the node (node2.ztpt.com is the node name)
kubectl drain node2.ztpt.com --delete-local-data --force --ignore-daemonsets
# Then delete the node
kubectl delete node node2.ztpt.com
# Reset kubeadm and clean up the leftover network interfaces and etcd data
kubeadm reset
ip link delete flannel.1
ip link delete cni0
rm -rf /var/lib/etcd/*
# Follow the kubelet logs
[root@node2 ~]# journalctl -u kubelet -f
[root@node2 ~]# kubectl logs -f kube-apiserver-node2.ztpt.com --namespace=kube-system
# -f streams the output, like the -f in tail -f
# kube-apiserver-node2.ztpt.com is the pod name
Create a new token:
kubeadm token create
List existing tokens with kubeadm token list; an expired token cannot be reused, so create a new one. Recompute the CA certificate hash with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Then join the cluster with the new token and ca-cert hash.
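The hash pipeline can be tried end to end; a sketch that computes it for a throwaway self-signed certificate (on the master, point CERT at /etc/kubernetes/pki/ca.crt instead):

```shell
# Generate a disposable self-signed CA cert, then run the same pipeline
# used for --discovery-token-ca-cert-hash.
CERT=$(mktemp)
KEY=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$KEY" -out "$CERT" \
  -days 1 -subj "/CN=demo-ca" 2>/dev/null
HASH=$(openssl x509 -pubkey -in "$CERT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```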