Use kubeadm to set up a minimal Kubernetes instance for learning purposes (a single master, plus an optional worker). The environment and software are summarized below:
| Component | Version | Notes |
|---|---|---|
| OS | Ubuntu 18.04 | 192.168.132.152 my.servermaster.local / 192.168.132.154 my.worker01.local |
| Docker | 18.06.1~ce~3-0~ubuntu | Highest version supported by the latest k8s (1.12.3); must be pinned |
| Kubernetes | 1.12.3 | Target software |
The systems and software above are the latest versions as of November 2018. Note that Docker in particular must be pinned to a version that k8s supports.
Disable the system swap partition (required by the kubelet):
```bash
swapoff -a
```
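`swapoff -a` only lasts until the next reboot. As a sketch (assuming the swap entry lives in the standard /etc/fstab), you can comment it out so swap stays off permanently:

```bash
# Comment out any swap entry in /etc/fstab so swap stays disabled
# after a reboot (sketch; adjust the pattern if your fstab differs)
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```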
Install the container runtime. Docker is the default, so installing Docker is enough:
```bash
apt-get install docker-ce=18.06.1~ce~3-0~ubuntu
```
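Optionally (not in the original steps), hold the Docker package as well, so a routine apt upgrade cannot move it past the version Kubernetes 1.12.3 supports, and confirm the installed version:

```bash
# Pin docker-ce at the installed version and verify it
sudo apt-mark hold docker-ce
docker version --format '{{.Server.Version}}'   # expect 18.06.1-ce
```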
Install kubeadm. The following commands match the official documentation, except the package source is switched to the Aliyun mirror:
```bash
apt-get update && apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```
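A quick optional check that the pinned versions were installed:

```bash
# Both should report v1.12.x
kubeadm version -o short
kubectl version --client --short
```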
Because k8s.gcr.io cannot be reached from mainland China, the required images must be downloaded ahead of time. Here we pull them from the Aliyun mirror registry and re-tag the downloaded images as k8s.gcr.io images.
```bash
# a. List the images that need to be downloaded
kubeadm config images list --kubernetes-version=v1.12.3
k8s.gcr.io/kube-apiserver:v1.12.3
k8s.gcr.io/kube-controller-manager:v1.12.3
k8s.gcr.io/kube-scheduler:v1.12.3
k8s.gcr.io/kube-proxy:v1.12.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

# b. Create a script that downloads each image, re-tags it, and removes the old tag
vim ./load_images.sh
```

```bash
#!/bin/bash

### config the image map
declare -A images
images["k8s.gcr.io/kube-apiserver:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3"
images["k8s.gcr.io/kube-controller-manager:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3"
images["k8s.gcr.io/kube-scheduler:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3"
images["k8s.gcr.io/kube-proxy:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3"
images["k8s.gcr.io/pause:3.1"]="registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
images["k8s.gcr.io/etcd:3.2.24"]="registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24"
images["k8s.gcr.io/coredns:1.2.2"]="registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2"

### pull, re-tag, and drop the mirror tag for each image
for key in "${!images[@]}"
do
    docker pull "${images[$key]}"
    docker tag "${images[$key]}" "$key"
    docker rmi "${images[$key]}"
done

### check the result
docker images
```

```bash
# c. Run the script to prepare the images
sudo chmod +x load_images.sh
./load_images.sh
```
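Note that a worker node also pulls from k8s.gcr.io when it joins (at minimum kube-proxy and pause), so run the same script on each worker before `kubeadm join`. A sketch, assuming SSH access to the worker host from the table above:

```bash
# Copy the re-tagging script to the worker and run it there
scp ./load_images.sh my.worker01.local:~/
ssh my.worker01.local 'chmod +x ~/load_images.sh && ~/load_images.sh'
```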
Initialization needs at least two parameters:
- `--kubernetes-version`: prevents kubeadm from going online to look up the version
- `--pod-network-cidr`: required by the flannel network plugin configuration
```bash
### Run the init command
sudo kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16

### The tail of the output looks like this
... ...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
```
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
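A quick sanity check that kubectl can now reach the new cluster:

```bash
kubectl cluster-info   # should print the master and DNS endpoints
```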
Check the node status with a non-root account:
```bash
kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
servermaster   NotReady   master   28m   v1.12.3
```
There is one master node, but its status is NotReady. At this point a decision is needed:
If you want a single-machine setup, remove the master taint so pods can be scheduled on the master:
```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```
If you want to keep building out the cluster, continue with the remaining steps; the master's NotReady status can be ignored for now.
Review the contents of kube-flannel.yml and copy them to a local file, in case the terminal cannot fetch the remote file, then apply it:
```bash
kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```
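You can watch the network plugin and DNS pods come up; the master switches to Ready once the CNI is running:

```bash
# Flannel runs as a DaemonSet in kube-system; once it and coredns
# reach Running, the node status flips to Ready
kubectl get pods -n kube-system -o wide
kubectl get nodes
```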
To add a worker node, repeat the installation steps (section 1) on another server. A worker only needs the basic installation; steps 2.1~2.3 and everything after them are not required. Once installation finishes, log in to the new worker node and run the join command obtained at the end of the previous step:
```bash
kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
... ...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
```bash
kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
servermaster   Ready    master   94m   v1.12.3
worker01       Ready    <none>   54m   v1.12.3
```
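If the join token printed by `kubeadm init` has expired (the default TTL is 24 hours), generate a fresh join command on the master instead of reusing the old one:

```bash
kubeadm token create --print-join-command
```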
Copy the contents of kubernetes-dashboard.yaml to a local file, in case the command line cannot access the remote file. Then edit the last section, Dashboard Service, adding `type` and `nodePort`, so it ends up like this:
```yaml
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
```
Run the command to create the dashboard service on the master node:
```bash
kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
```
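Before opening the browser, it helps to confirm the NodePort service is actually exposed on 30000:

```bash
kubectl -n kube-system get service kubernetes-dashboard
# NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
# kubernetes-dashboard   NodePort   10.x.x.x     <none>        443:30000/TCP   1m
```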
In a browser, visit the worker node's IP and port over HTTPS, e.g. https://my.worker01.local:30000/#!/login, to verify that the dashboard installed successfully.
Fetch a secret via kubectl, then retrieve that secret's full token and paste it into the Token option on the login page from the previous step:
```bash
### List all secrets in the kube-system namespace
kubectl -n kube-system get secret
NAME                                             TYPE                                  DATA   AGE
clusterrole-aggregation-controller-token-vxzmt   kubernetes.io/service-account-token   3      10h

### Show the token; here we pick the clusterrole-aggregation-controller-token-***** secret
kubectl -n kube-system describe secret clusterrole-aggregation-controller-token-vxzmt
Name:         clusterrole-aggregation-controller-token-vxzmt
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: clusterrole-aggregation-controller
              kubernetes.io/service-account.uid: dfb9d9c3-f646-11e8-9861-000c29b7e604

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVycm9sZS1hZ2dyZWdhdGlvbi1jb250cm9sbGVyLXRva2VuLXZ4em10Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXJyb2xlLWFnZ3JlZ2F0aW9uLWNvbnRyb2xsZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkZmI5ZDljMy1mNjQ2LTExZTgtOTg2MS0wMDBjMjliN2U2MDQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3RlcnJvbGUtYWdncmVnYXRpb24tY29udHJvbGxlciJ9.MfjiMrmyKl1GUci1ivD5RNrzo_s6wxXwFzgM_3NIAmTCfRQreYdhat3yyd6agaCLpUResnNC0ZGRi4CBs_Jgjqkovhb80V05_YVIvCrlf7xHxBKEtGfkJ-qLDvtAwR5zrXNNd0Ge8hTRxw67gZ3lGMkPpw5nfWmc0rzk90xTTQD1vAtrHMvxjr3dVXph5rT8GNuCSXA_J6o2AwYUbaKCc2ugdx8t8zX6oFJfVcw0ZNYYYIyxoXzzfhdppORtKR9t9v60KsI_-q0TxY-TU-JBtzUJU-hL6lB5MOgoBWpbQiV-aG8Ov74nDC54-DH7EhYEzzsLci6uUQCPlHNvLo_J2A
```
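Borrowing the clusterrole-aggregation-controller token works for a lab. A common alternative (a sketch, not part of the original flow) is to create a dedicated admin ServiceAccount bound to cluster-admin and use its token instead:

```bash
# Create a dashboard admin account and bind it to cluster-admin
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin
# Print its token for the dashboard login page
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')
```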
The master is built and the worker has joined, but `kubectl get nodes` still shows NotReady.
Cause: too tangled to state precisely and still an open k8s issue; reading the issue, it boils down to a CNI (Container Network Interface) problem, which installing flannel fixes.
Fix: install the flannel plugin (`kubectl apply -f kube-flannel.yml`).
A configuration mistake was made and the cluster needs to be rebuilt from scratch.
Fix: `kubeadm reset`.
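`kubeadm reset` removes the cluster state but can leave CNI and kubeconfig leftovers behind; a fuller cleanup sketch (paths are the usual defaults) before re-running `kubeadm init`:

```bash
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d $HOME/.kube/config
sudo systemctl restart kubelet
```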
The dashboard cannot be accessed.
Cause: `Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0"`.
Fix, either of the following:
- Change `k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0` in the kubernetes-dashboard.yaml file to `registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0`.
- Download the image in advance and set up the tag (see the sketch after this list); note that the download must happen on the worker node. More specific information is available via: `kubectl describe pod kubernetes-dashboard-85477d54d7-wzt7 -n kube-system`.
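The pre-download option follows the same pull/re-tag pattern as load_images.sh; a sketch to run on the worker node:

```bash
# Pull the dashboard image from the Aliyun mirror, then re-tag it as k8s.gcr.io
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0 \
  k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
```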
How to increase the token expiry time.
Cause: the default is 15 minutes.
Fix:
If the dashboard has already been created: run `kubectl -n kube-system edit deployment kubernetes-dashboard` and add the line `- --token-ttl=86400` to the container args in the Deployment section; the edit is identical to the pre-creation change below.
If the dashboard has not been created yet: in the Dashboard Deployment section of kubernetes-dashboard.yaml, add `- --token-ttl=86400` to the container args. The number is in seconds and can be customized. The result looks like this:

```yaml
... ...
        - name: kubernetes-dashboard
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --token-ttl=86400
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
... ...
```
Run `kubeadm completion --help` for full usage details; here is the bash completion setup directly:
```bash
kubeadm completion bash > ~/.kube/kubeadm_completion.bash.inc
printf "\n# Kubeadm shell completion\nsource '$HOME/.kube/kubeadm_completion.bash.inc'\n" >> $HOME/.bash_profile
source $HOME/.bash_profile
```
Run `kubectl completion --help` for full usage details; here is the bash completion setup directly. Note that the second command (the printf) spans multiple lines, so do not paste everything in one go; paste the first command, then the printf, then the rest.
```bash
kubectl completion bash > ~/.kube/completion.bash.inc
printf "
# Kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile
```
Create a secret, then add an `imagePullSecrets` entry where the image is specified. Create and inspect the secret like this:
```bash
kubectl create secret docker-registry regcred \
  --docker-server=registry.domain.cn:5001 \
  --docker-username=xxxxx \
  --docker-password=xxxxx \
  --docker-email=jimmy.w@aliyun.com
kubectl get secret regcred --output=yaml
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
```
Configure `imagePullSecrets` as follows. Note that `imagePullSecrets` is a pod-level field, a sibling of `containers`, not a field of the container itself:
```yaml
... ...
  imagePullSecrets:
    - name: regcred
  containers:
    - name: mirage
      image: registry.domain.cn:5001/mirage:latest
      ports:
        - containerPort: 3008
          protocol: TCP
      volumeMounts:
... ...
```
Special endpoints, or entries that previously lived in /etc/hosts, can be configured via `hostAliases`. They work just like local hosts entries, and the hostAliases configuration is written into the container's /etc/hosts. Usage:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "foo.local"
        - "bar.local"
    - ip: "10.1.2.3"
      hostnames:
        - "foo.remote"
        - "bar.remote"
  containers:
    - name: cat-hosts
      image: busybox
      command:
        - cat
      args:
        - "/etc/hosts"
```
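To see the effect, apply the manifest and read the pod's log, which is simply the container's /etc/hosts (assuming the manifest above is saved as hostaliases-pod.yaml):

```bash
kubectl apply -f hostaliases-pod.yaml
kubectl logs hostaliases-pod   # the printed hosts file includes the aliases above
```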