How do you install Kubernetes 1.5 with kubeadm? This article walks through the analysis and answer to that question in detail, in the hope of helping more readers who face the same problem find a simpler, workable approach.
Installing Kubernetes 1.5 with kubeadm

System version: Ubuntu 16.04.

root@master:~# docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   23cf638
 Built:        Tue, 27 Sep 2016 12:25:38 +1300
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   23cf638
 Built:        Tue, 27 Sep 2016 12:25:38 +1300
 OS/Arch:      linux/amd64

1. Prerequisites

Every host has at least 1 GB of memory, and all hosts can reach one another over the network.

2. Deployment

Install kubelet and kubeadm on every host. The official apt key is fetched with:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

I used a mirror of the key at http://119.29.98.145:8070/zhi/apt-key.gpg instead. On the master host:

curl -s http://119.29.98.145:8070/zhi/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

The downloaded kube components are not started automatically. Under /lib/systemd/system we can see kubelet.service:

root@master:~# ls /lib/systemd/system | grep kube
kubelet.service

The kubelet version:

root@master:~# kubelet --version
Kubernetes v1.5.1

All of the k8s core components are now in place; next we bootstrap the Kubernetes cluster.

3. Initializing the cluster

In theory, kubeadm can stand up a cluster with just the init and join commands: init initializes the cluster on the master node. Unlike deployments before k8s 1.4, the core components installed by kubeadm all run as containers on the master node. So before running kubeadm init, it is best to put an accelerator proxy in front of the Docker engine on the master node, because kubeadm has to pull many core-component images from the gcr.io/google_containers repository.
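A minimal sketch of one such accelerator setup, assuming you route the Docker daemon's image pulls through an HTTP proxy (the proxy address below is a hypothetical placeholder; substitute your own mirror or proxy), is a systemd drop-in:

mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF >/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
# Hypothetical proxy endpoint; replace with your own accelerator.
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
systemctl daemon-reload
systemctl restart docker

After the restart, `docker info` should list the proxy settings, and pulls from gcr.io will go through the proxy.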
In the kubeadm documentation, installing a Pod network is a separate step; kubeadm init does not choose and install a default Pod network for you. We pick Flannel as our Pod network, not only because our previous cluster used flannel and it proved stable, but also because Flannel is the overlay-network add-on that CoreOS built specifically for k8s; the flannel repository's readme.md even states: "flannel is a network fabric for containers, designed for Kubernetes". To use Flannel, the kubeadm documentation requires that we pass the option --pod-network-cidr=10.244.0.0/16 to the init command.

4. Running kubeadm init

Run the kubeadm init command:

root@master:~# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "2909ca.c0b0772a8817f9e3"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 14.761716 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.003312 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 1.002402 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=2909ca.c0b0772a8817f9e3 xxx.xxx.xxx.xxx

(Write down this token and IP.)

What has changed on the master node after a successful init? The k8s core components are all up and running:

root@master:~# ps -ef | grep kube
root 23817     1 2 14:07 ?      00:00:35 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local
root 23921 23900 0 14:07 ?      00:00:01 kube-scheduler --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080
root 24055 24036 0 14:07 ?      00:00:10 kube-apiserver --insecure-bind-address=127.0.0.1 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=6443 --allow-privileged --advertise-address=<master's IP> --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --etcd-servers=http://127.0.0.1:2379
root 24084 24070 0 14:07 ?      00:00:11 kube-controller-manager --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080 --cluster-name=kubernetes --root-ca-file=/etc/kubernetes/pki/ca.pem --service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16
root 24242 24227 0 14:07 ?      00:00:00 /usr/local/bin/kube-discovery
root 24308 24293 1 14:07 ?      00:00:15 kube-proxy --kubeconfig=/run/kubeconfig
root 29457 29441 0 14:09 ?      00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 29498 29481 0 14:09 ?      00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done
root 30372 30357 0 14:10 ?      00:00:01 /exechealthz --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null --url=/healthz-dnsmasq --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null --url=/healthz-kubedns --port=8080 --quiet
root 30682 30667 0 14:10 ?      00:00:01 /kube-dns --domain=cluster.local --dns-port=10053 --config-map=kube-dns --v=2
root 48755  1796 0 14:31 pts/0  00:00:00 grep --color=auto kube
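As a quick sanity check of kube-dns (my own addition, not part of the original kubeadm flow), you can resolve the kubernetes service from a throwaway busybox pod:

root@master:~# kubectl run -i -t dns-test --image=busybox --restart=Never -- nslookup kubernetes.default.svc.cluster.local

If DNS is healthy, nslookup should answer from 10.96.0.10 (the --cluster-dns address passed to the kubelet above) and resolve the service name to a cluster IP inside the 10.96.0.0/12 service range.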
All of these components, moreover, run as multiple containers:

root@master:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c4209b1077d2 gcr.io/google_containers/kubedns-amd64:1.9 "/kube-dns --domain=c" 22 minutes ago Up 22 minutes k8s_kube-dns.61e5a20f_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_fc02f762
0908d6398b0b gcr.io/google_containers/exechealthz-amd64:1.2 "/exechealthz '--cmd=" 22 minutes ago Up 22 minutes k8s_healthz.9d343f54_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_0ee806f6
0e35e96ca4ac gcr.io/google_containers/dnsmasq-metrics-amd64:1.0 "/dnsmasq-metrics --v" 22 minutes ago Up 22 minutes k8s_dnsmasq-metrics.2bb05ef7_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_436b9370
3921b4e59aca gcr.io/google_containers/kube-dnsmasq-amd64:1.4 "/usr/sbin/dnsmasq --" 22 minutes ago Up 22 minutes k8s_dnsmasq.f7e18a01_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_06c5efa7
18513413ba60 gcr.io/google_containers/pause-amd64:3.0 "/pause" 22 minutes ago Up 22 minutes k8s_POD.d8dbe16c_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_9de0a18d
45132c8d6d3d quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64 "/bin/sh -c 'set -e -" 23 minutes ago Up 23 minutes k8s_install-cni.fc218cef_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_88dffd75
4c2a2e46c808 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64 "/opt/bin/flanneld --" 23 minutes ago Up 23 minutes k8s_kube-flannel.5fdd90ba_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_2706c3cb
ad08c8dd177c gcr.io/google_containers/pause-amd64:3.0 "/pause" 23 minutes ago Up 23 minutes k8s_POD.d8dbe16c_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_279d8436
847f00759977 gcr.io/google_containers/kube-proxy-amd64:v1.5.1 "kube-proxy --kubecon" 24 minutes ago Up 24 minutes k8s_kube-proxy.2f62b4e5_kube-proxy-9c0bf_kube-system_f5326252-d631-11e6-9d86-0050569c3e9b_c1f31904
f8da0f38f3e1 gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_POD.d8dbe16c_kube-proxy-9c0bf_kube-system_f5326252-d631-11e6-9d86-0050569c3e9b_c340d947
c1efa29640d1 gcr.io/google_containers/kube-discovery-amd64:1.0 "/usr/local/bin/kube-" 24 minutes ago Up 24 minutes k8s_kube-discovery.6907cb07_kube-discovery-1769846148-4rsq9_kube-system_f49933be-d631-11e6-9d86-0050569c3e9b_c4827da2
4c6a646d0b2e gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_POD.d8dbe16c_kube-discovery-1769846148-4rsq9_kube-system_f49933be-d631-11e6-9d86-0050569c3e9b_8823b66a
ece79181f177 gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_dummy.702d1bd5_dummy-2088944543-r2mw3_kube-system_f38f3ede-d631-11e6-9d86-0050569c3e9b_ade728ba
9c3364c623df gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_POD.d8dbe16c_dummy-2088944543-r2mw3_kube-system_f38f3ede-d631-11e6-9d86-0050569c3e9b_838c58b5
a64a3363a82b gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1 "kube-controller-mana" 25 minutes ago Up 25 minutes k8s_kube-controller-manager.84edb2e5_kube-controller-manager-master_kube-system_7b7c15f8228e3413d3b0d0bad799b1ea_697ef6ee
27625502c298 gcr.io/google_containers/kube-apiserver-amd64:v1.5.1 "kube-apiserver --ins" 25 minutes ago Up 25 minutes k8s_kube-apiserver.5942f3e3_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_38a83844
5b2cc5cb9ac1 gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_kube-controller-manager-master_kube-system_7b7c15f8228e3413d3b0d0bad799b1ea_2f88a796
e12ef7b3c1f0 gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm "etcd --listen-client" 25 minutes ago Up 25 minutes k8s_etcd.c323986f_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_ef6eb513
84a731cbce18 gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_a3a2ea4e
612b021457a1 gcr.io/google_containers/kube-scheduler-amd64:v1.5.1 "kube-scheduler --add" 25 minutes ago Up 25 minutes k8s_kube-scheduler.bb7d750_kube-scheduler-master_kube-system_0545c2e223307b5ab8c74b0ffed56ac7_a49fab86
ac0d8698f79f gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_9a6b7925
2a16a2217bf3 gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_kube-scheduler-master_kube-system_0545c2e223307b5ab8c74b0ffed56ac7_d2b51317

The kube-apiserver binds to the host IP, from which we can infer that the container uses the host network; this can be seen from the network attributes of its corresponding pause container:

root@master:~# docker ps | grep apiserver
27625502c298 gcr.io/google_containers/kube-apiserver-amd64:v1.5.1 "kube-apiserver --ins" 26 minutes ago Up 26 minutes k8s_kube-apiserver.5942f3e3_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_38a83844
84a731cbce18 gcr.io/google_containers/pause-amd64:3.0 "/pause" 26 minutes ago Up 26 minutes k8s_POD.d8dbe16c_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_a3a2ea4e
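A quick way to confirm this from the Docker side (my own check, using the pause container ID shown above) is to print the container's network mode:

root@master:~# docker inspect -f '{{ .HostConfig.NetworkMode }}' 84a731cbce18
host

For the host-networked apiserver pause container this prints host, whereas pod-networked pause containers report none, since their interfaces are wired up by the CNI plugin instead.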
Problem 1:

If something goes wrong partway through kubeadm init, for example you forgot to set up the accelerator first and init hangs, you may Ctrl+C out of the run. After fixing the configuration and running kubeadm init again, you may then see output like this:

# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
	Port 10250 is in use
	/etc/kubernetes/manifests is not empty
	/etc/kubernetes/pki is not empty
	/var/lib/kubelet is not empty
	/etc/kubernetes/admin.conf already exists
	/etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`

kubeadm automatically checks whether the current environment contains "leftovers" from a previous run. If it does, they must be cleaned up before init can run again. The environment can be cleaned with "kubeadm reset", ready for a fresh start:

# kubeadm reset
[preflight] Running pre-flight checks
[reset] Draining node: "iz25beglnhtz"
[reset] Removing node: "iz25beglnhtz"
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf]

5. To use the Flannel network, we need to run the following install command:

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

Wait a few seconds, then look at the cluster information on the master node again:

root@master:~# ps -ef | grep kube | grep flannel
root 29457 29441 0 14:09 ? 00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 29498 29481 0 14:09 ? 00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done
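As the process listing shows, the install-cni container's only job is to copy a CNI config into place. That file typically looks like the following (content taken from the kube-flannel manifests of that era; treat it as illustrative and check your own /etc/cni/net.d/10-flannel.conf):

root@master:~# cat /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}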
root@master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-r2mw3            1/1     Running   0          30m
kube-system   etcd-master                       1/1     Running   0          31m
kube-system   kube-apiserver-master             1/1     Running   0          31m
kube-system   kube-controller-manager-master    1/1     Running   0          31m
kube-system   kube-discovery-1769846148-4rsq9   1/1     Running   0          30m
kube-system   kube-dns-2924299975-txh2v         4/4     Running   0          30m
kube-system   kube-flannel-ds-0fnxc             2/2     Running   0          29m
kube-system   kube-flannel-ds-lpgpv             2/2     Running   0          23m
kube-system   kube-flannel-ds-s05nr             2/2     Running   0          18m
kube-system   kube-proxy-9c0bf                  1/1     Running   0          30m
kube-system   kube-proxy-t8hxr                  1/1     Running   0          18m
kube-system   kube-proxy-zd0v2                  1/1     Running   0          23m
kube-system   kube-scheduler-master             1/1     Running   0          31m

At the very least, all of the cluster's core components are now running; it looks like a success. Next come the operations on the nodes.

6. minion node: join the cluster

Here we use kubeadm's second command, kubeadm join, and run it on each minion node (note: make sure the master node's port 9898 is open in the firewall). As a prerequisite, each node needs the same kube components installed as above.

7. Install kubelet and kubeadm on each node

The official apt key is fetched with:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

As before, I used http://119.29.98.145:8070/zhi/apt-key.gpg instead. On each node:

curl -s http://119.29.98.145:8070/zhi/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Then join, using the token recorded from the master:

root@node01:~# kubeadm join --token=2909ca.c0b0772a8817f9e3 xxx.xxx.xxx.xxx

8. Check the current cluster status on the master node:

root@master:~# kubectl get node
NAME      STATUS         AGE
master    Ready,master   59m
node01    Ready          51m
node02    Ready          46m
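As a final smoke test (my own addition, assuming the nodes can pull the nginx image from the internet), deploy a few replicas and check where they land:

root@master:~# kubectl run nginx --image=nginx --replicas=3 --port=80
root@master:~# kubectl get pods -o wide

The pods should be scheduled onto node01 and node02 (kubeadm taints the master against ordinary workloads by default), and each pod IP should come out of the 10.244.0.0/16 flannel range, a reasonable sign that the pod network spans the whole cluster.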
That concludes the answer to the question of how to install Kubernetes 1.5 with kubeadm. Hopefully the content above has been of some help; if you still have unanswered questions, you can follow the 億速雲 industry news channel to learn more.