This article walks through building a Docker cluster on Kubernetes: system preparation, etcd, flannel, the Kubernetes master and minion configuration, and the errors hit along the way.
1. Environment
Component versions:
OS: CentOS 7
Kubernetes: 0.17.1
etcd: 2.1.1
Docker: 1.6.2
Hosts:
etcd: 172.16.0.3
master: 172.16.0.2 (kubernetes + docker)
minion1: 172.16.0.4 (kubernetes + docker)
minion2: 172.16.0.5 (kubernetes + docker)
2. System Environment Configuration
Update the yum repositories
# yum -y install wget ntpdate bind-utils
# wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/epel-release-7-2.noarch.rpm
# rpm -ivh epel-release-7-2.noarch.rpm    # install the downloaded EPEL repository package
# yum update
Firewall setup (optional; adjust to your environment)
Disable firewalld
# systemctl stop firewalld.service    # stop firewalld
# systemctl disable firewalld.service    # keep firewalld from starting at boot
Install iptables-services
# yum install iptables-services    # install
# systemctl start iptables.service    # start iptables so the rules take effect
# systemctl enable iptables.service    # start iptables at boot
3. Install and Configure etcd
3.1 Install
# yum install etcd
3.2 Configure
[root@etcd ~]# grep -Ev "^#|^$" /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:4001"
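Note: depending on the etcd build, clients may default to port 2379 rather than the legacy 4001 (see the troubleshooting notes in section 5). A configuration that listens and advertises on both ports could look like the sketch below; the advertise address assumes the etcd host 172.16.0.3 from section 1 and should be adjusted to your environment.
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.0.3:2379,http://172.16.0.3:4001"
Advertising the host's real address instead of 0.0.0.0 gives remote clients a usable endpoint to connect back to.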
3.3 Start
[root@etcd ~]# systemctl start etcd.service
3.4 Verify
[root@etcd ~]# etcd -version
etcd Version: 2.0.13
Git SHA: 92e3895
Go Version: go1.4.2
Go OS/Arch: linux/amd64
# on the master
[root@master ~]# telnet 172.16.0.3 4001
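Besides telnet, the etcd HTTP API can be queried directly as a sanity check (assuming curl is available on the master); it should return the etcd version string:
[root@master ~]# curl http://172.16.0.3:4001/version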
3.5 Configure the flannel network in etcd
[root@etcd ~]# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.17.0.0/16"}
[root@etcd ~]# etcdctl get /coreos.com/network/config
{"Network":"172.17.0.0/16"}
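Network is the only required key. Depending on the flannel version, the per-host subnet size and the backend type can also be set; the following is just a sketch (not taken from this cluster) that overwrites the key with a fuller configuration:
[root@etcd ~]# etcdctl set /coreos.com/network/config '{"Network":"172.17.0.0/16","SubnetLen":24,"Backend":{"Type":"udp"}}'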
4. Install Kubernetes
Servers: all hosts
# yum install kubernetes
To upgrade the binaries from a release tarball:
# mkdir -p /home/install && cd /home/install
# wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.6.2/kubernetes.tar.gz
# tar -zxvf kubernetes.tar.gz
# tar -zxvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
# cp kubernetes/server/bin/kube* /usr/bin
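After copying the binaries, the version each host is actually running can be confirmed; the Kubernetes binaries accept --version (output will match whichever release you installed):
# kube-apiserver --version
# kubelet --version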
4.1 Configure Kubernetes on the master
The master runs three components: apiserver, scheduler, and controller-manager; the configuration below covers only these.
[/etc/kubernetes/config]
[root@master ~]# grep -Ev "^$|^#" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://172.16.0.2:8080"
[/etc/kubernetes/apiserver]
[root@master ~]# grep -Ev "^$|^#" /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://172.16.0.3:4001"
KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""
[/etc/kubernetes/controller-manager]
[root@master ~]# grep -Ev "^$|^#" /etc/kubernetes/controller-manager
KUBELET_ADDRESSES="--machines=127.0.0.1,172.16.0.4,172.16.0.5"
KUBE_CONTROLLER_MANAGER_ARGS=""
[/etc/kubernetes/scheduler]
[root@master ~]# grep -Ev "^$|^#" /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS=""
4.2 Start the Kubernetes services on the master
# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service
# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service
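To confirm that all three services came up and that the apiserver answers, the checks below can be used; the curl call assumes the default insecure port 8080 configured above:
# systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
# curl http://127.0.0.1:8080/api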
4.3 Check the Kubernetes version
[root@master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"0+", GitVersion:"v1.0.0-290-gb2dafdaef5acea", GitCommit:"b2dafdaef5aceafad503ab56254b60f80da9e980", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0+", GitVersion:"v1.0.0-290-gb2dafdaef5acea", GitCommit:"b2dafdaef5aceafad503ab56254b60f80da9e980", GitTreeState:"clean"}
Error:
[root@master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"0+", GitVersion:"v1.0.0-290-gb2dafdaef5acea", GitCommit:"b2dafdaef5aceafad503ab56254b60f80da9e980", GitTreeState:"clean"}
error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused
Fix:
kubectl cannot reach the apiserver at localhost:8080; apply the configuration above and make sure the apiserver is running.
4.4 Configure Kubernetes on the minions
Each minion runs two components, kubelet and kube-proxy; the files involved are config and kubelet.
Docker also needs to be configured on the minions; see 4.5.
[/etc/kubernetes/config]
[root@minion1 ~]# grep -Ev "^$|^#" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://172.16.0.2:8080"
[/etc/kubernetes/kubelet]
[root@localhost ~]# grep -Ev "^$|^#" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname_override=172.16.0.4"
KUBELET_API_SERVER="--api_servers=http://172.16.0.2:8080"
KUBELET_ARGS=""
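minion2 uses the same kubelet configuration with only the hostname override changed to its own address (a sketch based on the host list in section 1):
[root@minion2 ~]# grep -Ev "^$|^#" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname_override=172.16.0.5"
KUBELET_API_SERVER="--api_servers=http://172.16.0.2:8080"
KUBELET_ARGS=""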
4.5 Configure Docker on the minions
Configure Docker so it can be managed remotely.
[root@minion1 ~]# grep -Ev "^$|^#" /etc/sysconfig/docker
OPTIONS='--selinux-enabled -H tcp://0.0.0.0:2375 -H fd://'
DOCKER_CERT_PATH=/etc/docker
# Docker may fail to start with the tcp option; if it does, leave this configuration out for now
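With the daemon bound to tcp://0.0.0.0:2375, it can be queried remotely from the master, for example (assuming no firewall rule blocks port 2375):
[root@master ~]# docker -H tcp://172.16.0.4:2375 info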
4.6 Configure flanneld
[root@minion1 ~]# grep -Ev "^$|^#" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://172.16.0.3:4001"
FLANNEL_ETCD_KEY="/coreos.com/network"
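Once flanneld is started (next step), it records the subnet it leased in a local environment file, and the lease is also visible under the etcd key configured above; both are useful for troubleshooting. The file path below is the flannel default and may differ on other builds:
[root@minion1 ~]# cat /run/flannel/subnet.env
[root@etcd ~]# etcdctl ls /coreos.com/network/subnets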
4.7 Start Kubernetes, Docker, and flanneld on the minions
[root@minion1 ~]# systemctl start docker.service flanneld.service
[root@minion1 ~]# systemctl start kubelet.service kube-proxy.service
If docker0 comes up with an IP address outside the subnet flannel assigned, rebuild the bridge as follows:
#systemctl stop docker
#ifconfig docker0 down
#brctl delbr docker0
#systemctl start docker
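To check whether docker0 actually picked up an address inside the flannel-assigned subnet, compare the two interfaces; with the udp backend flannel's tunnel device is typically named flannel0:
# ip addr show flannel0 | grep inet
# ip addr show docker0 | grep inet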
4.8 Docker fails to start
[root@localhost sysconfig]# systemctl start docker
Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details.
[root@localhost sysconfig]# systemctl status docker.service
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Drop-In: /usr/lib/systemd/system/docker.service.d
└─flannel.conf
Active: failed (Result: exit-code) since 三 2015-09-16 14:18:47 CST; 11s ago
Docs: http://docs.docker.com
Process: 9150 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY (code=exited, status=1/FAILURE)
Main PID: 9150 (code=exited, status=1/FAILURE)
9月 16 14:18:47 localhost.localdomain systemd[1]: Starting Docker Application Container Engine...
9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.842291856+08:00" level=info msg="Listening for...sock)"
9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.861153138+08:00" level=error msg="WARNING: No ...n use"
9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.889459632+08:00" level=info msg="[graphdriver]...per\""
9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.902509183+08:00" level=warning msg="Running mo...tus 1"
9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.907255506+08:00" level=info msg="Firewalld run...false"
9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.949811560+08:00" level=fatal msg="Error starti....61.1"
9月 16 14:18:47 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
9月 16 14:18:47 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.
9月 16 14:18:47 localhost.localdomain systemd[1]: Unit docker.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost sysconfig]# docker -d
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
ERRO[0000] WARNING: No --storage-opt dm.thinpooldev specified, using loopback; this configuration is strongly discouraged for production use
INFO[0000] [graphdriver] using prior storage driver "devicemapper"
WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: , error: exit status 1
INFO[0000] Firewalld running: false
INFO[0000] Loading containers: start.
INFO[0000] Loading containers: done.
INFO[0000] Daemon has completed initialization
INFO[0000] Docker daemon commit=3043001/1.7.1 execdriver=native-0.2 graphdriver=devicemapper version=1.7.1
Solution:
The same fix as in 4.7: if docker0's address is outside the flannel-assigned subnet, rebuild the docker0 bridge.
#systemctl stop docker
#ifconfig docker0 down
#brctl delbr docker0
#systemctl start docker
Note: when testing with virtual machines, cloning a minion VM leaves both minions with the same docker0/flannel subnet, which puts the nodes in NotReady state. Clone the master VM instead and reconfigure the clone as a minion.
5. Cluster Operations
List the nodes
[root@master ~]# kubectl get nodes
Error 1:
Error from server: 501: All the given peers are not reachable (failed to propose on members [http://172.16.0.3:4001] twice [last error: Get http://172.16.0.3:4001/v2/keys/registry/minions?quorum=false&recursive=true&sorted=true: dial tcp 172.16.0.3:4001: i/o timeout]) [0]
Cause:
The minions had not registered in etcd; on inspection, Docker was not running on them.
Error 2:
[root@localhost ~]# kubectl get nodes
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 NotReady
172.16.0.2 kubernetes.io/hostname=172.16.0.2 NotReady
172.16.0.4 kubernetes.io/hostname=172.16.0.4 NotReady
172.16.0.5 kubernetes.io/hostname=172.16.0.5 NotReady
Cause:
The minions could not register with etcd. Check whether the etcd service is actually listening on port 2379 (this etcd build reports Version 2.0.13) and verify from the minions that port 2379 can be reached via telnet.
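A quick way to confirm which client port this etcd build is using and that it is reachable from the minions (substitute whichever port etcd actually reports):
[root@minion1 ~]# telnet 172.16.0.3 2379
[root@minion1 ~]# curl http://172.16.0.3:2379/version
If etcd only answers on 2379, either add that port to ETCD_LISTEN_CLIENT_URLS as sketched in section 3, or point KUBE_ETCD_SERVERS and FLANNEL_ETCD at 2379 instead of 4001.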