This article walks through Docker's container network virtualization with concrete examples. It is meant as a practical reference; hopefully you will take something useful away from it.
I. A brief introduction to Docker networking
The network is one of the six namespaces that Docker containerization relies on, and it is indispensable. It has been supported in the Linux kernel since version 2.6. The network namespace isolates network devices and the protocol stack. For example, if a Docker host has four NICs and one of them is assigned to a namespace when a container is created, the other namespaces cannot see that NIC — a device can belong to only one namespace at a time. If each namespace needs its own physical NIC to talk to the outside world, and a physical NIC cannot be shared between namespaces, then four NICs only allow four such namespaces. What do we do when we need more namespaces than we have physical NICs?
1. Three ways virtual networks can communicate
1.1 Bridged network: in KVM's virtual networking we use virtual NIC devices (devices emulated purely in software), and Docker is no different. The Linux kernel can emulate network devices in software, including layer-2 devices: components that work at the link layer, encapsulate frames and forward them between network devices. Using this kernel support, virtual NIC interfaces can be created, and they are special in that they always come in pairs, modelling the two ends of a network cable — one end can be "plugged" into a host (or namespace), the other into a switch. The kernel also natively supports layer-2 virtual bridge devices, i.e. a switch built in software. For example, with two namespaces you can create a veth pair for each, put one end inside the namespace and attach the other end to the virtual bridge; if both namespaces are configured in the same subnet, the containers can now talk to each other. However, if this pure bridging approach is used with a large number of containers, all of them hang off the same virtual bridge: broadcast storms become a real risk and isolation is very hard to achieve, so in large-scale container deployments plain bridging is asking for trouble.
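To make this concrete, here is a minimal hand-built sketch of the same model using plain iproute2 commands — a software bridge plus one veth pair per namespace. The names (br-demo, ns-a, ns-b) and the 10.0.0.0/24 addresses are invented for this example:
# create a software switch (layer-2 bridge) and bring it up
ip link add br-demo type bridge
ip link set br-demo up
# first namespace: one veth pair, one end inside the namespace, the other end on the bridge
ip netns add ns-a
ip link add veth-a type veth peer name veth-a-br
ip link set veth-a netns ns-a
ip link set veth-a-br master br-demo
ip link set veth-a-br up
ip netns exec ns-a ip addr add 10.0.0.1/24 dev veth-a
ip netns exec ns-a ip link set veth-a up
# second namespace, same subnet, same bridge
ip netns add ns-b
ip link add veth-b type veth peer name veth-b-br
ip link set veth-b netns ns-b
ip link set veth-b-br master br-demo
ip link set veth-b-br up
ip netns exec ns-b ip addr add 10.0.0.2/24 dev veth-b
ip netns exec ns-b ip link set veth-b up
# both namespaces now hang off the same bridge and can reach each other
ip netns exec ns-a ping -c 2 10.0.0.2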
1.2 NAT network: if we do not bridge but still want to reach the outside, NAT is used. NAT (Network Address Translation) rewrites the address information in the IP header; by replacing an internal address with the address of the egress interface it allows hosts on different networks to communicate. For example, two containers on two hosts each have a private address and sit behind a virtual bridge (virtual switch); container 1 uses the bridge's address as its gateway, and IP forwarding is enabled on the Docker host. When container 1 talks to container 2, the packet first goes to its virtual bridge and into the kernel; the kernel sees that the destination IP is not its own, looks up the routing table and sends the packet out through the physical NIC, rewriting the source address to the host's own IP on the way out (this is SNAT). When the packet reaches container 2's host, that host rewrites the destination address to container 2's private address (this is DNAT), hands the packet to its virtual bridge and finally to container 2. The reply travels back the same way, again being rewritten (SNAT and DNAT) before it reaches container 1. In this model, communication between containers on different physical hosts always goes through two address translations (SNAT plus DNAT), which hurts efficiency, so it does not suit many-container scenarios either.
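The kernel pieces behind this NAT model are just IP forwarding plus two iptables rules. A rough hand-built sketch (the 172.17.0.0/16 subnet, the docker0 bridge and the ens33 NIC follow the examples later in this article; the port number and the 172.17.0.2 target are made up):
# let the host forward packets between the bridge and the physical NIC
sysctl -w net.ipv4.ip_forward=1
# SNAT: rewrite the source address of container traffic leaving the host
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
# DNAT: rewrite the destination of inbound traffic so it reaches a container
iptables -t nat -A PREROUTING -i ens33 -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80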
1.3 Overlay Network
In an overlay network, containers on different hosts still attach to a virtual bridge on their own host, but when they communicate, the packets are tunnelled over the physical network, so a container can "see" and talk to containers on other hosts directly. For example, when container 1 wants to talk to container 2 on another host, it sends the packet to its virtual bridge; the bridge finds that the destination IP is not on the local physical server, so the packet leaves through the physical NIC — but instead of being SNATed, it is wrapped in an extra IP header whose source is the address of container 1's host and whose destination is the address of container 2's host. When the packet arrives, the receiving host strips the outer header, finds another IP header addressed to one of its local containers, passes the packet to its virtual bridge and finally to container 2. Carrying one IP packet inside another like this is called tunnelling.
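Docker's own overlay driver (used together with Swarm or an external key-value store) sets all of this up for you, but the tunnelling idea itself can be sketched with a plain VXLAN interface. Everything below — VNI 100, the peer host 192.168.31.187, the br-tunnel bridge — is hypothetical:
# a VXLAN interface that wraps layer-2 frames in UDP and sends them to the peer host
ip link add vxlan100 type vxlan id 100 dstport 4789 remote 192.168.31.187 dev ens33
ip link set vxlan100 up
# attach it to the local bridge the containers hang off, so their frames travel inside the tunnel
ip link add br-tunnel type bridge
ip link set br-tunnel up
ip link set vxlan100 master br-tunnel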
2. The four network models Docker supports
2.1 Closed container: only a loopback interface; this is the null type.
2.2 Bridged container: the bridged type; the container's network is attached to the docker0 bridge.
2.3 Joined container: a joined (union) network; two containers keep part of their namespaces separate (User, Mount, PID) while sharing the same network interfaces and network protocol stack.
2.4 Open container: an open network; the container directly shares three of the host's namespaces (UTS, IPC, Net), uses the physical host's NICs to communicate, and is effectively given the privilege to manage the host's network.
II. Selecting a network for Docker
1. The bridge network (NAT)
After installation Docker automatically provides three networks and uses the bridge (NAT bridge) network by default. If you start a container without specifying --network, it is attached to the bridge network. docker network ls shows these three networks:
[root@bogon ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ea9de27d788c        bridge              bridge              local
126249d6b177        host                host                local
4ad67e37d383        none                null                local
After installation Docker also automatically creates a software switch on the host (docker0); it can act as a layer-2 switch device or as a layer-2 NIC device.
[root@bogon ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:2f:51:41:2d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
When we create a container, Docker automatically creates a pair of virtual NICs in software, plugging one end into the container and the other into the docker0 switch, so the container behaves as if it were connected to that switch.
This is the host's network information before any containers are started:
[root@bogon ~]# ifconfig docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 inet6 fe80::42:2fff:fe51:412d prefixlen 64 scopeid 0x20<link> ether 02:42:2f:51:41:2d txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 14 bytes 1758 (1.7 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.31.186 netmask 255.255.255.0 broadcast 192.168.31.255 inet6 fe80::a3fa:7451:4298:fe76 prefixlen 64 scopeid 0x20<link> ether 00:0c:29:fb:f6:a1 txqueuelen 1000 (Ethernet) RX packets 2951 bytes 188252 (183.8 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 295 bytes 36370 (35.5 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 96 bytes 10896 (10.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 96 bytes 10896 (10.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255 ether 52:54:00:1a:be:ae txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@bogon ~]# [root@bogon ~]# [root@bogon ~]#
Now I start two containers and look at how the network information changes; two extra veth virtual NICs appear.
Each of these is the host-side half of the pair of virtual NICs that Docker created for a container.
[root@bogon ~]# docker container run --name=nginx1 -d nginx:stable 11b031f93d019640b1cd636a48fb9448ed0a7fc6103aa509cd053cbbf8605e6e [root@bogon ~]# docker container run --name=redis1 -d redis:4-alpine fca571d7225f6ce94ccf6aa0d832bad9b8264624e41cdf9b18a4a8f72c9a0d33 [root@bogon ~]# ifconfig docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 inet6 fe80::42:2fff:fe51:412d prefixlen 64 scopeid 0x20<link> ether 02:42:2f:51:41:2d txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 14 bytes 1758 (1.7 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.31.186 netmask 255.255.255.0 broadcast 192.168.31.255 inet6 fe80::a3fa:7451:4298:fe76 prefixlen 64 scopeid 0x20<link> ether 00:0c:29:fb:f6:a1 txqueuelen 1000 (Ethernet) RX packets 2951 bytes 188252 (183.8 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 295 bytes 36370 (35.5 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 96 bytes 10896 (10.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 96 bytes 10896 (10.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 veth0a95d3a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::cc12:e7ff:fe27:2c7f prefixlen 64 scopeid 0x20<link> ether ce:12:e7:27:2c:7f txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 8 bytes 648 (648.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 vethf618ec3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::882a:aeff:fe73:f6df prefixlen 64 scopeid 0x20<link> ether 8a:2a:ae:73:f6:df txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 22 bytes 2406 (2.3 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255 ether 52:54:00:1a:be:ae txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@bogon ~]# [root@bogon ~]#
The other half lives inside the container:
[root@bogon ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
fca571d7225f        redis:4-alpine      "docker-entrypoint.s…"   About a minute ago   Up About a minute   6379/tcp            redis1
11b031f93d01        nginx:stable        "nginx -g 'daemon of…"   10 minutes ago       Up 10 minutes       80/tcp              nginx1
They are also attached to the docker0 virtual switch, as brctl and ip link show confirm:
[root@bogon ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02422f51412d       no              veth0a95d3a
                                                        vethf618ec3
[root@bogon ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:fb:f6:a1 brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:1a:be:ae brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:1a:be:ae brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:2f:51:41:2d brd ff:ff:ff:ff:ff:ff
7: vethf618ec3@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 8a:2a:ae:73:f6:df brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth0a95d3a@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether ce:12:e7:27:2c:7f brd ff:ff:ff:ff:ff:ff link-netnsid 1
Notice the "@if6" and "@if8" suffixes on the veth interfaces — each points to the other half of the pair, i.e. the virtual NIC inside the container.
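If you want to confirm which host-side veth interface belongs to which container, one trick is to read the peer interface index from sysfs inside the container and match it against ip link on the host (this assumes the image ships cat, which nginx:stable does; the numbers shown follow the listing above and are only illustrative):
[root@bogon ~]# docker exec nginx1 cat /sys/class/net/eth0/iflink
7
[root@bogon ~]# ip link show | grep '^7:'
7: vethf618ec3@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default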
docker0 is a NAT bridge, so after starting a container Docker also automatically generates iptables rules for it:
[root@bogon ~]# iptables -t nat -vnL Chain PREROUTING (policy ACCEPT 43 packets, 3185 bytes) pkts bytes target prot opt in out source destination 53 4066 PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0 53 4066 PREROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0 53 4066 PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 3 packets, 474 bytes) pkts bytes target prot opt in out source destination 24 2277 OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT 3 packets, 474 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 2 267 RETURN all -- * * 192.168.122.0/24 224.0.0.0/24 0 0 RETURN all -- * * 192.168.122.0/24 255.255.255.255 0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 0 0 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24 22 2010 POSTROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0 22 2010 POSTROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0 22 2010 POSTROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0 Chain DOCKER (2 references) pkts bytes target prot opt in out source destination 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT_direct (1 references) pkts bytes target prot opt in out source destination Chain POSTROUTING_ZONES (1 references) pkts bytes target prot opt in out source destination 12 953 POST_public all -- * ens33 0.0.0.0/0 0.0.0.0/0 [goto] 10 1057 POST_public all -- * + 0.0.0.0/0 0.0.0.0/0 [goto] Chain POSTROUTING_ZONES_SOURCE (1 references) pkts bytes target prot opt in out source destination Chain POSTROUTING_direct (1 references) pkts bytes target prot opt in out source destination Chain POST_public (2 references) pkts bytes target prot opt in out source destination 22 2010 POST_public_log all -- * * 0.0.0.0/0 0.0.0.0/0 22 2010 POST_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0 22 2010 POST_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0 Chain POST_public_allow (1 references) pkts bytes target prot opt in out source destination Chain POST_public_deny (1 references) pkts bytes target prot opt in out source destination Chain POST_public_log (1 references) pkts bytes target prot opt in out source destination Chain PREROUTING_ZONES (1 references) pkts bytes target prot opt in out source destination 53 4066 PRE_public all -- ens33 * 0.0.0.0/0 0.0.0.0/0 [goto] 0 0 PRE_public all -- + * 0.0.0.0/0 0.0.0.0/0 [goto] Chain PREROUTING_ZONES_SOURCE (1 references) pkts bytes target prot opt in out source destination Chain PREROUTING_direct (1 references) pkts bytes target prot opt in out source destination Chain PRE_public (2 references) pkts bytes target prot opt in out source destination 53 4066 PRE_public_log all -- * * 0.0.0.0/0 0.0.0.0/0 53 4066 PRE_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0 53 4066 PRE_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0 Chain PRE_public_allow (1 references) pkts bytes target prot opt in out source destination Chain PRE_public_deny (1 references) pkts bytes target prot opt in out source destination Chain PRE_public_log (1 references) pkts bytes target prot opt in out source destination
In the POSTROUTING chain there is a MASQUERADE rule: traffic arriving on any interface, not leaving through docker0, with a source address in 172.17.0.0/16 and any destination, has its source address translated (SNAT).
As mentioned above, with the NAT network only the Docker host itself and the containers on that host can reach each other directly. For containers on different hosts to communicate, DNAT (port mapping) is required, and a given host port can only be mapped to one service. So if a Docker host runs several web services, only one of them can be mapped to host port 80 and the others have to use different ports, which is a significant limitation.
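In practice the usual workaround is simply to publish each web container on a different host port, for example (a sketch reusing the httpd:1.1 image from this article):
docker container run --name web1 -d -p 80:80 httpd:1.1
docker container run --name web2 -d -p 8080:80 httpd:1.1   # the second web service has to take another host port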
1.1 Managing network namespaces with the ip command
Because a container's Net, UTS and IPC namespaces can be shared between containers, Docker can build a special network model that the isolated, bridged, NAT and physical-bridge models known from KVM virtualization do not offer. We can also manipulate network namespaces by hand with the ip command; among the many objects ip can operate on is netns.
Check that the ip command (the iproute package) is installed:
[root@bogon ~]# rpm -q iproute
iproute-4.11.0-14.el7.x86_64
Create network namespaces:
[root@bogon ~]# ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id
[root@bogon ~]# ip netns add ns1
[root@bogon ~]# ip netns add ns2
If no dedicated interface is created for a netns, it has only a loopback interface by default:
[root@bogon ~]# ip netns exec ns1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@bogon ~]# ip netns exec ns2 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Create a veth interface pair and move its ends into the network namespaces:
[root@bogon ~]# ip link add name veth2.1 type veth peer name veth2.2
[root@bogon ~]# ip link show
... ...
7: veth2.2@veth2.1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 06:9d:b4:1f:96:88 brd ff:ff:ff:ff:ff:ff
8: veth2.1@veth2.2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 22:ac:45:de:61:5d brd ff:ff:ff:ff:ff:ff
[root@bogon ~]# ip link set dev veth2.1 netns ns1
[root@bogon ~]# ip link set dev veth2.2 netns ns2
[root@bogon ~]# ip netns exec ns1 ip link set dev veth2.1 name eth0
[root@bogon ~]# ip netns exec ns2 ip link set dev veth2.2 name eth0
[root@bogon ~]# ip netns exec ns1 ifconfig eth0 10.10.1.1/24 up
[root@bogon ~]# ip netns exec ns2 ifconfig eth0 10.10.1.2/24 up
[root@bogon ~]# ip netns exec ns1 ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.1.1  netmask 255.255.255.0  broadcast 10.10.1.255
        inet6 fe80::20ac:45ff:fede:615d  prefixlen 64  scopeid 0x20<link>
        ether 22:ac:45:de:61:5d  txqueuelen 1000  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@bogon ~]# ip netns exec ns2 ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.1.2  netmask 255.255.255.0  broadcast 10.10.1.255
        inet6 fe80::49d:b4ff:fe1f:9688  prefixlen 64  scopeid 0x20<link>
        ether 06:9d:b4:1f:96:88  txqueuelen 1000  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@bogon ~]# ip netns exec ns1 ping 10.10.1.2
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=64 time=0.261 ms
64 bytes from 10.10.1.2: icmp_seq=2 ttl=64 time=0.076 ms
That completes creating network namespaces and configuring their interfaces with the ip command.
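For completeness: when these test namespaces are no longer needed they can be removed with ip netns delete (deleting a namespace also destroys the veth end inside it, and its peer disappears with it). We keep ns1 and ns2 around for now, which is why they still show up in the next section.
ip netns delete ns1
ip netns delete ns2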
2. The host network
Start a new container, specifying --network=host:
[root@bogon ~]# docker container run --name=myhttpd --network=host -d httpd:1.1
17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769
[root@bogon ~]#
[root@bogon ~]# ip netns list
ns2
ns1
Attach to the container interactively and inspect its network information.
As you can see, the container uses exactly the same network as the physical host. Note: changing network settings inside this container is the same as changing them on the physical host.
[root@bogon ~]# docker container exec -it myhttpd /bin/sh sh-4.1# sh-4.1# ifconfig docker0 Link encap:Ethernet HWaddr 02:42:2F:51:41:2D inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0 inet6 addr: fe80::42:2fff:fe51:412d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:14 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:1758 (1.7 KiB) ens33 Link encap:Ethernet HWaddr 00:0C:29:FB:F6:A1 inet addr:192.168.31.186 Bcast:192.168.31.255 Mask:255.255.255.0 inet6 addr: fe80::a3fa:7451:4298:fe76/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:30112 errors:0 dropped:0 overruns:0 frame:0 TX packets:2431 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1927060 (1.8 MiB) TX bytes:299534 (292.5 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:96 errors:0 dropped:0 overruns:0 frame:0 TX packets:96 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:10896 (10.6 KiB) TX bytes:10896 (10.6 KiB) veth0a95d3a Link encap:Ethernet HWaddr CE:12:E7:27:2C:7F inet6 addr: fe80::cc12:e7ff:fe27:2c7f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:648 (648.0 b) virbr0 Link encap:Ethernet HWaddr 52:54:00:1A:BE:AE inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) sh-4.1# ping www.baidu.com PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data. 64 bytes from 61.135.169.125: icmp_seq=1 ttl=46 time=6.19 ms 64 bytes from 61.135.169.125: icmp_seq=2 ttl=46 time=6.17 ms 64 bytes from 61.135.169.125: icmp_seq=3 ttl=46 time=6.11 ms
docker inspect also shows that the container is using the host network:
sh-4.1# exit exit [root@bogon ~]# docker container inspect myhttpd [ { "Id": "17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769", "Created": "2018-11-03T13:29:08.34016135Z", "Path": "/usr/sbin/apachectl", "Args": [ " -D", "FOREGROUND" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 4015, "ExitCode": 0, "Error": "", "StartedAt": "2018-11-03T13:29:08.528631643Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:bbffcf779dd42e070d52a4661dcd3eaba2bed898bed8bbfe41768506f063ad32", "ResolvConfPath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/resolv.conf", "HostnamePath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/hostname", "HostsPath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/hosts", "LogPath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769-json.log", "Name": "/myhttpd", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": null, "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "host", "PortBindings": {}, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "shareable", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": [ "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware" ], "ReadonlyPaths": [ "/proc/asound", "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa-init/diff:/var/lib/docker/overlay2/619fd02d3390a6299f2bb3150762a765dd68bada7f432037769778a183d94817/diff:/var/lib/docker/overlay2/fd29d7fada3334bf5dd4dfa4f38db496b7fcbb3ec070e07fe21124a4f143b85a/diff", "MergedDir": "/var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa/merged", "UpperDir": "/var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa/diff", "WorkDir": "/var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa/work" }, "Name": 
"overlay2" }, "Mounts": [], "Config": { "Hostname": "bogon", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "5000/tcp": {} }, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": [ "/usr/sbin/apachectl", " -D", "FOREGROUND" ], "ArgsEscaped": true, "Image": "httpd:1.1", "Volumes": null, "WorkingDir": "", "Entrypoint": null, "OnBuild": null, "Labels": {} }, "NetworkSettings": { "Bridge": "", "SandboxID": "91444230e357927973371cb315b9a247463320beffcde3b56248fa840bd24547", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/default", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "host": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "126249d6b1771dc8aeab4aa3e75a2f3951cc765f6a43c4d0053d77c8e8f23685", "EndpointID": "b87ae83df3424565b138c9d9490f503b9632d3369ed01036c05cd885e902f8ca", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "DriverOpts": null } } } } ]
3. The none network
[root@bogon ~]# docker container run --name=myhttpd2 --network=none -d httpd:1.1
3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3
[root@bogon ~]#
[root@bogon ~]# docker container exec -it myhttpd2 /bin/sh
sh-4.1# ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
Inspecting the container's details shows that its network has become none.
sh-4.1# exit exit [root@bogon ~]# docker container inspect myhttpd2 [ { "Id": "3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3", "Created": "2018-11-03T13:37:53.153680433Z", "Path": "/usr/sbin/apachectl", "Args": [ " -D", "FOREGROUND" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 4350, "ExitCode": 0, "Error": "", "StartedAt": "2018-11-03T13:37:53.563817908Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:bbffcf779dd42e070d52a4661dcd3eaba2bed898bed8bbfe41768506f063ad32", "ResolvConfPath": "/var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/resolv.conf", "HostnamePath": "/var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/hostname", "HostsPath": "/var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/hosts", "LogPath": "/var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3-json.log", "Name": "/myhttpd2", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": null, "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "none", "PortBindings": {}, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "shareable", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": [ "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware" ], "ReadonlyPaths": [ "/proc/asound", "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0-init/diff:/var/lib/docker/overlay2/619fd02d3390a6299f2bb3150762a765dd68bada7f432037769778a183d94817/diff:/var/lib/docker/overlay2/fd29d7fada3334bf5dd4dfa4f38db496b7fcbb3ec070e07fe21124a4f143b85a/diff", "MergedDir": "/var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0/merged", "UpperDir": "/var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0/diff", "WorkDir": "/var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0/work" }, "Name": 
"overlay2" }, "Mounts": [], "Config": { "Hostname": "3e7148946653", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "5000/tcp": {} }, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": [ "/usr/sbin/apachectl", " -D", "FOREGROUND" ], "ArgsEscaped": true, "Image": "httpd:1.1", "Volumes": null, "WorkingDir": "", "Entrypoint": null, "OnBuild": null, "Labels": {} }, "NetworkSettings": { "Bridge": "", "SandboxID": "f9402b5b2dbb95c2736f25626704dec79f75800c33c0905c362e79af3810234d", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/f9402b5b2dbb", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "none": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "4ad67e37d38389253ca55c39ad8d615cef40c6bb9b535051679b2d1ed6cb01e8", "EndpointID": "83913b6eaeed3775fbbcbb9375491dd45e527d81837048cffa63b3064ad6e7e3", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "DriverOpts": null } } } } ]
4. The joined network
A joined network starts a container jointly with another container, sharing the Net, UTS and IPC namespaces, while the remaining namespaces are not shared.
Start two containers, the second one using the first one's network namespace:
[root@bogon ~]# docker container run --name myhttpd -d httpd:1.1
7053b88aacb35d859e00d47133c084ebb9288ce3fb47b6c588153a5e6c6dd5f0
[root@bogon ~]# docker container run --name myhttpd1 -d --network container:myhttpd redis:4-alpine
99191b8fc853f546f3b381d36cc2f86bc7f31af31daf0e19747411d2f1a10686
[root@bogon ~]# docker container ls -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
99191b8fc853        redis:4-alpine      "docker-entrypoint.s…"   5 seconds ago       Up 3 seconds                            myhttpd1
7053b88aacb3        httpd:1.1           "/usr/sbin/apachectl…"   3 minutes ago       Up 3 minutes        5000/tcp            myhttpd
[root@bogon ~]#
Log in to the first container to start verifying:
[root@bogon ~]# docker container exec -it myhttpd /bin/sh sh-4.1# sh-4.1# sh-4.1# ifconfig eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02 inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:8 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:648 (648.0 b) TX bytes:0 (0.0 b) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) sh-4.1# ps -ef |grep httpd root 7 1 0 08:15 ? 00:00:00 /usr/sbin/httpd -D FOREGROUND apache 8 7 0 08:15 ? 00:00:00 /usr/sbin/httpd -D FOREGROUND apache 9 7 0 08:15 ? 00:00:00 /usr/sbin/httpd -D FOREGROUND sh-4.1# mkdir /tmp/testdir sh-4.1# sh-4.1# ls /tmp/ testdir
Log in to the second container to verify:
[root@bogon ~]# docker container exec -it myhttpd1 /bin/sh /data # ifconfig eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02 inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:8 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:648 (648.0 B) TX bytes:0 (0.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) /data # ps -ef |grep redis 1 redis 0:00 redis-server /data # ls /tmp/ /data #
A directory created in the httpd container does not show up in the second container, yet both use the same IP address. This shows that a joined network keeps the Mount, User and PID namespaces isolated while sharing the same Net, IPC and UTS namespaces.
III. Network-related options when starting a container
3.1 Setting the container's hostname
By default a container uses its container ID as the hostname; to set a specific hostname, a parameter must be passed when the container is started.
The --hostname parameter sets the given hostname for the container and automatically adds a matching entry to its hosts file.
[root@bogon ~]# docker container run --name mycentos -it centos:6.6 /bin/sh
sh-4.1# hostname
02f68247b097
[root@bogon ~]# docker container ls -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
02f68247b097        centos:6.6          "/bin/sh"           15 seconds ago      Exited (0) 7 seconds ago                       mycentos
[root@bogon ~]# docker container run --name mycentos --hostname centos1.local -it centos:6.6 /bin/sh
sh-4.1#
sh-4.1# hostname
centos1.local
sh-4.1# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      centos1.local centos1
3.2 Setting the container's DNS
If no DNS server is specified, a container uses the DNS addresses configured on the host by default:
[root@bogon ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 202.106.196.115
[root@bogon ~]# docker container run --name mycentos -it centos:6.6 /bin/sh
sh-4.1# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 202.106.196.115
[root@bogon ~]# docker container run --name mycentos --dns 114.114.114.114 -it --rm centos:6.6 /bin/sh
sh-4.1#
sh-4.1# cat /etc/resolv.conf
nameserver 114.114.114.114
3.3 Adding hosts entries manually
[root@bogon ~]# docker container run --name mycentos --rm --add-host bogon:192.168.31.186 --add-host www.baidu.com:1.1.1.1 -it centos:6.6 /bin/sh
sh-4.1#
sh-4.1# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.31.186  bogon
1.1.1.1         www.baidu.com
172.17.0.2      ea40852f5871
3.4 Exposing container ports
If a container is on the bridge network and a service inside it needs to be reachable by external clients, the container's port must be published to the host.
1. Dynamic publishing (map a given container port to a random port on all addresses of the physical host)
[root@bogon ~]# docker container run --name myhttpd --rm -d -p 80 httpd:1.1 54c1b69f4a8b28abc8d65d836d3ed1ae916d982947800da5bace2fa41d2a0ce5 [root@bogon ~]# [root@bogon ~]# curl 172.17.0.2 <h2>Welcom To My Httpd</h2> [root@bogon ~]# iptables -t nat -vnL Chain PREROUTING (policy ACCEPT 17 packets, 1582 bytes) pkts bytes target prot opt in out source destination 34 3134 PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0 34 3134 PREROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0 34 3134 PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0 1 52 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 3 packets, 310 bytes) pkts bytes target prot opt in out source destination 28 2424 OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT 3 packets, 310 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0 2 267 RETURN all -- * * 192.168.122.0/24 224.0.0.0/24 0 0 RETURN all -- * * 192.168.122.0/24 255.255.255.255 0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 0 0 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24 26 2157 POSTROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0 26 2157 POSTROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0 26 2157 POSTROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80 Chain DOCKER (2 references) pkts bytes target prot opt in out source destination 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0 0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32768 to:172.17.0.2:80 [root@bogon ~]# docker port myhttpd 80/tcp -> 0.0.0.0:32768
As you can see, when a container is started with a published port, Docker automatically creates an iptables NAT rule on the Docker host: port 32768 on all host addresses is mapped to port 80 of the container.
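For reference, the DNAT rule Docker generated here is roughly equivalent to adding the following rule by hand — shown only to illustrate what -p does; normally you would never manage the DOCKER chain yourself:
iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 32768 -j DNAT --to-destination 172.17.0.2:80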
Access the container's httpd service from another host:
[root@centos7-node2 ~]# curl 192.168.31.186:32768
<h2>Welcom To My Httpd</h2>
[root@centos7-node2 ~]#
2. Static publishing (mapping to a specific host address or port)
2.1 Publish to a random port on a specific host address
If no host port is given, use two colons as the separator: <host address>::<container port>
[root@bogon ~]# docker container run --name myhttpd -d -p 192.168.31.186::80 httpd:1.1
50f3788eefe1016b9df2a3f2fcc1bfa19a2110675396daed075d1d4d0e69798b
[root@bogon ~]# docker port myhttpd
80/tcp -> 192.168.31.186:32768
[root@bogon ~]#
[root@centos7-node2 ~]# curl 192.168.31.186:32768
<h2>Welcom To My Httpd</h2>
2.2 Publish to a specific port on all host addresses
If no address is given, it can simply be omitted: <host port>:<container port>
[root@bogon ~]# docker container run --name myhttpd --rm -d -p 80:80 httpd:1.1
2fde0e49c3545fb28624b01b737b22650ba98dfa09674e8ccb3b6722c7dcd257
[root@bogon ~]# docker port myhttpd
80/tcp -> 0.0.0.0:80
2.3 Publish to a specific port on a specific host address
[root@bogon ~]# docker container run --name myhttpd --rm -d -p 192.168.31.186:8080:80 httpd:1.1
a9152173fafc650c47c6a35040e0c50876f841756529334cb509fdff53ce60c7
[root@bogon ~]#
[root@bogon ~]# docker port myhttpd
80/tcp -> 192.168.31.186:8080
IV. Customizing the Docker configuration file
1. The default address of the docker0 bridge can be changed through Docker's configuration file.
[root@bogon ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "bip": "10.10.1.2/16"
}
[root@bogon ~]# systemctl restart docker.service
[root@bogon ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.10.1.2  netmask 255.255.0.0  broadcast 10.10.255.255
        inet6 fe80::42:cdff:fef5:e3ba  prefixlen 64  scopeid 0x20<link>
        ether 02:42:cd:f5:e3:ba  txqueuelen 0  (Ethernet)
        RX packets 38  bytes 3672 (3.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53  bytes 5152 (5.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Only bip needs to be added to the configuration file; Docker works out the gateway and the rest automatically. As you can see, docker0's IP address has moved to the subnet we just configured.
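bip is only one of the bridge-related keys that daemon.json accepts; a slightly fuller sketch could look like the following (the values are illustrative and every key other than bip is optional — set only what you actually need):
{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "bip": "10.10.1.2/16",
    "fixed-cidr": "10.10.1.0/24",
    "dns": ["114.114.114.114"],
    "mtu": 1500
}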
2. Changing how Docker listens
By default Docker listens only on a Unix socket file; to listen on TCP as well, the configuration file has to be changed:
[root@bogon ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "bip": "10.10.1.2/16",
    "hosts": ["tcp://0.0.0.0:33333", "unix:///var/run/docker.sock"]
}
[root@bogon ~]# systemctl restart docker.service
[root@bogon ~]# netstat -tlunp |grep 33333
tcp6       0      0 :::33333                :::*                    LISTEN      6621/dockerd
[root@bogon ~]#
After this change, this Docker server can be reached remotely from any other host with Docker installed:
[root@centos7-node2 ~]# docker -H 192.168.31.186:33333 image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
httpd               1.1                 bbffcf779dd4        4 days ago          264MB
nginx               stable              ecc98fc2f376        3 weeks ago         109MB
centos              6.6                 4e1ad2ce7f78        3 weeks ago         203MB
redis               4-alpine            05097a3a0549        4 weeks ago         30MB
[root@centos7-node2 ~]# docker -H 192.168.31.186:33333 container ls -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                        PORTS               NAMES
a8409019e310        redis:4-alpine      "docker-entrypoint.s…"   2 hours ago         Exited (0) 29 minutes ago                         redis2
99191b8fc853        redis:4-alpine      "docker-entrypoint.s…"   3 hours ago         Exited (0) 29 minutes ago                         myhttpd1
7053b88aacb3        httpd:1.1           "/usr/sbin/apachectl…"   3 hours ago         Exited (137) 28 minutes ago                       myhttpd
[root@centos7-node2 ~]#
3. Creating Docker networks
Docker supports the bridge, none, host, macvlan and overlay network drivers; if none is specified at creation time, a bridge network is created by default.
[root@bogon ~]# docker info
Containers: 3
 Running: 0
 Paused: 0
 Stopped: 3
Images: 4
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Create a custom network:
[root@bogon ~]# docker network create --driver bridge --subnet 192.168.30.0/24 --gateway 192.168.30.1 mybridge0 859e5a2975979740575d6365de326e18991db7b70188b7a50f6f842ca21e1d3d [root@bogon ~]# docker network ls NETWORK ID NAME DRIVER SCOPE f23c1f889968 bridge bridge local 126249d6b177 host host local 859e5a297597 mybridge0 bridge local 4ad67e37d383 none null local [root@bogon ~]# ifconfig br-859e5a297597: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.30.1 netmask 255.255.255.0 broadcast 192.168.30.255 inet6 fe80::42:f4ff:feeb:6a16 prefixlen 64 scopeid 0x20<link> ether 02:42:f4:eb:6a:16 txqueuelen 0 (Ethernet) RX packets 5 bytes 365 (365.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 29 bytes 3002 (2.9 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 10.10.1.2 netmask 255.255.0.0 broadcast 10.10.255.255 inet6 fe80::42:cdff:fef5:e3ba prefixlen 64 scopeid 0x20<link> ether 02:42:cd:f5:e3:ba txqueuelen 0 (Ethernet) RX packets 38 bytes 3672 (3.5 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 53 bytes 5152 (5.0 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Start a container attached to the mybridge0 network:
[root@bogon ~]# docker container run --name redis1 --network mybridge0 -d redis:4-alpine 6d6d11266e3208e45896c40e71c6e3cecd9f7710f2f3c39b401d9f285f28c2f7 [root@bogon ~]# docker container exec -it redis1 /bin/sh /data # /data # ifconfig eth0 Link encap:Ethernet HWaddr 02:42:C0:A8:1E:02 inet addr:192.168.30.2 Bcast:192.168.30.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:24 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2618 (2.5 KiB) TX bytes:0 (0.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
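A container that is already running can also be attached to (or detached from) the custom network at runtime; a sketch, assuming the nginx1 container from the bridge-network section is still around:
docker network connect mybridge0 nginx1
docker container inspect -f '{{json .NetworkSettings.Networks}}' nginx1
docker network disconnect mybridge0 nginx1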
4. Deleting a custom Docker network
Before deleting a network, first stop and remove the containers still running on it.
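For example, the redis1 container started above must be stopped and removed (or disconnected from the network) first:
docker container stop redis1
docker container rm redis1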
[root@bogon ~]# docker network rm mybridge0
[root@bogon ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
f23c1f889968        bridge              bridge              local
126249d6b177        host                host                local
4ad67e37d383        none                null                local
That is all for this example-based look at Docker's container network virtualization. Hopefully the material above has been helpful and you have learned something from it; if you found the article useful, please share it with others.