
The implementation principles of Docker's native overlay network

Published: 2020-07-09 23:27:30 | Author: 品鑒初心 | Category: Cloud computing

System environment

manager node: CentOS Linux release 7.4.1708 (Core)

worker node: CentOS Linux release 7.5.1804 (Core)

Docker version information

manager node: Docker version 18.09.4, build d14af54266

worker node: Docker version 19.03.1, build 74b1e89

Docker Swarm node addresses

manager node: 192.168.246.194

worker node: 192.168.246.195

Networks before creating the docker swarm cluster

manager node:

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
15426f623c37        host                host                local
dd5d570ac60e        none                null                local
worker node:

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
70ed15a24acd        bridge              bridge              local
e2da5d928935        host                host                local
a7dbda3b96e8        none                null                local

Creating the docker swarm cluster

Initializing the docker swarm cluster

On the manager node, run: docker swarm init

On the worker node, run: docker swarm join --token SWMTKN-1-0p3g6ijmphmw5xrikh9e3asg5n3yzan0eomnsx1xuvkovvgfsp-enrmg2lj1dejg5igmnpoaywr1 192.168.246.194:2377

Note:

If you forget the docker swarm join command, you can look it up with the following commands (see also the sketch after this list):

(1) For worker nodes: docker swarm join-token worker

(2) For manager nodes: docker swarm join-token manager
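
If you only need the raw token (for example in a provisioning script), or the manager host has more than one network interface, the same commands take a couple of extra options. A small sketch using standard Docker CLI flags; the address is simply this article's manager IP:

# docker swarm join-token -q worker                     ==> print only the worker join token
# docker swarm init --advertise-addr 192.168.246.194    ==> pin the address other nodes must use when joining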

Viewing cluster node information

manager node:

# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
hplz9lawfpjx6fpz0j1bevocp     MyTest03            Ready               Active                                  19.03.1
q5af6b67bmho8z0d7**m2yy5j *   mysql-nginx         Ready               Active              Leader              18.09.4

Viewing cluster network information

manager node:

# docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
7c90d1bf0f62        docker_gwbridge     bridge              local
15426f623c37        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
dd5d570ac60e        none                null                local
worker node:

# docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
70ed15a24acd        bridge              bridge              local
985367037d3b        docker_gwbridge     bridge              local
e2da5d928935        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
a7dbda3b96e8        none                null                local

Note:

When the docker swarm cluster is first created, Docker creates two networks on every host in addition to docker0: a bridge-type network (the docker_gwbridge bridge) and an overlay-type network (ingress), plus a transitional namespace named ingress_sbox. We can then create our own overlay network on the manager node with the following command; the result is shown below:

docker network create -d overlay uber-svc
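
docker network create also accepts options for addressing and attachment. A sketch with illustrative values (the subnet, the --attachable flag and the name uber-svc2 are not part of the original setup):

docker network create -d overlay --subnet 10.0.9.0/24 --attachable uber-svc2

--attachable lets plain docker run containers join the overlay as well, not only swarm services.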

Now check the swarm networks on the manager and worker hosts again:

manager node:

# docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
7c90d1bf0f62        docker_gwbridge     bridge              local
15426f623c37        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
dd5d570ac60e        none                null                local
kzxwwwtunpqe        uber-svc            overlay             swarm  ===> this is the uber-svc network we just created
worker node:

# docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
70ed15a24acd        bridge              bridge              local
985367037d3b        docker_gwbridge     bridge              local
e2da5d928935        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
a7dbda3b96e8        none                null                local

Note:

Notice that the uber-svc network does not show up on the worker node. That is because an overlay network only becomes available on a host once a running container on that host attaches to it. This lazy extension reduces the amount of network state that has to be propagated and improves scalability.
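
You can verify this lazy behaviour later in this article: once a task of the uber-svc service is scheduled on the worker, the same listing there will include the network. A sketch (run on the worker after the service below has been deployed):

# docker network ls    ==> uber-svc now appears in the worker's list as well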

Viewing network namespace information

manager node:

# ip netns
1-8lyfiluksq (id: 0)
ingress_sbox (id: 1)
worker node:

# ip netns
1-8lyfiluksq (id: 0)
ingress_sbox (id: 1)

Note:

(1) The network namespace files for containers and overlay networks are not kept in the operating system's default /var/run/netns directory, so you can only see them after creating a symlink manually: ln -s /var/run/docker/netns /var/run/netns

(2) Sometimes a namespace name carries a prefix such as 1- or 2-, and sometimes it does not; this has no effect on how the network communicates or on how you operate on it.
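
Putting note (1) into practice looks like this (standard commands, nothing specific to this environment):

# ls /var/run/docker/netns                     ==> the namespaces as Docker stores them
# ln -s /var/run/docker/netns /var/run/netns   ==> expose them to iproute2
# ip netns                                     ==> ip netns can now list and enter them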

Viewing network IPAM (IP Address Management) information

(1) The IPAM allocation of the ingress network is as follows:

It is identical on the manager node and the worker node:

# docker network inspect ingress

[
    {
        "Name": "ingress",
        "Id": "8lyfiluksqu09jfdjndhj68hl",
        "Created": "2019-09-09T17:59:06.326723762+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.255.0.0/16",     ===> ingress子網
                    "Gateway": "10.255.0.1"        ===> ingress網關
                }

(2) The user-created uber-svc overlay uses IPAM that Docker allocates automatically:

# docker network inspect uber-svc

[
    {
        "Name": "uber-svc",
        "Id": "kzxwwwtunpqeucnrhmirg6rhm",
        "Created": "2019-09-09T10:14:06.606521342Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",          ===> uber-svc子網
                    "Gateway": "10.0.0.1"             ===> uber-svc網關
                }

Load balancing (LB) in Docker Swarm comes in two forms

(1)Ingress Load Balancing

(2)Internal Load Balancing

Note: in this article we focus mainly on the second case, Internal Load Balancing.

Defining the shell scripts

Before starting the hands-on part below, let's first write the following two scripts. Concrete usage examples are given for each.

The first script, docker_netns.sh:

#!/bin/bash
# docker_netns.sh: enter the network namespace of a container (by id or name)
# or of a raw namespace under /var/run/docker/netns, and run a command there.

NAMESPACE=$1

# No argument: just list all Docker-related network namespaces.
if [[ -z $NAMESPACE ]];then
    ls -1 /var/run/docker/netns/
    exit 0
fi

NAMESPACE_FILE=/var/run/docker/netns/${NAMESPACE}

# Not a namespace file name: treat the argument as a container id/name
# and resolve its network sandbox path.
if [[ ! -f $NAMESPACE_FILE ]];then
    NAMESPACE_FILE=$(docker inspect -f "{{.NetworkSettings.SandboxKey}}" $NAMESPACE 2>/dev/null)
fi

if [[ ! -f $NAMESPACE_FILE ]];then
    echo "Cannot open network namespace '$NAMESPACE': No such file or directory"
    exit 1
fi

shift

if [[ $# -lt 1 ]]; then
    echo "No command specified"
    exit 1
fi

# Run the remaining arguments as a command inside that network namespace.
nsenter --net=${NAMESPACE_FILE} "$@"

Note:

(1) Given a container id, name, or a namespace name, the script quickly enters that container's network namespace and runs the given shell command there.

(2) If no arguments are given, it lists all network namespaces related to Docker containers.

Running the script gives results like these:

# sh docker_netns.sh ==> list all network namespaces

1-ycqv46f5tl
8402c558c13c
ingress_sbox
# sh docker_netns.sh deploy_nginx_nginx_1 ip r ==> show the routes inside the container named deploy_nginx_nginx_1

default via 172.18.0.1 dev eth0 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2 
# sh docker_netns.sh 8402c558c13c ip r ==> show the routes inside the container whose network namespace is 8402c558c13c

default via 172.18.0.1 dev eth0 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2 

The second script, find_links.sh:

#!/bin/bash
# find_links.sh: given an interface index (ifindex), find which network
# namespace the corresponding virtual device lives in.
# Depends on docker_netns.sh from above.

DOCKER_NETNS_SCRIPT=./docker_netns.sh
IFINDEX=$1
if [[ -z $IFINDEX ]];then
    # No ifindex given: dump the link devices of every namespace.
    for namespace in $($DOCKER_NETNS_SCRIPT);do
        printf "\e[1;31m%s:\e[0m" $namespace
        $DOCKER_NETNS_SCRIPT $namespace ip -c -o link
        printf " "
    done
else
    # Print only the namespaces that contain a device with this ifindex.
    for namespace in $($DOCKER_NETNS_SCRIPT);do
        if $DOCKER_NETNS_SCRIPT $namespace ip -c -o link | grep -Pq "^$IFINDEX: ";then
           printf "\e[1;31m%s:\e[0m" $namespace
           $DOCKER_NETNS_SCRIPT $namespace ip -c -o link | grep -P "^$IFINDEX: ";
           printf " "
        fi
    done
fi

This script finds, from an ifindex, the namespace in which a virtual network device lives. Its output in the different cases looks like this:

# sh find_links.sh ==> with no ifindex given, list the link devices of all namespaces

1-3gt8phomoc:1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1\    link/ipip 0.0.0.0 brd 0.0.0.0
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default \    link/ether e6:c5:04:ad:7b:31 brd ff:ff:ff:ff:ff:ff
74: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default \    link/ether e6:c5:04:ad:7b:31 brd ff:ff:ff:ff:ff:ff link-netnsid 0
76: veth0@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default \    link/ether e6:fa:db:53:40:fd brd ff:ff:ff:ff:ff:ff link-netnsid 1
 ingress_sbox:1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1\    link/ipip 0.0.0.0 brd 0.0.0.0
75: eth0@if76: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default \    link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
78: eth2@if79: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default \    link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
# sh find_links.sh 76 ==> look up ifindex 76
1-3gt8phomoc:76: veth0@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default \    link/ether e6:fa:db:53:40:fd brd ff:ff:ff:ff:ff:ff link-netnsid 1

Hands-on: Internal Load Balancing

Deploy a service that uses the uber-svc network we created

docker service create --name uber-svc --network uber-svc -p 80:80 --replicas 2 nigelpoulton/tu-demo:v1

The two containers of this deployment run on the manager and the worker node respectively:

# docker service ls

ID                  NAME                MODE                REPLICAS            IMAGE                     PORTS
pfnme5ytk59w        uber-svc            replicated          2/2                 nigelpoulton/tu-demo:v1   *:80->80/tcp
# docker service ps uber-svc

ID                  NAME                IMAGE                     NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
kh8zs9a2umwf        uber-svc.1          nigelpoulton/tu-demo:v1   mysql-nginx         Running             Running 57 seconds ago
31p0rgg1f59w        uber-svc.2          nigelpoulton/tu-demo:v1   MyTest03            Running             Running 49 seconds ago
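
A service attached to an overlay network also receives a virtual IP (VIP) on that network: the embedded DNS server resolves the service name to the VIP, and IPVS spreads the connections across the replicas, which is exactly the Internal Load Balancing this article focuses on. The VIPs can be read from the service definition; a sketch using the standard docker service inspect Go template (the addresses in your output will differ):

# docker service inspect --format '{{json .Endpoint.VirtualIPs}}' uber-svc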

Note:

-p (you can also use --publish instead of -p) is used here to expose the service running inside the containers on the hosts, so that we can reach the service from outside.

Normally, a container deployed in swarm has only a single NIC, attached to the local docker0 bridge. Once we publish the service, swarm does the following (a verification sketch follows this list):

(1) It gives the container three NICs: eth0, eth2 and eth3. eth0 attaches to the overlay network named ingress and is used for traffic between hosts; one of the other two attaches to the bridge network docker_gwbridge, which lets the container reach the outside world; the remaining one attaches to the network we created ourselves, uber-svc, which also carries container-to-container traffic, with the difference that it provides DNS resolution, i.e. service discovery (in the output below, eth2 is on uber-svc and eth3 is on docker_gwbridge).

(2) Each swarm node uses the ingress overlay network and load balancing to publish the service outside the cluster.
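
Point (2) can be checked on any node: publishing port 80 makes every host listen on that port and NAT the traffic into the ingress network (the routing mesh). A verification sketch, assuming iptables and curl are available on the hosts:

# iptables -t nat -L DOCKER-INGRESS -n    ==> chain created by Docker, holds the DNAT rule for the published port 80
# curl http://192.168.246.194/            ==> the service answers on either node's address thanks to the routing mesh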

Examining the NICs of the uber-svc.1 container and of the uber-svc network namespace

(1) First, find the uber-svc.1 container

# docker ps

CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS               NAMES
a2a763734e42        nigelpoulton/tu-demo:v1   "python app.py"     About a minute ago   Up About a minute   80/tcp              uber-svc.1.kh8zs9a2umwf9cix381zr9x38

(2) Look at the NICs inside the uber-svc.1 container

# sh docker_netns.sh uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
54: eth0@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.255.0.5/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
56: eth3@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet 172.19.0.3/16 brd 172.19.255.255 scope global eth3
       valid_lft forever preferred_lft forever
58: eth2@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.0.0.3/24 brd 10.0.0.255 scope global eth2
       valid_lft forever preferred_lft forever

Of course, you can also look at this directly with:

docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip addr

(3) Look at the NICs in the uber-svc network namespace

# ip netns ==> list the network namespaces on the manager

d2feb68e3183 (id: 3)
1-kzxwwwtunp (id: 2)
lb_kzxwwwtun
1-8lyfiluksq (id: 0)
ingress_sbox (id: 1)
# docker network ls  ==> list the swarm networks on the manager

NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
7c90d1bf0f62        docker_gwbridge     bridge              local
15426f623c37        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
dd5d570ac60e        none                null                local
kzxwwwtunpqe        uber-svc            overlay             swarm
# sh docker_netns.sh 1-kzxwwwtunp ip addr ==> show the NICs in the uber-svc network namespace

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 3e:cb:12:d3:a3:cb brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global br0
       valid_lft forever preferred_lft forever
51: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN group default
    link/ether e2:8e:35:4c:a3:7b brd ff:ff:ff:ff:ff:ff link-netnsid 0
53: veth0@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether 3e:cb:12:d3:a3:cb brd ff:ff:ff:ff:ff:ff link-netnsid 1
59: veth2@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether 9e:b4:8c:72:4e:74 brd ff:ff:ff:ff:ff:ff link-netnsid 2

Of course, you can also use:

ip netns exec 1-kzxwwwtunp ip addr

# ip netns exec 1-kzxwwwtunp brctl show  ==> show the interfaces attached to the bridge in the uber-svc namespace

bridge name  bridge id        STP enabled    interfaces
br0       8000.3ecb12d3a3cb   no             veth0
                                             veth2
                                             vxlan0

Note:

<1> The command docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip addr shows that the container on the manager node has four NICs: lo, eth0, eth2 and eth3. Here eth2's veth-pair peer is veth2 inside the uber-svc network namespace, and eth3's peer is vethef74971 on the host.

<2> ip netns exec 1-kzxwwwtunp brctl show lists what is attached to the bridge in the uber-svc network namespace; you can see that veth2 is attached to the br0 bridge.
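
On hosts where brctl (from bridge-utils) is not installed, iproute2 offers the same view; a sketch of the standard alternative:

# ip netns exec 1-kzxwwwtunp bridge link show    ==> shows veth0, veth2 and vxlan0 with br0 as their master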

(4) Look up the VXLAN ID of the uber-svc network

ip netns exec 1-kzxwwwtunp ip -o -c -d link show  vxlan0

***** vxlan id 4097 *****

Network connection diagram: the uber-svc network namespace and the service containers


Getting information about the ingress namespace

The main steps are:

(1) Get the ingress network information

# docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
8lyfiluksqu0        ingress             overlay             swarm

(2) Get the ingress namespace information

# ip netns

1-8lyfiluksq (id: 0)

(3) Get the IP information inside the ingress namespace

# sh docker_netns.sh 1-8lyfiluksq ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 6e:5c:bd:c0:95:ea brd ff:ff:ff:ff:ff:ff
    inet 10.255.0.1/16 brd 10.255.255.255 scope global br0
       valid_lft forever preferred_lft forever
45: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN group default
    link/ether e6:f3:7a:00:85:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
47: veth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether fa:98:37:aa:83:2a brd ff:ff:ff:ff:ff:ff link-netnsid 1
55: veth2@if54: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether 6e:5c:bd:c0:95:ea brd ff:ff:ff:ff:ff:ff link-netnsid 2

(4) Get the VXLAN ID of vxlan0 in the ingress namespace

# sh docker_netns.sh 1-8lyfiluksq ip -d link show vxlan0

***** vxlan id 4096 *****
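
Each overlay network gets its own VXLAN Network Identifier: ingress uses 4096 here, while the uber-svc network seen earlier uses 4097. Between hosts all of this traffic is carried as UDP datagrams on port 4789, so it can be observed directly on the physical NIC. A sketch (ens37 is the manager's interface as shown in the host's ip link output further below; tcpdump must be installed):

# tcpdump -nn -i ens37 udp port 4789    ==> the VXLAN-encapsulated overlay traffic on the wire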

(5) Find the corresponding veth-pair peer of the ingress namespace

# sh find_links.sh 46

ingress_sbox:46: eth0@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default \    link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Network connection diagram: the ingress network namespace and the service containers


Getting information about the ingress_sbox network namespace

The main steps are:

(1) Get the IP information of ingress_sbox

# sh docker_netns.sh ingress_sbox ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
46: eth0@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.255.0.2/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.255.0.4/32 brd 10.255.0.4 scope global eth0
       valid_lft forever preferred_lft forever
49: eth2@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth2
       valid_lft forever preferred_lft forever

(2) Find the veth-pair peer of ingress_sbox

# sh find_links.sh 47

1-8lyfiluksq:47: veth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default \    link/ether fa:98:37:aa:83:2a brd ff:ff:ff:ff:ff:ff link-netnsid 1

(3) Look at the veth-pair interfaces on the manager host

# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:25:8b:ac brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:cf:31:ee:03 brd ff:ff:ff:ff:ff:ff
14: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
48: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:9c:aa:15:e6 brd ff:ff:ff:ff:ff:ff
50: vetheaa661b@if49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
    link/ether 8a:3e:01:ab:db:75 brd ff:ff:ff:ff:ff:ff link-netnsid 1
57: vethef74971@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
    link/ether 82:5c:65:e1:9c:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 3
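
Instead of scanning every namespace with find_links.sh, the veth driver can also report its peer's ifindex directly through ethtool statistics. A sketch (requires ethtool on the host):

# ethtool -S vethef74971 | grep peer_ifindex    ==> 56, i.e. eth3 inside the uber-svc.1 container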

Network connection diagram: the ingress network namespace and the ingress_sbox network namespace


Note: the situation on the swarm worker node follows essentially the same pattern as on the manager.

Overall swarm network connection diagram


Note:

(1) You can see that ingress_sbox and the namespaces of the created containers are attached to the same ingress network.

(2) Running docker exec [container ID/name] ip r gives a more intuitive picture of where the traffic flows, as follows:

# docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip r
default via 172.19.0.1 dev eth3
10.0.0.0/24 dev eth2  proto kernel  scope link  src 10.0.0.3
10.255.0.0/16 dev eth0  proto kernel  scope link  src 10.255.0.5
172.19.0.0/16 dev eth3  proto kernel  scope link  src 172.19.0.3

From this we can see that the container's default gateway is 172.19.0.1, which means outbound traffic leaves the container through eth3.
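
Coming back to Internal Load Balancing: inside the containers, the service name uber-svc resolves via the embedded DNS server (127.0.0.11) to the service VIP, and IPVS rules programmed into the containers' sandboxes and into the per-network load-balancer namespace (lb_kzxwwwtun in the ip netns listing above) forward each connection to one of the replicas. A sketch of how to inspect this, assuming ipvsadm and iptables are installed on the host; where exactly the rules show up can vary with the Docker version:

# sh docker_netns.sh lb_kzxwwwtun iptables -t mangle -L -n   ==> firewall marks set for the service VIP
# sh docker_netns.sh lb_kzxwwwtun ipvsadm -ln                ==> the IPVS virtual service and its real-server backends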

Final words

There are many more details of Docker Swarm's underlying networking worth exploring. This article is a basic summary of what I have recently learned about docker network; if anything is wrong or missing, corrections from readers are very welcome. Thank you!

P.S. If any of the referenced documents involves a rights issue, please contact me and I will remove it immediately.

Finally, thanks to open source, and embrace open source!

References

(1) LB and service discovery in Docker Swarm, explained (Docker swarm中的LB和服務發現詳解)

(2) A long read on how the mainstream Docker networks are implemented (萬字長文:聊聊幾種主流Docker網絡的實現原理)

(3) Docker cross-host networking: overlay (Docker跨主機網絡——overlay)

(4) Docker cross-host overlay networking, part 16 (Docker 跨主機網絡 overlay(十六))

(5) Docker overlay networks and VXLAN explained (Docker overlay覆蓋網絡及VXLAN詳解)
