Heartbeat + DRBD + NFS High-Availability Case Study
9.4 DRBD Deployment Requirements
9.4.1 Business Requirements
Assume two servers, Rserver-1 and Lserver-1, with real IPs 192.168.236.143 (Rserver-1) and 192.168.236.192 (Lserver-1).
Goal: once DRBD is configured on both servers, data written to the /dev/sdb partition on Rserver-1 is replicated to Lserver-1 in real time. If Rserver-1 crashes, or a disk failure makes its data unavailable, Lserver-1 holds a complete, continuously synchronized copy of Rserver-1's data and can take over instantly, so the data stays highly available with no impact on the business.
9.4.2 DRBD Deployment Topology
1. The DRBD peers synchronize data with each other in real time over a direct link or Ethernet.
2. The two storage servers back each other up; under normal operation each side serves one primary partition over NFS.
3. The links between the storage servers, and between the storage servers and the switch, all use dual gigabit NIC bonding.
4. Application servers access the storage via NFS.
9.4.3 Host and Resource Plan

| Name | Interface | IP | Purpose |
| --- | --- | --- | --- |
| Master (Rserver-1) | eth0 | 192.168.236.143 | External management IP, WAN data forwarding |
| | eth2 | 172.16.1.1 | Internal management IP, LAN data forwarding |
| | eth3 | 192.168.1.1 | Heartbeat link (direct cable) |
| | VIP | 192.168.236.10 | Floating IP for application A's NFS mounts |
| Backup (Lserver-1) | eth0 | 192.168.236.192 | External management IP, WAN data forwarding |
| | eth2 | 172.16.1.2 | Internal management IP, LAN data forwarding |
| | eth3 | 192.168.1.2 | Heartbeat link between the servers |
| | VIP | 192.168.236.20 | Floating IP for application A's NFS mounts |
9.4.5 DRBD Environment Configuration
Set up the hosts file on both machines. Note that the hostnames themselves must also be set to match the names used here (e.g. hostname Rserver-1 on the master, hostname Lserver-1 on the backup); if this step is skipped, the services will report errors at startup.
echo '172.16.1.1 Rserver-1' >> /etc/hosts
echo '172.16.1.2 Lserver-1' >> /etc/hosts
[root@Lserver-1 ~]# tail -2 /etc/hosts
172.16.1.1 Rserver-1
172.16.1.2 Lserver-1
9.4.6 Configure the server-to-server heartbeat link:
The NICs holding 192.168.1.1 and 192.168.1.2 are connected directly with an ordinary network cable, i.e. no switch in between; this direct link carries the heartbeat detection traffic.
Master:
ifconfig eth3 192.168.1.1 netmask 255.255.255.0
Backup:
ifconfig eth3 192.168.1.2 netmask 255.255.255.0
On the Rserver-1 server, add the following host route:
route add -host 192.168.1.2 dev eth3
#### This makes Rserver-1 reach 192.168.1.2 via eth3, i.e. over the heartbeat link
echo 'route add -host 192.168.1.2 dev eth3' >> /etc/rc.local
## Added to the boot-time configuration, so the route is restored automatically after a reboot.
route -n
On the Lserver-1 server, add the following host route:
route add -host 192.168.1.1 dev eth3
#### This makes Lserver-1 reach 192.168.1.1 via eth3, i.e. over the heartbeat link
echo 'route add -host 192.168.1.1 dev eth3' >> /etc/rc.local
## Added to the boot-time configuration, so the route is restored automatically after a reboot.
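On RHEL/CentOS the same host route can also be persisted in a per-interface route file instead of rc.local, so it survives a restart of the network service as well as a reboot. A minimal sketch, assuming the distribution's network-scripts convention (on Lserver-1 the destination would be 192.168.1.1):

```shell
# On Rserver-1: the network service re-applies this route whenever eth3 comes up.
cat > /etc/sysconfig/network-scripts/route-eth3 <<'EOF'
192.168.1.2/32 dev eth3
EOF
```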
9.5 Deployment
9.5.1 Partition the Disk
First, partition the disk with fdisk and prepare the filesystems with mkfs.ext4 and tune2fs; the partition layout is listed in the table below.
Note: in production, fdisk cannot handle a single disk or RAID volume larger than 2 TB (a GPT-capable tool such as parted is needed in that case).
An extra disk was added to each virtual machine; verify it as follows.
On Rserver-1:
[root@Rserver-1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000486f5
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
On Lserver-1:
[root@Lserver-1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00087dae
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
9.5.2 Partitioning on master and backup (identical on both)
We therefore need to partition /dev/sdb as follows:

| Device | Mount point | Size | Purpose |
| --- | --- | --- | --- |
| /dev/sdb1 | /data | 500 MB | Data (image) storage |
| /dev/sdb2 | (meta-data partition) | 200 MB | DRBD sync-state metadata (note: left unused once meta-disk internal is chosen in the DRBD configuration below) |
[root@Lserver-1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x95767900.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p #### create a primary partition
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): +500M #### 500 MB data partition
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x95767900
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 522081 83 Linux
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (66-2610, default 66):
Using default value 66
Last cylinder, +cylinders or +size{K,M,G} (66-2610, default 2610): +200M #### 200 MB meta partition
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x95767900
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 522081 83 Linux
/dev/sdb2 66 91 208845 83 Linux
Command (m for help): w ###### save and exit
If fdisk prints:
the kernel still uses the old table
The new table will be used at next reboot
it means the kernel has not yet re-read the new partition table; either reboot, or inform the kernel with:
partprobe
Now check the partitioning result:
[root@Lserver-1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00087dae
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x95767900
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 522081 83 Linux
/dev/sdb2 66 91 208845 83 Linux
Now format the data partition:
[root@Rserver-1 ~]# mkfs.ext4 /dev/sdb1
[root@Lserver-1 ~]# mkfs.ext4 /dev/sdb1
[root@Rserver-1 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1 #### set the maximal mount count to -1 (disable mount-count-based fsck)
[root@Lserver-1 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1 #### set the maximal mount count to -1 (disable mount-count-based fsck)
9.6 Pre-installation Preparation (Rserver-1 and Lserver-1)
1. Stop iptables and disable SELinux to avoid errors during installation.
# service iptables stop
# chkconfig iptables off
# setenforce 0
# vi /etc/selinux/config
---------------
SELINUX=disabled
---------------
9.6.1 Time synchronization:
ntpdate -u asia.pool.ntp.org
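Because heartbeat's deadtime/warntime logic is time-based, it helps to keep both nodes' clocks continuously in sync rather than relying on a one-off ntpdate run. A sketch, assuming cron is available and the same NTP pool host (running ntpd as a daemon would be the more robust long-term choice):

```shell
# Add an hourly ntpdate sync to root's crontab (example schedule; adjust as needed).
echo '0 * * * * /usr/sbin/ntpdate -u asia.pool.ntp.org >/dev/null 2>&1' >> /var/spool/cron/root
```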
9.6.2 Install the DRBD build dependencies:
# yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers
The kernel-devel and kernel-headers packages must match the running kernel version (uname -r) exactly; otherwise the drbd module cannot be built against, and loaded into, the kernel later. A local yum repository can be used for the installation.
9.6.3 Install DRBD (Rserver-1 primary, Lserver-1 secondary):
# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
# tar zxvf drbd-8.4.3.tar.gz
# cd drbd-8.4.3
# ./configure --prefix=/usr/local/drbd --with-km --with-heartbeat --sysconfdir=/etc/
# make KDIR=/usr/src/kernels/2.6.32-504.16.2.el6.x86_64/
# make install
# mkdir -p /usr/local/drbd/var/run/drbd
# chkconfig --add drbd
# chkconfig drbd on
2. Load the DRBD module (Rserver-1 primary, Lserver-1 secondary):
# modprobe drbd
Check that the module is loaded into the kernel:
# lsmod | grep drbd
drbd 310172 4
libcrc32c 1246 1 drbd
3. Configuration (Rserver-1 primary, Lserver-1 secondary):
vi /etc/drbd.conf
Clear the file and add the following configuration:
resource r0 {
protocol C;
startup { wfc-timeout 0; degr-wfc-timeout 120; }
disk { on-io-error detach; }
net {
timeout 60;
connect-int 10;
ping-int 10;
max-buffers 2048;
max-epoch-size 2048;
}
syncer { rate 200M; }
on Rserver-1 { ####### "on" is followed by the hostname
device /dev/drbd0; ##### the DRBD block device
disk /dev/sdb1; ##### the local backing disk, i.e. the partition created above
address 172.16.1.1:7788; ###### internal IP
meta-disk internal; ##### metadata is stored inside /dev/sdb1 itself; the /dev/sdb2 meta partition prepared earlier is not used with this setting
}
on Lserver-1 {
device /dev/drbd0;
disk /dev/sdb1;
address 172.16.1.2:7788;
meta-disk internal;
}
}
Note: change the hostnames, IPs, and disk in the configuration above to match your own setup.
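Before creating the metadata, it is worth confirming on both nodes that DRBD parses the configuration cleanly; a sketch using drbdadm's built-in checks:

```shell
# Print the r0 configuration exactly as DRBD parses it; syntax errors are
# reported here instead of surfacing later when the resource is brought up.
drbdadm dump r0

# Dry-run: show the commands drbdadm would execute to bring r0 up, without running them.
drbdadm -d up r0
```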
4. Create the DRBD device and activate the r0 resource (Rserver-1 primary, Lserver-1 secondary):
# mknod /dev/drbd0 b 147 0
# drbdadm create-md r0
After a moment, "success" indicates the DRBD metadata block was created:
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
--== Creating metadata ==--
As with nodes, we count the total number of devices mirrored by DRBD
at http://usage.drbd.org.
The counter works anonymously. It creates a random number to identify
the device and sends that random number, along with the kernel and
DRBD version, to usage.drbd.org.
http://usage.drbd.org/cgi-bin/insert_usage.pl?
nu=716310175600466686&ru=15741444353112217792&rs=1085704704
* If you wish to opt out entirely, simply enter 'no'.
* To continue, just press [RETURN]
success
Run the command once more:
# drbdadm create-md r0
r0 is now activated:
[need to type 'yes' to confirm] yes
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
5. Start the DRBD service (Rserver-1 primary, Lserver-1 secondary):
service drbd start
Note: the service must be started on both the primary and the secondary to take effect.
6. Check the status (Rserver-1 primary, Lserver-1 secondary):
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.corp.com, 2015-05-12 21:05:41
m:res cs ro ds p mounted fstype
0:r0 Connected Secondary/Secondary Inconsistent/Inconsistent C
Here ro:Secondary/Secondary means both hosts are currently in the secondary role. ds is the disk state, shown as "Inconsistent" because DRBD cannot yet tell which side is the primary, i.e. whose disk data should be taken as the baseline.
7. Configure the Rserver-1 host as the primary node:
# drbdsetup /dev/drbd0 primary --force
8. Check the DRBD status on both nodes.
(Rserver-1, primary)
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@Rserver-1, 2017-05-18 13:40:26
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C
(Lserver-1, secondary)
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@Lserver-1, 2017-05-18 13:38:57
m:res cs ro ds p mounted fstype
0:r0 Connected Secondary/Primary UpToDate/UpToDate C
ro shows Primary/Secondary on the primary and Secondary/Primary on the secondary,
and ds shows UpToDate/UpToDate:
the primary/secondary configuration succeeded.
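When scripting checks against this state, the role can be extracted from the /proc/drbd-style status text; a small helper sketch (the function name is ours, and it assumes the 8.4-style "ro:Primary/Secondary" field format shown above):

```shell
# drbd_role: read /proc/drbd-style status text on stdin and print the local
# role, i.e. the part before the slash in the ro: field.
drbd_role() {
  grep -o 'ro:[A-Za-z]*/[A-Za-z]*' | head -1 | cut -d: -f2 | cut -d/ -f1
}

# Example against a sample DRBD 8.4 status line:
sample=' 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'
echo "$sample" | drbd_role    # prints: Primary
```

On a live node the same helper can be fed the real status, e.g. `drbd_role < /proc/drbd`.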
9. Mount DRBD (Rserver-1, primary):
The status above shows the mounted and fstype fields empty, so in this step we create a filesystem on the DRBD device and mount it at /data:
# mkfs.ext4 /dev/drbd0
# mkdir /data
# mount /dev/drbd0 /data
Note: no operations on the DRBD device, including mounting, are allowed on the secondary node; all reads and writes happen on the primary. Only when the primary fails is the secondary promoted to primary, after which it mounts the DRBD device and carries on working.
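Before heartbeat automates this, the role switch can be exercised by hand to confirm the pair behaves as described; a sketch using standard drbdadm commands (run each half on the node named in its comment):

```shell
# On Rserver-1 (current primary): release the device, then demote it.
umount /data
drbdadm secondary r0

# On Lserver-1: promote, then mount; data written on Rserver-1 should be visible.
drbdadm primary r0
mkdir -p /data
mount /dev/drbd0 /data
```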
DRBD status after a successful mount (Rserver-1, primary):
[root@Rserver-1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by root@Rserver-1, 2017-05-18 13:40:26
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C /data ext4
9.7 Configure the heartbeat Service
yum install heartbeat -y
9.7.1 Configure ha.cf
The sample configuration files ship with the heartbeat documentation; copy ha.cf, authkeys, and haresources into /etc/ha.d/ first:
cd /usr/share/doc/heartbeat-3.0.4
cp ha.cf authkeys haresources /etc/ha.d/
Edit /etc/ha.d/ha.cf:
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
#### The three lines above configure logging and normally need no changes
keepalive 2
deadtime 30
warntime 10
initdead 120
### The four lines above are basic timing parameters and normally need no changes
# serial serialportname ...
mcast eth3 225.0.0.219 694 1 0
## This line selects multicast heartbeats; the only thing to change is eth3, the name of your heartbeat NIC
auto_failback on
node Rserver-1 ## hostnames of the two storage servers
node Lserver-1 ## hostnames of the two storage servers
crm no
9.7.2 Configure authkeys
auth 3
#1 crc
#2 sha1 HI!
3 md5 Hello!
The authkeys file must have mode 600 (chmod 600 /etc/ha.d/authkeys); the file itself states this requirement:
# Authentication file. Must be mode 600
9.7.3 Configure haresources
Add this single line:
Rserver-1 IPaddr::172.16.1.10/24/eth2 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext4 killnfsd
Note: the IPaddr, Filesystem, and other scripts named here live in /etc/ha.d/resource.d/. You can also place service start scripts of your own (e.g. mysql, www) in that directory and append their names to this line, so heartbeat starts them along with the other resources.
IPaddr::172.16.1.10/24/eth2: configures the floating service VIP using the IPaddr script
drbddisk::r0: handles promotion/demotion of the DRBD resource on the primary and secondary using the drbddisk script
Filesystem::/dev/drbd0::/data::ext4: mounts and unmounts the filesystem using the Filesystem script
killnfsd: a custom script (created below) that restarts NFS
9.7.4 Create the killnfsd script that restarts the NFS service (Rserver-1 and Lserver-1):
# vi /etc/ha.d/resource.d/killnfsd
killall -9 nfsd; /etc/init.d/nfs restart; exit 0
Make it executable (mode 755):
# chmod 755 /etc/ha.d/resource.d/killnfsd
9.7.5 Start the heartbeat Service
Start heartbeat on both nodes, Rserver-1 first (Rserver-1 and Lserver-1):
# service heartbeat start
# chkconfig heartbeat on
If other machines can now ping the VIP 172.16.1.10, the configuration is working.
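The VIP check from another host on the 172.16.1.0/24 network can be scripted; a sketch:

```shell
# Ping the floating IP a few times; success means heartbeat has brought the VIP up.
if ping -c 3 -W 2 172.16.1.10 >/dev/null 2>&1; then
    echo "VIP 172.16.1.10 is reachable"
else
    echo "VIP 172.16.1.10 is NOT reachable - check /var/log/ha-log on both nodes"
fi
```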
9.7.6 Configure NFS (Rserver-1 and Lserver-1):
Edit the exports file and add the following line:
# vi /etc/exports
/data *(rw,no_root_squash)
9.7.7 Restart the NFS services:
# service rpcbind restart
# service nfs restart
# chkconfig rpcbind on
# chkconfig nfs off
Note: NFS is deliberately not set to start at boot, because the /etc/ha.d/resource.d/killnfsd script controls NFS startup.
9.8 Testing the High-Availability Setup
9.8.1 Normal failover
Mount the NFS share on a client:
# mount -t nfs 172.16.1.10:/data /tmp
Stop the heartbeat service on the primary node (Rserver-1); the backup node (Lserver-1) should take over immediately and seamlessly.
Verify that reads and writes on the client's NFS mount still work.
Then check the DRBD status on the backup (Lserver-1):
if the backup's role has changed to Primary, the failover succeeded.
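The client-side read/write check can also be scripted; a minimal sketch, assuming the NFS share is mounted at /tmp as in the mount command above (the test file name is ours):

```shell
# Write a file through the NFS mount, read it back, then clean up.
testfile=/tmp/ha_write_test.$$
echo "written at $(date)" > "$testfile" || { echo "write FAILED"; exit 1; }
sync
grep -q "written at" "$testfile" && echo "read/write OK"
rm -f "$testfile"
```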
9.8.2 Failover on an abrupt crash
First switch the services and the VIP back to the primary; afterwards we will simply cut the primary's power.
[root@Rserver-1 ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
[root@Rserver-1 ha.d]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:20:dc:da brd ff:ff:ff:ff:ff:ff
inet 192.168.236.143/24 brd 192.168.236.255 scope global eth0
inet 192.168.236.10/24 brd 192.168.236.255 scope global secondary eth0
inet6 fe80::20c:29ff:fe20:dcda/64 scope link
valid_lft forever preferred_lft forever
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:20:dc:e4 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.1/24 brd 172.16.1.255 scope global eth2
inet 172.16.1.10/24 brd 172.16.1.255 scope global secondary eth2
inet6 fe80::20c:29ff:fe20:dce4/64 scope link
valid_lft forever preferred_lft forever
4: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:20:dc:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global eth3
inet6 fe80::20c:29ff:fe20:dcee/64 scope link
valid_lft forever preferred_lft forever
5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 6e:5d:75:f7:48:77 brd ff:ff:ff:ff:ff:ff
[root@Rserver-1 ha.d]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rserver1-lv_root 18650424 4093320 13609700 24% /
tmpfs 372156 76 372080 1% /dev/shm
/dev/sda1 495844 34853 435391 8% /boot
/dev/sr0 4363088 4363088 0 100% /media/CentOS_6.5_Final
/dev/drbd0 505552 10521 468930 3% /data
[root@Rserver-1 ha.d]#
The failback to the primary succeeded. Now test the abrupt-failure case: the primary's power has been cut; check the state of the backup.
[root@Lserver-1 ha.d]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4d:f6:92 brd ff:ff:ff:ff:ff:ff
inet 192.168.236.192/24 brd 192.168.236.255 scope global eth0
inet6 fe80::20c:29ff:fe4d:f692/64 scope link
valid_lft forever preferred_lft forever
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4d:f6:9c brd ff:ff:ff:ff:ff:ff
inet 172.16.1.2/24 brd 172.16.1.255 scope global eth2
inet 172.16.1.10/24 brd 172.16.1.255 scope global secondary eth2
inet6 fe80::20c:29ff:fe4d:f69c/64 scope link
valid_lft forever preferred_lft forever
4: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4d:f6:a6 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.2/24 brd 192.168.1.255 scope global eth3
inet6 fe80::20c:29ff:fe4d:f6a6/64 scope link
valid_lft forever preferred_lft forever
5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 92:be:67:20:6e:b6 brd ff:ff:ff:ff:ff:ff
[root@Lserver-1 ha.d]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_lserver1-lv_root 18650424 3966516 13736504 23% /
tmpfs 372156 224 371932 1% /dev/shm
/dev/sda1 495844 34856 435388 8% /boot
/dev/sr0 4363088 4363088 0 100% /media/CentOS_6.5_Final
/dev/drbd0 505552 10521 468930 3% /data
[root@Lserver-1 ha.d]#
Check from the client: if the NFS mount is still readable and writable, the heartbeat + DRBD + NFS setup has been built successfully.