Lab environment
OS: CentOS-7-x86_64-DVD-1804
Hypervisor: VMware
hostname               IP              role
node1.heleicool.cn     172.16.175.11   controller (management) node
node2.heleicool.cn     172.16.175.12   compute node
Environment setup
Install the required tools:
yum install -y vim net-tools wget telnet
Configure the /etc/hosts file on both nodes:
172.16.175.11 node1.heleicool.cn
172.16.175.12 node2.heleicool.cn
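As a quick sanity check (optional; the hostnames are the ones defined above), verify that the two nodes can resolve and reach each other:
ping -c 2 node2.heleicool.cn    # run on node1
ping -c 2 node1.heleicool.cn    # run on node2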
Configure the /etc/resolv.conf file on both nodes:
nameserver 8.8.8.8
Disable the firewall:
systemctl disable firewalld
systemctl stop firewalld
Disable SELinux (this step can probably be skipped):
setenforce 0
vim /etc/selinux/config
SELINUX=disabled
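To confirm the change (a minimal check; setenforce only changes the running mode, while the config file takes effect after a reboot):
getenforce    # should print Permissive now, or Disabled after a reboot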
Install the OpenStack packages
Install the repository for the corresponding release:
yum install centos-release-openstack-rocky -y
Install the OpenStack client:
yum install python-openstackclient -y
RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage security policies for the OpenStack services:
yum install openstack-selinux -y
Install the database
Install the packages:
yum install mariadb mariadb-server python2-PyMySQL -y
Create and edit the /etc/my.cnf.d/openstack.cnf configuration file:
[mysqld]
bind-address = 172.16.175.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the database service:
systemctl enable mariadb.service
systemctl start mariadb.service
Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y              # set the root password?
New password:                           # enter the root password twice
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y         # remove the anonymous users?
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y   # disallow remote root login?
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y  # drop the test database?
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y    # reload the privilege tables
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
Install the message queue
Install RabbitMQ:
yum install rabbitmq-server -y
Start RabbitMQ:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Add the openstack user:
# The user added here is named openstack, and its password is also openstack.
rabbitmqctl add_user openstack openstack
Grant the openstack user configure, write, and read access:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
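Optionally confirm the user and its permissions:
rabbitmqctl list_users
rabbitmqctl list_permissions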
Install Memcached
Install the Memcached packages:
yum install memcached python-memcached -y
Edit /etc/sysconfig/memcached and modify the configuration:
OPTIONS="-l 127.0.0.1,::1,172.16.175.11"
Start Memcached:
systemctl enable memcached.service
systemctl start memcached.service
The listening ports so far are as follows:
# rabbitmq ports
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      1690/beam
# mariadb-server port
tcp        0      0 172.16.175.11:3306      0.0.0.0:*               LISTEN      1506/mysqld
# memcached ports
tcp        0      0 172.16.175.11:11211     0.0.0.0:*               LISTEN      2236/memcached
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      2236/memcached
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      766/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1050/master
tcp6       0      0 :::5672                 :::*                    LISTEN      1690/beam
tcp6       0      0 ::1:11211               :::*                    LISTEN      2236/memcached
tcp6       0      0 :::22                   :::*                    LISTEN      766/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1050/master
Installing the OpenStack services
Install the Keystone service
Configure the keystone database:
Use the database client to connect to the database server as the root user:
mysql -u root -p
Create the keystone database and grant it the appropriate access:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
Install and configure Keystone
Run the following command to install the packages:
yum install openstack-keystone httpd mod_wsgi -y
Edit the /etc/keystone/keystone.conf file and complete the following actions:
[database]
connection = mysql+pymysql://keystone:keystone@172.16.175.11/keystone
[token]
provider = fernet
Populate the Identity service database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
# Verify the database tables
mysql -ukeystone -pkeystone -e "use keystone; show tables;"
Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service:
# The value passed to --bootstrap-password is the admin user's password; here it is set to admin.
keystone-manage bootstrap --bootstrap-password admin \
  --bootstrap-admin-url http://172.16.175.11:5000/v3/ \
  --bootstrap-internal-url http://172.16.175.11:5000/v3/ \
  --bootstrap-public-url http://172.16.175.11:5000/v3/ \
  --bootstrap-region-id RegionOne
Configure the Apache HTTP server
Edit /etc/httpd/conf/httpd.conf:
ServerName 172.16.175.11
Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the service
Start the Apache HTTP service and configure it to start when the system boots:
systemctl enable httpd.service
systemctl start httpd.service
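A quick, optional check that Keystone is answering (the URL matches the endpoints bootstrapped above):
curl http://172.16.175.11:5000/v3
# should return a small JSON document describing the Identity v3 API version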
Configure the administrative account:
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
Create domains, projects, users, and roles
Although the "default" domain already exists from the keystone-manage bootstrap step in this guide, the formal way to create a new domain would be:
# openstack domain create --description "An Example Domain" example
Using the default domain, create the service project, which is used by the services:
openstack project create --domain default \
  --description "Service Project" service
Create the myproject project; regular (non-admin) tasks should use an unprivileged project and user:
openstack project create --domain default \
  --description "Demo Project" myproject
Create the myuser user:
# You will be prompted for the user's password
openstack user create --domain default \
  --password-prompt myuser
Create the myrole role:
openstack role create myrole
Add myuser to the myproject project and grant it the myrole role:
openstack role add --project myproject --user myuser myrole
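To double-check the assignment (optional; uses the names created above):
openstack role assignment list --project myproject --user myuser --names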
Verify the users
Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
unset OS_AUTH_URL OS_PASSWORD
As the admin user, request an authentication token:
# You will be prompted for the admin password
openstack --os-auth-url http://172.16.175.11:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
As the myuser user, request an authentication token:
# You will be prompted for the myuser password
openstack --os-auth-url http://172.16.175.11:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
Create OpenStack client environment scripts
The OpenStack client interacts with the Identity service either through command-line arguments or through environment variables. To work more efficiently, create environment scripts:
Create the admin environment script, admin-openstack.sh:
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
Create the myuser environment script, demo-openstack.sh:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Use the scripts:
source admin-openstack.sh
openstack token issue
Install the Glance service
Configure the glance database:
Log in to the database as the root user:
mysql -u root -p
Create the glance database and grant access to the glance user:
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
Create the glance service credentials, using the admin user:
source admin-openstack.sh
Create the glance user:
# You will be prompted for the glance user's password; mine is glance
openstack user create --domain default --password-prompt glance
Add the glance user to the service project and grant it the admin role:
openstack role add --project service --user glance admin
Create the glance service entity:
openstack service create --name glance \
  --description "OpenStack Image" image
Create the Image service API endpoints:
openstack endpoint create --region RegionOne image public http://172.16.175.11:9292
openstack endpoint create --region RegionOne image internal http://172.16.175.11:9292
openstack endpoint create --region RegionOne image admin http://172.16.175.11:9292
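Optionally confirm that all three endpoints were registered:
openstack endpoint list --service image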
Install and configure Glance
Install the package:
yum install openstack-glance -y
Edit the /etc/glance/glance-api.conf file and complete the following actions:
# Configure database access:
[database]
connection = mysql+pymysql://glance:glance@172.16.175.11/glance
# Configure Identity service access:
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
# Configure the local file system store and the location of image files:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit the /etc/glance/glance-registry.conf file and complete the following actions:
# Configure database access:
[database]
connection = mysql+pymysql://glance:glance@172.16.175.11/glance
# Configure Identity service access:
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
Populate the Image service database and verify it:
su -s /bin/sh -c "glance-manage db_sync" glance
mysql -uglance -pglance -e "use glance; show tables;"
Start the services:
systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
Verify the service
Source the admin credentials to gain access to admin-only CLI commands:
source admin-openstack.sh
Download the source image:
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Upload the image to the Image service using the QCOW2 disk format, the bare container format, and public visibility so that all projects can access it:
# Make sure the cirros-0.4.0-x86_64-disk.img file is in the current directory
openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
Confirm the upload and verify the image attributes:
openstack image list
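For more detail than the list view (optional):
openstack image show cirros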
Install the Nova service
Install the Nova controller node
Set up the nova databases:
mysql -u root -p
Create the nova_api, nova, nova_cell0, and placement databases:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
Run the CLI with admin credentials:
source admin-openstack.sh
Create the nova user:
openstack user create --domain default --password-prompt nova
Add the admin role to the nova user:
openstack role add --project service --user nova admin
Create the nova service entity:
openstack service create --name nova --description "OpenStack Compute" compute
Create the Compute API service endpoints:
openstack endpoint create --region RegionOne compute public http://172.16.175.11:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://172.16.175.11:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://172.16.175.11:8774/v2.1
Create the placement user:
# You will be prompted for the user's password; mine is placement
openstack user create --domain default --password-prompt placement
Add the placement user to the service project with the admin role:
openstack role add --project service --user placement admin
Create the placement service entity:
openstack service create --name placement --description "Placement API" placement
Create the Placement API service endpoints:
openstack endpoint create --region RegionOne placement public http://172.16.175.11:8778
openstack endpoint create --region RegionOne placement internal http://172.16.175.11:8778
openstack endpoint create --region RegionOne placement admin http://172.16.175.11:8778
Install nova
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y
Edit the /etc/nova/nova.conf file and complete the following actions:
# Enable only the compute and metadata APIs
[DEFAULT]
enabled_apis = osapi_compute,metadata
# Configure database access
[api_database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova_api
[database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova
[placement_database]
connection = mysql+pymysql://placement:placement@172.16.175.11/placement
# Configure RabbitMQ message queue access
[DEFAULT]
transport_url = rabbit://openstack:openstack@172.16.175.11
# Configure Identity service access
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://172.16.175.11:5000/v3
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
# Enable support for the Networking service
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
# Configure the VNC proxy to use the management interface IP address of the controller node
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = 172.16.175.11
# Configure the location of the Image service API
[glance]
api_servers = http://172.16.175.11:9292
# Configure the lock path
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
# Configure the Placement API
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://172.16.175.11:5000/v3
username = placement
password = placement
Enable access to the Placement API by adding the following to /etc/httpd/conf.d/00-nova-placement-api.conf:
Append it to the end of the configuration file:
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
Restart the httpd service:
systemctl restart httpd
Populate the nova-api and placement databases:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify that nova cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Verify the databases:
mysql -unova -pnova -e "use nova ; show tables;"
mysql -unova -pnova -e "use nova_api ; show tables;"
mysql -unova -pnova -e "use nova_cell0 ; show tables;"
mysql -uplacement -pplacement -e "use placement ; show tables;"
Start the Nova controller node services:
systemctl enable openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
Install the Nova compute node
Install the package:
yum install openstack-nova-compute -y
Edit the /etc/nova/nova.conf file and complete the following actions:
# Start from the controller node's configuration and modify it. Delete the following database-access options, since the compute node does not access the databases directly.
[api_database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova_api
[database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova
[placement_database]
connection = mysql+pymysql://placement:placement@172.16.175.11/placement
# Add the following:
[vnc]
# Change this to the compute node's IP address
server_proxyclient_address = 172.16.175.12
novncproxy_base_url = http://172.16.175.11:6080/vnc_auto.html
Determine whether your compute node supports hardware acceleration for virtual machines:
egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, the compute node supports hardware acceleration, which typically requires no additional configuration.
If this command returns zero, the compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section of the /etc/nova/nova.conf file as shown below:
[libvirt]
# ...
virt_type = kvm
# In my case the command returned a value greater than one, but with virt_type = kvm the instances would not start; changing it to qemu fixed it. If anyone knows why, please advise.
Start the Nova compute node services:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Add the compute node to the cell database (run on the controller node):
source admin-openstack.sh
# Confirm that the compute host is in the database
openstack compute service list --service nova-compute
# Discover the compute hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
When you add new compute nodes, you must run the discover_hosts command on the controller node to register them. Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
Verify operation
source admin-openstack.sh
# List the service components to verify successful launch and registration of each process; the state should be up
openstack compute service list
# List API endpoints in the Identity service to verify connectivity with the Identity service
openstack catalog list
# List images in the Image service to verify connectivity with the Image service:
openstack image list
# Check that the cells and the placement API are working successfully:
nova-status upgrade check
Note: when you run openstack compute service list, the official documentation shows one more running service than here; simply start it as well.
That service is the console authentication server for remote console connections; without it, VNC remote login does not work.
systemctl enable openstack-nova-consoleauth
systemctl start openstack-nova-consoleauth
Install the Neutron service
Install the Neutron controller node
Create the database objects for the neutron service:
mysql -uroot -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Create the neutron user:
openstack user create --domain default --password-prompt neutron
Add the neutron user to the service project and grant it the admin role:
openstack role add --project service --user neutron admin
Create the neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
Create the Networking service API endpoints:
openstack endpoint create --region RegionOne network public http://172.16.175.11:9696
openstack endpoint create --region RegionOne network internal http://172.16.175.11:9696
openstack endpoint create --region RegionOne network admin http://172.16.175.11:9696
Configure networking options
You can deploy the Networking service using one of two architectures: option 1 (Provider networks) or option 2 (Self-service networks).
Option 1 deploys the simplest architecture, which only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only the admin or another privileged user can manage provider networks.
Provider networks
Install the components:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure the server component
Edit the /etc/neutron/neutron.conf file and complete the following actions:
[DEFAULT]
# Enable the Modular Layer 2 (ML2) plug-in and disable other plug-ins
core_plugin = ml2
service_plugins =
# Notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
# Configure RabbitMQ message queue access
transport_url = rabbit://openstack:openstack@172.16.175.11
auth_strategy = keystone
[database]
# Configure database access
connection = mysql+pymysql://neutron:neutron@172.16.175.11/neutron
[keystone_authtoken]
# Configure Identity service access
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
# Configure Networking to notify Compute of network topology changes
[nova]
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
# Configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build Layer 2 (bridging and switching) virtual networking infrastructure for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
[ml2]
# Enable flat and VLAN networks
type_drivers = flat,vlan
# Disable self-service networks
tenant_network_types =
# Enable the Linux bridge mechanism
mechanism_drivers = linuxbridge
# Enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]
# Configure the provider virtual network as a flat network
flat_networks = provider
[securitygroup]
# Enable ipset to increase the efficiency of security group rules
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds Layer 2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
[linux_bridge]
# Map the provider virtual network to the provider physical network interface; eth-0 here is the name of the mapped NIC (use your node's interface name)
physical_interface_mappings = provider:eth-0
[vxlan]
# Disable VXLAN overlay networks
enable_vxlan = false
[securitygroup]
# Enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Verify that all of the following sysctl values are set to 1, to ensure that your Linux kernel supports bridge filters:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the change:
sysctl -p
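A quick way to confirm the values actually took effect (optional):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# both should print "= 1"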
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
[DEFAULT]
# Configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network:
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Self-service networks
Install the components:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure the server components
Edit the /etc/neutron/neutron.conf file and complete the following actions:
[DEFAULT]
# Enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
# Configure RabbitMQ message queue access
transport_url = rabbit://openstack:openstack@172.16.175.11
auth_strategy = keystone
# Notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
# Configure database access
connection = mysql+pymysql://neutron:neutron@172.16.175.11/neutron
[keystone_authtoken]
# Configure Identity service access
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
# Configure Networking to notify Compute of network topology changes
[nova]
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
# Configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build Layer 2 (bridging and switching) virtual networking infrastructure for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
[ml2]
# Enable flat, VLAN, and VXLAN networks
type_drivers = flat,vlan,vxlan
# Enable VXLAN self-service networks
tenant_network_types = vxlan
# Enable the Linux bridge and layer-2 population mechanisms
mechanism_drivers = linuxbridge,l2population
# Enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]
# Configure the provider virtual network as a flat network
flat_networks = provider
[ml2_type_vxlan]
# Configure the VXLAN network identifier range for self-service networks
vni_ranges = 1:1000
[securitygroup]
# Enable ipset to increase the efficiency of security group rules
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds Layer 2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
[linux_bridge]
# Map the provider virtual network to the provider physical network interface; eth0 here is the mapped NIC
physical_interface_mappings = provider:eth0
[vxlan]
# Enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay traffic, and enable layer-2 population
enable_vxlan = true
local_ip = 172.16.175.11
l2_population = true
[securitygroup]
# Enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Verify that all of the following sysctl values are set to 1, to ensure that your Linux kernel supports bridge filters:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the change:
sysctl -p
Configure the layer-3 agent
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
Edit the /etc/neutron/l3_agent.ini file and complete the following actions:
[DEFAULT]
# Configure the Linux bridge interface driver and the external network bridge
interface_driver = linuxbridge
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
[DEFAULT]
# Configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent
The metadata agent provides configuration information to virtual machine instances.
Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
[DEFAULT]
# Configure the metadata host and the shared secret
nova_metadata_host = 172.16.175.11
metadata_proxy_shared_secret = heleicool
# heleicool is the shared secret used between neutron and nova
Configure the Compute service (nova) to use the Networking service
Edit the /etc/nova/nova.conf file and complete the following actions:
[neutron]
# Configure access parameters, enable the metadata proxy, and configure the shared secret:
url = http://172.16.175.11:9696
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = heleicool
Finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it with the following command:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database; this step uses neutron.conf and ml2_conf.ini:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service, because its configuration file was modified:
systemctl restart openstack-nova-api.service
Start the Networking services and configure them to start when the system boots:
systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
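If you chose networking option 2 (self-service), the upstream guide also enables and starts the layer-3 agent at this point; without it the router created later will not work:
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service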
Install the Neutron compute node
Install the components:
yum install openstack-neutron-linuxbridge ebtables ipset -y
Configure the common components
The Networking common component configuration includes the authentication mechanism, the message queue, and the plug-in.
Edit the /etc/neutron/neutron.conf file and complete the following actions:
Comment out any connection options, because compute nodes do not directly access the database.
[DEFAULT]
# Configure RabbitMQ message queue access
transport_url = rabbit://openstack:openstack@172.16.175.11
# Configure Identity service access
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
# Configure the lock path
lock_path = /var/lib/neutron/tmp
Configure networking options
Choose the same networking option that you chose for the controller node to configure the services specific to it.
Provider networks
Configure the Linux bridge agent
The Linux bridge agent builds Layer 2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
[linux_bridge]
# Map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:eth0
[vxlan]
# Disable VXLAN overlay networks
enable_vxlan = false
[securitygroup]
# Enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Verify that all of the following sysctl values are set to 1, to ensure that your Linux kernel supports bridge filters:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the change:
sysctl -p
Self-service networks
Configure the Linux bridge agent
The Linux bridge agent builds Layer 2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
[linux_bridge]
# Map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:eth0
[vxlan]
# Enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay traffic, and enable layer-2 population
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
# Enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Verify that all of the following sysctl values are set to 1, to ensure that your Linux kernel supports bridge filters:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the change:
sysctl -p
Configure the Compute service (nova) to use the Networking service
Edit the /etc/nova/nova.conf file and complete the following actions:
[neutron]
# ...
url = http://172.16.175.11:9696
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Finalize the installation
Restart the Compute service:
systemctl restart openstack-nova-compute.service
Start the Linux bridge agent and configure it to start when the system boots:
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verify operation
Provider networks
List the agents to verify that they connected to neutron successfully:
openstack network agent list
Self-service networks
List the agents to verify that they connected to neutron successfully:
# There should be four agents: the metadata agent, Linux bridge agent, L3 agent, and DHCP agent
openstack network agent list
Launch an instance
Once all of the services above are working, you can create and launch a virtual machine.
Create the virtual networks
First create a virtual network, configured according to the networking option you chose when configuring Neutron.
Provider networks
Create the network:
source admin-openstack.sh
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat public
# --share allows all projects to use the virtual network
# --external defines the virtual network as external; if you want to create an internal network, use --internal instead. The default is internal.
# --provider-physical-network must match the flat_networks value configured in ml2_conf.ini.
# --provider-network-type flat sets the network type; public is the network name.
Create a subnet on the network:
openstack subnet create --network public \
  --allocation-pool start=172.16.175.100,end=172.16.175.250 \
  --dns-nameserver 172.16.175.2 --gateway 172.16.175.2 \
  --subnet-range 172.16.175.0/24 public
# --subnet-range is the subnet that provides the IP addresses, in CIDR notation
# start and end define the range of IP addresses allocated to instances
# --dns-nameserver specifies the DNS resolver IP address
# --gateway is the gateway address
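To confirm the network and subnet were created (optional):
openstack network list
openstack subnet list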
Self-service networks
Create the self-service network:
source admin-openstack.sh
openstack network create selfservice
Create a subnet on the network:
openstack subnet create --network selfservice \
  --dns-nameserver 8.8.8.8 --gateway 192.168.1.1 \
  --subnet-range 192.168.1.0/24 selfservice
Create a router:
source demo-openstack.sh
openstack router create router
Add the self-service network subnet as an interface on the router:
openstack router add subnet router selfservice
Set a gateway on the provider network on the router:
openstack router set router --external-gateway public
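To confirm the interface and gateway were attached (optional):
openstack router show router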
Verify operation
List the network namespaces. You should see one qrouter namespace and two qdhcp namespaces:
source demo-openstack.sh
ip netns
List the ports on the router to determine the gateway IP address on the provider network:
openstack port list --router router
Create an instance flavor
# Create a flavor named m1.nano that allocates 1 vCPU and 64 MB of RAM per instance
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Configure a key pair
# Generate a key pair
ssh-keygen -q -N ""
# Create a keypair named mykey in OpenStack from the public key
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# List the keypairs
openstack keypair list
Add security group rules
By default, the default security group applies to all instances.
# Allow ICMP (ping)
openstack security group rule create --proto icmp default
# Allow secure shell (SSH) access on port 22
openstack security group rule create --proto tcp --dst-port 22 default
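To review the rules now present in the default security group (optional):
openstack security group rule list default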
Launch an instance
Provider networks
Determine instance options
List the available flavors:
source demo-openstack.sh
openstack flavor list
List the available images:
openstack image list
List the available networks:
openstack network list
List the available security groups:
openstack security group list
Launch the instance:
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=PROVIDER_NET_ID --security-group default \
  --key-name mykey provider-instance
# PROVIDER_NET_ID is the ID of the public network. If your environment contains only one network, you can omit the --nic option because OpenStack automatically chooses the only available network.
Check the status of the instance:
openstack server list
Access the instance using the virtual console:
openstack console url show provider-instance
Self-service networks
Determine instance options
List the available flavors:
source demo-openstack.sh
openstack flavor list
List the available images:
openstack image list
List the available networks:
openstack network list
List the available security groups:
openstack security group list
Launch the instance:
# Replace SELFSERVICE_NET_ID with the ID of the selfservice network.
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=SELFSERVICE_NET_ID --security-group default \
  --key-name mykey selfservice-instance
Check the status of the instance:
openstack server list
Access the instance using the virtual console:
openstack console url show selfservice-instance
Install the Horizon service
The Horizon service runs on top of the Apache HTTP server and Memcached. I install it on the controller node, where both are already present; if you deploy it separately, you must install those services as well.
Install and configure the components
Install the package:
yum install openstack-dashboard -y
Edit the /etc/openstack-dashboard/local_settings file and complete the following actions:
# Configure the dashboard to use the OpenStack services on the controller node
OPENSTACK_HOST = "172.16.175.11"
# Configure the list of hosts allowed to access the dashboard
ALLOWED_HOSTS = ['*', 'two.example.com']
# Configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '172.16.175.11:11211',
    }
}
# Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
# Configure Default as the default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# Configure the default role for users created via the dashboard (myrole here)
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "myrole"
# If you chose networking option 1, disable support for layer-3 networking services
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
# Configure the time zone
TIME_ZONE = "Asia/Shanghai"
If /etc/httpd/conf.d/openstack-dashboard.conf does not already include the following line, add it:
WSGIApplicationGroup %{GLOBAL}
Finalize the installation
Restart the web server and the memcached session storage service:
systemctl restart httpd.service memcached.service
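As a quick, optional check (assuming /dashboard, the default URL path of the CentOS openstack-dashboard package), confirm that Horizon responds:
curl -I http://172.16.175.11/dashboard/
# an HTTP 200 or a redirect to the login page means the dashboard is up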
Done