4.4 Compute Service Configuration (Nova)
Deployment node: Controller Node
On the Controller node, install nova-api, nova-conductor, nova-consoleauth, nova-novncproxy, and nova-scheduler. First, create the nova_api and nova databases in MariaDB (MySQL):
mysql -u root -p123456

CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'novaapi';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'novaapi';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
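Optionally, confirm that the nova database user can reach both databases before continuing; a small sanity check, assuming the example passwords novaapi and nova granted above:

mysql -u nova -pnovaapi -h controller -e "SHOW DATABASES;" | grep nova_api
mysql -u nova -pnova -h controller -e "SHOW DATABASES;" | grep -w nova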
Create the Compute service credentials and API endpoints:

openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
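To confirm that the service and its endpoints were registered (IDs and URLs will vary with your environment):

openstack service list
openstack endpoint list | grep compute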
Install the compute service components
① Install the Nova packages
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
② Edit the configuration file: sudo vi /etc/nova/nova.conf
In [DEFAULT], enable only the compute and metadata APIs by changing
#enabled_apis=osapi_compute,metadata
to
enabled_apis = osapi_compute,metadata
In [api_database] and [database], configure database access (add the sections manually if they are not present).
Note: replace NOVA_DBPASS with the actual password chosen earlier.
[api_database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
In [DEFAULT] and [oslo_messaging_rabbit], configure RabbitMQ message queue access.
Note: replace RABBIT_PASS with the actual password chosen earlier.
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
In [DEFAULT] and [keystone_authtoken], configure Identity service access.
Note: replace NOVA_PASS with the actual password chosen earlier.
Note: comment out or remove any other content in [keystone_authtoken].
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
In [DEFAULT], set my_ip to the Management Network interface address of the Controller node:
my_ip = 10.0.0.11
In [DEFAULT], enable support for the Networking service.
Note: by default, Compute uses an internal firewall driver; since the Networking service includes its own firewall driver, disable the Compute firewall driver with nova.virt.firewall.NoopFirewallDriver.
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In [vnc], configure the VNC proxy to use the Management Network interface address of the Controller node:
[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
In [glance], configure the Image service API location:
[glance]
...
api_servers = http://controller:9292
In [oslo_concurrency], configure lock_path:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
Populate the Compute service databases:
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
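If the sync succeeded, both databases now contain tables; a quick check, assuming the example passwords from the database-creation step:

mysql -u nova -pnovaapi -h controller -e "SHOW TABLES;" nova_api | head
mysql -u nova -pnova -h controller -e "SHOW TABLES;" nova | head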
Enable and start the Compute services:
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Deployment node: Compute Node
On the Compute node, install nova-compute.
Note: the following steps are performed on the Compute node.
Install and configure the compute service components
Install the nova-compute package:
yum install openstack-nova-compute
Edit the configuration file: sudo vi /etc/nova/nova.conf
① In [DEFAULT] and [oslo_messaging_rabbit], configure RabbitMQ message queue access.
Note: replace RABBIT_PASS with the actual password chosen earlier.
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
② In [DEFAULT] and [keystone_authtoken], configure Identity service access.
Note: replace NOVA_PASS with the actual password chosen earlier.
Note: comment out or remove any other content in [keystone_authtoken].
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
In [DEFAULT], set my_ip to the Management Network interface address of the Compute node:
my_ip=10.0.0.31
In [DEFAULT], enable support for the Networking service:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In [vnc], configure remote console access:
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
Note: the VNC server listens on all addresses, the VNC proxy client listens only on the Compute node's Management Network interface address, and the base URL is the address a browser uses to reach the remote consoles of instances on this Compute node (if the browser cannot resolve controller, replace it with the corresponding IP address).
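Once an instance exists (instance creation is covered later), this VNC setup can be exercised from the Controller node; a minimal sketch, assuming a hypothetical instance named demo-instance:

openstack console url show --novnc demo-instance

The command should return a URL under http://controller:6080/vnc_auto.html that opens the instance console in a browser.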
In [glance], configure the Image service API location:
api_servers = http://controller:9292
In [oslo_concurrency], configure lock_path:
lock_path = /var/lib/nova/tmp
Finish the installation and start the Compute service
① Check whether the node supports hardware acceleration for virtual machines:
egrep -c '(vmx|svm)' /proc/cpuinfo
If the result is 1 or greater, hardware acceleration is supported and no extra configuration is needed.
If the result is 0, hardware acceleration is not supported; in that case edit the [libvirt] section of sudo vi /etc/nova/nova.conf so that QEMU is used instead of KVM:
[libvirt]
virt_type = qemu
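The same change can be made non-interactively; a minimal sketch, assuming the crudini utility (shipped as the crudini package in the CentOS/RDO repositories) is acceptable in your environment:

yum install -y crudini
crudini --set /etc/nova/nova.conf libvirt virt_type qemu
# print the value back to confirm it is now qemu
crudini --get /etc/nova/nova.conf libvirt virt_type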
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
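As an optional quick check that both units are running on the Compute node (systemctl prints "active" for each running unit):

systemctl is-active libvirtd.service openstack-nova-compute.service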
Verify that the Compute service is installed correctly
Note: the following steps are performed on the Controller node.
① Set the OpenStack admin user environment variables:
source admin-openrc
② List the service components to verify that each process started and registered successfully:
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2016-09-03T09:29:56.000000 |
|  2 | nova-conductor   | controller | internal | enabled | up    | 2016-09-03T09:29:56.000000 |
|  3 | nova-scheduler   | controller | internal | enabled | up    | 2016-09-03T09:29:56.000000 |
|  7 | nova-compute     | compute    | nova     | enabled | up    | 2016-09-03T09:29:56.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
4.5 Networking Service Configuration (Neutron)
Deployment node: Controller Node
Create the neutron database in MariaDB (MySQL):
mysql -u root -p

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Create the Networking service credentials and API endpoints:
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Install and configure the neutron-server service components:
yum install openstack-neutron openstack-neutron-ml2
Edit the configuration file sudo vi /etc/neutron/neutron.conf:
vi /etc/neutron/neutron.conf

[database]
connection = mysql://neutron:neutron@controller/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
verbose = True

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the ML2 plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure (bridging and switching) for OpenStack instances. Edit the configuration file sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini:

vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan        # once ML2 is configured, removing values from this option can cause database inconsistency
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security     # enable the port security extension driver

[ml2_type_flat]
flat_networks = public                # the provider virtual network is a flat network

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True                   # enable ipset to improve the efficiency of security group rules
Populate the neutron database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
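To confirm that the migration was applied, neutron-db-manage can report the current schema revision; a small check, assuming the same configuration files as above:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron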
Configure the Compute service on the Controller node to use the Networking service:
vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = metadata
Create a symbolic link to the plug-in configuration file:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Restart the services:
systemctl restart openstack-nova-api.service
systemctl restart neutron-server.service
systemctl start neutron-metadata-agent.service
systemctl enable neutron-server.service
systemctl enable neutron-metadata-agent.service
Deployment node: Network Node
Components to deploy on the Network node:
There are two networking service deployment architectures, Provider Networks and Self-Service Networks, briefly introduced at the beginning of this document. This document uses the Self-Service Networks deployment.
Reference: Deploy Networking Service using the Architecture of Self-Service Networks
yum install openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure the common service components
The common component configuration covers the authentication mechanism and the message queue. Edit the configuration file sudo vi /etc/neutron/neutron.conf:
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

Configure the Linux bridge agent
The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and can also handle security groups.
Edit the configuration file sudo vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# note: use the actual name of the bridged physical network interface
[linux_bridge]
physical_interface_mappings = public:eth0

[vxlan]
enable_vxlan = True
local_ip = 10.0.0.21        # Management Network interface address of this Network node
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
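Before starting the agents, it is worth confirming that the interface name and address used above actually exist on this node; a quick check, assuming the example values eth0 and 10.0.0.21:

ip link show eth0
ip addr show | grep 10.0.0.21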
Configure the layer-3 (L3) agent
The L3 (layer-3) agent provides routing and NAT services for self-service networks.
Edit the configuration file sudo vi /etc/neutron/l3_agent.ini and, in [DEFAULT], configure the Linux bridge interface driver and the external network bridge.
vi /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
verbose = True

Note: external_network_bridge is intentionally left empty so that multiple external networks can share a single agent.
Edit the configuration file sudo vi /etc/neutron/dhcp_agent.ini and, in [DEFAULT], configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach metadata over the virtual network.
vi /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
Configure the metadata agent
The metadata agent provides configuration information, such as credentials, to instances.
Edit the configuration file sudo vi /etc/neutron/metadata_agent.ini and, in [DEFAULT], configure the metadata host and the shared secret.
Note: replace METADATA_SECRET with the actual password chosen earlier.
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = metadata
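The shared secret here must match metadata_proxy_shared_secret in the [neutron] section of /etc/nova/nova.conf on the Controller node (set to metadata earlier); a quick consistency check:

grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini    # on the Network node
grep metadata_proxy_shared_secret /etc/nova/nova.conf                # on the Controller node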
Start the Networking agents and enable them at boot:
systemctl start neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl enable neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
Deployment node: Compute Node
Install the networking service components
[root@compute ~]# yum install openstack-neutron-linuxbridge
Configure the common components
The common component configuration covers the authentication mechanism, the message queue, and the plug-in.
[root@compute ~]# cat /etc/neutron/neutron.conf

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
Configure the networking options
Configure the Linux bridge agent. Edit the configuration file sudo vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[root@compute ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = public:eth0

[vxlan]
enable_vxlan = True
local_ip = 10.0.0.31
l2_population = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the Compute service to use the Networking service
Edit the configuration file sudo vi /etc/nova/nova.conf:
[root@compute ~]# vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the services:
systemctl restart openstack-nova-compute.service
systemctl restart neutron-linuxbridge-agent.service
systemctl enable neutron-linuxbridge-agent.service
Verification
[root@controller ~]# neutron ext-list
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 0e1c9f6f-a56b-40d1-b43e-91754cabcf75 | Metadata agent     | network    |                   | :-)   | True           | neutron-metadata-agent    |
| 24c8daec-b495-48ba-b70d-f7d103c8cda1 | Linux bridge agent | compute    |                   | :-)   | True           | neutron-linuxbridge-agent |
| 2e93bf03-e095-444d-8f74-0b832db4a0be | Linux bridge agent | network    |                   | :-)   | True           | neutron-linuxbridge-agent |
| 456c754a-d2c0-4ce5-8d9b-b0089fb77647 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| 8a1c7895-fc44-407f-b74b-55bb1b4519d8 | DHCP agent         | network    | nova              | :-)   | True           | neutron-dhcp-agent        |
| 93ad18bf-d961-4d00-982c-6c617dbc0a5e | L3 agent           | network    | nova              | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
4.6 Dashboard Service Configuration (Horizon)
The dashboard is a web interface that allows cloud administrators and users to manage OpenStack resources and services. This document deploys the Dashboard service with the Apache web server.
Deployment node: Controller Node
yum install openstack-dashboard
Edit the configuration file sudo vim /etc/openstack-dashboard/local_settings:
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"p_w_picpath": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
TIME_ZONE = "TIME_ZONE"
systemctl restart httpd.service memcached.service
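As a simple reachability check from any host that can resolve controller (assuming the default /dashboard web root used by the RDO openstack-dashboard package), the following should print 200 or a 30x redirect code:

curl -s -o /dev/null -w "%{http_code}\n" http://controller/dashboard/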