A Detailed Guide to Deploying Community OpenStack Queens (with an appendix listing the components installed on each node)
I. Deployment Environment
Operating system:
CentOS 7
Kernel version:
[root@controller ~]# uname -m
x86_64
[root@controller ~]# uname -r
3.10.0-693.21.1.el7.x86_64
Node and NIC configuration
controller node
[root@controller ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
compute node
[root@compute ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
Cinder storage node
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
Note: this deployment builds the community OpenStack Queens environment by hand on three physical nodes.
II. OpenStack Overview
The OpenStack project is an open-source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services. Each service exposes an application programming interface (API) to facilitate integration.
This guide walks through deploying the major OpenStack services using a functional example architecture suitable for new OpenStack users with sufficient Linux experience. It is a minimal environment intended only for learning OpenStack.
III. OpenStack Architecture Overview
1. Conceptual architecture
The following diagram shows the relationships among the OpenStack services:
2. Logical architecture
The following diagram shows the most common, but not the only possible, architecture for an OpenStack cloud:
Sample chronyc sources output after configuring NTP time synchronization with chrony:
^* time4.aliyun.com 2 10 377 1015 +115us[ +142us] +/- 14ms
^* leontp.ccgs.wa.edu.au 1 10 377 752
^+ 61-216-153-104.HINET-IP.> 3 10 377 748 -3373us[-
Note: clock drift is a common problem in day-to-day operations and can cause split-brain in clustered services.
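To check time synchronization on any node, the following chrony commands can be used (a quick check, assuming chrony is the NTP client, as the output above suggests):
chronyc sources -v
chronyc tracking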
OpenStack Service Installation and Configuration
Note: unless otherwise stated, perform the following steps on all nodes.
1. Install the OpenStack package repository (Queens release)
yum install centos-release-openstack-queens -y
2. Upgrade the packages on all nodes
yum upgrade
3. Install the OpenStack client on the controller and compute nodes
yum install python-openstackclient -y
4. Install openstack-selinux
yum install openstack-selinux -y
Install the database (on the controller node)
Most OpenStack services use a SQL database to store information, and the database usually runs on the controller node. This guide uses MariaDB (MySQL-compatible).
Install the packages
yum install mariadb mariadb-server python2-PyMySQL -y
Edit /etc/my.cnf.d/mariadb-server.cnf and complete the following:
[root@controller ~]# vim /etc/my.cnf.d/mariadb-server.cnf
#
[server]
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
bind-address = 192.168.10.102
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Note: bind-address should be set to the controller node's management IP.
Enable the service to start on boot
systemctl enable mariadb.service
systemctl start mariadb.service
Secure the database service by running the mysql_secure_installation script. Password: 123456
[root@controller ~]# mysql_secure_installation
Thanks for using MariaDB!
Install and configure RabbitMQ on the controller node
1. Install the message queue component
yum install rabbitmq-server -y
2. Enable the service to start on boot
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
3. Add the openstack user (password: openstack)
rabbitmqctl add_user openstack openstack
4. Configure permissions for the openstack user
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Enable the RabbitMQ web management interface (controller node)
// Enable the rabbitmq_management plugin
// Add the openstack user (openstack is the password)
// Grant the openstack user read and write permissions
Visit http://192.168.0.17:15672 to reach the web management page.
If the page cannot be accessed, grant the user the required management tag (see the sketch below).
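The commands behind the comments above are not spelled out in the original; a minimal sketch using the standard RabbitMQ tooling and the same openstack/openstack credentials as before:
rabbitmq-plugins enable rabbitmq_management
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_user_tags openstack administrator    # required to log in to the management UI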
Install the Memcached cache (controller node)
Note: the Identity service authentication mechanism uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, enable a combination of firewalling, authentication, and encryption to secure it.
1. Install and configure the components
yum install memcached python-memcached -y
2. Edit /etc/sysconfig/memcached
vim /etc/sysconfig/memcached
OPTIONS="-l 10.71.11.12,::1,controller"
3. Enable the service to start on boot
systemctl enable memcached.service
systemctl start memcached.service
Install the etcd service (controller)
1. Install the package
yum install etcd -y
2. Edit /etc/etcd/etcd.conf and set the following options:
vim /etc/etcd/etcd.conf
ETCD_INITIAL_CLUSTER
ETCD_INITIAL_ADVERTISE_PEER_URLS
ETCD_ADVERTISE_CLIENT_URLS
ETCD_LISTEN_CLIENT_URLS
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.71.11.12:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.71.11.12:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.71.11.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.71.11.12:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.71.11.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
3. Enable the service to start on boot
systemctl enable etcd
systemctl start etcd
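A quick sanity check that etcd is up and answering (a sketch; the URL assumes the management IP configured above):
curl http://10.71.11.12:2379/version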
Install the Keystone component (controller)
1. Create the keystone database and grant privileges
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
2. Install and configure the components
yum install openstack-keystone httpd mod_wsgi -y
3. Edit /etc/keystone/keystone.conf
[database]    # around line 737
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]       # around line 2878
provider = fernet
4. Populate the Identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone
5. Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
6. Bootstrap the Identity service
keystone-manage bootstrap --bootstrap-password 123456 --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
Configure the Apache HTTP server
1. Edit /etc/httpd/conf/httpd.conf and set the ServerName option
ServerName controller
2. Create a symbolic link to /usr/share/keystone/wsgi-keystone.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
3. Enable the service to start on boot
systemctl enable httpd.service
systemctl restart httpd.service
Starting the service fails:
[root@controller ~]# systemctl start httpd.service
Investigation shows the failure is caused by SELinux.
Workaround: disable SELinux
[root@controller ~]# vi /etc/selinux/config
SELINUX=disabled
SELINUXTYPE=targeted
Restarting the service again now succeeds:
[root@controller ~]# systemctl enable httpd.service;systemctl start httpd.service
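Note that changes in /etc/selinux/config only take effect after a reboot; to turn SELinux off immediately for the running system, you can additionally run:
setenforce 0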
4. Configure the administrative account (temporary environment variables)
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
Create domains, projects, users, and roles
1. Create a domain
[root@controller ~]# openstack domain create --description "Domain" example
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Domain |
| enabled | True |
| id | 199658b1d0234c3cb8785c944aa05780 |
| name | example |
| tags | [] |
+-------------+----------------------------------+
2. Request an authentication token as the admin user (password 123456):
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
3. Request an authentication token as the demo user (password: demo)
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Create OpenStack client environment scripts
1. Create the admin-openrc script (vim admin-openrc)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2. Create the demo-openrc script (vim demo-openrc)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
3. Use the scripts: make them executable, source one, and request an authentication token to verify
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2018-04-01T08:17:29+0000 |
| id | gAAAAABawIeJ0z-3R2ltY6ublCGqZX80AIi4tQUxqEpw0xvPsFP9BLV8ALNsB2B7bsVivGB14KvhUncdoRl_G2ng5BtzVKAfzHyB-OxwiXeqAttkpQsuLCDKRHd3l-K6wRdaDqfNm-D1QjhtFoxHOTotOcjtujBHF12uP49TjJtl1Rrd6uVDk0g |
| project_id | 4205b649750d4ea68ff5bea73de0faae |
| user_id | 475b31138acc4cc5bb42ca64af418963 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Install the Glance service (controller)
1. Create the glance database and grant privileges
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
2. Source the admin credentials and create the service credentials
. admin-openrc
Create the glance user (password 123456)
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | dd2363d365624c998dfd788b13e1282b |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Add the admin role to the glance user and the service project
openstack role add --project service --user glance admin
Note: this command produces no output.
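To confirm the role assignment, it can be listed back (a verification step not shown in the original; the openstack client supports it):
openstack role assignment list --user glance --project service --names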
Create the glance service entity
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 5927e22c745449869ff75b193ed7d7c6 |
| name | glance |
| type | image |
+-------------+----------------------------------+
3. Create the Image service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0822449bf80f4f6897be5e3240b6bfcc |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | f18ae583441b4d118526571cdc204d8a |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 79eadf7829274b1b9beb2bfb6be91992 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
Install and configure the components
1. Install the package
yum install openstack-glance -y
2. Edit /etc/glance/glance-api.conf
[database]            # around line 1924
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]  # around line 3472
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
[glance_store]        # around line 2039
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
3. Edit /etc/glance/glance-registry.conf
[database]            # around line 1170
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]  # around line 1285
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]        # around line 2272
flavor = keystone
4. Populate the Image service database
su -s /bin/sh -c "glance-manage db_sync" glance
5. Enable the services to start on boot
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
Verify operation
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment.
For more information about downloading and building images, see the OpenStack Virtual Machine Image Guide: https://docs.openstack.org/image-guide/
For information about managing images, see the OpenStack End User Guide: https://docs.openstack.org/queens/user/
1. Source the admin credentials and download the image
. admin-openrc
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
2. Upload the image
Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it:
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
(output listing the new image's properties omitted)
3. View the uploaded image
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 916faa2b-e292-46e0-bfe4-0f535069a1a0 | cirros | active |
+--------------------------------------+--------+--------+
Note: detailed Glance configuration options: https://docs.openstack.org/glance/queens/configuration/index.html
Install and configure the Compute service on the controller node
1. Create the nova_api, nova, and nova_cell0 databases
mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
Grant access to the databases
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
2. Create the nova user (password 123456)
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8e72103f5cc645669870a630ffb25065 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
3. Add the admin role to the nova user
openstack role add --project service --user nova admin
4. Create the nova service entity
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 9f8f8d8cb8e542b09694bee6016cc67c |
| name | nova |
| type | compute |
+-------------+----------------------------------+
5. Create the Compute API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | cf260d5a56344c728840e2696f44f9bc |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | f308f29a78e04b888c7418e78c3d6a6d |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 022d96fa78de4b73b6212c09f13d05be |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
Create a placement service user (password 123456)
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fa239565fef14492ba18a649deaa6f3c |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
6. Add the admin role to the placement user in the service project
openstack role add --project service --user placement admin
7. Create the Placement API entry in the service catalog
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 32bb1968c08747ccb14f6e4a20cd509e |
| name | placement |
| type | placement |
+-------------+----------------------------------+
8. Create the Placement API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b856962188484f4ba6fad500b26b00ee |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 62e5a3d82a994f048a8bb8ddd1adc959 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | f12f81ff7b72416aa5d035b8b8cc2605 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
Install and configure the components
1. Install the packages
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
2. Edit /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 10.71.11.12
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[database]
connection = mysql+pymysql://nova:nova@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456
3. Due to a packaging bug, add the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
4. Restart the httpd service
systemctl restart httpd
5. Populate the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
The database sync fails with an error:
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
Traceback (most recent call last):
File "/usr/bin/nova-manage", line 10, in <module>
sys.exit(main())
File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1597, in main
config.parse_args(sys.argv)
File "/usr/lib/python2.7/site-packages/nova/config.py", line 52, in parse_args
default_config_files=default_config_files)
File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2502, in call
else sys.argv[1:])
File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3166, in _parse_cli_opts
return self._parse_config_files()
File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3183, in _parse_config_files
ConfigParser._parse_file(config_file, namespace)
File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1950, in _parse_file
raise ConfigFileParseError(pe.filename, str(pe))
oslo_config.cfg.ConfigFileParseError: Failed to parse /etc/nova/nova.conf: at /etc/nova/nova.conf:8, No ':' or '=' found in assignment: '/etc/nova/nova.conf'
Based on the error, comment out line 8 of /etc/nova/nova.conf; the sync then succeeds:
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
6. Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
7. Create the cell1 cell
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
6c689e8c-3e13-4e6d-974c-c2e4e22e510b
8. Populate the nova database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
/usr/lib/python2.7/site-packages/pymysql/cursors.py:165: Warning: (1831, u'Duplicate index block_device_mapping_instance_uuid_virtual_name_device_name_idx
. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:165: Warning: (1831, u'Duplicate index uniq_instances0uuid
. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
9. Verify that the nova cell0 and cell1 cells are registered correctly
[root@controller ~]# nova-manage cell_v2 list_cells
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| Name | UUID | Transport URL | Database Connection |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | 6c689e8c-3e13-4e6d-974c-c2e4e22e510b | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
10. Enable the services to start on boot
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Install and configure the Compute service on the compute node
1. Install the package
yum install openstack-nova-compute -y
2. Edit /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 10.71.11.13
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456
3. Enable the services to start on boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Note: if the nova-compute service fails to start, check /var/log/nova/nova-compute.log; you will see errors like the following:
2018-04-01 12:03:43.362 18612 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge
2018-04-01 12:03:43.431 18612 WARNING oslo_config.cfg [-]
The error message "AMQP server on controller:5672 is unreachable" likely means the firewall on the controller node is blocking access to port 5672. Configure the firewall to open port 5672 on the controller node, then restart the nova-compute service on the compute node.
Flush the firewall rules on the controller:
[root@controller ~]# iptables -F
[root@controller ~]# iptables -X
[root@controller ~]# iptables -Z
Restarting the compute service now succeeds.
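Flushing all iptables rules is a blunt fix; a narrower alternative is to open only the required port (a sketch, assuming firewalld is managing the firewall on the controller):
firewall-cmd --add-port=5672/tcp --permanent
firewall-cmd --reload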
4. Add the compute node to the cell database (on the controller)
Verify how many compute nodes are in the database:
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+---------+------+---------+-------+----------------------------+
| 8 | nova-compute | compute | nova | enabled | up | 2018-04-01T22:24:14.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+
5. Discover the compute nodes
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': 6c689e8c-3e13-4e6d-974c-c2e4e22e510b
Found 1 unmapped computes in cell: 6c689e8c-3e13-4e6d-974c-c2e4e22e510b
Checking host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6bd
Creating host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6b
Verify operation of the Compute service on the controller node
1. List the service components
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list
+----+------------------+----------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host           | Zone     | Status  | State | Updated At                 |
+----+------------------+----------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller     | internal | enabled | up    | 2018-04-01T22:25:29.000000 |
|  2 | nova-conductor   | controller     | internal | enabled | up    | 2018-04-01T22:25:33.000000 |
|  3 | nova-scheduler   | controller     | internal | enabled | up    | 2018-04-01T22:25:30.000000 |
|  6 | nova-conductor   | ansible-server | internal | enabled | up    | 2018-04-01T22:25:55.000000 |
|  7 | nova-scheduler   | ansible-server | internal | enabled | up    | 2018-04-01T22:25:59.000000 |
|  8 | nova-compute     | compute        | nova     | enabled | up    | 2018-04-01T22:25:34.000000 |
|  9 | nova-consoleauth | ansible-server | internal | enabled | up    | 2018-04-01T22:25:57.000000 |
+----+------------------+----------------+----------+---------+-------+----------------------------+
2. List the API endpoints in the Identity service to verify connectivity to the Identity service:
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| placement | placement | RegionOne                               |
|           |           |   internal: http://controller:8778     |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778        |
| keystone  | identity  | RegionOne                               |
|           |           |   public: http://controller:5000/v3/   |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/   |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/ |
| glance    | image     | RegionOne                               |
|           |           |   public: http://controller:9292       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292     |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1|
+-----------+-----------+-----------------------------------------+
3. List images
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 916faa2b-e292-46e0-bfe4-0f535069a1a0 | cirros | active |
+--------------------------------------+--------+--------+
4. Check that the cells and the placement API are working properly
[root@controller ~]# nova-status upgrade check
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
exception.NotSupportedWarning
Option "os_region_name" from group "placement" is deprecated. Use option "region-name" from group "placement".
+---------------------------+
| Upgrade Check Results |
+---------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+---------------------------+
Nova reference: https://docs.openstack.org/nova/queens/admin/index.html
Install and configure Neutron networking on the controller node
1. Create the neutron database and grant privileges
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
2. Create the service credentials
. admin-openrc
Create the neutron user (password 123456):
openstack user create --domain default --password-prompt neutron
Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
Create the neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
3. Create the network service endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Configure the networking components (controller node)
1. Install the components
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
2. Configure the server component; edit /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron    # the password must match the GRANT above
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:openstack@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
Edit /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
Configure the Linux bridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37
[vxlan]
enable_vxlan = false    # if set to true, also add the two lines below
l2_population = true
local_ip = 192.168.10.18
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the layer-3 agent
[root@controller ~]# vim /etc/neutron/l3_agent.ini
interface_driver = linuxbridge
Configure the DHCP agent
Edit /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent
Edit /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456
Configure the Compute service to use the Networking service
Edit /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
Finalize the installation
1. Create a symbolic link to the ML2 plug-in configuration
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2. Populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
3. Restart the Compute API service
systemctl restart openstack-nova-api.service
4. Enable the networking services to start on boot
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Configure networking on the compute node
1. Install the components
yum -y install openstack-neutron-linuxbridge ebtables ipset
2. Configure the common component
Edit /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure networking
1. Configure the Linux bridge agent; edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens6f0
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the Compute service to use the Networking service
Edit /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
Finalize the installation
1. Restart the Compute service
systemctl restart openstack-nova-compute.service
2. Enable the Linux bridge agent to start on boot
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verification
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack extension list --network
[root@controller ~]# openstack network agent list
Install the Horizon service on the controller node
1. Install the package
yum install openstack-dashboard -y
Edit /etc/openstack-dashboard/local_settings and make the following changes:
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
Configure memcached session storage
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
Configure the API versions
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
If you chose networking option 1 (provider networks), disable support for layer-3 networking services:
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
To prevent the server from returning a 500 error, add the following:
[root@controller ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIProcessGroup %{Global}
2. Finalize the installation: restart the web server and the session storage service
systemctl restart httpd.service memcached.service
Open http://10.71.11.12/dashboard in a browser to access the OpenStack web UI and log in with:
Domain: default
User: admin
Password: 123456
Install and configure Cinder on the controller node (commands reproduced from the shell history):
mysql -u root -p123456
354 source admin-openrc
357 openstack user create --domain default --password-prompt cinder
358 openstack role add --project service --user cinder admin
359 openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
360 openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
361 openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
362 openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
363 openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
364 openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v2/%\(project_id\)s
365 openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v2/%\(project_id\)s
366 openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v2/%\(project_id\)s
367 yum install openstack-cinder python-keystone -y
368 vim /etc/cinder/cinder.conf
369 clear
370 su -s /bin/sh -c "cinder-manage db sync" cinder
371 mysql -uroot -p123456 -e "use cinder;show tables;"
372 clear
373 vim /etc/nova/nova.conf
374 systemctl restart openstack-nova-api.service
375 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
376 systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
377 history
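The history above omits the SQL statements and the configuration edits themselves; a minimal sketch of the missing pieces, assuming the same conventions used elsewhere in this guide (database and user named cinder, password 123456, controller management IP 10.71.11.12):
# Database creation and grants (run inside mysql -u root -p123456):
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
# Key settings in /etc/cinder/cinder.conf on the controller:
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 10.71.11.12
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
# And in /etc/nova/nova.conf on the controller:
[cinder]
os_region_name = RegionOne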
Install and configure the Cinder storage node
This section describes how to install and configure a storage node for the Block Storage service. For simplicity, this configuration references one storage node with an empty local block storage device.
The service provisions logical volumes on this device using the LVM driver and provides them to instances via iSCSI transport. You can follow these instructions with minor modifications to horizontally scale your environment with additional storage nodes.
1. Install the supporting packages
Install LVM
yum install lvm2 device-mapper-persistent-data
Enable the LVM metadata service to start on boot
systemctl enable lvm2-lvmetad.service
systemctl restart lvm2-lvmetad.service
2. Create the LVM physical volume on /dev/sdb1
[root@cinder ~]# pvcreate /dev/sdb1
Device /dev/sdb not found (or ignored by filtering).
Solution:
Edit /etc/lvm/lvm.conf, find the global_filter line, and configure it as follows:
global_filter = [ "a|.*/|","a|sdb1|"]
Then run the pvcreate command again; the problem is resolved.
[root@cinder ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
3. Create the cinder-volumes volume group
[root@cinder ~]# vgcreate cinder-volumes /dev/sdb1
Volume group "cinder-volumes" successfully created
4. Install and configure the components
Install the packages
yum install openstack-cinder targetcli python-keystone -y
Edit /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 10.71.11.14
enabled_backends = lvm
glance_api_servers = http://controller:9292
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
In the [lvm] section, configure the LVM back end with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the corresponding iSCSI service. If the [lvm] section does not exist, create it:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Enable the storage services to start on boot
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
Verify on the controller node
source admin-openrc
openstack volume service list
V. Log in to the Dashboard
The community Queens web UI shows three panels:
• Project
• Admin
• Identity
VI. Upload an image from the command line
2. Upload the native ISO image, declaring the qcow2 disk format (see the note below on real conversion)
[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare --public --file /root/CentOS-7-x86_64-Minimal-1708.iso CentOS-7-x86_64
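Note that the openstack image create command above only uploads the ISO and labels it qcow2; it does not actually convert the format. If a real conversion is wanted, a sketch using qemu-img (assuming the qemu-img tool is installed on the controller):
qemu-img convert -f raw -O qcow2 /root/CentOS-7-x86_64-Minimal-1708.iso /root/CentOS-7-x86_64.qcow2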
3. View the information for the created image
VII. Workflow for creating a virtual machine
1. Create the provider network; key parameters (the full command is sketched after this list):
--share  allows all projects to use the virtual network
--external  defines an external virtual network (use --internal to create an internal network)
--provider-physical-network provider and --provider-network-type flat  connect to the flat provider physical network
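The network creation command itself is missing from the original; a sketch consistent with the flags above and with the "provider" network referenced by the subnet command below:
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider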
2. Create a subnet
openstack subnet create --network provider --allocation-pool start=10.71.11.50,end=10.71.11.60 --dns-nameserver 114.114.114.114 --gateway 10.71.11.254 --subnet-range 10.71.11.0/24 provider
3. Create a flavor
openstack flavor create --id 1 --vcpus 4 --ram 128 --disk 1 m2.nano
4. Generate a key pair on the controller node; before launching an instance, add the public key to the Compute service
. demo-openrc
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub liukey
5. Add a security group rule to allow ICMP (ping)
openstack security group rule create --proto icmp default
6. Allow secure shell (SSH) access
openstack security group rule create --proto tcp --dst-port 22 default
7. List flavors
openstack flavor list
8. List available images
9. List networks
10. List security groups
11. Create the virtual machine
12. Check the instance status (the commands for steps 8-12 are sketched below)
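The original lists steps 8-12 without the commands; a minimal sketch, assuming the names created earlier in this guide (image cirros, flavor m2.nano, key pair liukey) and a hypothetical instance name vm1; the network ID must be taken from the openstack network list output:
openstack image list
openstack network list
openstack security group list
openstack server create --flavor m2.nano --image cirros --nic net-id=<provider-network-id> --security-group default --key-name liukey vm1
openstack server list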
Components installed on the controller node (shell history):
78 yum install centos-release-openstack-queens -y
79 yum install python-openstackclient -y
80 yum install openstack-selinux -y
81 yum install mariadb mariadb-server python2-PyMySQL -y
82 yum install rabbitmq-server -y
83 yum install memcached python-memcached -y
84 yum install etcd -y
85 yum install openstack-keystone httpd mod_wsgi -y
86 yum install openstack-glance -y
87 yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
88 yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
89 yum install openstack-dashboard -y
90 yum install openstack-cinder -y
Components installed on the compute node (shell history):
75 yum install centos-release-openstack-queens -y
76 yum install python-openstackclient -y
77 yum install openstack-selinux -y
78 yum install openstack-nova-compute
81 yum install openstack-neutron-linuxbridge ebtables ipset
89 yum -y install libvirt*   ## install this first, otherwise the nova-compute installation fails
91 yum install -y openstack-nova-compute
Components installed on the storage node (shell history):
53 yum install centos-release-openstack-queens -y
54 yum -y install lvm2 openstack-cinder targetcli python-keystone
Connect from a client using VNC
[root@192 ~]# yum -y install vnc
[root@192 ~]# yum -y install vncview
[root@192 ~]# vncviewer 192.168.0.19:5901