How to Use Ceph Backend Storage with Kolla-Ansible

Published: 2021-12-17 09:37:44  Source: 億速云  Views: 215  Author: 小新  Section: Cloud Computing

This article walks through configuring Kolla-Ansible to use a Ceph cluster as the backend store for Glance, Cinder, and Nova. It is a practical, hands-on reference; follow along with the steps below.

Configuring Ceph

  • Log in as the osdev user:

$ ssh osdev@osdev01
$ cd /opt/ceph/deploy/

Creating Pools

Creating the image pool
  • Used to store Glance images:

$ ceph osd pool create images 32 32
pool 'images' created
Creating the volume pools
  • Used to store Cinder volumes:

$ ceph osd pool create volumes 32 32
pool 'volumes' created
  • Used to store Cinder volume backups:

$ ceph osd pool create backups 32 32
pool 'backups' created
Creating the VM pool
  • Used to store virtual machine system disks:

$ ceph osd pool create vms 32 32
pool 'vms' created
Listing the pools
$ ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
6 rbd
8 images
9 volumes
10 backups
11 vms
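The `32 32` arguments to `ceph osd pool create` are the placement-group counts (pg_num and pgp_num). As a rough sanity check on the choice of 32 per pool, a common rule of thumb targets about 100 PGs per OSD divided by the replica count, rounded up to a power of two, then split across the pools. A minimal sketch, with OSD, replica, and pool counts assumed to match this three-node lab:

```shell
# Hypothetical PG-count heuristic: ~100 PGs per OSD, divided by the
# replica count, rounded up to the next power of two, then shared
# across the pools. The values below are assumptions for a 3-OSD,
# 3-replica cluster with 4 OpenStack pools.
osds=3; replicas=3; target_per_osd=100; pools=4
raw=$(( osds * target_per_osd / replicas ))      # total PG budget: 100
total=1
while [ "$total" -lt "$raw" ]; do total=$(( total * 2 )); done
echo "total PG budget (power of two): $total"
echo "per-pool suggestion: $(( total / pools ))"
```

For these assumed values the per-pool suggestion works out to 32, consistent with the commands above; real sizing depends on your cluster and should be checked against the Ceph documentation.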

Creating Users

Listing users
  • List all users:

$ ceph auth list
installed auth entries:

mds.osdev01
	key: AQCabn5b18tHExAAkZ6Aq3IQ4/aqYEBBey5O3Q==
	caps: [mds] allow
	caps: [mon] allow profile mds
	caps: [osd] allow rwx
mds.osdev02
	key: AQCbbn5bcq4yJRAAUfhoqPNfyp2m/ORu/7vHBA==
	caps: [mds] allow
	caps: [mon] allow profile mds
	caps: [osd] allow rwx
mds.osdev03
	key: AQCcbn5bTAIdORAApGu9NJvC3AmS+L3EWXLMdw==
	caps: [mds] allow
	caps: [mon] allow profile mds
	caps: [osd] allow rwx
osd.0
	key: AQCyJH5bG2ZBHRAAsDaLHcoOxv/mLCHwITA7JQ==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.1
	key: AQDTJH5bjvQ8HxAA4cyLttvZwiqFq1srFoSXWg==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.2
	key: AQD9JH5bbPi6IRAA7DbwaCh6JBaa6RfWPoe9VQ==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
client.admin
	key: AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
	caps: [mds] allow *
	caps: [mgr] allow *
	caps: [mon] allow *
	caps: [osd] allow *
client.bootstrap-mds
	key: AQA1In5boIRwGBAAgj5OccvTGYkuB+btlgL0BQ==
	caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
	key: AQA1In5bS6pwGBAA379v3LXJrdURLmA1gnTaLQ==
	caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
	key: AQA1In5bnMpwGBAAXohUfa4rGS0Rd2weMl4dPg==
	caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
	key: AQA1In5buelwGBAANQSalrSzH3yslSc4rYPu1g==
	caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rgw
	key: AQA1In5b0ghxGBAAIGK3WmBSkKZMnSEfvnEQow==
	caps: [mon] allow profile bootstrap-rgw
client.rgw.osdev01
	key: AQDZbn5b6aChEBAAzRuX4UWlxyws+aX1i+D26Q==
	caps: [mon] allow rw
	caps: [osd] allow rwx
client.rgw.osdev02
	key: AQDabn5bypCDJBAAt18L5ppG5lEg6NkGQLYs5w==
	caps: [mon] allow rw
	caps: [osd] allow rwx
client.rgw.osdev03
	key: AQDbbn5bbEVNNBAArX+/AKQu9q3hCRn/05Ya3A==
	caps: [mon] allow rw
	caps: [osd] allow rwx
mgr.osdev01
	key: AQDPIn5beqPTORAAEzcX3fMCCclLR2RiPyvugw==
	caps: [mds] allow *
	caps: [mon] allow profile mgr
	caps: [osd] allow *
mgr.osdev02
	key: AQDRIn5bLRVqDxAA/yWXO8pX6fQynJNyCcoNww==
	caps: [mds] allow *
	caps: [mon] allow profile mgr
	caps: [osd] allow *
mgr.osdev03
	key: AQDSIn5bGyrhHxAAvtAEOveovRxmdDlF45i2Cg==
	caps: [mds] allow *
	caps: [mon] allow profile mgr
	caps: [osd] allow *
  • View a specific user:

$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
	key = AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
Creating the Glance user
  • Create the glance user and grant it access to the images pool:

$ ceph auth get-or-create client.glance
[client.glance]
	key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==

$ ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=images'
updated caps for client.glance
  • View the glance user and save its keyring file:

$ ceph auth get client.glance
exported keyring for client.glance
[client.glance]
	key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
	caps mon = "allow r"
	caps osd = "allow rwx pool=images"

$ ceph auth get client.glance -o /opt/ceph/deploy/ceph.client.glance.keyring
exported keyring for client.glance
Creating the Cinder users
  • Create the cinder-volume user and grant it access to the volumes pool:

$ ceph auth get-or-create client.cinder-volume
[client.cinder-volume]
	key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==

$ ceph auth caps client.cinder-volume mon 'allow r' osd 'allow rwx pool=volumes'
updated caps for client.cinder-volume
  • View and save the cinder-volume user's keyring file:

$ ceph auth get client.cinder-volume
exported keyring for client.cinder-volume
[client.cinder-volume]
	key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
	caps mon = "allow r"
	caps osd = "allow rwx pool=volumes"

$ ceph auth get client.cinder-volume -o /opt/ceph/deploy/ceph.client.cinder-volume.keyring
exported keyring for client.cinder-volume
  • Create the cinder-backup user and grant it access to the volumes and backups pools:

$ ceph auth get-or-create client.cinder-backup
[client.cinder-backup]
	key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==

$ ceph auth caps client.cinder-backup mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=backups'
updated caps for client.cinder-backup
  • View and save the cinder-backup user's keyring file:

$ ceph auth get client.cinder-backup
exported keyring for client.cinder-backup
[client.cinder-backup]
	key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
	caps mon = "allow r"
	caps osd = "allow rwx pool=volumes, allow rwx pool=backups"

$ ceph auth get client.cinder-backup -o /opt/ceph/deploy/ceph.client.cinder-backup.keyring
exported keyring for client.cinder-backup
Creating the Nova user
  • Create the nova user and grant it access to the vms pool:

$ ceph auth get-or-create client.nova
[client.nova]
	key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==

$ ceph auth caps client.nova mon 'allow r' osd 'allow rwx pool=vms'
updated caps for client.nova
  • View and save the nova user's keyring file:

$ ceph auth get client.nova
exported keyring for client.nova
[client.nova]
	key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
	caps mon = "allow r"
	caps osd = "allow rwx pool=vms"

$ ceph auth get client.nova -o /opt/ceph/deploy/ceph.client.nova.keyring
exported keyring for client.nova
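Each exported keyring is a small INI-style file; the services use its `key` line to authenticate against the cluster via cephx. As a sketch of the format, the snippet below recreates a mock copy of the glance keyring shown above (under /tmp, purely for illustration) and pulls out the key with awk:

```shell
# Mock keyring in the same format as the files exported above;
# the awk one-liner extracts the base64 cephx key from the "key =" line.
cat > /tmp/ceph.client.glance.keyring <<'EOF'
[client.glance]
	key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
	caps mon = "allow r"
	caps osd = "allow rwx pool=images"
EOF
key=$(awk -F' = ' '$1 ~ /key/ {print $2; exit}' /tmp/ceph.client.glance.keyring)
echo "$key"
```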

Configuring Kolla-Ansible

  • Log in to the osdev01 deployment node as root and set the environment variables:

$ ssh root@osdev01
$ export KOLLA_ROOT=/opt/kolla
$ cd ${KOLLA_ROOT}/myconfig

Global configuration

  • Edit globals.yml to disable Kolla's own Ceph deployment:

enable_ceph: "no"
  • Enable the Cinder service and turn on the Ceph backends for Glance, Cinder, and Nova:

enable_cinder: "yes"

glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"

Configuring Glance

  • Configure Glance to access the Ceph images pool as the glance user:

$ mkdir -pv config/glance
mkdir: created directory "config/glance"

$ vi config/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
  • Add Glance's Ceph client configuration and the glance user's keyring file:

$ vi config/glance/ceph.conf
[global]
fsid = 383237bd-becf-49d5-9bd6-deb0bc35ab2a
mon_initial_members = osdev01, osdev02, osdev03
mon_host = 172.29.101.166,172.29.101.167,172.29.101.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

$ cp -v /opt/ceph/deploy/ceph.client.glance.keyring config/glance/ceph.client.glance.keyring
"/opt/ceph/deploy/ceph.client.glance.keyring" -> "config/glance/ceph.client.glance.keyring"

Configuring Cinder

  • Configure the Cinder volume service to access the Ceph volumes pool as the cinder-volume user, and the Cinder backup service to access the backups pool as the cinder-backup user:

$ mkdir -pv config/cinder/
mkdir: created directory "config/cinder/"

$ vi config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=rbd-1

[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder-volume
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}

$ vi config/cinder/cinder-backup.conf
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
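The `backup_ceph_chunk_size` above is given in bytes (the backup driver splits each backup into chunks of this size). A quick arithmetic check confirms the configured value is 128 MiB:

```shell
# Confirm 134217728 bytes is exactly 128 MiB.
chunk=134217728
echo "$(( chunk / 1024 / 1024 )) MiB"
```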
  • Add the Ceph client configuration and keyring files for the Cinder volume and backup services:

$ cp config/glance/ceph.conf config/cinder/ceph.conf

$ mkdir -pv config/cinder/cinder-backup/ config/cinder/cinder-volume/
mkdir: created directory "config/cinder/cinder-backup/"
mkdir: created directory "config/cinder/cinder-volume/"

$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-backup/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-volume.keyring"

$ cp -v /opt/ceph/deploy/ceph.client.cinder-backup.keyring config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
"/opt/ceph/deploy/ceph.client.cinder-backup.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-backup.keyring"

$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-volume/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-volume/ceph.client.cinder-volume.keyring"

Configuring Nova

  • Configure Nova to access the Ceph vms pool as the nova user:

$ vi config/nova/nova-compute.conf
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
  • Add Nova's Ceph client configuration and the nova user's keyring file:

$ cp -v config/glance/ceph.conf config/nova/ceph.conf
"config/glance/ceph.conf" -> "config/nova/ceph.conf"

$ cp -v /opt/ceph/deploy/ceph.client.nova.keyring config/nova/ceph.client.nova.keyring
"/opt/ceph/deploy/ceph.client.nova.keyring" -> "config/nova/ceph.client.nova.keyring"

$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/nova/ceph.client.cinder.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/nova/ceph.client.cinder.keyring"
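With all the copies above in place, the custom-config tree that Kolla-Ansible merges into each service should contain one config file plus the Ceph client files per service. The sketch below recreates that expected layout under /tmp purely for illustration, with empty placeholder files standing in for the real configs and keyrings:

```shell
# Recreate the expected config/ layout with empty placeholder files
# (illustration only; the real tree lives under ${KOLLA_ROOT}/myconfig).
base=/tmp/myconfig-demo/config
mkdir -p "$base"/glance "$base"/nova \
         "$base"/cinder/cinder-volume "$base"/cinder/cinder-backup
touch "$base"/glance/{glance-api.conf,ceph.conf,ceph.client.glance.keyring} \
      "$base"/cinder/{cinder-volume.conf,cinder-backup.conf,ceph.conf} \
      "$base"/cinder/cinder-volume/ceph.client.cinder-volume.keyring \
      "$base"/cinder/cinder-backup/ceph.client.cinder-volume.keyring \
      "$base"/cinder/cinder-backup/ceph.client.cinder-backup.keyring \
      "$base"/nova/{nova-compute.conf,ceph.conf,ceph.client.nova.keyring,ceph.client.cinder.keyring}
find "$base" -type f | sort
```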

Deployment and Testing

Starting the deployment

  • Create the deployment script osdev.sh:

#!/bin/bash

set -uexv

usage()
{
	echo -e "usage : \n$0 <action>"
	echo -e "  \$1 action"
}


if [ $# -lt 1 ]; then
	usage
	exit 1
fi

${KOLLA_ROOT}/kolla-ansible/tools/kolla-ansible --configdir ${KOLLA_ROOT}/myconfig --passwords ${KOLLA_ROOT}/myconfig/passwords.yml --inventory ${KOLLA_ROOT}/myconfig/mynodes.conf $1
  • Make the script executable:

$ chmod a+x osdev.sh
  • Deploy the OpenStack cluster:

$ ./osdev.sh bootstrap-servers
$ ./osdev.sh prechecks
$ ./osdev.sh pull
$ ./osdev.sh deploy
$ ./osdev.sh post-deploy
# ./osdev.sh "destroy --yes-i-really-really-mean-it"
  • Check the deployed services:

$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 304c9c5073f14f4a97ca1c3cf5e1b49e | neutron     | network        |
| 46de4440a5cf4a5697fa94b2d0424ba9 | heat        | orchestration  |
| 60b46b491ce7403aaec0c064384dde49 | heat-cfn    | cloudformation |
| 7726ab5d41c5450d954f073f1a9aff28 | cinderv2    | volumev2       |
| 7a4bd5fc12904cc7b5c3810412f98c57 | gnocchi     | metric         |
| 7ae6f98018fb4d509e862e45ebf10145 | glance      | image          |
| a0ec333149284c09ac0e157753205fd6 | nova        | compute        |
| b15e90c382864723945b15c37d3317a6 | placement   | placement      |
| b5eaa49c50d64316b583eb1c0c4f9ce2 | cinderv3    | volumev3       |
| c6474640f5d9424da0ec51c70c1e6e01 | nova_legacy | compute_legacy |
| db27eb8524be4db3be12b9dd0dab16b8 | keystone    | identity       |
| edf5c8b894a74a69b65bb49d8e014fff | cinder      | volume         |
+----------------------------------+-------------+----------------+

$ openstack volume service list
+------------------+-------------------+------+---------+-------+----------------------------+
| Binary           | Host              | Zone | Status  | State | Updated At                 |
+------------------+-------------------+------+---------+-------+----------------------------+
| cinder-scheduler | osdev02           | nova | enabled | up    | 2018-08-27T11:33:27.000000 |
| cinder-volume    | rbd:volumes@rbd-1 | nova | enabled | up    | 2018-08-27T11:33:18.000000 |
| cinder-backup    | osdev02           | nova | enabled | up    | 2018-08-27T11:33:17.000000 |
+------------------+-------------------+------+---------+-------+----------------------------+

Initializing the Environment

  • Check the initial RBD pools; all of them are empty:

$ rbd -p images ls
$ rbd -p volumes ls
$ rbd -p vms ls
  • Set the environment variables and initialize the OpenStack environment:

$ . ${KOLLA_ROOT}/myconfig/admin-openrc.sh
$ ${KOLLA_ROOT}/myconfig/init-runonce
  • View the newly added image:

$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 293b25bb-30be-4839-b4e2-1dba3c43a56a | cirros | active |
+--------------------------------------+--------+--------+

$ openstack image show 293b25bb-30be-4839-b4e2-1dba3c43a56a
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                    |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                                                                                                                         |
| container_format | bare                                                                                                                                                     |
| created_at       | 2018-08-27T11:25:29Z                                                                                                                                     |
| disk_format      | qcow2                                                                                                                                                    |
| file             | /v2/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/file                                                                                                     |
| id               | 293b25bb-30be-4839-b4e2-1dba3c43a56a                                                                                                                     |
| min_disk         | 0                                                                                                                                                        |
| min_ram          | 0                                                                                                                                                        |
| name             | cirros                                                                                                                                                   |
| owner            | 68ada1726a864e2081a56be0a2dca3a0                                                                                                                         |
| properties       | locations='[{u'url': u'rbd://383237bd-becf-49d5-9bd6-deb0bc35ab2a/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/snap', u'metadata': {}}]', os_type='linux' |
| protected        | False                                                                                                                                                    |
| schema           | /v2/schemas/image                                                                                                                                        |
| size             | 12716032                                                                                                                                                 |
| status           | active                                                                                                                                                   |
| tags             |                                                                                                                                                          |
| updated_at       | 2018-08-27T11:25:30Z                                                                                                                                     |
| virtual_size     | None                                                                                                                                                     |
| visibility       | public                                                                                                                                                   |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
  • Check the RBD pools again: the image is now stored in the images pool and carries one snapshot:

$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p vms ls

$ rbd -p images info 293b25bb-30be-4839-b4e2-1dba3c43a56a
rbd image '293b25bb-30be-4839-b4e2-1dba3c43a56a':
	size 12 MiB in 2 objects
	order 23 (8 MiB objects)
	id: 178f4008d95
	block_name_prefix: rbd_data.178f4008d95
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Mon Aug 27 19:25:29 2018

$ rbd -p images snap list 293b25bb-30be-4839-b4e2-1dba3c43a56a
SNAPID NAME   SIZE TIMESTAMP                
     6 snap 12 MiB Mon Aug 27 19:25:30 2018

Creating a Virtual Machine

  • Create a virtual machine:

$ openstack server create --image cirros --flavor m1.tiny --key-name mykey --nic net-id=9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 demo1
+-------------------------------------+-----------------------------------------------+
| Field                               | Value                                         |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                        |
| OS-EXT-AZ:availability_zone         |                                               |
| OS-EXT-SRV-ATTR:host                | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
| OS-EXT-SRV-ATTR:instance_name       |                                               |
| OS-EXT-STS:power_state              | NOSTATE                                       |
| OS-EXT-STS:task_state               | scheduling                                    |
| OS-EXT-STS:vm_state                 | building                                      |
| OS-SRV-USG:launched_at              | None                                          |
| OS-SRV-USG:terminated_at            | None                                          |
| accessIPv4                          |                                               |
| accessIPv6                          |                                               |
| addresses                           |                                               |
| adminPass                           | 65cVBJ7S6yaD                                  |
| config_drive                        |                                               |
| created                             | 2018-08-27T11:29:03Z                          |
| flavor                              | m1.tiny (1)                                   |
| hostId                              |                                               |
| id                                  | 309f1364-4d58-413d-a865-dfc37ff04308          |
| image                               | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a) |
| key_name                            | mykey                                         |
| name                                | demo1                                         |
| progress                            | 0                                             |
| project_id                          | 68ada1726a864e2081a56be0a2dca3a0              |
| properties                          |                                               |
| security_groups                     | name='default'                                |
| status                              | BUILD                                         |
| updated                             | 2018-08-27T11:29:03Z                          |
| user_id                             | c7111728fbbd4fd79bdd2b60e7d7cb42              |
| volumes_attached                    |                                               |
+-------------------------------------+-----------------------------------------------+

$ openstack server show 309f1364-4d58-413d-a865-dfc37ff04308
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | osdev03                                                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname | osdev03                                                  |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2018-08-27T11:29:16.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | demo-net=10.0.0.11                                       |
| config_drive                        |                                                          |
| created                             | 2018-08-27T11:29:03Z                                     |
| flavor                              | m1.tiny (1)                                              |
| hostId                              | 4e345dd9f770f63f80d3eafe97c20d97746e890b2971a8398e26db86 |
| id                                  | 309f1364-4d58-413d-a865-dfc37ff04308                     |
| image                               | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a)            |
| key_name                            | mykey                                                    |
| name                                | demo1                                                    |
| progress                            | 0                                                        |
| project_id                          | 68ada1726a864e2081a56be0a2dca3a0                         |
| properties                          |                                                          |
| security_groups                     | name='default'                                           |
| status                              | ACTIVE                                                   |
| updated                             | 2018-08-27T11:29:16Z                                     |
| user_id                             | c7111728fbbd4fd79bdd2b60e7d7cb42                         |
| volumes_attached                    |                                                          |
+-------------------------------------+----------------------------------------------------------+
  • The virtual machine's disk was created as a volume in the vms pool:

$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
  • Log in to the node hosting the VM. The VM's system disk is the volume created in the vms pool, and the qemu process arguments show that qemu accesses the RBD block device directly through Ceph's librbd library:

$ ssh osdev@osdev03
$ sudo docker exec -it nova_libvirt virsh list
 Id    Name                           State
----------------------------------------------------
 1     instance-00000001              running

$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='nova'>
        <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
      </auth>
      <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
        <host name='172.29.101.166' port='6789'/>
        <host name='172.29.101.167' port='6789'/>
        <host name='172.29.101.168' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
...

$ ps -aux | grep qemu
42436    2678909  4.6  0.0 1341144 171404 ?      Sl   19:29   0:08 /usr/libexec/qemu-kvm -name guest=instance-00000001,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-instance-00000001/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client-IBRS,ss=on,hypervisor=on,tsc_adjust=on,avx512f=on,avx512dq=on,clflushopt=on,clwb=on,avx512cd=on,avx512bw=on,avx512vl=on,pku=on,stibp=on,pdpe1gb=on -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 309f1364-4d58-413d-a865-dfc37ff04308 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.2,serial=74bf926c-70b7-03df-b211-d21d6016081a,uuid=309f1364-4d58-413d-a865-dfc37ff04308,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-instance-00000001/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object secret,id=virtio-disk0-secret0,data=zNy84nlNYigA4vjbuOxcGQa1/hh8w28i/WoJbO1Xsl4=,keyid=masterKey0,iv=OhX+FApyFyq2XLWq0ff/Ew==,format=base64 -drive file=rbd:vms/309f1364-4d58-413d-a865-dfc37ff04308_disk:id=nova:auth_supported=cephx\;none:mon_host=172.29.101.166\:6789\;172.29.101.167\:6789\;172.29.101.168\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=79,id=hostnet0,vhost=on,vhostfd=80 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:04:e8:e9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0,logfile=/var/lib/nova/instances/309f1364-4d58-413d-a865-dfc37ff04308/console.log,logappend=off -device isa-serial,chardev=charserial0,id=serial0 -device 
usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.29.101.168:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on

$ ldd /usr/libexec/qemu-kvm | grep -e ceph -e rbd
	librbd.so.1 => /lib64/librbd.so.1 (0x00007fde38815000)
	libceph-common.so.0 => /usr/lib64/ceph/libceph-common.so.0 (0x00007fde28247000)

Creating a Volume

  • Create a volume:

$ openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2018-08-27T11:33:52.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | c7111728fbbd4fd79bdd2b60e7d7cb42     |
+---------------------+--------------------------------------+
  • Check the pools: the new volume is placed in the volumes pool:

$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk

Creating a Backup

  • Create a volume backup; it is created in the backups pool:

$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | f2321578-88d5-4337-b93c-798855b817ce |
| name  | None                                 |
+-------+--------------------------------------+

$ openstack volume backup list
+--------------------------------------+------+-------------+-----------+------+
| ID                                   | Name | Description | Status    | Size |
+--------------------------------------+------+-------------+-----------+------+
| f2321578-88d5-4337-b93c-798855b817ce | None | None        | available |    1 |
+--------------------------------------+------+-------------+-----------+------+

$ openstack volume backup show f2321578-88d5-4337-b93c-798855b817ce
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2018-08-27T11:39:40.000000           |
| data_timestamp        | 2018-08-27T11:39:40.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | f2321578-88d5-4337-b93c-798855b817ce |
| is_incremental        | False                                |
| name                  | None                                 |
| object_count          | 0                                    |
| size                  | 1                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2018-08-27T11:39:46.000000           |
| volume_id             | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
+-----------------------+--------------------------------------+

$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
  • Create a second backup: the backups pool itself does not change; a new snapshot is simply added to the existing backup base image:

$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 07132063-9bdb-4391-addd-a791dae2cfea |
| name  | None                                 |
+-------+--------------------------------------+

$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base

$ rbd -p backups snap list volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
SNAPID NAME                                                            SIZE TIMESTAMP                
     4 backup.f2321578-88d5-4337-b93c-798855b817ce.snap.1535369984.08 1 GiB Mon Aug 27 19:39:46 2018 
     5 backup.07132063-9bdb-4391-addd-a791dae2cfea.snap.1535370126.76 1 GiB Mon Aug 27 19:42:08 2018
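The snapshot listing above reflects the Cinder Ceph backup driver's naming scheme: one base image per volume, plus one snapshot per backup. A small sketch of how those names are derived, using the UUIDs from the output above (the exact scheme is an assumption based on the default rbd backup driver's observed behavior):

```shell
# Naming sketch for the Cinder Ceph backup driver (assumed default behavior):
# each volume gets a single base image in the backups pool, and every backup
# is recorded as a snapshot on that base image.
VOLUME_ID="3ccca300-bee3-4b5a-b89b-32e6b8b806d9"
BACKUP_ID="07132063-9bdb-4391-addd-a791dae2cfea"

BASE_IMAGE="volume-${VOLUME_ID}.backup.base"   # the only image `rbd -p backups ls` shows
SNAP_PREFIX="backup.${BACKUP_ID}.snap."        # a creation timestamp is appended

echo "$BASE_IMAGE"
echo "$SNAP_PREFIX"
```

This is why the second backup added a snapshot but no new image to the pool.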

Attaching the Volume

  • Attach the newly created volume to the virtual machine created earlier:

$ openstack server add volume demo1 volume1
$ openstack volume show volume1
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                          | Value                                                                                                                                                                                                                                                                                                                        |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| attachments                    | [{u'server_id': u'309f1364-4d58-413d-a865-dfc37ff04308', u'attachment_id': u'fb4d9ec0-8a33-4ed0-8845-09e6f17aac81', u'attached_at': u'2018-08-27T11:44:51.000000', u'host_name': u'osdev03', u'volume_id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9', u'device': u'/dev/vdb', u'id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9'}] |
| availability_zone              | nova                                                                                                                                                                                                                                                                                                                         |
| bootable                       | false                                                                                                                                                                                                                                                                                                                        |
| consistencygroup_id            | None                                                                                                                                                                                                                                                                                                                         |
| created_at                     | 2018-08-27T11:33:52.000000                                                                                                                                                                                                                                                                                                   |
| description                    | None                                                                                                                                                                                                                                                                                                                         |
| encrypted                      | False                                                                                                                                                                                                                                                                                                                        |
| id                             | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9                                                                                                                                                                                                                                                                                         |
| migration_status               | None                                                                                                                                                                                                                                                                                                                         |
| multiattach                    | False                                                                                                                                                                                                                                                                                                                        |
| name                           | volume1                                                                                                                                                                                                                                                                                                                      |
| os-vol-host-attr:host          | rbd:volumes@rbd-1#rbd-1                                                                                                                                                                                                                                                                                                      |
| os-vol-mig-status-attr:migstat | None                                                                                                                                                                                                                                                                                                                         |
| os-vol-mig-status-attr:name_id | None                                                                                                                                                                                                                                                                                                                         |
| os-vol-tenant-attr:tenant_id   | 68ada1726a864e2081a56be0a2dca3a0                                                                                                                                                                                                                                                                                             |
| properties                     | attached_mode='rw'                                                                                                                                                                                                                                                                                                           |
| replication_status             | None                                                                                                                                                                                                                                                                                                                         |
| size                           | 1                                                                                                                                                                                                                                                                                                                            |
| snapshot_id                    | None                                                                                                                                                                                                                                                                                                                         |
| source_volid                   | None                                                                                                                                                                                                                                                                                                                         |
| status                         | in-use                                                                                                                                                                                                                                                                                                                       |
| type                           | None                                                                                                                                                                                                                                                                                                                         |
| updated_at                     | 2018-08-27T11:44:52.000000                                                                                                                                                                                                                                                                                                   |
| user_id                        | c7111728fbbd4fd79bdd2b60e7d7cb42                                                                                                                                                                                                                                                                                             |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
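When scripting against this output, the `attachments` field can be post-processed to find the device path the guest sees. A quick grep-based sketch over a trimmed sample of the field above (newer python-openstackclient releases also offer `-f json`, which is more robust; that flag's availability is an assumption about your client version):

```shell
# Trimmed sample of the attachments field shown in the table above.
ATTACHMENTS="[{u'server_id': u'309f1364-4d58-413d-a865-dfc37ff04308', u'device': u'/dev/vdb'}]"

# Pull out the device path presented to the guest (here /dev/vdb).
DEVICE=$(echo "$ATTACHMENTS" | grep -o "u'device': u'[^']*'" | cut -d"'" -f4)
echo "$DEVICE"
```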
  • On the compute node hosting the VM, inspect the libvirt domain definition; a new RBD disk has been added:

$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='nova'>
        <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
      </auth>
      <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
        <host name='172.29.101.166' port='6789'/>
        <host name='172.29.101.167' port='6789'/>
        <host name='172.29.101.168' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <auth username='cinder-volume'>
        <secret type='ceph' uuid='3fa55f7c-b556-4095-9253-b908d5408ec8'/>
      </auth>
      <source protocol='rbd' name='volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9'>
        <host name='172.29.101.166' port='6789'/>
        <host name='172.29.101.167' port='6789'/>
        <host name='172.29.101.168' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <serial>3ccca300-bee3-4b5a-b89b-32e6b8b806d9</serial>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
...
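The two `<source protocol='rbd'>` elements reveal the naming conventions: Nova stores the ephemeral disk as `<instance-uuid>_disk` in the vms pool, while Cinder volumes are `volume-<volume-uuid>` in the volumes pool. A sketch of extracting those image names from a saved copy of the XML (in practice you would pipe `virsh dumpxml` output straight into the grep):

```shell
# Minimal sample trimmed from the dumpxml output above.
cat > /tmp/domain_sample.xml <<'EOF'
<domain>
  <devices>
    <disk type='network' device='disk'>
      <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'/>
    </disk>
    <disk type='network' device='disk'>
      <source protocol='rbd' name='volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9'/>
    </disk>
  </devices>
</domain>
EOF

# List the pool/image behind every RBD-backed disk.
grep -o "protocol='rbd' name='[^']*'" /tmp/domain_sample.xml | cut -d"'" -f4
```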
  • Create a floating IP for the VM and log in over SSH:

$ openstack console url show demo1
+-------+-------------------------------------------------------------------------------------+
| Field | Value                                                                               |
+-------+-------------------------------------------------------------------------------------+
| type  | novnc                                                                               |
| url   | http://172.29.101.167:6080/vnc_auto.html?token=9f835216-1c53-41ae-849a-44a85429a334 |
+-------+-------------------------------------------------------------------------------------+

$ openstack floating ip create public1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-08-27T11:49:02Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.162.52                       |
| floating_network_id | ff69b3ff-c2c4-4474-a7ba-952fa99df919 |
| id                  | 2aa86075-9c62-49f5-84ac-e7b6353c9591 |
| name                | 192.168.162.52                       |
| port_id             | None                                 |
| project_id          | 68ada1726a864e2081a56be0a2dca3a0     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| subnet_id           | None                                 |
| tags                | []                                   |
| updated_at          | 2018-08-27T11:49:02Z                 |
+---------------------+--------------------------------------+

$ openstack server add floating ip demo1 192.168.162.52

$ openstack server list
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| ID                                   | Name  | Status | Networks                           | Image  | Flavor  |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| 309f1364-4d58-413d-a865-dfc37ff04308 | demo1 | ACTIVE | demo-net=10.0.0.11, 192.168.162.52 | cirros | m1.tiny |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+

$ ssh root@osdev02
$ ip netns
qrouter-65759e60-6e20-41cc-a79c-fc492232b127 (id: 1)
qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 (id: 0)

$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ping 192.168.162.52
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ping 10.0.0.11

(username "cirros", password "gocubsgo")
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ssh cirros@192.168.162.52
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ssh cirros@10.0.0.11

$ sudo passwd root
Changing password for root
New password: 
Bad password: too weak
Retype password: 
Password for root changed by root
$ su -
Password:
  • Create a filesystem on the new volume, write a test file, and finally unmount it:

# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk 
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part 
vdb     253:16   0    1G  0 disk
# mkfs.ext4 /dev/vdb
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: ede8d366-bfbc-4b9a-9d3f-306104f410d7
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# mount /dev/vdb /mnt
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     23.9M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run
/dev/vdb                975.9M      1.3M    907.4M   0% /mnt
# echo "hello openstack, volume test." > /mnt/ceph_rbd_test
# umount /mnt
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     23.9M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run

Detaching the Volume

  • Detach the volume while watching the change inside the VM:

$ openstack server remove volume demo1 volume1

# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk 
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part
  • On the host, map and mount the RBD volume and read the file created earlier inside the VM; the contents are identical. (Some image features must be disabled first, because the kernel RBD client does not support them.)

$ rbd showmapped
id pool image    snap device    
0  rbd  rbd_test -    /dev/rbd0

$ rbd feature disable volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9 object-map fast-diff deep-flatten
$ rbd map volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
/dev/rbd1

$ mkdir /mnt/volume1
$ mount /dev/rbd1 /mnt/volume1/

$ ls /mnt/volume1/
ceph_rbd_test  lost+found/    
$ cat /mnt/volume1/ceph_rbd_test 
hello openstack, volume test.

Thanks for reading! That is all for "How to use a Ceph storage backend with Kolla-Ansible". Hopefully the content above is of some help and you learned something new; if you found the article useful, feel free to share it so more people can see it!
