This article looks at what the osd module in ceph-deploy does, walking through the code behind each of its subcommands.
The osd.py module of ceph-deploy manages OSD daemons; its main job is to prepare, create, and activate OSDs.
The osd subcommand has the following format:
ceph-deploy osd [-h] {list,create,prepare,activate} ...
list: show information about the OSDs on the given hosts
create: create an OSD; this covers both prepare and activate
prepare: prepare an OSD by formatting/partitioning the disk
activate: activate a previously prepared OSD
The make function
Its priority is 50.
The default handler registered for the osd subcommand is the osd function.
@priority(50)
def make(parser):
    """
    Prepare a data disk on remote host.
    """
    sub_command_help = dedent("""
    Manage OSDs by preparing a data disk on remote host.

    For paths, first prepare and then activate:

        ceph-deploy osd prepare {osd-node-name}:/path/to/osd
        ceph-deploy osd activate {osd-node-name}:/path/to/osd

    For disks or journals the `create` command will do prepare and activate
    for you.
    """
    )
    parser.formatter_class = argparse.RawDescriptionHelpFormatter
    parser.description = sub_command_help

    osd_parser = parser.add_subparsers(dest='subcommand')
    osd_parser.required = True

    osd_list = osd_parser.add_parser(
        'list',
        help='List OSD info from remote host(s)'
        )
    osd_list.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='remote host to list OSDs from'
        )

    osd_create = osd_parser.add_parser(
        'create',
        help='Create new Ceph OSD daemon by preparing and activating disk'
        )
    osd_create.add_argument(
        '--zap-disk',
        action='store_true',
        help='destroy existing partition table and content for DISK',
        )
    osd_create.add_argument(
        '--fs-type',
        metavar='FS_TYPE',
        choices=['xfs', 'btrfs'],
        default='xfs',
        help='filesystem to use to format DISK (xfs, btrfs)',
        )
    osd_create.add_argument(
        '--dmcrypt',
        action='store_true',
        help='use dm-crypt on DISK',
        )
    osd_create.add_argument(
        '--dmcrypt-key-dir',
        metavar='KEYDIR',
        default='/etc/ceph/dmcrypt-keys',
        help='directory where dm-crypt keys are stored',
        )
    osd_create.add_argument(
        '--bluestore',
        action='store_true', default=None,
        help='bluestore objectstore',
        )
    osd_create.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to prepare',
        )

    osd_prepare = osd_parser.add_parser(
        'prepare',
        help='Prepare a disk for use as Ceph OSD by formatting/partitioning disk'
        )
    osd_prepare.add_argument(
        '--zap-disk',
        action='store_true',
        help='destroy existing partition table and content for DISK',
        )
    osd_prepare.add_argument(
        '--fs-type',
        metavar='FS_TYPE',
        choices=['xfs', 'btrfs'],
        default='xfs',
        help='filesystem to use to format DISK (xfs, btrfs)',
        )
    osd_prepare.add_argument(
        '--dmcrypt',
        action='store_true',
        help='use dm-crypt on DISK',
        )
    osd_prepare.add_argument(
        '--dmcrypt-key-dir',
        metavar='KEYDIR',
        default='/etc/ceph/dmcrypt-keys',
        help='directory where dm-crypt keys are stored',
        )
    osd_prepare.add_argument(
        '--bluestore',
        action='store_true', default=None,
        help='bluestore objectstore',
        )
    osd_prepare.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to prepare',
        )

    osd_activate = osd_parser.add_parser(
        'activate',
        help='Start (activate) Ceph OSD from disk that was previously prepared'
        )
    osd_activate.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to activate',
        )
    parser.set_defaults(
        func=osd,
        )
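The positional disk arguments above use type=colon_separated to turn a HOST:DISK[:JOURNAL] string into a (host, disk, journal) tuple. A minimal sketch of such a parser, written here only for illustration (the real helper ships with ceph-deploy and is not reproduced in this article):

def colon_separated(s):
    # illustrative only: split "HOST:DISK[:JOURNAL]" into a 3-tuple,
    # filling missing parts with None
    host, disk, journal = (s.split(':') + [None, None])[:3]
    return host, disk, journal

print(colon_separated('ceph-231:/dev/sdb'))
# ('ceph-231', '/dev/sdb', None)
print(colon_separated('ceph-231:/dev/sdb:/dev/sdc1'))
# ('ceph-231', '/dev/sdb', '/dev/sdc1')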
The osd function dispatches the subcommands: list calls osd_list, create and prepare both call prepare (with activate_prepared_disk=True and False respectively), and activate calls activate.
def osd(args):
    cfg = conf.ceph.load(args)

    if args.subcommand == 'list':
        osd_list(args, cfg)
    elif args.subcommand == 'prepare':
        prepare(args, cfg, activate_prepared_disk=False)
    elif args.subcommand == 'create':
        prepare(args, cfg, activate_prepared_disk=True)
    elif args.subcommand == 'activate':
        activate(args, cfg)
    else:
        LOG.error('subcommand %s not implemented', args.subcommand)
        sys.exit(1)
The command-line format is: ceph-deploy osd list [-h] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The osd_list function
Runs ceph --cluster=ceph osd tree --format=json on a monitor to get the OSD tree.
Runs ceph-disk list on each target host to get disk and partition information.
Combines the output of those two commands with the files in each OSD directory to assemble and print the OSD list.
def osd_list(args, cfg):
    monitors = mon.get_mon_initial_members(args, error_on_empty=True, _cfg=cfg)

    # get the osd tree from a monitor host
    mon_host = monitors[0]
    distro = hosts.get(
        mon_host,
        username=args.username,
        callbacks=[packages.ceph_is_installed]
    )
    # run `ceph --cluster=ceph osd tree --format=json` to get the OSD tree
    tree = osd_tree(distro.conn, args.cluster)
    distro.conn.exit()

    interesting_files = ['active', 'magic', 'whoami', 'journal_uuid']

    for hostname, disk, journal in args.disk:
        distro = hosts.get(hostname, username=args.username)
        remote_module = distro.conn.remote_module
        # list the OSD directory names under constants.osd_path (/var/lib/ceph/osd)
        osds = distro.conn.remote_module.listdir(constants.osd_path)

        # run `ceph-disk list` to get disk and partition information
        ceph_disk_executable = system.executable_path(distro.conn, 'ceph-disk')
        output, err, exit_code = remoto.process.check(
            distro.conn,
            [
                ceph_disk_executable,
                'list',
            ]
        )

        # loop over the OSDs
        for _osd in osds:
            # OSD path, e.g. /var/lib/ceph/osd/ceph-0
            osd_path = os.path.join(constants.osd_path, _osd)
            # journal path
            journal_path = os.path.join(osd_path, 'journal')
            # OSD id
            _id = int(_osd.split('-')[-1])  # split on dash, get the id
            osd_name = 'osd.%s' % _id
            metadata = {}
            json_blob = {}

            # piggy back from ceph-disk and get the mount point
            # match the `ceph-disk list` output against the OSD name to find the device
            device = get_osd_mount_point(output, osd_name)
            if device:
                metadata['device'] = device

            # read interesting metadata from files
            # i.e. the OSD's active, magic, whoami and journal_uuid files
            for f in interesting_files:
                osd_f_path = os.path.join(osd_path, f)
                if remote_module.path_exists(osd_f_path):
                    metadata[f] = remote_module.readline(osd_f_path)

            # do we have a journal path?
            if remote_module.path_exists(journal_path):
                metadata['journal path'] = remote_module.get_realpath(journal_path)

            # is this OSD in osd tree?
            for blob in tree['nodes']:
                if blob.get('id') == _id:  # matches our OSD
                    json_blob = blob

            # print the OSD info
            print_osd(
                distro.conn.logger,
                hostname,
                osd_path,
                json_blob,
                metadata,
            )

        distro.conn.exit()
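The matching step done by get_osd_mount_point is easy to picture: each data partition in the `ceph-disk list` output names its OSD, and the device is the first field on that line. A rough sketch of that lookup, assuming typical ceph-disk output (the real helper in ceph-deploy may differ in detail):

def get_osd_mount_point(output_lines, osd_name):
    # illustrative only: a `ceph-disk list` line for an active data partition
    # looks roughly like:
    #   /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
    # return the device of the first line mentioning the OSD name
    for line in output_lines:
        if '%s,' % osd_name in line:
            return line.strip().split()[0]
    return None

lines = ['/dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2']
print(get_osd_mount_point(lines, 'osd.0'))
# /dev/sdb1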
The command for creating an OSD is: ceph-deploy osd create [-h] [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt] [--dmcrypt-key-dir KEYDIR] [--bluestore] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The command for preparing an OSD is: ceph-deploy osd prepare [-h] [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt] [--dmcrypt-key-dir KEYDIR] [--bluestore] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The prepare function: when activate_prepared_disk is True it creates the OSD (prepare plus activate); when False it only prepares it.
Calls exceeds_max_osds; if a single host would carry more than 20 OSDs, a warning is logged.
Calls get_bootstrap_osd_key to read ceph.bootstrap-osd.keyring from the current directory.
Loops over the disks:
writes the configuration to /etc/ceph/ceph.conf
creates and writes /var/lib/ceph/bootstrap-osd/ceph.keyring
calls prepare_disk to prepare the OSD
checks the OSD status and logs any abnormal state as a warning
def prepare(args, cfg, activate_prepared_disk):
    LOG.debug(
        'Preparing cluster %s disks %s',
        args.cluster,
        ' '.join(':'.join(x or '' for x in t) for t in args.disk),
    )
    # warn if a single host would carry more than 20 OSDs
    hosts_in_danger = exceeds_max_osds(args)

    if hosts_in_danger:
        LOG.warning('if ``kernel.pid_max`` is not increased to a high enough value')
        LOG.warning('the following hosts will encounter issues:')
        for host, count in hosts_in_danger.items():
            LOG.warning('Host: %8s, OSDs: %s' % (host, count))

    # read ceph.bootstrap-osd.keyring from the current directory
    key = get_bootstrap_osd_key(cluster=args.cluster)

    bootstrapped = set()
    errors = 0
    for hostname, disk, journal in args.disk:
        try:
            if disk is None:
                raise exc.NeedDiskError(hostname)

            distro = hosts.get(
                hostname,
                username=args.username,
                callbacks=[packages.ceph_is_installed]
            )
            LOG.info(
                'Distro info: %s %s %s',
                distro.name,
                distro.release,
                distro.codename
            )

            if hostname not in bootstrapped:
                bootstrapped.add(hostname)
                LOG.debug('Deploying osd to %s', hostname)

                conf_data = conf.ceph.load_raw(args)
                # write the configuration to /etc/ceph/ceph.conf
                distro.conn.remote_module.write_conf(
                    args.cluster,
                    conf_data,
                    args.overwrite_conf
                )

                # create and write /var/lib/ceph/bootstrap-osd/ceph.keyring
                create_osd_keyring(distro.conn, args.cluster, key)

            LOG.debug('Preparing host %s disk %s journal %s activate %s',
                      hostname, disk, journal, activate_prepared_disk)

            storetype = None
            if args.bluestore:
                storetype = 'bluestore'

            # prepare the OSD
            prepare_disk(
                distro.conn,
                cluster=args.cluster,
                disk=disk,
                journal=journal,
                activate_prepared_disk=activate_prepared_disk,
                init=distro.init,
                zap=args.zap_disk,
                fs_type=args.fs_type,
                dmcrypt=args.dmcrypt,
                dmcrypt_dir=args.dmcrypt_key_dir,
                storetype=storetype,
            )

            # give the OSD a few seconds to start
            time.sleep(5)
            # check the OSD status and log any abnormal state as a warning
            catch_osd_errors(distro.conn, distro.conn.logger, args)
            LOG.debug('Host %s is now ready for osd use.', hostname)
            distro.conn.exit()

        except RuntimeError as e:
            LOG.error(e)
            errors += 1

    if errors:
        raise exc.GenericError('Failed to create %d OSDs' % errors)
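The exceeds_max_osds check at the top of prepare only needs to look at args.disk, so its behavior is easy to sketch: count how many disks each host is given and report the hosts over the limit. The version below is an illustration written against (hostname, disk, journal) tuples rather than the full args object, with the 20-OSD threshold taken from the description above; the actual ceph-deploy implementation may differ in detail:

def exceeds_max_osds(disks, maximum=20):
    # illustrative only: disks is a list of (hostname, disk, journal) tuples
    counts = {}
    for hostname, disk, journal in disks:
        counts[hostname] = counts.get(hostname, 0) + 1
    # return only the hosts that would end up with too many OSDs
    return dict((host, n) for host, n in counts.items() if n > maximum)

disks = [('ceph-231', '/dev/sdb', None), ('ceph-231', '/dev/sdc', None)]
print(exceeds_max_osds(disks))
# {} -- two OSDs on one host is well under the limit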
The prepare_disk function
Runs ceph-disk -v prepare to prepare the OSD.
If activate_prepared_disk is True, enables the ceph service at boot.
def prepare_disk(
        conn,
        cluster,
        disk,
        journal,
        activate_prepared_disk,
        init,
        zap,
        fs_type,
        dmcrypt,
        dmcrypt_dir,
        storetype):
    """
    Run on osd node, prepares a data disk for use.
    """
    ceph_disk_executable = system.executable_path(conn, 'ceph-disk')
    args = [
        ceph_disk_executable,
        '-v',
        'prepare',
        ]
    if zap:
        args.append('--zap-disk')
    if dmcrypt:
        args.append('--dmcrypt')
        if dmcrypt_dir is not None:
            args.append('--dmcrypt-key-dir')
            args.append(dmcrypt_dir)
    if storetype:
        args.append('--' + storetype)
    args.extend([
        '--cluster',
        cluster,
        '--fs-type',
        fs_type,
        '--',
        disk,
    ])
    if journal is not None:
        args.append(journal)
    # run `ceph-disk -v prepare`
    remoto.process.run(
        conn,
        args
    )
    # if activating, enable the ceph service so it starts at boot
    if activate_prepared_disk:
        # we don't simply run activate here because we don't know
        # which partition ceph-disk prepare created as the data
        # volume. instead, we rely on udev to do the activation and
        # just give it a kick to ensure it wakes up. we also enable
        # ceph.target, the other key piece of activate.
        if init == 'systemd':
            system.enable_service(conn, "ceph.target")
        elif init == 'sysvinit':
            system.enable_service(conn, "ceph")
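To see what prepare_disk ends up running, it helps to trace the argument list it builds for a concrete call. The snippet below mirrors that assembly for the defaults used later in this article (cluster ceph, xfs, --zap-disk, no journal, no dm-crypt, no bluestore); it is a standalone illustration, not part of ceph-deploy:

# mirror prepare_disk's argument assembly for: zap=True, dmcrypt=False,
# storetype=None, cluster='ceph', fs_type='xfs', disk='/dev/sdb', journal=None
args = ['ceph-disk', '-v', 'prepare']
args.append('--zap-disk')                          # zap is True
args.extend(['--cluster', 'ceph', '--fs-type', 'xfs', '--', '/dev/sdb'])
# journal is None, so nothing else is appended
print(' '.join(args))
# ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb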
The command-line format is: ceph-deploy osd activate [-h] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The activate function
Runs ceph-disk -v activate to activate the OSD.
Checks the OSD status and logs any abnormal state as a warning.
Enables the ceph service at boot.
def activate(args, cfg):
    LOG.debug(
        'Activating cluster %s disks %s',
        args.cluster,
        # join elements of t with ':', t's with ' '
        # allow None in elements of t; print as empty
        ' '.join(':'.join((s or '') for s in t) for t in args.disk),
    )

    for hostname, disk, journal in args.disk:
        distro = hosts.get(
            hostname,
            username=args.username,
            callbacks=[packages.ceph_is_installed]
        )
        LOG.info(
            'Distro info: %s %s %s',
            distro.name,
            distro.release,
            distro.codename
        )

        LOG.debug('activating host %s disk %s', hostname, disk)
        LOG.debug('will use init type: %s', distro.init)

        ceph_disk_executable = system.executable_path(distro.conn, 'ceph-disk')
        # run `ceph-disk -v activate` to activate the OSD
        remoto.process.run(
            distro.conn,
            [
                ceph_disk_executable,
                '-v',
                'activate',
                '--mark-init',
                distro.init,
                '--mount',
                disk,
            ],
        )
        # give the OSD a few seconds to start
        time.sleep(5)
        # check the OSD status and log any abnormal state as a warning
        catch_osd_errors(distro.conn, distro.conn.logger, args)

        # enable the ceph service at boot
        if distro.init == 'systemd':
            system.enable_service(distro.conn, "ceph.target")
        elif distro.init == 'sysvinit':
            system.enable_service(distro.conn, "ceph")

        distro.conn.exit()
As an example, let's create an OSD from disk sdb on host ceph-231.
Prepare the OSD:
[root@ceph-231 ~]# ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
Creating an OSD performs one extra step, enabling the ceph service at boot:
[root@ceph-231 ~]# systemctl enable ceph.target
Check the init system:
[root@ceph-231 ~]# cat /proc/1/comm
systemd
Activate the OSD:
[root@ceph-231 ~]# ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
Enable the ceph service at boot:
[root@ceph-231 ~]# systemctl enable ceph.target