This article gives a detailed case analysis of a problem encountered with rbdmap in Ceph. It is shared here for reference, and hopefully you will take something away from it.
The rbdmap init script running on CentOS 6.5:
[root@mon0 ceph]# cat /etc/init.d/rbdmap
#!/bin/bash
#
# rbdmap Ceph RBD Mapping
#
# chkconfig: 2345 70 70
# description: Ceph RBD Mapping
### BEGIN INIT INFO
# Provides: rbdmap
# Required-Start: $network $remote_fs
# Required-Stop: $network $remote_fs
# Should-Start: ceph
# Should-Stop: ceph
# X-Start-Before: $x-display-manager
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Ceph RBD Mapping
# Description: Ceph RBD Mapping
### END INIT INFO

DESC="RBD Mapping:"
RBDMAPFILE="/etc/ceph/rbdmap"

. /lib/lsb/init-functions

do_map() {
    if [ ! -f "$RBDMAPFILE" ]; then
        #log_warning_msg "$DESC : No $RBDMAPFILE found."
        exit 0
    fi

    # Read /etc/rbdtab to create non-existant mapping
    RET=0
    while read DEV PARAMS; do
        case "$DEV" in
          ""|\#*)
            continue
            ;;
          */*)
            ;;
          *)
            DEV=rbd/$DEV
            ;;
        esac
        #log_action_begin_msg "${DESC} '${DEV}'"
        newrbd=""
        MAP_RV=""
        RET_OP=0
        OIFS=$IFS
        IFS=','
        for PARAM in ${PARAMS[@]}; do
            CMDPARAMS="$CMDPARAMS --$(echo $PARAM | tr '=' ' ')"
        done
        IFS=$OIFS
        if [ ! -b /dev/rbd/$DEV ]; then
            MAP_RV=$(rbd map $DEV $CMDPARAMS 2>&1)
            if [ $? -eq 0 ]; then
                newrbd="yes"
            else
                RET=$((${RET}+$?))
                RET_OP=1
            fi
        fi
        #log_action_end_msg ${RET_OP} "${MAP_RV}"

        if [ "$newrbd" ]; then
            ## Mount new rbd
            MNT_RV=""
            mount --fake /dev/rbd/$DEV >>/dev/null 2>&1 \
                && MNT_RV=$(mount -v /dev/rbd/$DEV 2>&1)
            [ -n "${MNT_RV}" ] && log_action_msg "mount: ${MNT_RV}"

            ## post-mapping
            if [ -x "/etc/ceph/rbd.d/${DEV}" ]; then
                #log_action_msg "RBD Running post-map hook '/etc/ceph/rbd.d/${DEV}'"
                /etc/ceph/rbd.d/${DEV} map "/dev/rbd/${DEV}"
            fi
        fi
    done < $RBDMAPFILE

    exit ${RET}
}

do_unmap() {
    RET=0
    ## Unmount and unmap all rbd devices
    if ls /dev/rbd[0-9]* >/dev/null 2>&1; then
        for DEV in /dev/rbd[0-9]*; do
            ## pre-unmapping
            for L in $(find /dev/rbd -type l); do
                LL="${L##/dev/rbd/}"
                if [ "$(readlink -f $L)" = "${DEV}" ] \
                && [ -x "/etc/ceph/rbd.d/${LL}" ]; then
                    log_action_msg "RBD pre-unmap: '${DEV}' hook '/etc/ceph/rbd.d/${LL}'"
                    /etc/ceph/rbd.d/${LL} unmap "$L"
                    break
                fi
            done
            #log_action_begin_msg "RBD un-mapping: '${DEV}'"
            UMNT_RV=""
            UMAP_RV=""
            RET_OP=0
            MNT=$(findmnt --mtab --source ${DEV} --noheadings | awk '{print $1'})
            if [ -n "${MNT}" ]; then
                # log_action_cont_msg "un-mounting '${MNT}'"
                UMNT_RV=$(umount "${MNT}" 2>&1)
            fi
            if mountpoint -q "${MNT}"; then
                ## Un-mounting failed.
                RET_OP=1
                RET=$((${RET}+1))
            else
                ## Un-mapping.
                UMAP_RV=$(rbd unmap $DEV 2>&1)
                if [ $? -ne 0 ]; then
                    RET=$((${RET}+$?))
                    RET_OP=1
                fi
            fi
            #log_action_end_msg ${RET_OP} "${UMAP_RV}"
            [ -n "${UMNT_RV}" ] && log_action_msg "${UMNT_RV}"
        done
    fi
    exit ${RET}
}

case "$1" in
  start)
    do_map
    ;;

  stop)
    do_unmap
    ;;

  restart|force-reload)
    $0 stop
    $0 start
    ;;

  reload)
    do_map
    ;;

  status)
    rbd showmapped
    ;;

  *)
    log_success_msg "Usage: rbdmap {start|stop|restart|force-reload|reload|status}"
    exit 1
    ;;
esac
Even after replacing a few log_* helpers that do not exist on CentOS, the script still misbehaved in practice. The symptoms are as follows:
1. If only one image is mapped with rbd map, yielding /dev/rbd0, the rbdmap script can map and unmap /dev/rbd0 normally.
2. Once /dev/rbd0 has been formatted and mounted on a directory and rbdmap is used again, shutdown hangs at "Unmounting filesystems" and the machine can only be powered off forcibly. After powering back on, do_map() in rbdmap runs and everything looks normal again.
Further investigation showed that once /dev/rbd0 is mounted on a directory, rbdmap never executes its do_unmap() function at shutdown; even an explicit umount added inside that function is not executed.
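A simple way to confirm a "this function never runs" suspicion like this (my own debugging sketch, not part of the original script) is to drop a trace line at the very top of do_unmap() and look for it after the next reboot:

# added temporarily at the top of do_unmap(); remove after the test
echo "rbdmap stop: do_unmap entered at $(date)" >> /var/log/rbdmap-debug.log
logger -t rbdmap "do_unmap entered"    # duplicate the marker into syslog while it is still running

If no marker ever shows up across a reboot, the stop action itself is never reached, which matches the behaviour described above.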
I settled on a compromise: add explicit umount calls to the stop function of a service that stops before rbdmap does. With that in place, reboots went through normally.
First, look at the start/stop order of ceph and rbdmap:
head rbdmap
#!/bin/bash
#
# rbdmap Ceph RBD Mapping
#
# chkconfig: 2345 70 70
# description: Ceph RBD Mapping

head ceph
#!/bin/sh
# Start/stop ceph daemons
# chkconfig: - 60 80
### BEGIN INIT INFO
# Provides: ceph
# Default-Start:
# Default-Stop:
# Required-Start: $remote_fs $named $network $time
# Required-Stop: $remote_fs $named $network $time
As the chkconfig lines show, ceph starts before rbdmap, and rbdmap stops before ceph. Since NFS is used here to export the block devices mapped by rbdmap, look at the NFS start/stop order as well:
head /etc/init.d/nfs
#!/bin/sh
#
# nfs           This shell script takes care of starting and stopping
#               the NFS services.
#
# chkconfig: - 30 60
# description: NFS is a popular protocol for file sharing across networks.
#              This service provides NFS server functionality, which is \
#              configured via the /etc/exports file.
# probe: true
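The chkconfig lines above translate into S/K symlinks under /etc/rc*.d, and the kill scripts run in ascending numeric order at shutdown. A quick way to double-check the actual ordering on a given machine (assuming all three services are registered with chkconfig and the default priorities are kept):

# kill scripts run in ascending numeric order at shutdown (runlevels 0 and 6)
ls /etc/rc6.d/ | grep -Ei 'nfs|rbdmap|ceph'
# expected, given the stop priorities above: K60nfs  K70rbdmap  K80ceph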
NFS is the first of the three to start and also the first to stop, so the umount commands go into the stop function of the nfs init script:
umount /mnt/nfs
umount /mnt/nfs2
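For context, this is roughly how the addition sits in /etc/init.d/nfs; a minimal sketch, with the rest of the original stop() body left untouched and the mount points being the ones used in this setup:

stop() {
    # added: unmount the RBD-backed exports first, so the filesystems are no
    # longer busy when rbdmap later tries to unmap /dev/rbd0 and /dev/rbd1
    umount /mnt/nfs
    umount /mnt/nfs2
    # ... the original stop() body of the nfs init script continues unchanged ...
}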
And the corresponding mount commands are added to rbdmap:
mount /dev/rbd0 -o rw,noexec,nodev,noatime,nobarrier,discard /mnt/nfs
mount /dev/rbd1 -o rw,noexec,nodev,noatime,nobarrier,discard /mnt/nfs2
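The post does not show exactly where these lines live inside /etc/init.d/rbdmap. Note that do_map() ends with exit ${RET}, so anything placed after the do_map call in the start) branch would never run; a workable spot (my assumption, not necessarily the author's exact edit) is just before that exit:

    # mount the freshly mapped images explicitly instead of relying on
    # "mount --fake" plus /etc/fstab entries (this placement is my assumption)
    mount /dev/rbd0 -o rw,noexec,nodev,noatime,nobarrier,discard /mnt/nfs
    mount /dev/rbd1 -o rw,noexec,nodev,noatime,nobarrier,discard /mnt/nfs2

    exit ${RET}
}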
/etc/ceph/rbdmap is configured as follows:
backup1/backup.img
backup2/backup.img
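Both entries contain a '/', so do_map() takes the */* branch and uses the names as given. At boot the script therefore ends up running one rbd map per line, roughly as follows (device numbers depend on mapping order, and the /dev/rbd/<pool>/<image> symlinks come from Ceph's udev rules):

rbd map backup1/backup.img    # -> /dev/rbd0, plus the /dev/rbd/backup1/backup.img symlink
rbd map backup2/backup.img    # -> /dev/rbd1, plus /dev/rbd/backup2/backup.img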
I remember this worked fine when testing with ceph 0.80; the problem described above appeared with 0.87.1.
That concludes this case analysis of the rbdmap problem encountered in Ceph. Hopefully the content above is of some help; if you found the article useful, feel free to share it.