This article walks through installing Ceph on Ubuntu in both single-node and multi-node setups. It is quite practical, so it is shared here as a reference, and hopefully you will get something out of it.
Ceph is a distributed file system that adds replication and fault tolerance while staying POSIX-compatible. Its most distinctive feature is the distributed metadata server, which places files with CRUSH (Controlled Replication Under Scalable Hashing), a pseudo-random placement algorithm. At the core of Ceph is RADOS (Reliable Autonomic Distributed Object Store), an object storage cluster that itself provides high availability, error detection, and recovery for objects.
The Ceph ecosystem can be divided into four parts:
client: the client (data consumer). The client exports a POSIX file system interface for applications to use, and connects to mon/mds/osd for metadata and data exchange. The earliest client was implemented with FUSE; it has since moved into the kernel, so a ceph.ko kernel module has to be built before it can be used (a quick availability check is sketched after this overview).
mon: the cluster monitor, whose daemon is cmon (Ceph Monitor). The mon monitors and manages the whole cluster and exports a network file system to clients, which can mount Ceph with the command mount -t ceph monitor_ip:/ mount_point. According to the official documentation, three mons are enough to keep the cluster reliable.
mds: the metadata server, whose daemon is cmds (Ceph Metadata Server). Ceph can run multiple MDS instances as a distributed metadata server cluster, which brings in Ceph's dynamic subtree partitioning for load balancing.
osd: the object storage cluster, whose daemon is cosd (Ceph Object Storage Device). The osd wraps the local file system and exposes an object storage interface, storing both data and metadata as objects. The local file system can be ext2/3, but Ceph considers those poorly suited to the osd's particular access pattern; the project originally implemented its own ebofs and has since switched to btrfs.
Ceph scales to hundreds or even thousands of nodes, and the four parts above are best spread across different nodes. For basic testing, however, the mon and mds can share a node, or all four parts can be deployed on a single node.
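Since the client described above relies on the in-kernel CephFS driver, it is worth confirming up front that the ceph kernel module is available on the machine that will do the mounting. A minimal check, assuming a stock Ubuntu kernel that ships the module (if modprobe fails, the module has to be built for the running kernel first):
# sudo modprobe ceph # load the CephFS kernel client module
# lsmod | grep ceph # the module should appear in the list once loaded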
Linux version: Ubuntu Server 12.04.1 LTS;
Ceph version: 0.72.2 (installed later);
If the network is slow or package installation fails, consider switching to a mirror inside China:
# sudo sed -i 's#us.archive.ubuntu.com#mirrors.163.com#g' /etc/apt/sources.list
# sudo apt-get update
Ubuntu 12.04 ships Ceph 0.41 by default. To install a newer Ceph release, add the release key to APT and update sources.list:
# sudo wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
# sudo echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
# sudo apt-get update
# date # check whether the system time is correct; if it is, skip the next two steps
# sudo date -s "2013-11-04 15:05:57" # set the system time
# sudo hwclock -w # write the time to the hardware clock
Make sure SELinux is disabled (Ubuntu does not enable it by default).
It is also recommended to disable the firewall:
# sudo ufw disable # disable the firewall
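An optional sanity check for both points above, as a small sketch: sestatus only exists if the policycoreutils package happens to be installed, so on a stock Ubuntu install the command is simply missing, which itself tells you SELinux is not in use.
# which sestatus && sestatus || echo "SELinux tools not installed"
# sudo ufw status # should report "Status: inactive" once the firewall is disabled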
192.168.73.129 (hostname ceph2, with two partitions /dev/sdb1 and /dev/sdb2 provided for the osds; the client/mon/mds are installed here as well);
# apt-get install ceph ceph-common ceph-mds
# ceph -v # prints the Ceph version and key information
# vim /etc/ceph/ceph.conf
[global]
max open files = 131072
#For version 0.55 and beyond, you must explicitly enable
#or disable authentication with "auth" entries in [global].
auth cluster required = none
auth service required = none
auth client required = none
[osd]
osd journal size = 1000
#The following assumes ext4 filesystem.
filestore xattr use omap = true
#For Bobtail (v 0.56) and subsequent versions, you may
#add settings for mkcephfs so that it will create and mount
#the file system on a particular OSD for you. Remove the comment `#`
#character for the following settings and replace the values
#in braces with appropriate values, or leave the following settings
#commented out to accept the default values. You must specify the
#--mkfs option with mkcephfs in order for the deployment script to
#utilize the following settings, and you must define the 'devs'
#option for each osd instance; see below.
osd mkfs type = xfs
osd mkfs options xfs = -f # default for xfs is "-f"
osd mount options xfs = rw,noatime # default mount option is "rw,noatime"
#For example, for ext4, the mount option might look like this:
#osd mkfs options ext4 = user_xattr,rw,noatime
#Execute $ hostname to retrieve the name of your host,
#and replace {hostname} with the name of your host.
#For the monitor, replace {ip-address} with the IP
#address of your host.
[mon.a]
host = ceph2
mon addr = 192.168.73.129:6789
[osd.0]
host = ceph2
#For Bobtail (v 0.56) and subsequent versions, you may
#add settings for mkcephfs so that it will create and mount
#the file system on a particular OSD for you. Remove the comment `#`
#character for the following setting for each OSD and specify
#a path to the device if you use mkcephfs with the --mkfs option.
devs = /dev/sdb1
[osd.1]
host= ceph2
devs= /dev/sdb2
[mds.a]
host= ceph2
Note: for older Ceph releases (for example 0.42), a line mon data = /data/$name has to be added under the [mon] section and a line osd data = /data/$name under the [osd] section to serve as the data directories; the later steps that touch the data directories then have to be adjusted accordingly.
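For illustration, assuming one of those older releases, the two extra lines would sit in ceph.conf roughly as follows (the /data/$name paths are the ones named in the note above and are not part of the configuration used in this article):
[mon]
mon data = /data/$name
[osd]
osd data = /data/$name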
# mkdir -p /var/lib/ceph/osd/ceph-0
# mkdir -p /var/lib/ceph/osd/ceph-1
# mkdir -p /var/lib/ceph/mon/ceph-a
# mkdir -p /var/lib/ceph/mds/ceph-a
Format the new partitions with xfs or btrfs:
# mkfs.xfs -f /dev/sdb1
# mkfs.xfs -f /dev/sdb2
The first time around, the partitions must be mounted so that the initialization data can be written:
# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
# mount /dev/sdb2 /var/lib/ceph/osd/ceph-1
Note: before every initialization run, stop the Ceph service first and empty the existing data directories:
# /etc/init.d/ceph stop
# rm -rf /var/lib/ceph/*/ceph-*/*
Then the initialization can be run on the node where the mon lives:
# sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph2.keyring
Note: whenever the configuration file ceph.conf changes, it is best to rerun the initialization.
On the node where the mon lives, run:
# sudo service ceph -a start
Note: this step may produce the following message:
=== osd.0 ===
Mounting xfs on ceph5:/var/lib/ceph/osd/ceph-0
Error ENOENT: osd.0 does not exist. create it before updating the crush map
Run the following command, then repeat the service start command above, and the error goes away:
# ceph osd create
# sudo ceph health # the status can also be checked with ceph -s
If it returns HEALTH_OK, the deployment succeeded.
Note: if you see a message like this:
HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds
or like this:
HEALTH_WARN 178 pgs peering; 178 pgs stuck inactive; 429 pgs stuck unclean; recovery 2/24 objects degraded (8.333%)
running the following command resolves it:
# ceph pg dump_stuck stale && ceph pg dump_stuck inactive && ceph pg dump_stuck unclean
If you see a message like this:
HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42 degraded (50.000%)
it means there are not enough osds; by default Ceph needs at least two osds.
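To check how many osds the cluster actually knows about (and whether they are up and in), the following standard commands help; for the layout described in this article the summary should eventually show two osds:
# ceph osd stat # prints a summary such as "2 osds: 2 up, 2 in"
# ceph osd tree # lists each osd and the host it is placed under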
Mount the node where the mon lives from the client:
# sudo mkdir /mnt/mycephfs
# sudo mount -t ceph 192.168.73.129:6789:/ /mnt/mycephfs
Verify on the client:
# df -h # if /mnt/mycephfs shows up with its usage, Ceph has been installed successfully.
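As an extra, optional check, writing a small test file through the mount point confirms that data really reaches the osds; the file name below is only an example:
# dd if=/dev/zero of=/mnt/mycephfs/testfile bs=1M count=10 # write 10 MB into CephFS
# ls -lh /mnt/mycephfs/testfile # the file should report roughly 10 MB
# rm /mnt/mycephfs/testfile # clean up afterwards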
For the multi-node case, Ceph has the following requirements:
Each node's hostname must be set, and the nodes must be able to reach one another by hostname.
The nodes must be able to ssh to one another without a password (set up with ssh-keygen).
192.168.73.129 (hostname ceph2, with one partition /dev/sdb1 provided for an osd);
192.168.73.130 (hostname ceph3, with one partition /dev/sdb1 provided for an osd);
192.168.73.131 (hostname ceph4, where the client/mon/mds are installed);
Set the appropriate hostname on each node, for example:
# vim /etc/hostname
ceph2
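Editing /etc/hostname only takes effect after a reboot; to apply the new name to the running system right away (shown here for ceph2, and likewise on the other nodes):
# hostname ceph2 # set the hostname for the current session
# hostname # verify that the new name is active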
Edit /etc/hosts and add the following lines:
192.168.73.129 ceph2
192.168.73.130 ceph3
192.168.73.131 ceph4
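A quick way to confirm that name resolution works is to ping each peer by hostname once from every node, for example:
# ping -c 1 ceph2
# ping -c 1 ceph3
# ping -c 1 ceph4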
Create an RSA key pair on each node:
# ssh-keygen -t rsa # just press Enter at every prompt
# touch /root/.ssh/authorized_keys
Configure ceph2 first, so that ceph2 can access ceph3 and ceph4 without a password:
ceph2# scp /root/.ssh/id_rsa.pub ceph3:/root/.ssh/id_rsa.pub_ceph2
ceph2# scp /root/.ssh/id_rsa.pub ceph4:/root/.ssh/id_rsa.pub_ceph2
ceph2# ssh ceph3 "cat /root/.ssh/id_rsa.pub_ceph2>> /root/.ssh/authorized_keys"
ceph2# ssh ceph4 "cat /root/.ssh/id_rsa.pub_ceph2 >> /root/.ssh/authorized_keys"
Nodes ceph3 and ceph4 need to be configured in the same way, following the commands above.
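Once the keys have been exchanged, the passwordless setup can be verified from ceph2 (and likewise from the other nodes); each command should print the remote hostname without prompting for a password:
ceph2# ssh ceph3 hostname
ceph2# ssh ceph4 hostname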
Install the Ceph packages on each node:
# apt-get install ceph ceph-common ceph-mds
# ceph -v # prints the Ceph version and key information
# vim /etc/ceph/ceph.conf
[global]
max open files = 131072
#For version 0.55 and beyond, you must explicitly enable
#or disable authentication with "auth" entries in [global].
auth cluster required = none
auth service required = none
auth client required = none
[osd]
osd journal size = 1000
#The following assumes ext4 filesystem.
filestore xattr use omap = true
#For Bobtail (v 0.56) and subsequent versions, you may
#add settings for mkcephfs so that it will create and mount
#the file system on a particular OSD for you. Remove the comment `#`
#character for the following settings and replace the values
#in braces with appropriate values, or leave the following settings
#commented out to accept the default values. You must specify the
#--mkfs option with mkcephfs in order for the deployment script to
#utilize the following settings, and you must define the 'devs'
#option for each osd instance; see below.
osd mkfs type = xfs
osd mkfs options xfs = -f # default for xfs is "-f"
osd mount options xfs = rw,noatime # default mount option is "rw,noatime"
#For example, for ext4, the mount option might look like this:
#osd mkfs options ext4 = user_xattr,rw,noatime
#Execute $ hostname to retrieve the name of your host,
#and replace {hostname} with the name of your host.
#For the monitor, replace {ip-address} with the IP
#address of your host.
[mon.a]
host = ceph4
mon addr = 192.168.73.131:6789
[osd.0]
host = ceph2
#For Bobtail (v 0.56) and subsequent versions, you may
#add settings for mkcephfs so that it will create and mount
#the file system on a particular OSD for you. Remove the comment `#`
#character for the following setting for each OSD and specify
#a path to the device if you use mkcephfs with the --mkfs option.
devs = /dev/sdb1
[osd.1]
host = ceph3
devs= /dev/sdb1
[mds.a]
host = ceph4
Once the configuration file has been created, it also needs to be copied to every node except pure clients (and kept identical from then on):
ceph2# scp /etc/ceph/ceph.conf ceph3:/etc/ceph/ceph.conf
ceph2# scp /etc/ceph/ceph.conf ceph4:/etc/ceph/ceph.conf
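If in doubt, the copies can be compared with a checksum; identical md5sums confirm that all nodes share the same configuration (a small sketch, run from ceph2):
ceph2# md5sum /etc/ceph/ceph.conf
ceph2# ssh ceph3 md5sum /etc/ceph/ceph.conf
ceph2# ssh ceph4 md5sum /etc/ceph/ceph.conf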
Create the data directories on each node:
# mkdir -p /var/lib/ceph/osd/ceph-0
# mkdir -p /var/lib/ceph/osd/ceph-1
# mkdir -p /var/lib/ceph/mon/ceph-a
# mkdir -p /var/lib/ceph/mds/ceph-a
On the osd nodes ceph2 and ceph3, format the new partition with xfs or btrfs:
# mkfs.xfs -f /dev/sdb1
The first time around, nodes ceph2 and ceph3 must each mount their partition so that the initialization data can be written:
ceph2# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
ceph3# mount /dev/sdb1 /var/lib/ceph/osd/ceph-1
Note: before every initialization run, stop the Ceph service on every node first and empty the existing data directories:
# /etc/init.d/ceph stop
# rm -rf /var/lib/ceph/*/ceph-*/*
Then the initialization can be run on ceph4, the node where the mon lives:
# sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph4.keyring
Note: whenever the configuration file ceph.conf changes, it is best to rerun the initialization.
On ceph4, the node where the mon lives, run:
# sudo service ceph -a start
Note: this step may produce the following message:
=== osd.0 ===
Mounting xfs on ceph5:/var/lib/ceph/osd/ceph-0
Error ENOENT: osd.0 does not exist. create it before updating the crush map
Run the following command, then repeat the service start command above, and the error goes away:
# ceph osd create
# sudo ceph health # the status can also be checked with ceph -s
If it returns HEALTH_OK, the deployment succeeded.
Note: if you see a message like this:
HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds
or like this:
HEALTH_WARN 178 pgs peering; 178 pgs stuck inactive; 429 pgs stuck unclean; recovery 2/24 objects degraded (8.333%)
running the following command resolves it:
# ceph pg dump_stuck stale && ceph pg dump_stuck inactive && ceph pg dump_stuck unclean
If you see a message like this:
HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42 degraded (50.000%)
it means there are not enough osds; by default Ceph needs at least two osds.
From the client (node ceph4), mount the node where the mon lives (also node ceph4):
# sudo mkdir /mnt/mycephfs
# sudo mount -t ceph 192.168.73.131:6789:/ /mnt/mycephfs
Verify on the client:
# df -h # if /mnt/mycephfs shows up with its usage, Ceph has been installed successfully.
That is all for "Installing Ceph on Ubuntu, single node and multi-node". Hopefully the content above has been of some help and you have learned something from it; if you found the article worthwhile, please share it so more people can see it.