This post summarizes commonly used Ceph commands. The operations covered are simple, quick, and practical; each numbered section below addresses one common task.
1. Create a custom pool
ceph osd pool create <poolname> <pg_num> <pgp_num>
Here pgp_num is the effective number of placement groups used for placement and is an optional parameter. pg_num should be large enough; rather than sticking rigidly to the official formula, choose 256, 512, 1024, 2048, or 4096 according to the actual cluster size.
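For example, a minimal sketch assuming a hypothetical pool named testpool:
ceph osd pool create testpool 256 256
# Confirm the PG count afterwards
ceph osd pool get testpool pg_num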
2. Set a pool's replica count, minimum replica count, and maximum replica count
ceph osd pool set <poolname> size 2
ceph osd pool set <poolname> min_size 1
ceph osd pool set <poolname> max_size 10
If resources are limited and you do not want to keep 3 replicas, use this command to change the number of replicas stored for a particular pool.
Use get to retrieve the replica count of a particular pool:
ceph osd pool get <poolname> size
3. Add an OSD
You can add OSDs with ceph-deploy:
ceph-deploy osd prepare monosd1:/mnt/ceph osd2:/mnt/ceph
ceph-deploy osd activate monosd1:/mnt/ceph osd2:/mnt/ceph
# Equivalent to:
ceph-deploy osd create monosd1:/mnt/ceph osd2:/mnt/ceph
# Another option is to specify the journal path for each OSD at creation time:
ceph-deploy osd create osd1:/cephmp1:/dev/sdf1 /cephmp2:/dev/sdf2
You can also add one manually:
## Prepare disk first, create partition and format it
<insert parted oneliner>
mkfs.xfs -f /dev/sdd
mkdir /cephmp1
mount /dev/sdd /cephmp1
cd /cephmp1
ceph-osd -i 12 --mkfs --mkkey
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /cephmp1/keyring
# Change the crushmap
ceph osd getcrushmap -o map
crushtool -d map -o map.txt
vim map.txt
crushtool -c map.txt -o map
ceph osd setcrushmap -i map
## Start it
/etc/init.d/ceph start osd.12
4. Remove an OSD
First take the OSD out of service:
## Mark it out
ceph osd out 5
## Wait for data migration to complete (ceph -w), then stop it
service ceph -a stop osd.5
## Now it is marked out and down
Then remove it:
## If deleting from active stack, be sure to follow the above to mark it out and down
ceph osd crush remove osd.5
## Remove auth for disk
ceph auth del osd.5
## Remove disk
ceph osd rm 5
## Remove from ceph.conf and copy new conf to all hosts
5. View the overall OSD state, OSD details, and CRUSH details
ceph osd tree
ceph osd dump --format=json-pretty
ceph osd crush dump --format=json-pretty
6. Get and modify CRUSH maps
## save current crushmap in binary
ceph osd getcrushmap -o crushmap.bin
## Convert to txt
crushtool -d crushmap.bin -o crushmap.txt
## Edit it and re-convert to binary
crushtool -c crushmap.txt -o crushmap.bin.new
## Inject into running system
ceph osd setcrushmap -i crushmap.bin.new
## If you've added a new ruleset and want to use that for a pool, do something like:
ceph osd pool default crush rule = 4
# You can also set a pool's rule directly:
ceph osd pool set testpool crush_ruleset <ruleset_id>
-o=output; -d=decompile; -c=compile; -i=input
With these abbreviations in mind, the commands above are easy to follow.
7. Add/remove a journal
To improve performance, the Ceph journal is usually placed on a separate disk or partition:
First set the cluster's nodown flag with:
ceph osd set nodown
# Relevant ceph.conf options
-- existing setup --
[osd]
osd data = /srv/ceph/osd$id
osd journal = /srv/ceph/osd$id/journal
osd journal size = 512

# Stop the OSDs:
/etc/init.d/ceph osd.0 stop
/etc/init.d/ceph osd.1 stop
/etc/init.d/ceph osd.2 stop

# Flush the journal:
ceph-osd -i 0 --flush-journal
ceph-osd -i 1 --flush-journal
ceph-osd -i 2 --flush-journal

# Now update ceph.conf - this is very important or you'll just recreate journal on the same disk again
-- change to [filebased journal] --
[osd]
osd data = /srv/ceph/osd$id
osd journal = /srv/ceph/journal/osd$id/journal
osd journal size = 10000

-- change to [partitionbased journal (journal in this case would be on /dev/sda2)] --
[osd]
osd data = /srv/ceph/osd$id
osd journal = /dev/sda2
osd journal size = 0

# Create new journal on each disk
ceph-osd -i 0 --mkjournal
ceph-osd -i 1 --mkjournal
ceph-osd -i 2 --mkjournal

# Done, now start all OSDs again
/etc/init.d/ceph osd.0 start
/etc/init.d/ceph osd.1 start
/etc/init.d/ceph osd.2 start
Remember to unset nodown afterwards:
ceph osd unset nodown
8. ceph cache pool
Preliminary testing suggests that the performance of Ceph's cache pool is not good, and is sometimes even lower than without a cache pool. Alternatives such as flashcache can be used instead to optimize Ceph caching.
ceph osd tier add satapool ssdpool
ceph osd tier cache-mode ssdpool writeback
ceph osd pool set ssdpool hit_set_type bloom
ceph osd pool set ssdpool hit_set_count 1
## In this example 80-85% of the cache pool is equal to 280GB
ceph osd pool set ssdpool target_max_bytes $((280*1024*1024*1024))
ceph osd tier set-overlay satapool ssdpool
ceph osd pool set ssdpool hit_set_period 300
ceph osd pool set ssdpool cache_min_flush_age 300     # 5 minutes
ceph osd pool set ssdpool cache_min_evict_age 1800    # 30 minutes
ceph osd pool set ssdpool cache_target_dirty_ratio .4
ceph osd pool set ssdpool cache_target_full_ratio .8
9. View the runtime configuration
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
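If you only need one option rather than the full dump, the admin socket also supports config get; the parameter name below is just an illustrative example:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_journal_size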
10. View and monitor the cluster state
ceph health
ceph health detail
ceph status
ceph -s    # --format=json-pretty can be appended
ceph osd stat
ceph osd dump
ceph osd tree
ceph mon stat
ceph quorum_status
ceph mon dump
ceph mds stat
ceph mds dump
11. List all pools
ceph osd lspools
rados lspools
12. Check whether KVM and QEMU support rbd
qemu-system-x86_64 -drive format=?
qemu-img -h | grep rbd
13. Inspect a specific pool and the objects in it
rbd ls testpool
rbd create testpool/test.img -s 1024 --image-format=2
rbd info testpool/test.img
rbd rm testpool/test.img
# Count the number of objects backing the image
rados -p testpool ls | grep ^rb.0.11a1 | wc -l
# Import a file and inspect it
rados mkpool testpool
rados put -p testpool logo.png logo.png
ceph osd map testpool logo.png
rbd import logo.png testpool/logo.png
rbd info testpool/logo.png
14. Map/unmap the created block device
ceph osd pool create testpool 256 256
rbd create testpool/test.img -s 1024 --image-format=2
rbd map testpool/test.img
rbd showmapped
mkfs.xfs /dev/rbd0
rbd unmap /dev/rbd0
15. Create snapshots
# Create
rbd snap create testpool/test.img@test.img-snap1
# List
rbd snap ls testpool/test.img
# Roll back
rbd snap rollback testpool/test.img@test.img-snap1
# Delete
rbd snap rm testpool/test.img@test.img-snap1
# Purge all snapshots
rbd snap purge testpool/test.img
16. Calculate a reasonable number of PGs
The official recommendation is 50-100 PGs per OSD. Total PGs = OSDs * 100 / replica count; for example, with 6 OSDs and 2 replicas, PGs = 6 * 100 / 2 = 300.
pg_num can only be increased, never decreased; after increasing pg_num you must also increase pgp_num to match, as shown below.
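A minimal sketch, assuming a hypothetical pool named testpool being grown from 256 to 512 PGs:
ceph osd pool set testpool pg_num 512
ceph osd pool set testpool pgp_num 512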
17. Pool operations
ceph osd pool create testpool 256 256
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
ceph osd pool rename testpool anothertestpool
ceph osd pool mksnap testpool testpool-snap
18. Wiping nodes before reinstalling
ceph-deploy purge osd0 osd1
ceph-deploy purgedata osd0 osd1
ceph-deploy forgetkeys
ceph-deploy disk zap --fs-type xfs osd0:/dev/sdb1
19. Change the storage path of an OSD journal
# The noout flag prevents OSDs from being marked out, which would set their weight to 0
ceph osd set noout
service ceph stop osd.1
ceph-osd -i 1 --flush-journal
mount /dev/sdc /journal
ceph-osd -i 1 --mkjournal /journal
service ceph start osd.1
ceph osd unset noout
20. XFS format and mount options
mkfs.xfs -n size=64k /dev/sdb1
# Mount options for /etc/fstab
rw,noexec,nodev,noatime,nodiratime,nobarrier
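For reference, a sketch of a complete /etc/fstab entry using these options; the device and mount point here are illustrative assumptions:
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  rw,noexec,nodev,noatime,nodiratime,nobarrier  0 0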
21. Authentication configuration
[global]
auth cluster required = none
auth service required = none
auth client required = none
# Before 0.56:
auth supported = none
22. When pg_num is no longer sufficient: migrate and rename
ceph osd pool create new-pool pg_num
rados cppool old-pool new-pool
ceph osd pool delete old-pool
ceph osd pool rename new-pool old-pool
# Or simply increase the pool's pg_num
23. Push the config file
ceph-deploy --overwrite-conf config push mon1 mon2 mon3
24. Modify config parameters at runtime
ceph tell osd.* injectargs '--mon_clock_drift_allowed 1'
When using this command, check whether the parameter belongs to the mon, mds, or osd daemons and target the matching daemon type.
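For instance, since mon_clock_drift_allowed is a monitor option, it would normally be injected into the mons instead; a minimal sketch:
ceph tell mon.* injectargs '--mon_clock_drift_allowed 1'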
That concludes this summary of commonly used Ceph commands; the best way to get familiar with them is to try them out in practice.