Ceph Distributed Storage: Disk Replacement Procedure
1. Identify the device name and OSD ID of the disk to be replaced:
root@Ceph-B**A**-04-S2-60:~# df -h |grep 17
/dev/sdp3 1.1T 566G 551G 51% /var/lib/ceph/osd/ceph-17
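When many disks are involved, the device/OSD mapping can also be parsed programmatically. A minimal sketch, assuming df output in the format shown above and the standard /var/lib/ceph/osd/ceph-&lt;ID&gt; mount convention (the helper name is hypothetical):

```shell
# Hypothetical helper: pull the device and OSD ID out of one df(1) line,
# assuming the mount point follows the /var/lib/ceph/osd/ceph-<ID> convention.
parse_osd_line() {
    dev=$(echo "$1" | awk '{print $1}')                      # first column: device
    id=$(echo "$1" | awk '{print $6}' | sed 's#.*/ceph-##')  # mount point: OSD ID
    echo "device=$dev osd_id=$id"
}

parse_osd_line "/dev/sdp3 1.1T 566G 551G 51% /var/lib/ceph/osd/ceph-17"
# → device=/dev/sdp3 osd_id=17
```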
2. Identify the physical slot of the disk:
root@Ceph-B**A**-04-S2-60:~# ll /dev/disk/by-path/ |grep sdp
lrwxrwxrwx 1 root root 9 Aug 17 19:17 pci-0000:02:00.0-scsi-0:0:18:0 -> ../../sdp
lrwxrwxrwx 1 root root 10 Aug 17 19:17 pci-0000:02:00.0-scsi-0:0:18:0-part1 -> ../../sdp1
lrwxrwxrwx 1 root root 10 Aug 17 19:17 pci-0000:02:00.0-scsi-0:0:18:0-part2 -> ../../sdp2
lrwxrwxrwx 1 root root 10 Aug 17 19:17 pci-0000:02:00.0-scsi-0:0:18:0-part3 -> ../../sdp3
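The SCSI target number in the by-path name (18 here) often corresponds to the physical slot, but whether that holds depends on the HBA and backplane. A sketch of extracting it:

```shell
# Extract the SCSI target number from a by-path name like the one above.
# Whether this number equals the physical slot is hardware-dependent.
path="pci-0000:02:00.0-scsi-0:0:18:0"
echo "$path" | sed 's/.*scsi-[0-9]*:[0-9]*:\([0-9]*\):.*/\1/'   # → 18
```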
3. Confirm the host serial number:
root@Ceph-B**A**-04-S2-60:~# dmidecode -s system-serial-number
HVNHB**
4. Mark the OSD out of the cluster:
ceph osd out 17
5. Check the cluster status:
ceph -s
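Marking the OSD out triggers data migration, so it is safer to let recovery finish before stopping the daemon. A hedged sketch of a polling loop (the 30-second interval and matching on the exact HEALTH_OK string are assumptions):

```shell
# Poll a health-reporting command until it returns HEALTH_OK.
# In practice the command would be `ceph health`; it is passed in as
# arguments so the loop itself can be exercised without a cluster.
wait_for_health_ok() {
    while true; do
        status=$("$@")
        echo "current: $status"
        [ "$status" = "HEALTH_OK" ] && break
        sleep 30
    done
}

# usage: wait_for_health_ok ceph health
```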
6. Stop the corresponding OSD daemon:
stop ceph-osd id=17
7. Remove the OSD from the CRUSH map:
ceph osd crush remove osd.17
8. Delete the OSD's authentication key:
ceph auth del osd.17
9. Remove the OSD from the cluster:
ceph osd rm 17
10. Unmount the mount point:
umount /var/lib/ceph/osd/ceph-17
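Steps 4 through 10 can be wrapped into a single function for repeat use. A sketch, with a DRY_RUN switch (my addition, not from the original) that prints each command instead of executing it:

```shell
# Run the full OSD removal sequence (out, stop, crush remove, auth del,
# rm, umount) for one OSD ID. DRY_RUN=1 prints the commands instead of
# executing them, so the sequence can be reviewed first.
remove_osd() {
    id=$1
    run="eval"
    [ "${DRY_RUN:-0}" = "1" ] && run="echo"
    $run "ceph osd out $id"
    $run "stop ceph-osd id=$id"
    $run "ceph osd crush remove osd.$id"
    $run "ceph auth del osd.$id"
    $run "ceph osd rm $id"
    $run "umount /var/lib/ceph/osd/ceph-$id"
}

DRY_RUN=1 remove_osd 17
```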
11. Physically replace the disk (details omitted).
12. Zap (wipe and repartition) the new disk:
ceph-deploy disk zap Ceph-B**A**-04-S2-60.domain.tld:sdp
13. Create the OSD:
ceph-deploy osd create Ceph-B**A**-04-S2-60.domain.tld:sdp
14. Adjust the CRUSH map:
ceph osd crush add osd.17 1.17 root=sata pod=sata-pod-01 chassis=sata-chassis-01 host=Ceph-B**A**-04-S2-60-sata
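The CRUSH weight (1.17 above) conventionally reflects the disk's capacity in TiB, though site conventions vary. A sketch of the arithmetic, using a hypothetical 1.2 TB drive (the byte count is not taken from the original):

```shell
# Convert a raw capacity in bytes to TiB, the usual basis for CRUSH weight.
bytes=1200000000000   # hypothetical 1.2 TB drive, for illustration only
awk -v b="$bytes" 'BEGIN { printf "%.2f\n", b / (1024 ^ 4) }'   # → 1.09
```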
15. Tune the mount options:
mount -o remount,nobarrier /var/lib/ceph/osd/ceph-17
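Note that a remount is not persistent across reboots. One way to persist the option for XFS-backed OSDs is via ceph.conf (a sketch, assuming XFS and the filestore-era `osd mount options xfs` setting; nobarrier is generally only considered safe with a battery- or flash-backed write cache):

```ini
; ceph.conf fragment (assumption: OSD data is on XFS)
[osd]
osd mount options xfs = rw,noatime,inode64,nobarrier
```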