A Summary of Ceph Monitor Failures and an OSD Disk Crash Caused by an IP Change
The company moved offices, so every server got a new IP address. After reconfiguring the IPs on the Ceph servers and starting the cluster, the monitor processes failed to start: each monitor kept trying to bind to its old IP address, which of course could never succeed. At first I suspected the servers' network settings, but changing the hostname, ceph.conf, and so on got me nowhere. Working through it step by step, I found that the addresses stored in the monmap were still the old ones; Ceph reads the monmap when starting a monitor, so the monmap itself has to be modified. The procedure is as follows:
- #Retrieve the current monitor map, if the cluster still responds (see the note below otherwise)
- # ceph mon getmap -o monmap.bin
- # monmaptool --print monmap.bin
- #Create a new monmap with the new monitor addresses
- # monmaptool --create --add mon0 192.168.32.2:6789 --add osd1 192.168.32.3:6789 \
- --add osd2 192.168.32.4:6789 --fsid 61a520db-317b-41f1-9752-30cedc5ffb9a \
- --clobber monmap
- #Check the new contents
- # monmaptool --print monmap
- #Inject the new monmap into each monitor (with the monitors stopped)
- # ceph-mon -i mon0 --inject-monmap monmap
- # ceph-mon -i osd1 --inject-monmap monmap
- # ceph-mon -i osd2 --inject-monmap monmap
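Side note (not part of the original steps): when none of the monitors can form a quorum, ceph mon getmap may simply hang. In that case the current map can be extracted directly from a stopped monitor's data store with --extract-monmap, for example:
- # service ceph stop mon.mon0
- # ceph-mon -i mon0 --extract-monmap /tmp/monmap.old
- # monmaptool --print /tmp/monmap.old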
After that, the monitors started up and everything was back to normal.
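A quick way to confirm that all three monitors rejoined the quorum (my own sanity check, not part of the original log; the names follow this cluster's mon0/osd1/osd2 naming):
- # ceph mon stat
- # ceph quorum_status --format json-pretty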
However, the problem described in the previous post showed up again: one OSD disk went down. After searching around, all I found was a report on the Ceph site suggesting it is a Ceph bug. With no way to repair it, I removed the OSD and reinstalled it:
- # service ceph stop osd.4
- #No need to run ceph osd crush remove osd.4 here
- # ceph auth del osd.4
- # ceph osd rm 4
- # umount /cephmp1
- # mkfs.xfs -f /dev/sdc
- # mount /dev/sdc /cephmp1
- #Running ceph-deploy osd create here failed to install the osd properly, so prepare + activate were used instead
- # ceph-deploy osd prepare osd2:/cephmp1:/dev/sdf1
- # ceph-deploy osd activate osd2:/cephmp1:/dev/sdf1
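Before looking at the overall status, I usually check that the rebuilt OSD is back in the CRUSH map and watch the rebalancing (my own habit, not taken from the original log):
- # ceph osd tree
- # ceph -w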
After that the OSD was restarted and came up successfully. Ceph rebalances the data automatically; the final state was:
- [root@osd2 ~]# ceph -s
- cluster 61a520db-317b-41f1-9752-30cedc5ffb9a
- health HEALTH_WARN 9 pgs incomplete; 9 pgs stuck inactive; 9 pgs stuck unclean; 3 requests are blocked > 32 sec
- monmap e3: 3 mons at {mon0=192.168.32.2:6789/0,osd1=192.168.32.3:6789/0,osd2=192.168.32.4:6789/0}, election epoch 76, quorum 0,1,2 mon0,osd1,osd2
- osdmap e689: 6 osds: 6 up, 6 in
- pgmap v189608: 704 pgs, 5 pools, 34983 MB data, 8966 objects
- 69349 MB used, 11104 GB / 11172 GB avail
- 695 active+clean
- 9 incomplete
Nine PGs were left in the incomplete state.
- [root@osd2 ~]# ceph health detail
- HEALTH_WARN 9 pgs incomplete; 9 pgs stuck inactive; 9 pgs stuck unclean; 3 requests are blocked > 32 sec; 1 osds have slow requests
- pg 5.95 is stuck inactive for 838842.634721, current state incomplete, last acting [1,4]
- pg 5.66 is stuck inactive since forever, current state incomplete, last acting [4,0]
- pg 5.de is stuck inactive for 808270.105968, current state incomplete, last acting [0,4]
- pg 5.f5 is stuck inactive for 496137.708887, current state incomplete, last acting [0,4]
- pg 5.11 is stuck inactive since forever, current state incomplete, last acting [4,1]
- pg 5.30 is stuck inactive for 507062.828403, current state incomplete, last acting [0,4]
- pg 5.bc is stuck inactive since forever, current state incomplete, last acting [4,1]
- pg 5.a7 is stuck inactive for 499713.993372, current state incomplete, last acting [1,4]
- pg 5.22 is stuck inactive for 496125.831204, current state incomplete, last acting [0,4]
- pg 5.95 is stuck unclean for 838842.634796, current state incomplete, last acting [1,4]
- pg 5.66 is stuck unclean since forever, current state incomplete, last acting [4,0]
- pg 5.de is stuck unclean for 808270.106039, current state incomplete, last acting [0,4]
- pg 5.f5 is stuck unclean for 496137.708958, current state incomplete, last acting [0,4]
- pg 5.11 is stuck unclean since forever, current state incomplete, last acting [4,1]
- pg 5.30 is stuck unclean for 507062.828475, current state incomplete, last acting [0,4]
- pg 5.bc is stuck unclean since forever, current state incomplete, last acting [4,1]
- pg 5.a7 is stuck unclean for 499713.993443, current state incomplete, last acting [1,4]
- pg 5.22 is stuck unclean for 496125.831274, current state incomplete, last acting [0,4]
- pg 5.de is incomplete, acting [0,4]
- pg 5.bc is incomplete, acting [4,1]
- pg 5.a7 is incomplete, acting [1,4]
- pg 5.95 is incomplete, acting [1,4]
- pg 5.66 is incomplete, acting [4,0]
- pg 5.30 is incomplete, acting [0,4]
- pg 5.22 is incomplete, acting [0,4]
- pg 5.11 is incomplete, acting [4,1]
- pg 5.f5 is incomplete, acting [0,4]
- 2 ops are blocked > 8388.61 sec
- 1 ops are blocked > 4194.3 sec
- 2 ops are blocked > 8388.61 sec on osd.0
- 1 ops are blocked > 4194.3 sec on osd.0
- 1 osds have slow requests
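For an incomplete PG, the usual next step is to look at its peering information in the query output, e.g. fields like recovery_state, down_osds_we_would_probe and peering_blocked_by (the grep filter below is just a convenience; this is general guidance, not output copied from this cluster):
- # ceph pg 5.de query | grep -A 30 recovery_state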
Another round of searching turned up nothing. A quote from someone who ran into the same problem:
- I already tried "ceph pg repair 4.77", stop/start OSDs, "ceph osd lost", "ceph pg force_create_pg 4.77".
- Most scary thing is "force_create_pg" does not work. At least it should be a way to wipe out a incomplete PG
- without destroying a whole pool.
I tried the methods above as well; none of them worked. For now the problem remains unsolved, which is rather frustrating.
PS: commonly used PG operations
- [root@osd2 ~]# ceph pg map 5.de
- osdmap e689 pg 5.de (5.de) -> up [0,4] acting [0,4]
- [root@osd2 ~]# ceph pg 5.de query
- [root@osd2 ~]# ceph pg scrub 5.de
- instructing pg 5.de on osd.0 to scrub
- [root@osd2 ~]# ceph pg 5.de mark_unfound_lost revert
- pg has no unfound objects
- #ceph pg dump_stuck stale
- #ceph pg dump_stuck inactive
- #ceph pg dump_stuck unclean
- [root@osd2 ~]# ceph osd lost 1
- Error EPERM: are you SURE? this might mean real, permanent data loss. pass --yes-i-really-mean-it if you really do.
- [root@osd2 ~]#
- [root@osd2 ~]# ceph osd lost 4 --yes-i-really-mean-it
- osd.4 is not down or doesn't exist
- [root@osd2 ~]# service ceph stop osd.4
- === osd.4 ===
- Stopping Ceph osd.4 on osd2...kill 22287...kill 22287...done
- [root@osd2 ~]# ceph osd lost 4 --yes-i-really-mean-it
- marked osd lost in epoch 690
- [root@osd1 mnt]# ceph pg repair 5.de
- instructing pg 5.de on osd.0 to repair
- [root@osd1 mnt]# ceph pg repair 5.de
- instructing pg 5.de on osd.0 to repair
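One more pointer for the blocked requests reported on osd.0 (not something I tried at the time, just a note): the OSD admin socket can show which ops are stuck, assuming the default socket path /var/run/ceph/ceph-osd.0.asok:
- # ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight
- # ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops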