I. Configuring CRUSH classes

1. Creating an ssd class

By default, every OSD's class is hdd:
# ceph osd crush class ls
["hdd"]
Check the current OSD layout:
# ceph osd tree
ID CLASS WEIGHT  TYPE NAME                   STATUS REWEIGHT PRI-AFF
-8       0       root cache
-7       0           host 192.168.3.9-cache
-1       0.37994 root default
-2       0           host 192.168.3.9
-5       0.37994     host kolla-cloud
 0   hdd 0.10999         osd.0               up     1.00000  1.00000
 1   hdd 0.10999         osd.1               up     1.00000  1.00000
 2   hdd 0.10999         osd.2               up     1.00000  1.00000
 3   hdd 0.04999         osd.3               up     1.00000  1.00000
Remove osd.3 from the hdd class:
# ceph osd crush rm-device-class osd.3
done removing class of osd(s): 3
Add osd.3 to the ssd class:
# ceph osd crush set-device-class ssd osd.3
set osd(s) 3 to class 'ssd'
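When several OSDs need to be reclassified, the two commands above can be wrapped in a loop. A dry-run sketch (the OSD id list here is an example; drop the echo to actually apply the changes):

```shell
# Print the commands that would move each listed OSD into the ssd class.
# Replace "3" with your own space-separated list of OSD ids.
for osd_id in 3; do
  echo "ceph osd crush rm-device-class osd.${osd_id}"
  echo "ceph osd crush set-device-class ssd osd.${osd_id}"
done
```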
After adding it, check the OSD layout again:
# ceph osd tree
ID CLASS WEIGHT  TYPE NAME                   STATUS REWEIGHT PRI-AFF
-8       0       root cache
-7       0           host 192.168.3.9-cache
-1       0.37994 root default
-2       0           host 192.168.3.9
-5       0.37994     host kolla-cloud
 0   hdd 0.10999         osd.0               up     1.00000  1.00000
 1   hdd 0.10999         osd.1               up     1.00000  1.00000
 2   hdd 0.10999         osd.2               up     1.00000  1.00000
 3   ssd 0.04999         osd.3               up     1.00000  1.00000
We can see that osd.3's class has changed to ssd.
Checking the crush classes again, a new class named ssd now appears:
# ceph osd crush class ls
["hdd","ssd"]
2. Creating a class rule based on ssd

Create a class rule named ssd_rule that uses the ssd OSDs:
# ceph osd crush rule create-replicated ssd_rule default host ssd
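The arguments follow the pattern: rule name, CRUSH root, failure domain, and device class. For illustration, a companion rule restricted to hdd OSDs (the name hdd_rule is hypothetical, not part of this cluster) would be created the same way:

```shell
# create-replicated <rule-name> <root> <failure-domain> <device-class>
# Hypothetical hdd-only rule: replicas spread across hosts under "default",
# drawn only from OSDs whose device class is hdd.
ceph osd crush rule create-replicated hdd_rule default host hdd
```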
List the cluster's rules:
# ceph osd crush rule ls
replicated_rule
disks
ssd_rule
The detailed crushmap can be inspected as follows:
# ceph osd getcrushmap -o crushmap
26
# crushtool -d crushmap -o crushmap.txt
# cat crushmap.txt
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class ssd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host 192.168.3.9 {
    id -2       # do not change unnecessarily
    id -3 class hdd     # do not change unnecessarily
    id -13 class ssd        # do not change unnecessarily
    # weight 0.000
    alg straw2
    hash 0  # rjenkins1
}
host kolla-cloud {
    id -5       # do not change unnecessarily
    id -6 class hdd     # do not change unnecessarily
    id -14 class ssd        # do not change unnecessarily
    # weight 0.380
    alg straw2
    hash 0  # rjenkins1
    item osd.2 weight 0.110
    item osd.1 weight 0.110
    item osd.0 weight 0.110
    item osd.3 weight 0.050
}
root default {
    id -1       # do not change unnecessarily
    id -4 class hdd     # do not change unnecessarily
    id -15 class ssd        # do not change unnecessarily
    # weight 0.380
    alg straw2
    hash 0  # rjenkins1
    item 192.168.3.9 weight 0.000
    item kolla-cloud weight 0.380
}
host 192.168.3.9-cache {
    id -7       # do not change unnecessarily
    id -9 class hdd     # do not change unnecessarily
    id -11 class ssd        # do not change unnecessarily
    # weight 0.000
    alg straw2
    hash 0  # rjenkins1
}
root cache {
    id -8       # do not change unnecessarily
    id -10 class hdd        # do not change unnecessarily
    id -12 class ssd        # do not change unnecessarily
    # weight 0.000
    alg straw2
    hash 0  # rjenkins1
    item 192.168.3.9-cache weight 0.000
}

# rules
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule disks {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule ssd_rule {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}
# end crush map
In crushmap.txt, change "step take default" in the disks rule to "step take default class hdd", so that rule only selects hdd OSDs:
rule disks {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
}
Recompile the crushmap and load it back into the cluster:
# crushtool -c crushmap.txt -o crushmap.new
# ceph osd setcrushmap -i crushmap.new
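Before loading a hand-edited map it can be worth simulating placements offline; a sketch using crushtool's test mode, assuming the crushmap.new produced above (rule id 2 is ssd_rule in this map):

```shell
# Simulate mappings for 3 replicas through rule 2 (ssd_rule) without
# touching the live cluster; the resulting OSDs should all be class ssd
# (only osd.3 in this example cluster).
crushtool -i crushmap.new --test --rule 2 --num-rep 3 --show-mappings
```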
3. Creating a storage pool based on ssd_rule

Create a storage pool that uses the ssd_rule:
# ceph osd pool create cache 64 64 ssd_rule
pool 'cache' created
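On Luminous and later releases, Ceph raises a health warning for pools without an application tag; assuming this cache pool will back RBD (as the other pools here do), it can be tagged like so:

```shell
# Tag the pool with the rbd application so the
# "application not enabled" health warning is avoided.
ceph osd pool application enable cache rbd
```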
Fetching the cache pool's crush_rule shows that it uses ssd_rule:
# ceph osd pool get cache crush_rule
crush_rule: ssd_rule
Checking rule usage across all pools, the cache pool uses crush_rule 2, which is ssd_rule:
# ceph osd dump | grep -i size
pool 1 'images' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 80 lfor 0/71 flags hashpspool stripe_width 0 application rbd
pool 2 'volumes' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 89 lfor 0/73 flags hashpspool stripe_width 0 application rbd
pool 3 'backups' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 84 lfor 0/75 flags hashpspool stripe_width 0 application rbd
pool 4 'vms' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 86 lfor 0/77 flags hashpspool stripe_width 0 application rbd
pool 5 'cache' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 108 flags hashpspool stripe_width 0
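An existing pool can also be re-pointed at a different rule after creation. A sketch (the choice of the volumes pool here is just an example), pinning it to the hdd-only disks rule:

```shell
# Switch an existing pool to another CRUSH rule by name;
# data will be remapped to OSDs matched by the new rule.
ceph osd pool set volumes crush_rule disks
```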