Ceph provides scalable, distributed object storage that is self-healing and eliminates single points of failure, making it ideal for handling massive data volumes in cloud environments. Its architecture relies on Monitor (MON) daemons for maintaining cluster maps, OSD daemons for data storage and replication, and the CRUSH algorithm for decentralized data placement.
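Each of these components can be inspected directly with the standard Ceph CLI; as a quick sketch (assuming admin access to the cluster):
ceph mon stat            # monitor quorum and the current monitor map epoch
ceph osd tree            # OSDs arranged by the CRUSH hierarchy (hosts, racks, ...)
ceph osd crush rule ls   # CRUSH rules available for data placement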
ceph -s              # high-level overview of the cluster's health
ceph orch ls         # lists all services (mons, mgrs, osds, rgws) managed by Ceph's orchestrator
ceph orch host ls    # displays all hosts registered in the orchestrator inventory
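Beyond the service-level view, the orchestrator can also report each running daemon, and any health warnings can be expanded; both commands below are standard Ceph CLI:
ceph orch ps         # lists every daemon instance and the host it runs on
ceph health detail   # expands any warnings reported in "ceph -s"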
Run the following commands to check the current pools:
ceph osd pool ls
ceph osd pool ls detail
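If per-pool capacity and usage are also of interest, the standard ceph df command can be added here as an optional check:
ceph df detail   # cluster-wide and per-pool usage, quotas, and object counts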
Create a new pool for RBD:
ceph osd pool create {pool-name} replicated
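As a concrete sketch, assuming a hypothetical pool named rbd-pool and an arbitrary PG count of 64:
ceph osd pool create rbd-pool 64 64 replicated   # 64 PGs / 64 PGPs, replicated pool
ceph osd pool set rbd-pool size 3                # keep 3 replicas of each object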
A pool must be associated with an application before it can be used:
ceph osd pool application enable {pool-name} {application-name}
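For the hypothetical rbd-pool from the sketch above, enabling the rbd application would look like this; rbd pool init additionally prepares the pool for RBD images:
ceph osd pool application enable rbd-pool rbd   # tag the pool for use by RBD
rbd pool init rbd-pool                          # initialize the pool for RBD images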
ceph fs volume create cephfs
# creates the CephFS data and metadata pools automatically; alternatively, create them manually:
ceph osd pool create cephfs_data 32
ceph osd pool create cephfs_metadata 1
#check data pools
ceph osd pool ls
ceph osd pool ls detail
# pools created automatically by "ceph fs volume create cephfs":
cephfs.cephfs.meta
cephfs.cephfs.data
# enable the cephfs application on the data and metadata pools
ceph osd pool application enable cephfs_data cephfs
ceph osd pool application enable cephfs.cephfs.meta cephfs
ceph osd pool ls detail
# create a file system from these data and metadata pools
ceph fs new cephfs-dev cephfs_metadata cephfs_data
# check the file system
ceph fs ls    # if needed, delete a file system with: ceph fs rm hepapi-cephfs --yes-i-really-mean-it
# create MDS daemons
On the Ceph dashboard: Services --> Create MDS (count: 2), or via the CLI: ceph fs volume create cephfs (which also deploys the MDS daemons automatically).
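Once the MDS daemons are running, the file system can be verified with the standard status commands (cephfs-dev is the name used above):
ceph mds stat               # MDS daemons and their states (active/standby)
ceph fs status cephfs-dev   # MDS ranks plus data/metadata pool usage for this file system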