Ceph is a powerful, open-source distributed storage system that provides object, block, and file storage in a single, unified platform. In this guide, we will walk you through installing and configuring Ceph on Rocky Linux, covering the necessary steps: setting up a Ceph cluster, deploying Object Storage Daemons (OSDs), and configuring a Ceph block device.
Table of Contents
- Prerequisites
- Installing Ceph Packages
- Setting Up a Ceph Cluster
- Deploying the Object Storage Daemon (OSD)
- Configuring a Ceph Block Device
- Conclusion
How to Install and Configure Ceph on Rocky Linux
Prerequisites
Before we begin, ensure you have the following:
- A fresh installation of Rocky Linux.
- A minimum of three nodes with at least one extra disk on each for OSD storage.
- Root or sudo access to all nodes.
For other Rocky Linux tutorials, you may refer to our guides on installing GlusterFS, installing Flask, and installing Django.
Installing Ceph Packages on Rocky Linux
Before we can install Ceph, we need to enable the EPEL repository on all nodes. Run the following command:
sudo dnf install epel-release -y
Now, enable the Ceph Quincy release repository and install the cephadm deployment tool on all nodes:
dnf search release-ceph
sudo dnf install --assumeyes centos-release-ceph-quincy
sudo dnf install --assumeyes cephadm
The search command is optional; it lists the Ceph release repositories available so you can choose a different release if you prefer.
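To confirm the tool installed correctly, you can check its version (the exact output will vary with the release you installed):
cephadm version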
Setting Up a Ceph Cluster on Rocky Linux
After installing the necessary packages, it’s time to set up the Ceph cluster. Choose one node as the “bootstrap” node and run the following command:
sudo cephadm bootstrap --mon-ip <bootstrap_node_ip>
Replace <bootstrap_node_ip> with the IP address of the bootstrap node.
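For example, if the bootstrap node's address were 192.168.10.11 (a hypothetical address; substitute your own), the command would be:
sudo cephadm bootstrap --mon-ip 192.168.10.11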
After the bootstrap process is complete, you’ll receive the Ceph dashboard URL, username, and password. Take note of these details as you’ll need them later.
To add the remaining nodes to the cluster, first copy the Ceph configuration file and client admin keyring from the bootstrap node to the other nodes:
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring <remote_node>:/etc/ceph/
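cephadm also needs passwordless SSH access to every host it manages. The bootstrap process writes the cluster's public key to /etc/ceph/ceph.pub on the bootstrap node; install it on each remote node before adding the node to the cluster:
ssh-copy-id -f -i /etc/ceph/ceph.pub root@<remote_node>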
Replace <remote_node> with the IP address or hostname of the remote node.
Then, add the nodes to the Ceph cluster using the following command on the bootstrap node:
sudo ceph orch host add <remote_node>
Run this command for each remote node.
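If you have several nodes, a small shell loop saves typing; node2 and node3 here are hypothetical hostnames. You can then confirm cluster membership with ceph orch host ls:
for host in node2 node3; do sudo ceph orch host add "$host"; done
sudo ceph orch host ls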
Deploying the Object Storage Daemon on Rocky Linux
Once all nodes are part of the cluster, it’s time to deploy the OSD. You’ll need to prepare the disks on each node before deploying the OSD.
First, identify the disks that you want to use for the OSDs. You can use the lsblk command to list all available disks on each node.
lsblk
Note: Make sure you choose disks that are not in use and are dedicated to Ceph storage.
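For a more focused view, you can limit lsblk to the columns that matter here; an unmounted disk with no partitions is typically a safe candidate:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT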
Next, zap the disks to remove any existing data and partition information. This can be done using the ceph-volume utility:
sudo ceph-volume lvm zap /dev/sdX
Replace /dev/sdX with the actual disk identifier.
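If the node is managed by the cephadm orchestrator, you can also zap a disk remotely from the admin node; node2 and /dev/sdb below are hypothetical examples:
sudo ceph orch device zap node2 /dev/sdb --force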
After zapping the disks, you can create the OSD using the ceph-volume utility. For a simple setup, you can use the following command:
sudo ceph-volume lvm create --bluestore --data /dev/sdX
Replace /dev/sdX with the actual disk identifier. This command will create a single BlueStore OSD using the specified disk.
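Alternatively, in a cephadm-managed cluster you can let the orchestrator create OSDs from the admin node, either on every eligible disk at once or on one specific device (node2:/dev/sdb is a hypothetical example):
sudo ceph orch apply osd --all-available-devices
sudo ceph orch daemon add osd node2:/dev/sdb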
Once the OSDs are created on all nodes, you can verify their status by running the following command on the admin node:
sudo ceph osd tree
This command will display a tree-like structure of your OSDs, showing their status and the nodes they are running on.
Now that the OSDs are up and running, you can create a Ceph pool to store your data. A pool is a logical partition of the cluster's storage that defines how data is replicated and placed across the OSDs. To create a pool, run the following command on the admin node:
sudo ceph osd pool create mypool 128
Replace mypool with the desired name for the pool, and 128 with the desired number of placement groups.
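Newer Ceph releases expect each pool to be tagged with the application that will use it, and report a health warning otherwise. Since we will use this pool for block devices, tag it for RBD:
sudo ceph osd pool application enable mypool rbd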
Finally, you can verify the status of your Ceph cluster by running the following command on the admin node:
sudo ceph -s
This command will display the overall health and status of the Ceph cluster, including the number of monitors, OSDs, and pools.
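Configuring a Ceph Block Device on Rocky Linux
With the cluster healthy and a pool in place, you can expose storage as a block device through RBD. The following is a minimal sketch, assuming the mypool pool created above and a hypothetical image name myimage; rbd map prints the actual device path (typically /dev/rbd0), so adjust the later commands if yours differs.
sudo rbd pool init mypool
sudo rbd create --size 4096 mypool/myimage
sudo rbd map mypool/myimage
sudo mkfs.xfs /dev/rbd0
sudo mount /dev/rbd0 /mnt
This initializes the pool for RBD, creates a 4 GB image, maps it to a local block device, formats it with XFS, and mounts it at /mnt. When you are finished, unmount the filesystem and run rbd unmap /dev/rbd0 to detach the device.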
Conclusion
In this tutorial, we’ve covered how to install and configure a Ceph storage cluster on Rocky Linux. With your new Ceph cluster in place, you can now take advantage of its highly scalable and fault-tolerant storage capabilities for a variety of use cases.
If you’re interested in learning more about managing and optimizing your Ceph cluster, check out our other guides on how to install and configure GlusterFS on Rocky Linux and how to install and configure rsync on Rocky Linux.
Additionally, you might want to explore other storage and server management solutions for Rocky Linux, such as how to install and configure Bacula backup server on Rocky Linux or how to set up an NTP server on Rocky Linux.
We hope this guide has been helpful in setting up your Ceph cluster on Rocky Linux. If you have any questions or need further assistance, feel free to leave a comment below.