Containerization has revolutionized the way software applications are developed and deployed. Containers enable developers to create lightweight and portable software packages that can run seamlessly across different platforms and environments. Docker and Kubernetes are two popular tools that are widely used for containerization and management of containerized applications. In this blog, we will discuss how to deploy and manage containerized applications on Ubuntu using Docker and Kubernetes.
Prerequisites:
Before we start, make sure you have the following prerequisites:
- Ubuntu installed on your system
- Docker installed and running
- Kubernetes cluster set up and running
Step 1: Create a Docker image of your application
The first step in deploying a containerized application is to create a Docker image of your application. A Docker image is a lightweight, standalone, executable package that includes everything needed to run your application, including the application code, dependencies, and configuration files.
To create a Docker image of your application, you need to create a Dockerfile. A Dockerfile is a text file that contains a set of instructions for building a Docker image. Here’s an example Dockerfile for a Node.js application:
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
The above Dockerfile creates a Docker image based on the official Node.js 14 Alpine image. It sets the working directory to /app, copies the package.json and package-lock.json files into the image, installs the dependencies, copies the application code, exposes port 3000, and starts the application.
Once you have created your Dockerfile, you can build the Docker image using the docker build command. Here’s an example:
docker build -t my-node-app:latest .
The above command builds a Docker image with the tag my-node-app:latest from the build context in the current directory (.).
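Before building, it’s also worth adding a .dockerignore file next to the Dockerfile so that locally installed modules and other development files are excluded from the build context — the COPY . . step then stays fast and the image doesn’t inherit host dependencies. A typical starting point for a Node.js project:

```
node_modules
npm-debug.log
.git
```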
Step 2: Push the Docker image to a container registry
Once you have built the Docker image, you need to push it to a container registry. A container registry is a centralized location where Docker images are stored and distributed. There are several container registries available, such as Docker Hub, Google Container Registry, and Amazon Elastic Container Registry.
To push the Docker image to a container registry, you first need to tag it with the registry’s URL and your image’s name. Here’s an example:
docker tag my-node-app:latest my-registry/my-node-app:latest
The above command tags the Docker image with the registry host (my-registry), the image name (my-node-app), and the tag (latest).
Next, you can push the Docker image to the container registry using the docker push command. If the registry requires authentication, log in first with docker login. Here’s an example:
docker push my-registry/my-node-app:latest
The above command uploads the Docker image to the container registry.
Step 3: Deploy the application to Kubernetes
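To deploy the application, you describe its desired state in a Kubernetes Deployment file. Here’s a deployment file (my-node-app-deployment.yaml) consistent with the image pushed in Step 2 and the Service defined in Step 4 — three replicas is an arbitrary starting point, which Step 5 later changes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
      - name: my-node-app
        image: my-registry/my-node-app:latest
        ports:
        - containerPort: 3000
```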
The deployment file for this application specifies the Docker image we want to deploy, the number of replicas we want to run, and the container port on which the application listens.
To apply the deployment file to your Kubernetes cluster, you can use the kubectl apply command. Here’s an example:
kubectl apply -f my-node-app-deployment.yaml
The above command will apply the deployment file my-node-app-deployment.yaml to your Kubernetes cluster.
Step 4: Expose the application with a Kubernetes Service
After you have deployed the application to Kubernetes, you need to expose it to the outside world. To do this, you can use a Kubernetes Service. A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy by which to access them.
To create a Kubernetes Service for your application, you need to create a Kubernetes Service file. A Kubernetes Service file is a YAML file that describes the desired state of the Service, including the port on which the Service will listen and the target port of the Pods.
Here’s an example Kubernetes Service file for our Node.js application:
apiVersion: v1
kind: Service
metadata:
  name: my-node-app-service
spec:
  selector:
    app: my-node-app
  ports:
  - name: http
    port: 80
    targetPort: 3000
  type: LoadBalancer
In the above Service file, we have specified that the Service should listen on port 80 and forward traffic to the Pods on port 3000. We have also set the Service type to LoadBalancer, which exposes the application to the outside world (on most cloud providers, this provisions an external load balancer).
To apply the Service file to your Kubernetes cluster, you can use the kubectl apply command. Here’s an example:
kubectl apply -f my-node-app-service.yaml
The above command will apply the Service file my-node-app-service.yaml to your Kubernetes cluster.
Step 5: Scale the application
One of the benefits of using Kubernetes is that it makes it easy to scale your application up or down depending on demand. To scale the application, you can simply update the number of replicas in the deployment file.
For example, if you want to scale the application to 5 replicas, you can update the deployment file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
      - name: my-node-app
        image: my-registry/my-node-app:latest
        ports:
        - containerPort: 3000
After updating the deployment file, you can apply the changes to your Kubernetes cluster using the kubectl apply command. Alternatively, you can scale without editing the file by running kubectl scale deployment my-node-app --replicas=5.
Conclusion:
In this blog, we have discussed how to deploy and manage containerized applications on Ubuntu using Docker and Kubernetes. We have covered the steps for creating a Docker image of your application, pushing it to a container registry, deploying it to Kubernetes, exposing it with a Kubernetes Service, and scaling it up or down as needed. By following these steps, you can easily deploy and manage containerized applications on Ubuntu with Docker and Kubernetes.