Running Multiple Services Inside a Single Container using Supervisord

In this blog, I am going to explain how to run multiple services inside a single container, and how to effectively use docker compose and persistent volumes in a local development environment with supervisord. A container is a lightweight platform for running an application along with its dependencies in an isolated environment, and it is generally considered good practice to run a single service per container.

Though we can access the different services hosted in separate containers over a container network, we can get the same benefits by running multiple services in a single container. There are situations where we need to run more than one service inside a container and make each of them accessible from the container host or network; for example, an apache/nginx http server together with an ftp server, or a set of microservices running as separate processes inside the same container.

There are different ways to achieve this, such as adding multiple commands to the ENTRYPOINT at the end of the Dockerfile, or writing a wrapper script that spawns each service as a separate process. These approaches have limitations: the complexity grows as the number of services to be managed in the container increases, and handling start-up dependencies between the services becomes awkward.
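As a rough sketch, a wrapper script used as the ENTRYPOINT might look like the following (the two service commands here are purely illustrative):

```bash
#!/bin/bash
# start.sh - hypothetical wrapper script used as the container ENTRYPOINT.

# Start the first service in the background.
nginx -g 'daemon off;' &

# Start the second service in the background.
gunicorn app:app --bind 0.0.0.0:8000 &

# Exit as soon as either process exits, so the container stops
# (and can be restarted) if one of the services dies.
wait -n
exit $?
```

Even with this, restarting a single failed service or expressing start-up order between services has to be scripted by hand, which is exactly what a process manager takes care of.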

Here, we can use a simple and user-friendly process manager like supervisord to start, stop, and restart multiple services inside a container. Supervisord is a lightweight tool that can be easily deployed in a container, and its configuration stays easy to maintain as the number of services it handles grows. Supervisor can be controlled through a web interface, an XML-RPC interface, and a command-line interface (supervisorctl).

Supervisord Configuration File
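A minimal sketch of such a configuration, with illustrative program names and commands rather than the exact services used later in this post:

```ini
; /etc/supervisor/supervisord.conf (sketch)
[supervisord]
nodaemon=true                 ; keep supervisord in the foreground so the container stays up
logfile=/dev/stdout
logfile_maxbytes=0

[program:wsgi]
command=gunicorn app:app --bind 0.0.0.0:8000   ; illustrative wsgi service
directory=/opt/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

[program:metrics-collector]
command=python /opt/app/collector.py           ; illustrative metrics collector
autostart=true
autorestart=true
```

Each [program:x] section describes one service; supervisord starts them all and restarts any program that exits unexpectedly.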

I am going to deploy a simple multi-service application using docker compose. Make sure the docker daemon is installed and running on your machine, and that docker-compose is installed as well.

Docker compose is a cli tool that helps you define and deploy a multi-service containerized application environment. It is ideal for developing applications in your local environment without connecting to a remote host or a virtual machine.

It can be used to deploy and scale an application across different environments, from development to production. There are several advantages to using docker compose to manage a multi-service application.

We can run multiple isolated environments on a single host, enable CI/CD for the environment, and debug and identify software bugs more easily.

We can build a specific service of an application, and we also get the benefits of docker networks and persistent volumes for accessing and storing the data layer of an application.

Assuming we have a Dockerfile for each service to be created by docker-compose, create a docker-compose.yml file and define the specification of the application. Once the file is created, execute docker-compose up in the same directory where the compose file exists. It will read the definition and interact with the docker daemon to create the resources. It is recommended to run it as a daemon process by passing the -d flag.
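For example, from the directory containing docker-compose.yml:

```bash
# Build the images defined in the compose file and start all services
# in detached (daemon) mode.
docker-compose up -d --build

# Check the state of the running services.
docker-compose ps

# Follow the logs of a single service (here a hypothetical "app" service).
docker-compose logs -f app
```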

Docker Compose File
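A sketch of a compose file along the lines described below; the service names, images, ports, and volume paths are assumptions for illustration:

```yaml
version: "3"

services:
  app:
    build: ./app                 # app tier, built from a local Dockerfile
    ports:
      - "8000:8000"              # wsgi service
      - "9001:9001"              # supervisord web interface
    volumes:
      - ./app:/opt/app           # mount the repo so code changes apply without a rebuild
    depends_on:
      - influxdb

  influxdb:
    image: influxdb:1.8          # data tier
    volumes:
      - influxdb-data:/var/lib/influxdb

  grafana:
    image: grafana/grafana       # monitoring dashboard, uses influxdb as datasource
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    depends_on:
      - influxdb

volumes:
  influxdb-data:
  grafana-data:
```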

In the above example, I have defined three services. The first service is the app tier, the second is the data tier, and the third is used to monitor application performance using grafana, which uses the influxdb service as its datasource.

Here, the app tier hosts different services such as a metrics monitor, a metrics collector, and a wsgi service, all controlled and managed by supervisord.

When the app container is started, supervisord is launched by the ENTRYPOINT, and supervisord then starts the remaining processes.
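A sketch of the app-tier Dockerfile for this pattern, with an illustrative base image and paths:

```dockerfile
FROM python:3.9-slim

# Install supervisord along with illustrative application dependencies.
RUN pip install --no-cache-dir supervisor gunicorn flask

# Copy the application code and the supervisord configuration.
COPY . /opt/app
COPY supervisord.conf /etc/supervisor/supervisord.conf

WORKDIR /opt/app

# Run supervisord in the foreground; it starts and supervises the other processes.
ENTRYPOINT ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
```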

Supervisord Web Interface
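The web interface is enabled by an [inet_http_server] section in supervisord.conf; a small sketch, with the port and credentials chosen here only as an example:

```ini
[inet_http_server]
port=0.0.0.0:9001        ; listen on all interfaces inside the container
username=admin           ; optional basic-auth credentials
password=admin
```

With that port published in the compose file, the dashboard becomes reachable from the host, and the same endpoint serves the XML-RPC interface used by supervisorctl.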

Docker compose takes care of provisioning the services in order and maintaining the dependencies between them, and it exposes the service ports as per the definition. Persistent volumes are mounted on the containers for storing database files, grafana dashboards, and application code. Another benefit of using persistent volumes in a development environment is that we do not need to rebuild the service for every code change.

Say we are developing our application using flask in debug mode: it detects changes in the source code and reloads the app without a restart. This avoids unnecessary builds and makes development much easier. We can directly use the repo path as the persistent volume mount path in a local environment.
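As a sketch, the app service can bind-mount the repo and turn on flask's debug mode through the environment (the paths and values are illustrative):

```yaml
  app:
    build: ./app
    volumes:
      - ./app:/opt/app           # bind-mount the local repo path into the container
    environment:
      - FLASK_DEBUG=1            # enables the reloader for a flask dev server run inside the container
```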

Enjoy Learning!
