Defining Bolt Service file

Bolt CLI manages your projects and their services using two types of YAML files. One of them is bolt.service.yaml. Here's how bolt.service.yaml works and manages processes for you.
This file contains your service's information, such as the container name and the service runners it can run on. Each service-runner object contains an envfile path and a build command. Here's what a sample Bolt service file looks like —
container_name: <your container name>
stateless: <true | false>
default_service_runner: <local | docker | vmlocal | vmdocker>
service_discovery_offset:
  - PORT_NUMBER
supported_service_runners:
  - local
  - docker
service-runners:
  local:
    envfile: <path/to/envfile>
    build: <your project's run command>
    ports:
      - ${ASSIGNED_PORT}:${ASSIGNED_PORT}
  docker:
    envfile: <path/to/envfile>
    build: <path/to/your/Dockerfile>
    volumes:
      - <your host machine's path>:<your project's path in docker>
    ports:
      - <ASSIGNED_PORT>:<your project's running port in docker or ASSIGNED_PORT>
Here's how the bolt.service.yaml file would look after running the service:add command —
container_name: myapp
stateless: true
default_service_runner: local
service_discovery_offset:
  - 3000
supported_service_runners:
  - local
  - docker
service-runners:
  local:
    envfile: .env
    build: npm install && npm start
    ports:
      - ${ASSIGNED_PORT}:${ASSIGNED_PORT}
  docker:
    envfile: .env
    build: ./run.Dockerfile
    volumes:
      - .:/app
    ports:
      - ${ASSIGNED_PORT}:${ASSIGNED_PORT}

Definition

container_name

This is the name of your service container. For the local service runner, this name will be used as the name of the child process. For the docker service runner, this name will be used as the name of the Docker container.

stateless

This is still in development and is coming soon...!

service_discovery_offset

This is the port offset of your service. For example, if your service runs on port 3000, you can define it here. Bolt CLI will use this offset to automatically assign a port to your service, and if that port is already taken, it will assign another port to your service.
Note:
  • You can use more than one port number in the service_discovery_offset array if your service runs on multiple ports.
  • You can use the assigned ports from envs using ${ASSIGNED_PORT} for the first port number, ${ASSIGNED_PORT_1} for the second port number, and so on.
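For instance, here's a minimal sketch of a service that listens on two ports, reusing the mapping syntax from the examples above (the port numbers and build command are illustrative):
service_discovery_offset:
  - 3000
  - 9229
service-runners:
  local:
    envfile: .env
    build: npm install && npm start
    ports:
      - ${ASSIGNED_PORT}:${ASSIGNED_PORT}
      - ${ASSIGNED_PORT_1}:${ASSIGNED_PORT_1}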

depends_on

This key is used to define the dependencies of your service. For example, if your service depends on a database service, you can define it here.
For example, say you have a service named "nextapp" that depends on a database service named "postgres". Here's how you can define it in your bolt.service.yaml file —
container_name: nextapp
stateless: true
default_service_runner: local
depends_on:
  - postgres
service_discovery_offset:
  - 3000
supported_service_runners:
  - local
  - docker
service-runners:
  local:
    envfile: .env
    build: npm install && npm start
    ports:
      - ${ASSIGNED_PORT}:${ASSIGNED_PORT}
  docker:
    envfile: .env
    build: ./run.Dockerfile
    volumes:
      - .:/app
    ports:
      - ${ASSIGNED_PORT}:${ASSIGNED_PORT}
Note: This key is optional and is not required to be defined in your bolt.service.yaml file if your service does not depend on any other service.

default_service_runner

This is the default service runner for your service. You can use local, docker, vmlocal or vmdocker as your default service runner.
Let's understand what the default service runner values mean —
  • If you use local as your default service runner, Bolt CLI will run your services locally on your host machine.
  • If you use docker as your default service runner, Bolt CLI will run your services in Docker on your host machine.
  • If you use vmlocal as your default service runner, Bolt CLI will run your services locally on your virtual machine.
  • If you use vmdocker as your default service runner, Bolt CLI will run your services in Docker on your virtual machine.
Note:
  • To run your services on Docker, you need to have Docker installed on your machine.
  • To run your services on a virtual machine, you need to install the QEMU emulator.

supported_service_runners

This is an array of strings, where each string is a service runner name. You can use local or docker as service runner names.
supported_service_runners stands for the service runners that your service supports.
For example,
  • If you use local and docker as your supported_service_runners and local as your default_service_runner, your service will be able to run on the host machine's local runner as a process.
  • If you use local and docker as your supported_service_runners and docker as your default_service_runner, your service will be able to run in the host machine's Docker.
  • If you use local and docker as your supported_service_runners and vmlocal as your default_service_runner, your service will be able to run on the virtual machine's local runner as a process.
  • If you use local and docker as your supported_service_runners and vmdocker as your default_service_runner, your service will be able to run in the virtual machine's Docker.

service-runners

This is an object that contains the service runners for your service. Currently, Bolt CLI supports two service runners — local and docker. We will provide a plugin architecture that will let you add more service runners by creating a plugin.
You don't need to define service runners for vmlocal and vmdocker. Bolt CLI will automatically run your services on your virtual machine's local and Docker environments using the local and docker service runners.
Here's what a local service runner object looks like —
local:
  envfile: <path/to/envfile>
  build: <your project's run command>
  ports:
    - ${ASSIGNED_PORT}:${ASSIGNED_PORT}
Here's what a docker service runner object looks like —
docker:
  envfile: <path/to/envfile>
  build: <path/to/your/Dockerfile>
  volumes:
    - <your host machine's path>:<your project's path in docker>
  ports:
    - ${ASSIGNED_PORT}:<your project's running port in docker or ${ASSIGNED_PORT}>

service-runners > local or docker > envfile

This is the path to your service's envfile. This envfile will be used to set environment variables for your service. You can use a .env.tpl file as your envfile; Bolt CLI will automatically generate a .env file from it.
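As a sketch, pointing a runner at a template instead of a plain env file (assuming the .env.tpl convention mentioned above, with the build command reused from the earlier example) would look like —
service-runners:
  local:
    envfile: .env.tpl   # Bolt CLI generates a .env file from this template
    build: npm install && npm start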

service-runners > local or docker > build

This is the command that will be used to run your service.
For the local service runner, this is the shell command that runs your service as a child process.
For the docker service runner, this is the path to the Dockerfile used to run your service as a Docker container.
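To make the difference concrete, here are the two forms side by side, reusing values from the example earlier on this page —
service-runners:
  local:
    build: npm install && npm start   # shell command, run as a child process
  docker:
    build: ./run.Dockerfile           # Dockerfile path, run as a Docker container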

service-runners > local > ports

This is an array of strings. Each string contains a port mapping. The first part of the string is the port number on your host machine. The second part of the string is the port number in your virtual machine's local environment.
Note: You don't have to define this field if you always plan to run your service on the host machine's local runner.

service-runners > docker > ports

This is an array of strings. Each string contains a port mapping. The first part of the string is the port number on your host machine. The second part of the string is the port number inside your Docker container.
Note: This is an optional field. If you don't want to map any port, you can skip this field.
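For instance, if your app always listens on port 8080 inside the container (an illustrative value), you could map the Bolt-assigned host port to it like this —
docker:
  ports:
    - ${ASSIGNED_PORT}:8080   # host port assigned by Bolt, container port fixed at 8080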

service-runners > docker > volumes

This is an array of strings. Each string contains a volume mount path. The first part of the string is the path to a directory on your host machine. The second part of the string is the path to a directory inside your Docker container.
Note: This is an optional field. If you don't want to mount any volume, you can skip this field.
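For instance, the environment-variables example later on this page mounts the project directory into the container and also lists a single container-only path; assuming Docker Compose-style semantics for single-path entries, that second line keeps node_modules inside the container rather than on the host —
docker:
  volumes:
    - ./services/todo-two:/app/todo-two   # bind-mount the host directory into the container
    - /app/todo-two/node_modules          # container-only path (assumed anonymous-volume behavior)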

Ways to set environment variables in Bolt Service

Bolt allows you to use environment variables in the configuration using ${YOUR_VARIABLE_NAME} syntax.
Bolt Service picks up environment variables from each service runner's envfile attribute, which is basically a reference to a .env file containing all the environment variables for that container. Learn more about environment management here.
You can use environment variables by using the following syntax —
# File location: ./services/.env
ASSIGNED_PORT=4002
LOCAL_BUILD_COMMAND="npm install && npm start"
DOCKER_BUILD_FILEPATH="./run.Dockerfile"

# File location: ./services/bolt.service.yaml
container_name: servicename
stateless: true
default_service_runner: docker
service_discovery_offset:
  - 4002
supported_service_runners:
  - local
  - docker
service-runners:
  local:
    envfile: .env
    build: ${LOCAL_BUILD_COMMAND}
    ports:
      - ${ASSIGNED_PORT}:${ASSIGNED_PORT}
  docker:
    envfile: .env
    build: ${DOCKER_BUILD_FILEPATH}
    ports:
      - ${ASSIGNED_PORT}:${ASSIGNED_PORT}
    volumes:
      - ./services/todo-two:/app/todo-two
      - /app/todo-two/node_modules