
Podman Play to Deploy Any App

· 8 min read
VoidQuark


In this blog post, we will explore how to use the Podman Play Ansible role to deploy popular applications in rootless containers from a Kubernetes Pod YAML definition. The application pod runs as a systemd service in your own user namespace.

Using Ansible roles has several benefits. One of them is that you can easily reproduce the same deployment with inventory variables. This means that you can manage your application without having to run manual commands. With Ansible, you have complete control over your application.

If you’re not familiar with Ansible, you can read an introduction here.

What is Podman 🦭

Podman is a tool that lets you run applications in containers. Containers are like mini-computers that have everything they need to run, but they are isolated from the rest of your system. This way, you can have many different applications running on the same machine, without them interfering with each other. Podman is similar to Docker, which is another popular tool for containers, but Podman has some advantages, such as not needing a special service to run. 🐳

About Podman Play Kube

The podman_play module was chosen deliberately over alternatives such as podman_container or podman_pod. The goal was a role that can be easily shared with anyone seeking a similar solution and that satisfies the following criteria:

  • Simplified Deployment: Deploy your applications with ease using standard Kubernetes YAML definitions.
  • No Per-Application Role Development: One generic role covers any application described by a pod definition, so you never write a new role for each app.
  • Simple Variables: The deployment is driven by a small set of inventory variables.
  • Systemd Unit: Control applications with systemd units, ensuring they start on boot.
  • Configuration Adjustments: Re-creates the application pod and regenerates the systemd unit for any future configuration change, such as an image tag update.

These simple criteria should help you test new applications or deploy them in a simple and reproducible way.
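Under the hood, the role builds on the containers.podman.podman_play module from the containers.podman collection. As a rough, hypothetical sketch (the kube_file path is an illustrative example, not the role's actual internals), a hand-written task using the module directly might look like this:

```yaml
# Hypothetical stand-alone task using the podman_play module directly.
# The role wraps this (plus systemd unit generation and directory setup)
# behind inventory variables.
- name: Start a pod from a Kubernetes YAML definition
  containers.podman.podman_play:
    kube_file: /home/appuser/mypod/pod.yml   # hypothetical path
    state: started
```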

Let's Deploy Applications

Unlike roles tailored for specific application deployments, such as one exclusively for Privatebin, this versatile role is designed to handle various applications. In this blog, we'll showcase the role's capabilities through the deployment of the following applications:

  • Nextcloud: Deployed on a remote RHEL server
  • Hashi Vault: Deployed on a remote RHEL server
  • Dashy: Deployed on a Fedora workstation

Before delving into the specific application deployments mentioned below, it's recommended to read the entire README of the role for a comprehensive understanding of its variables.

Prepare Inventory and Requirements

Before we embark on the deployment journey, let's ensure our inventory and necessary requirements are in order.

Ansible Structure

```
ansible_structure
├── playbook
│   ├── function_nextcloud_deploy.yml   # Playbook to deploy Nextcloud
│   ├── function_vault_deploy.yml       # Playbook to deploy Hashi Vault
│   └── function_dashy_deploy.yml       # Playbook to deploy Dashy
└── inventory
    ├── group_vars
    │   ├── nextcloud
    │   │   └── nextcloud_vars.yml      # Variables for Nextcloud group
    │   ├── vault
    │   │   └── vault_vars.yml          # Variables for Vault group
    │   └── dashy
    │       └── dashy_vars.yml          # Variables for Dashy group
    └── hosts
```

hosts

```ini
[nextcloud]
nextcloud.voidquark.com

[vault]
vault.voidquark.com

[dashy]
localhost
```

It is assumed that you have fulfilled the requirements section of the role's README, such as having the containers.podman and ansible.posix collections installed. Additionally, ensure you have installed the voidquark.podman_play role with the following command:

Install voidquark.podman_play

```shell
ansible-galaxy install voidquark.podman_play
```

In the provided inventory, host_vars have been omitted, as they are not required for this simple deployment. Variables are stored separately for each group, aligned with the respective playbooks.
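If you prefer to pin everything in one place, a single requirements file can cover both the collections and the role. This is an optional sketch; the filename requirements.yml is a common convention, not something the role mandates:

```yaml
# requirements.yml (hypothetical) — with recent Ansible versions, install with:
#   ansible-galaxy install -r requirements.yml
collections:
  - name: containers.podman
  - name: ansible.posix
roles:
  - name: voidquark.podman_play
```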

Deploy Nextcloud

Let's dive straight into deployment. Create a simple playbook to deploy the Nextcloud containers, which will run in a single pod on the server nextcloud.voidquark.com:

function_nextcloud_deploy.yml

```yaml
- name: Deploy Nextcloud service
  hosts: nextcloud
  become: true
  gather_facts: true
  roles:
    - voidquark.podman_play
```

The above playbook deploys Nextcloud and MariaDB containers in the nextcloud pod. This deployment is for development purposes only and excludes additional environment variables and configuration adjustments. It also does not include the Redis container. The purpose is to showcase role deployment with simplified configuration, avoiding unnecessary complexity.

Now, let's define the required variables for Nextcloud deployment.

nextcloud_vars.yml

```yaml
podman_play_pod_name: "nextcloud"
podman_play_user: "nextcloud"
podman_play_group: "nextcloud"
podman_play_firewalld_expose_ports:
  - "9520/tcp"
podman_play_dirs:
  - "{{ podman_play_root_dir }}/var_www_html"
  - "{{ podman_play_root_dir }}/data"
  - "{{ podman_play_root_dir }}/var_lib_mysql"
  - "{{ podman_play_root_dir }}/config"
podman_play_pod_yaml_definition: |
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      app: "{{ podman_play_pod_name }}"
    name: "{{ podman_play_pod_name }}"
  spec:
    containers:
      - name: nextcloud
        image: docker.io/nextcloud:production-apache
        ports:
          - containerPort: 80
            hostPort: 9520
        stdin: true
        tty: true
        volumeMounts:
          - mountPath: /var/www/html:Z
            name: var_www_html
          - mountPath: /var/www/html/data:Z
            name: data
          - mountPath: /var/www/html/config:Z
            name: config
        env:
          - name: MYSQL_DATABASE
            value: "nextcloud"
          - name: MYSQL_USER
            value: "nextcloud"
          - name: MYSQL_PASSWORD
            value: "dummypassword"
          - name: MYSQL_HOST
            value: "127.0.0.1"
          - name: NEXTCLOUD_ADMIN_USER
            value: "admin"
          - name: NEXTCLOUD_ADMIN_PASSWORD
            value: "dummypassword"
          - name: NEXTCLOUD_DATA_DIR
            value: "/var/www/html/data"
      - name: mariadb
        image: docker.io/mariadb:10.6
        stdin: true
        tty: true
        volumeMounts:
          - mountPath: /var/lib/mysql:Z
            name: var_www_mysql
        env:
          - name: MYSQL_DATABASE
            value: "nextcloud"
          - name: MYSQL_USER
            value: "nextcloud"
          - name: MYSQL_PASSWORD
            value: "dummypassword"
          - name: MYSQL_ROOT_PASSWORD
            value: "dummypassword"
    volumes:
      - hostPath:
          path: "{{ podman_play_root_dir }}/var_www_html"
          type: Directory
        name: var_www_html
      - hostPath:
          path: "{{ podman_play_root_dir }}/data"
          type: Directory
        name: data
      - hostPath:
          path: "{{ podman_play_root_dir }}/config"
          type: Directory
        name: config
      - hostPath:
          path: "{{ podman_play_root_dir }}/var_lib_mysql"
          type: Directory
        name: var_www_mysql
```
Execute Deployment

```shell
ansible-playbook -i inventory/hosts playbook/function_nextcloud_deploy.yml
```

Upon completion of the deployment, the Nextcloud instance will be accessible at http://nextcloud.voidquark.com:9520.

Deploy Hashi Vault

Next in line is Hashi Vault, known for its robust configuration capabilities. For our scenario, deploying a development instance will suffice. Let's kick off the deployment.

function_vault_deploy.yml

```yaml
- name: Deploy Hashi Vault service
  hosts: vault
  become: true
  gather_facts: true
  roles:
    - voidquark.podman_play
```

vault_vars.yml

```yaml
podman_play_pod_name: "vault"
podman_play_user: "hashivault"
podman_play_group: "hashivault"
podman_play_firewalld_expose_ports:
  - "9521/tcp"
podman_play_pod_yaml_definition: |
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      app: "{{ podman_play_pod_name }}"
    name: "{{ podman_play_pod_name }}"
  spec:
    containers:
      - name: "{{ podman_play_pod_name }}"
        image: docker.io/hashicorp/vault:latest
        ports:
          - containerPort: 8200
            hostPort: 9521
        stdin: true
        env:
          - name: VAULT_DEV_ROOT_TOKEN_ID
            value: "dontUseThisToken"
```

Execute Deployment

```shell
ansible-playbook -i inventory/hosts playbook/function_vault_deploy.yml
```

Upon completion of this deployment, the development Hashi Vault is accessible at http://vault.voidquark.com:9521.
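As a quick sanity check, you could follow up with a small play that polls Vault's health endpoint. This is an optional, hypothetical addition, not part of the role; a dev-mode Vault that is initialized, unsealed, and active answers /v1/sys/health with HTTP 200:

```yaml
# Hypothetical smoke test — not part of the role.
- name: Smoke-test the dev Vault instance
  hosts: vault
  gather_facts: false
  tasks:
    - name: Expect HTTP 200 from the Vault health endpoint
      ansible.builtin.uri:
        url: "http://127.0.0.1:9521/v1/sys/health"
        status_code: 200
      register: vault_health
      until: vault_health.status == 200
      retries: 5
      delay: 3
```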

Deploy Dashy

To highlight the flexibility of this role and showcase that root privileges are not required, I have chosen to deploy the Dashy application on a workstation.

function_dashy_deploy.yml

```yaml
- name: Deploy Dashy service
  hosts: dashy
  connection: local
  gather_facts: true
  roles:
    - voidquark.podman_play
```

dashy_vars.yml

```yaml
podman_play_pod_name: "dashy"
podman_play_pod_yaml_definition: |
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      app: "{{ podman_play_pod_name }}"
    name: "{{ podman_play_pod_name }}"
  spec:
    containers:
      - name: "{{ podman_play_pod_name }}"
        image: docker.io/lissy93/dashy:latest
        ports:
          - containerPort: 80
            hostPort: 9522
        stdin: true
        tty: true
        volumeMounts:
          - mountPath: /app/public/conf.yml:Z
            name: dashy_config
    volumes:
      - hostPath:
          path: "{{ podman_play_template_config_dir }}/conf.yml"
          type: File
        name: dashy_config
podman_play_custom_conf:
  - filename: "conf.yml"
    raw_content: |
      pageInfo:
        title: Dashy Services
        description: Example Dashy Dashboard
        navLinks:
          - title: Dashy docs
            path: https://dashy.to/docs/
        footerText: "This is footer text"
      # Optional app settings and configuration
      appConfig:
        theme: default
      # Main content - An array of sections, each containing an array of items
      sections:
        - name: VoidQuark - Services
          icon: ':rocket:'
          items:
            - title: VoidQuark
              description: VoidQuark Web
              icon: favicon
              url: https://voidquark.com
            - title: Prometheus
              description: Prometheus
              icon: hl-prometheus
              url: https://prometheus.io/
```

Execute Deployment

```shell
ansible-playbook -i inventory/hosts playbook/function_dashy_deploy.yml
```

After the deployment is complete, the Dashy service is accessible at http://localhost:9522.

Extend Deployment Playbook and Cleanup

Customizing the deployment process is easy by leveraging pre_tasks and post_tasks in your playbook, allowing additional steps without altering the Ansible role. For ideas on what pre_tasks and post_tasks can achieve, read: Set the order of tasks execution in Ansible with these two keywords.
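For example, the Nextcloud playbook could be extended along these lines; both tasks are hypothetical additions shown for illustration, not part of the role:

```yaml
- name: Deploy Nextcloud service
  hosts: nextcloud
  become: true
  gather_facts: true
  pre_tasks:
    - name: Hypothetical pre-check — only deploy on RHEL-family hosts
      ansible.builtin.assert:
        that: ansible_facts['os_family'] == 'RedHat'
  roles:
    - voidquark.podman_play
  post_tasks:
    - name: Hypothetical post-check — wait until the published port answers
      ansible.builtin.wait_for:
        port: 9520
        timeout: 60
```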

As of now, the role does not natively support cleanup or uninstallation; future updates may add this capability. When removing your containers, ensure that you also clean up the /home/USER/.config/systemd/user directory, as the generated systemd unit files are stored there. For application data, it is recommended to clean up the /home/USER/POD_NAME directory as well.
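Until the role grows a cleanup mode, a manual teardown play along these lines could work for the Dashy example. Treat it as a rough, hypothetical sketch: the unit filename is assumed (podman's generated units are conventionally named pod-NAME.service), so double-check the paths before deleting anything:

```yaml
# Hypothetical cleanup — not shipped with the role.
- name: Tear down the Dashy deployment
  hosts: dashy
  connection: local
  gather_facts: true
  tasks:
    - name: Stop and remove the pod
      containers.podman.podman_pod:
        name: dashy
        state: absent
    - name: Remove the generated systemd user unit
      ansible.builtin.file:
        path: "{{ ansible_env.HOME }}/.config/systemd/user/pod-dashy.service"  # assumed unit name
        state: absent
    - name: Remove the application data directory
      ansible.builtin.file:
        path: "{{ ansible_env.HOME }}/dashy"
        state: absent
```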

Conclusion

I believe this role offers significant benefits by simplifying the deployment process. It's important to note that while it may not be suitable for every solution, it provides a streamlined approach for smaller deployments where using the Kubernetes platform might be overkill.

This role is designed for scenarios where a straightforward pod deployment is sufficient, while still keeping the configuration under source control instead of relying on ad-hoc commands like podman run. It has proven useful to me in various scenarios, and I find it valuable to share with the community, as it may meet the needs of others seeking a similar solution.


Thanks for reading. I'm entering the void. 🛸 ➡️ 🕳️