Deploying Specify-7 on a RHEL server

I was able to get specify7 all-in-one running in a single Pod using Podman. Below are the steps that I used, along with the environment and configuration files.

Please download the attached zip file.
podman_linux_S0-bsc-alfcwdev.zip (4.4 MB)

Pre-deployment setup

Configuration files

I had to change the nginx and assets configurations, and also the environment variables to get it to work.

In the following configuration example, my server is “s0-bsc-alfcwdev”. You will have to replace this with the name or IP of your own server. Please check the nginx and assets configuration and all the environment files carefully because I had to manually specify the ports and server URL.

  • nginx.conf
  • mysql.env
  • specify.env
  • spasset.env
  • sp_asset_settings.py
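
Because the server name appears in several of these files, a bulk substitution can save time and avoid missed occurrences. A minimal sketch, assuming the files sit in the current directory; `rename_host` is my own helper name and `my-server.example.org` is a placeholder for your hostname or IP:

```shell
# rename_host OLD NEW FILE... — substitute the server name in-place in
# each file that exists (missing files are silently skipped).
rename_host() {
  old=$1; new=$2; shift 2
  for f in "$@"; do
    if [ -f "$f" ]; then
      sed -i "s/$old/$new/g" "$f"
    fi
  done
  return 0
}

# Example: point every config at your own server.
rename_host s0-bsc-alfcwdev my-server.example.org \
  nginx.conf mysql.env specify.env spasset.env sp_asset_settings.py
```

Double-check the results afterwards, since ports may also need adjusting by hand.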

NOTE: The report-runner and asset-server both used to run on the same port (8080), so I had to override the settings.py file in asset-server (thus the bind mount in the command) to force it to listen at 5050.

Initial volume setup

This setup is done only once, to create the initial data volumes used by Specify-7. You do not need to redo this step when you simply restart or redeploy Specify-7.

In the basic setup, you create four volumes that will be used by the various Specify containers. However, to make it easier to back up the attachment and database folders and to limit the risk of their accidental deletion, we recommend the bind-mount setup. With bind mounts, the attachment files and the database live on the server host, outside of Podman's storage.

Make sure you are using the Podman user (sudo -u podman -i) when executing the commands below.

Please choose one of the two following setups.

Basic setup:

podman volume create specify6
podman volume create static-files
podman volume create attachments
podman volume create database

Bind mounts setup (Preferred configuration):

podman volume create specify6
podman volume create static-files
mkdir /apps/data/lfc/containers/volumes/specify-attachments
mkdir /apps/data/lfc/containers/volumes/specify-db
podman unshare chown -R 999:999 /apps/data/lfc/containers/volumes/specify-db

In the bind mount example above I created 2 directories (specify-attachments & specify-db) in a specific host server folder (“/apps/data/lfc/containers/volumes”). Note that this folder can be located anywhere on your server. If you change the path, please update the volume paths in the MariaDB and asset server run commands described below in this document.
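
If you do relocate the folder, keeping the root path in one place helps the mkdir, chown, and podman run commands stay in sync. A small sketch; `make_bind_dirs` is my own helper name, not a Podman command:

```shell
# Create the two bind-mount directories under a single configurable root.
make_bind_dirs() {
  root=$1
  mkdir -p "$root/specify-attachments" "$root/specify-db" || return 1
  echo "bind-mount root: $root"
  # The database directory must still be chown'd through the user namespace:
  #   podman unshare chown -R 999:999 "$root/specify-db"
}

# Example with the path used in this guide:
# make_bind_dirs /apps/data/lfc/containers/volumes
```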

NOTE: Information on why the podman unshare command is needed to allow MariaDB to work with the database volume: Dealing with user namespaces and SELinux on rootless containers

IMPORTANT: If you ever need to delete the specify-db bind-mount directory on which you have executed the previous “podman unshare chown” command, you will have to run the following commands as the podman user:

podman unshare
rm -r /apps/data/lfc/containers/volumes/specify-db
exit

Deployment

The current folder contains all the configuration and miscellaneous files needed for the steps below (including the seed database). Run the following commands from this directory.

IMPORTANT: The order in which you start the containers matters. Asset server needs to be up before specify7. Redis needs to be up before specify7 worker. Specify6 needs to be started before specify7 and specify7 worker.
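
If you script your restarts, it can help to encode that ordering once so containers are never launched out of order. A sketch, where each echo is a placeholder for the matching podman run command from this guide:

```shell
# Required start order for the Specify-7 containers. Each step below is a
# stand-in for the corresponding "podman run" command in this document.
start_order="specify6 nginx mariadb asset-server redis specify7 specify7-worker report-runner"
for step in $start_order; do
  echo "starting: $step"   # replace with the real podman run command
done
```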

Make sure you are using the Podman user when executing the commands below.

sudo -u podman -i

Create the pod

This pod will contain all the SP-7 containers. I’m using 5000 as the specify-7 portal port and 3306 as the DB port. All published ports must be defined when the pod is created; you can’t add new exposed ports afterwards.

podman pod create --name specify_pod_demo --publish 5000:80/tcp --publish 3306:3306

Run specify6

podman run --pod specify_pod_demo -u root -v specify6:/opt/Specify:Z specifyconsortium/specify6-service:6.8.03

Run nginx

podman run --detach --pod specify_pod_demo -v static-files:/volumes/static-files -v specify6:/volumes/specify6 -v ./nginx.conf:/etc/nginx/conf.d/default.conf nginx:alpine

Run MariaDB

If using the basic volume setup:

podman run --detach --pod specify_pod_demo -v database:/var/lib/mysql -v ./seed-database:/docker-entrypoint-initdb.d --env-file=./mysql.env mariadb:10.11

If using the bind-mounts volume setup:

podman run --detach --pod specify_pod_demo -v /apps/data/lfc/containers/volumes/specify-db:/var/lib/mysql:Z -v ./seed-database:/docker-entrypoint-initdb.d:Z --env-file=./mysql.env mariadb:10.11

Test that MariaDB is accessible (for example, with a client such as DBeaver) at s0-bsc-alfcwdev:3306
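
If you don’t have a SQL client handy, a quick reachability check can be done with bash’s built-in /dev/tcp redirection. `db_port_open` is my own helper name; substitute your server name:

```shell
# Return success if a TCP connection to host:port can be opened (bash only).
db_port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Example (replace s0-bsc-alfcwdev with your server):
if db_port_open s0-bsc-alfcwdev 3306; then
  echo "MariaDB port is open"
else
  echo "MariaDB port not reachable yet"
fi
```

This only proves the port is listening; use a real client to verify credentials.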

Run asset server

If using the basic volume setup:

podman run --detach --pod specify_pod_demo -u root -v attachments:/home/specify/attachments -v ./sp_asset_settings.py:/home/specify/settings.py --env-file=./spasset.env specifyconsortium/specify-asset-service

If using the bind-mounts volume setup:

podman run --detach --pod specify_pod_demo -u root -v /apps/data/lfc/containers/volumes/specify-attachments:/home/specify/attachments:Z -v ./sp_asset_settings.py:/home/specify/settings.py:Z --env-file=./spasset.env specifyconsortium/specify-asset-service

Run redis

podman run --detach --pod specify_pod_demo redis:6.0

Run specify7

podman run --detach --pod specify_pod_demo -u root -v specify6:/opt/Specify -v static-files:/volumes/static-files:Z --env-file=./specify.env specifyconsortium/specify7-service:v7

Run specify7 worker

podman run --detach --pod specify_pod_demo -v specify6:/opt/Specify -v static-files:/volumes/static-files --env-file=./specify.env specifyconsortium/specify7-service:v7 ve/bin/celery -A specifyweb worker -l INFO --concurrency=1

Run report runner

podman run --detach --pod specify_pod_demo specifyconsortium/report-runner

Test web portal

Test that the web portal is up and accessible:

Portal web site: http://s0-bsc-alfcwdev:5000/
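
A quick command-line check before opening a browser; `check_portal` is my own helper name, and you should swap in your server name:

```shell
# Succeeds only if the portal answers with a non-error HTTP status.
check_portal() {
  # -f: fail on HTTP errors, -sS: quiet but report errors, 5s timeout
  curl -fsS --max-time 5 -o /dev/null "$1" && echo "portal is up: $1"
}

check_portal "http://s0-bsc-alfcwdev:5000/" || echo "portal not reachable"
```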

Portal authentication: This is defined in the SQL seed file. If you are using the current “seed-database/2023_09_07_21_04_28.sql” file for your deployment, here is the authentication information:

  • Username: testiiif
  • Password: testuser