Is specify running locally or on a different machine as the browser?
I’m running Specify7 with Podman on our federal RHEL server.
I think I found the bug…
In your last attachment, specify.env and sp_asset_settings.py were updated. But shouldn't I also fix spasset.env?
SERVER_NAME=host.containers.internal
SERVER_PORT=5000
ATTACHMENT_KEY=your asset server access key
DEBUG_MODE=true
Should I replace host.containers.internal with 0.0.0.0 ?
PS. Can you confirm that these ports are valid in sp_asset_settings.py? We have 5050 and 5000 in the same config.
HOST = 'localhost'
PORT = 5050
SERVER_NAME = 'localhost'
SERVER_PORT = 5000
Ah, if Specify is running remotely, then you put the server's IP address in:
SERVER_NAME = `$SERVER_IP_ADDRESS`
SERVER_PORT = `$SPECIFY_PORT`
You shouldn't have to modify the spasset.env file (actually, that file isn't even being read right now, since we are directly mounting the .py file).
So if I understand correctly this part of sp_asset_settings.py should look like:
HOST = 'localhost'
PORT = 5050
SERVER_NAME = '$SERVER_IP_ADDRESS'
SERVER_PORT = $SPECIFY_PORT
Yes, assuming you are still running everything in a single pod.
Slightly related:
In specify.env, since everything is running on a single server,
ASSET_SERVER_URL=http://host.containers.internal:5000/web_asset_store.xml
is valid, and so is http://0.0.0.0:5050/web_asset_store.xml.
If asset-server is running remotely,
then it would be
http://IP_ADDRESS:SPECIFY_PORT/web_asset_store.xml
But where do the values for these variables come from?
$SERVER_IP_ADDRESS
$SPECIFY_PORT
They are not defined in the specify.env file.
In the two examples in your last reply, you use port 5000 for ASSET_SERVER_URL in the first and 5050 in the second. Which one should I use with your updated code?
Ah, you'd need to hard-code them for now. The proper way would be to allow the asset-server container to take the port to run the server on as an environment variable. We have to do this because otherwise it would run at port 8080 (which is the same port report-runner runs at).
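As a sketch of that "proper way": the settings file could read the port from an environment variable with a hard-coded fallback. Note this is only an illustration; the variable name `ASSET_SERVER_PORT` is hypothetical, and the current asset-server image does not read it.

```python
import os

def get_port(env_var: str, default: int) -> int:
    """Read a TCP port from an environment variable, falling back to a default."""
    value = os.environ.get(env_var)
    if value is None:
        return default
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"{env_var} must be a valid TCP port, got {port}")
    return port

# Hypothetical use in the asset server's settings file:
PORT = get_port("ASSET_SERVER_PORT", 5050)  # falls back to 5050 when unset
```

With something like this in place, the port could be set from spasset.env instead of a bind-mounted settings.py.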
So if my server is here… s0-bsc-alfcwdev, the port declared for the web_asset_store.xml should be 5000 and not 5050, since 5050 is not accessible outside the pod?
ASSET_SERVER_URL=http://s0-bsc-alfcwdev:5000/web_asset_store.xml
Almost. It would be on the basis of IP address (or a domain name, if you have DNS). So, if s0-bsc-alfcwdev is accessible at IP address 202.202.202.202 on port 5000, then it would be
ASSET_SERVER_URL=http://202.202.202.202:5000/web_asset_store.xml
Whatever IP address and PORT you are using to access the specify portal (right now from the browser) can actually be put in asset settings py file. So, if you are accessing specify portal at 202.202.202.202 and port 5000, it would be
SERVER_NAME = '202.202.202.202'
SERVER_PORT = 5000
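Pulling the thread together, the relevant portion of sp_asset_settings.py would then look roughly like the fragment below. The IP address and ports are the example values from this discussion; substitute your own.

```python
# sp_asset_settings.py (excerpt) -- example values only
HOST = 'localhost'               # interface the asset server binds to inside the pod
PORT = 5050                      # asset server's internal port (overridden to avoid the 8080 clash)
SERVER_NAME = '202.202.202.202'  # IP (or DNS name) the browser uses to reach the Specify portal
SERVER_PORT = 5000               # published port of the Specify portal
```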
IT WORKED! YOU ARE AWESOME!
Thanks
I can upload files!
Now I will test bind mounts with Specify volumes for attachments and DB…
Once I get that running I will share my workspace with the community.
Thank you so much for your help!
Here is the updated documentation on how to get Specify-7 working on Linux with Podman. I used your Podman with Windows instructions and added the Linux configuration info.
Deploying Specify-7 on a RHEL server
I was able to get specify7 all-in-one running in a single Pod using Podman. Below are the steps that I used, along with the environment and configuration files.
Please download the attached zip file.
podman_linux_S0-bsc-alfcwdev.zip (4.4 MB)
Pre-deployment setup
Configuration files
I had to change the nginx and assets configurations, and also the environment variables to get it to work.
In the following configuration example, my server is “s0-bsc-alfcwdev”. You will have to replace this with the name or IP of your own server. Please check the nginx and assets configuration and all the environment files carefully because I had to manually specify the ports and server URL.
- nginx.conf
- mysql.env
- specify.env
- spasset.env
- sp_asset_settings.py
NOTE: The report-runner and asset-server both run on the same port (8080) by default, so I had to override the settings.py file in asset-server (thus the bind mount in the run command) to force it to listen on 5050.
Initial volume setup
This setup is only done once to create the initial data volumes used by Specify-7. You do not need to redo this step when you simply restart or redeploy Specify-7.
In the basic setup, you create 4 volumes that will be used by the various Specify containers. However, to facilitate backing up the attachment and database folders and to limit the risk of their accidental deletion, we recommend the bind mount setup. With bind mounts, the attachment files and the database volume are located on the host server, outside of Podman's storage.
Make sure you are using the Podman user (sudo -u podman -i) when executing the commands below.
Please choose one of the two following setups.
Basic setup:
podman volume create specify6
podman volume create static-files
podman volume create attachments
podman volume create database
Bind mounts setup (Preferred configuration):
podman volume create specify6
podman volume create static-files
mkdir /apps/data/lfc/containers/volumes/specify-attachments
mkdir /apps/data/lfc/containers/volumes/specify-db
podman unshare chown 999:999 -R /apps/data/lfc/containers/volumes/specify-db
In the bind mount example above, I created two directories (specify-attachments & specify-db) in a specific host server folder ("/apps/data/lfc/containers/volumes"). Note that this folder can be located anywhere on your server. If you change the path, please update the volume paths in the MariaDB and asset server run commands described below in this document.
NOTE: Information on why the podman unshare command is needed to allow MariaDB to work with the database volume: https://www.redhat.com/sysadmin/user-namespaces-selinux-rootless-containers
IMPORTANT: If you ever need to delete the bind-mounted specify-db directory on which you have executed the previous "podman unshare chown" command, you will have to run the following commands as the podman user:
podman unshare
rm -r /apps/data/lfc/containers/volumes/specify-db
exit
Deployment
The current folder contains all the configuration and miscellaneous files needed to get the above working (including the seed database). Run the commands below from this directory.
IMPORTANT: The order in which you start the containers matters. Asset server needs to be up before specify7. Redis needs to be up before specify7 worker. Specify6 needs to be started before specify7 and specify7 worker.
Make sure you are using the Podman user when executing the commands below.
sudo -u podman -i
Create the pod
This pod will contain all the SP-7 containers. I'm using 5000 as the Specify-7 portal port and 3306 as the DB port. All of the pod's exposed ports must be defined when the pod is created; you can't add new exposed ports after the pod has been created.
podman pod create --name specify_pod_demo --publish 5000:80/TCP --publish 3306:3306
Run specify6
podman run --pod specify_pod_demo -u root -v specify6:/opt/Specify:Z specifyconsortium/specify6-service:6.8.03
Run nginx
podman run --detach --pod specify_pod_demo -v static-files:/volumes/static-files -v specify6:/volumes/specify6 -v ./nginx.conf:/etc/nginx/conf.d/default.conf nginx:alpine
Run MariaDB
If using the basic volume setup:
podman run --detach --pod specify_pod_demo -v database:/var/lib/mysql -v ./seed-database:/docker-entrypoint-initdb.d --env-file='./mysql.env' mariadb:10.11
If using the bind-mounts volume setup:
podman run --detach --pod specify_pod_demo -v /apps/data/lfc/containers/volumes/specify-db:/var/lib/mysql:Z -v ./seed-database:/docker-entrypoint-initdb.d:Z --env-file='./mysql.env' mariadb:10.11
Test that MariaDB is accessible (e.g. with DBeaver) on s0-bsc-alfcwdev:3306.
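If you don't have a SQL client handy, a plain TCP connection test is enough to confirm the published port is reachable. A minimal Python sketch (the hostname below is my example server; use your own):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("s0-bsc-alfcwdev", 3306) should return True once MariaDB is up
```

This only verifies that the port is listening; it does not authenticate against MariaDB.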
Run asset server
If using the basic volume setup:
podman run --detach --pod specify_pod_demo -u root -v attachments:/home/specify/attachments -v ./sp_asset_settings.py:/home/specify/settings.py --env-file='./spasset.env' specifyconsortium/specify-asset-service
If using the bind-mounts volume setup:
podman run --detach --pod specify_pod_demo -u root -v /apps/data/lfc/containers/volumes/specify-attachments:/home/specify/attachments:Z -v ./sp_asset_settings.py:/home/specify/settings.py:Z --env-file='./spasset.env' specifyconsortium/specify-asset-service
Run redis
podman run --detach --pod specify_pod_demo redis:6.0
Run specify7
podman run --detach --pod specify_pod_demo -u root -v specify6:/opt/Specify -v static-files:/volumes/static-files:Z --env-file='./specify.env' specifyconsortium/specify7-service:v7
Run specify7 worker
podman run --detach --pod specify_pod_demo -v specify6:/opt/Specify -v static-files:/volumes/static-files --env-file='./specify.env' specifyconsortium/specify7-service:v7 ve/bin/celery -A specifyweb worker -l INFO --concurrency=1
Run report runner
podman run --detach --pod specify_pod_demo specifyconsortium/report-runner
Test web portal
Test that the web portal is up and accessible:
Portal web site: http://s0-bsc-alfcwdev:5000/
Portal authentication: This is defined in the SQL seeding file. If you are using the current "seed-database/2023_09_07_21_04_28.sql" file for your deployment, here is the authentication information:
- Username: testiiif
- Password: testuser
@vinayakjha How can I attach my configuration files (zip) to this ticket?