Asset Server Configuration Issue with Containerized SP-7 Using Domain Name and SSL Certificate

Hello,

I’m experiencing issues with the integrated Asset Server while running a containerized SP-7 (with Podman). Previously, everything worked smoothly on my local network using the server’s name (e.g., s-bor-vdq350100.nrn.nrcan.gc.ca) and my assigned SP-7 port (e.g., 5003).

The issue started after I switched to a new domain name (mycology.devsp.cfs.nrcan.gc.ca) and an SSL certificate on port 443. I’m unclear about where and when to use the server’s name versus the domain name in the SP-7 configuration files.

The Asset Server configuration is spread across three files: spasset.env, spasset.py, and specify.env. I’ve tried multiple combinations of server names and domain names without success. Here’s a summary of my latest settings in each file:

  1. spasset.env:

    #SERVER_NAME=s-bor-vdq350100.nrn.nrcan.gc.ca
    #SERVER_PORT=5003
    SERVER_NAME=mycology.devsp.cfs.nrcan.gc.ca
    SERVER_PORT=443
    
  2. spasset.py:

    HOST = 'localhost'
    PORT = 5050
    # SERVER_NAME = 's-bor-vdq350100.nrn.nrcan.gc.ca'
    # SERVER_PORT = 5003
    SERVER_NAME = 'mycology.devsp.cfs.nrcan.gc.ca'
    SERVER_PORT = 443
    # Port the development test server should listen on.
    DEVELOPMENT_PORT = PORT
    
  3. specify.env:

    ASSET_SERVER_URL=https://mycology.devsp.cfs.nrcan.gc.ca/web_asset_store.xml
    

Despite these settings, the URLs in the web_asset_store.xml output (at https://mycology.devsp.cfs.nrcan.gc.ca/web_asset_store.xml) default to http rather than https.

Any suggestions to resolve this would be greatly appreciated!

PS. https://mycology.devsp.cfs.nrcan.gc.ca is intentionally only accessible on our local network for our collection teams.

Thanks

Hi @Heryk,

ASSET_SERVER_URL seems fine to me. It tells Specify how to communicate with the asset server and won’t cause any issues once https is set up and working.

The reason the links appear as http://mycology.devsp.cfs.nrcan.gc.ca:443 within web_asset_store.xml instead of https is, I believe, that nginx normally replaces those strings via sub_filter before serving the page. Since you don’t have an nginx proxy running in front of it(?), those links are whatever is defined in the codebase.

nginx configuration with sub_filter

location = /web_asset_store.xml {
    proxy_pass http://localhost:8080/web_asset_store.xml;
    sub_filter 'http://assets1.specifycloud.org:8080' 'https://assets1.specifycloud.org';
    sub_filter_once off;
    sub_filter_types text/xml;
}

Port 443 appears at the end of those links because the host is built from SERVER_NAME and SERVER_PORT (443) in your settings.

In the setups I have worked with, nginx handles all traffic on 443 and then proxies those requests to various backend services. Are you able to share more information about how reverse proxying is being done?

Hi @markp,

Thanks for the very helpful information! I have Apache acting as a reverse proxy to direct http and https traffic to my rootless Podman setup, which hosts all my containerized Specify-7 components. My SSL certificate is configured in Apache.

After reading your post, I attempted to override the web_asset_store.xml with the correct https and port settings, as suggested here: Python Error SSL: CERTIFICATE_VERIFY_FAILED] with SSL requests - #2 by wphillip

<?xml version="1.0" encoding="UTF-8"?>
<urls>
    <url type="read"><![CDATA[https://mycology.devsp.cfs.nrcan.gc.ca/fileget]]></url>
    <url type="write"><![CDATA[https://mycology.devsp.cfs.nrcan.gc.ca/fileupload]]></url>
    <url type="delete"><![CDATA[https://mycology.devsp.cfs.nrcan.gc.ca/filedelete]]></url>
    <url type="getmetadata"><![CDATA[https://mycology.devsp.cfs.nrcan.gc.ca/getmetadata]]></url>
    <url type="testkey">https://mycology.devsp.cfs.nrcan.gc.ca/testkey</url>
</urls>

I managed to implement the override, but it led to additional SSL issues, as shown in my pod logs:

b913c6b5504f File "/opt/specify7/ve/lib/python3.8/site-packages/requests/adapters.py", line 620, in send

b913c6b5504f raise SSLError(e, request=request)

b913c6b5504f requests.exceptions.SSLError: HTTPSConnectionPool(host='mycology.devsp.cfs.nrcan.gc.ca', port=443): Max retries exceeded with url: /web_asset_store.xml (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)')))

Thanks for any further guidance you can provide!

Héryk

PS. This is my Apache configuration file…


<VirtualHost *:80>
    ServerName mycology.devsp.cfs.nrcan.gc.ca
    Redirect permanent / https://mycology.devsp.cfs.nrcan.gc.ca/
</VirtualHost>

<VirtualHost *:80>
    ServerName mycologie.devsp.scf.rncan.gc.ca
    Redirect permanent / https://mycologie.devsp.scf.rncan.gc.ca/
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin lfc.imit-cfl.giti@rncan-nrcan.gc.ca
    ServerName mycology.devsp.cfs.nrcan.gc.ca
    ServerAlias mycologie.devsp.scf.rncan.gc.ca

    ProxyPass / http://localhost:5003/
    ProxyPassReverse / http://localhost:5003/

    # Set X-Original-Host, Host, and X-Forwarded-Proto headers
    RequestHeader set X-Forwarded-Proto "https"
    #ProxyAddHeaders On
    ProxyPreserveHost On

    ErrorLog "logs/lfc/mycology_devsp_error_log"
    CustomLog "logs/lfc/mycology_devsp_access_log" combined

    SSLCertificateFile "/mnt/opt/httpd/conf/extra/custom/lfc/devsp_cfs_nrcan_gc_ca.crt"
    SSLCertificateKeyFile "/mnt/opt/httpd/conf/extra/custom/lfc/devsp_cfs_nrcan_gc_ca.key"
</VirtualHost>

Here is my NGINX file used inside the pod’s NGINX container. There is no listen section for port 443 in my NGINX file, since all https traffic is captured by Apache, decrypted, and then reverse proxied to NGINX:


server {
    listen 80; # ssl;
    #ssl_certificate /etc/letsencrypt/fullchain.pem;
    #ssl_certificate_key /etc/letsencrypt/privkey.pem;
    server_name www.demo-assets.specifycloud.org;

    root /usr/share/nginx;
    #client_max_body_size 128M;
    client_max_body_size 500M;

    # serve static files directly
    location /static/ {
        client_max_body_size 0;
        root /volumes;
        rewrite ^/static/config/(.*)$ /specify6/config/$1 break;
        rewrite ^/static/depository/(.*)$ /static-files/depository/$1 break;
        rewrite ^/static/(.*)$ /static-files/frontend-static/$1 break;
    }

    # proxy these urls to the asset server
    location ~ ^/(fileget|fileupload|filedelete|getmetadata|testkey|web_asset_store.xml) {
        client_max_body_size 0;
        resolver 127.0.0.11 valid=30s;
        set $backend "http://0.0.0.0:5050";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # proxy everything else to specify 7
    location / {
        client_max_body_size 400M;
        client_body_buffer_size 400M;
        client_body_timeout 120;
        resolver 127.0.0.11 valid=30s;
        set $backend "http://0.0.0.0:8000";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Hi @Heryk,

Preamble

I think the issue is the chain of how different components are communicating. I’ll outline my interpretation of how the chain would look.

Understanding of the current configuration

  1. Client sends either an http (80) or https (443) request to mycologie.devsp.scf.rncan.gc.ca
  2. Apache receives that request. If it is an http (80) request, redirects to https (443).
  3. Apache proxies the 443 request to localhost:5003

… there appears to be a break here: what is listening on port 5003?

The nginx container is listening on port 80. And even if nginx weren’t in the picture, the asset server is on 5050 and Specify is on 8000, correct?

Suggestions

This situation seems pretty similar to what you are trying to do: apache - Configure apache2 and host to pass traffic to a docker container with nginx - Stack Overflow. My guess is that the dual use of Apache and nginx is because your networking team uses Apache (and it is thus needed for firewall management, etc.), while Specify ships configurations written for nginx? Otherwise, my interpretation is that they serve duplicate purposes.

If the two must be used, my understanding is that something like the below would be the pathway. I don’t think the asset server port should be 443.

Client (Internet) - 80/443 → Apache/WAF - 80 → Nginx - 5050 → asset container.

I put 80 between Apache and Nginx because of the quote below; it seems that at that point the traffic is within the same trusted network:

There is no listening section for port 443 in my NGINX file since all https traffic is captured by Apache, decrypted and then reversed proxied to NGINX

At that point, your configuration would be similar to exposing nginx to the public web without https, except that Apache, rather than the public internet, is the only thing able to make requests to it.

Hello Mark,

You’re absolutely correct—the dual use of Apache and NGINX is because our networking team relies on Apache for essential functions like firewall management. Meanwhile, Specify is configured specifically for NGINX.

You also have a solid understanding of the networking setup. One additional element to consider, though, is the role of our Podman Pod. For security reasons, we’ve set up a Pod that includes all the Specify containers, such as NGINX, SP-7, and the Asset Server.

This Pod receives traffic on port 5003 and internally routes it to NGINX on port 80. I’ve included a diagram below to help illustrate this flow. One more thing to note: within a Pod, each container’s port must be unique, which is why I’m using port 5050 for the asset server.

All components of Specify-7 currently work correctly with this routing configuration except for the Asset-Server.

Hi @Heryk,

Thank you for the diagram! This is quite the interesting problem; my thinking is below.

Given → Conclusion

  • Accessing the SP-7 container through https works as expected → there isn’t an issue with containers inside the podman pod working with the certificates.
  • Accessing /web_asset_store.xml via https from a client browser returns the correct xml → there isn’t a problem with the asset container being accessed from the web over https.

This leads me to believe that the issue only occurs when the Specify pod makes the request (through the requests module in Python), and not a browser. Based on the error message unable to get local issuer certificate (_ssl.c:1108), I think the browser may be filling in intermediate certificate details that Specify (via requests) is not.

To test, you could run a python script with the requests module to submit a get request to https://mycologie.devsp.scf.rncan.gc.ca/web_asset_store.xml from a machine that would have access via the firewall configuration.
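For instance, a minimal sketch of such a script (check_tls is a hypothetical helper name of mine; the URL is the one from this thread and is only reachable inside your network):

```python
import requests

def check_tls(url: str) -> str:
    """GET the URL with certificate verification enabled (the requests
    default), mirroring what Specify's worker does, and report the result."""
    try:
        resp = requests.get(url, timeout=10)
        return f"OK {resp.status_code}"
    except requests.exceptions.SSLError as err:
        # This is the failure mode seen in the pod logs above.
        return f"SSL ERROR: {err}"
    except requests.exceptions.RequestException as err:
        return f"ERROR: {err}"

if __name__ == "__main__":
    print(check_tls("https://mycologie.devsp.scf.rncan.gc.ca/web_asset_store.xml"))
```

If this prints an SSL ERROR mentioning unable to get local issuer certificate while a browser loads the same URL fine, that points at an incomplete chain (or a CA store the Python process can’t see) rather than a Specify bug.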

Hi @markp,
To find a solution to my Asset Server issue, I experimented with two approaches to rewriting the web_asset_store.xml output, and both appear to work:

  1. Overriding the default XML file with my own xml file (with the corrected urls) when starting the asset-server container.

Example:

podman run --detach --pod sp7_pod_mycology --name sp7_asset-service_mycology -u root -v /data/home/podman/apps/specify/volumes/sp7dev2_mycology/specify-attachments:/home/specify/attachments:Z -v ./spasset.py:/home/specify/settings.py:Z -v ./web_asset_store.xml:/home/specify/web_asset_store.xml:ro,Z --env-file='./spasset.env' specifyconsortium/specify-asset-service
  2. Using the sub_filter directive in the Nginx configuration file to dynamically rewrite the web_asset_store.xml file.

Example:

# proxy these urls to the asset server
    location ~ ^/(web_asset_store.xml) {
        client_max_body_size 0;
        resolver 127.0.0.11 valid=30s;
        set $backend "http://0.0.0.0:5050";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        sub_filter 'http://mycology.devsp.cfs.nrcan.gc.ca:443' 'https://mycology.devsp.cfs.nrcan.gc.ca';
        sub_filter_once off;
        sub_filter_types text/xml;
    }

    # proxy these urls to the asset server
    location ~ ^/(fileget|fileupload|filedelete|getmetadata|testkey) {
        client_max_body_size 0;
        resolver 127.0.0.11 valid=30s;
        set $backend "http://0.0.0.0:5050";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

mycology.devsp.cfs.nrcan.gc.ca/web_asset_store.xml output:

<?xml version="1.0" encoding="UTF-8"?>
<urls>
    <url type="read"><![CDATA[https://mycology.devsp.cfs.nrcan.gc.ca/fileget]]></url>
    <url type="write"><![CDATA[https://mycology.devsp.cfs.nrcan.gc.ca/fileupload]]></url>
    <url type="delete"><![CDATA[https://mycology.devsp.cfs.nrcan.gc.ca/filedelete]]></url>
    <url type="getmetadata"><![CDATA[https://mycology.devsp.cfs.nrcan.gc.ca/getmetadata]]></url>
    <url type="testkey">https://mycology.devsp.cfs.nrcan.gc.ca/testkey</url>
</urls>

After experimenting with different configurations, I think I’ve identified the main issue I’ve been having with the Asset Server: it appears to stem from using “https” in the ASSET_SERVER_URL variable. When I set ASSET_SERVER_URL with an “HTTPS” URL (e.g. https://mycology.devsp.cfs.nrcan.gc.ca/web_asset_store.xml), all Specify containers start as expected, but the SP-7 worker containers immediately stop afterwards.

Since all my HTTP traffic is redirected to HTTPS, I encounter the same problem if I use the HTTP version of my domain name (http://mycology.devsp.cfs.nrcan.gc.ca/web_asset_store.xml).

As soon as the Asset Server URLs use “HTTPS” the SP-7 worker experiences issues. So even if I rewrite the web_asset_store.xml output to use the correct https URLs, it breaks the SP-7 worker container.

Here’s the SP-7 worker log error:

File "/opt/specify7/ve/lib/python3.8/site-packages/requests/adapters.py", line 620, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='mycology.devsp.cfs.nrcan.gc.ca', port=443): Max retries exceeded with url: /testkey?random=76b5a60d-6427-4709-a81a-b97137825d87&token=ded10a9bf9842993d14dafaac4fccf01%3A1730923242 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)')))

I may be wrong, but it seems there might be an internal SSL issue with the SP-7 worker. Not sure how to move forward with this. :thinking:

P.S. Mark, I’m not a Python programmer, but if you think additional testing in Python might help, could you provide an example request I could try in Postman? Thanks

Hi @Heryk,

sub_filter with Nginx is great; it makes maintenance a lot easier because you can pull container updates without having to overwrite the xml with your own copy each time.

We have narrowed this down to a certificate issue specific to requests made from the pods. I suggested Python because that is the same package/language Specify uses, but I don’t think it has to be python+requests specifically. What matters more is that whatever makes the request has the same access to intermediate certificate information as requests does.

I can replicate the error by using the following

import requests

request = requests.get("https://incomplete-chain.badssl.com")
print(request.text)

Running curl will also get me the same error

curl "https://incomplete-chain.badssl.com"

If you run curl from inside of one of the pods against https://mycology.devsp.cfs.nrcan.gc.ca/web_asset_store.xml can you reproduce the error?

Ah! I think you’re onto something, Mark! It seems the application can’t call itself directly on the server. Following your recommendation, I tried using curl to access web_asset_store.xml from the Linux server, but it returned a security error message (SSL).

However, when I request the same page from a web browser on another network machine, it works fine.

Since I’m not an admin on our federal server, I’ll reach out to our network team to investigate further and let you know how this works out.
Thank you so much! :beers:

Hi,
My hypothesis was that my rootless Podman was blocking the containers inside the Podman pod from accessing the server’s certificate authority (CA). So, I addressed this issue…

I’ve triple-checked everything and installed a new SSL/TLS certificate on the server along with the certificate authority chain. I’m able to successfully access my Specify installation on my internal network using curl from my laptop (Windows), directly on the server (RHEL), and even from within the Specify-7 container (Ubuntu). To enable curl requests inside the Specify-7 container, I had to build a new SP-7 image that includes the CA and certificate files and installs curl in the container.

However, Specify-7 still isn’t working correctly with my domain name. When I try to access it via a browser, I get a 502 Bad Gateway error from NGINX. Checking the containers shows that Specify-7 is running, but the worker stops immediately after starting.

I consistently see the following log error in the Specify-7 container… it seems to point to a certificate verification issue:


File "/opt/specify7/ve/lib/python3.8/site-packages/requests/adapters.py", line 563, in send  
    raise SSLError(e, request=request)  
requests.exceptions.SSLError: HTTPSConnectionPool(host='mycology.devsp.cfs.nrcan.gc.ca', port=443): Max retries exceeded with url: /testkey?random=4514acda-cc5e-4390-ba9c-06a9a362ec4f&token=d6a0bb79cac84d2c7cd3ff145cefa143%3A1732137884 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)'))

That said, I can access web_asset_store.xml on my network without any issues from my laptop browser (https://mycology.devsp.cfs.nrcan.gc.ca/web_asset_store.xml), via curl on the server, and even from within the Specify-7 container.

At this point, I’m out of ideas on where to look next.

BTW, here is how I built my new Specify7 image with the certificates:

FROM specifyconsortium/specify7-service:v7

# Copy certificates into the container
# My *.cer file is already in PEM format. I can just copy it into the container and rename it *.crt
COPY ssl/devsp_cfs_nrcan_gc_ca.cer /usr/local/share/ca-certificates/devsp_cfs_nrcan_gc_ca.crt
COPY ssl/NRCAN-RootCA.crt /usr/local/share/ca-certificates/NRCAN-RootCA.crt
COPY ssl/NRCAN-SubCA.crt /usr/local/share/ca-certificates/NRCAN-SubCA.crt

# Update CA certificates
# Run the necessary commands with root permissions
USER root
RUN apt-get update && \
    # apt-get install curl && \
    # apt-get install -y ca-certificates && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    chmod 644 /usr/local/share/ca-certificates/devsp_cfs_nrcan_gc_ca.crt && \
    chmod 644 /usr/local/share/ca-certificates/NRCAN-RootCA.crt && \
    chmod 644 /usr/local/share/ca-certificates/NRCAN-SubCA.crt && \
    update-ca-certificates && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Switch back to the default user
USER specify

Here is my NGINX configuration:

server {
    listen 80; # ssl;
    #ssl_certificate /etc/letsencrypt/fullchain.pem;
    #ssl_certificate_key /etc/letsencrypt/privkey.pem;
    #server_name www.demo-assets.specifycloud.org;
    server_name mycology.devsp.cfs.nrcan.gc.ca;

    root /usr/share/nginx;
    #client_max_body_size 128M;
    client_max_body_size 500M;

    # serve static files directly
    location /static/ {
        client_max_body_size 0;
        root /volumes;
        rewrite ^/static/config/(.*)$ /specify6/config/$1 break;
        rewrite ^/static/depository/(.*)$ /static-files/depository/$1 break;
        rewrite ^/static/(.*)$ /static-files/frontend-static/$1 break;
    }

    # proxy these urls to the asset server
    location ~ ^/(web_asset_store.xml) {
        client_max_body_size 0;
        resolver 127.0.0.11 valid=30s;
        #set $backend "http://0.0.0.0:5050";
        set $backend "http://127.0.0.1:5050";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #sub_filter 'http://s-bor-vdq350100.nrn.nrcan.gc.ca:5003' 'https://mycology.devsp.cfs.nrcan.gc.ca';
        sub_filter 'http://mycology.devsp.cfs.nrcan.gc.ca:443' 'https://mycology.devsp.cfs.nrcan.gc.ca';
        sub_filter_once off;
        sub_filter_types text/xml;
    }

    # proxy these urls to the asset server
    location ~ ^/(fileget|fileupload|filedelete|getmetadata|testkey) {
        client_max_body_size 0;
        resolver 127.0.0.11 valid=30s;
        #set $backend "http://0.0.0.0:5050";
        set $backend "http://127.0.0.1:5050";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    # proxy everything else to specify 7
    location / {
        client_max_body_size 400M;
        client_body_buffer_size 400M;
        client_body_timeout 120;
        resolver 127.0.0.11 valid=30s;
        #set $backend "http://0.0.0.0:8000";
        set $backend "http://127.0.0.1:8000";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Hi @Heryk,

I’d say it makes sense to wait for a Specify staff member to chime in on this one, because they will have more insight into how Specify interacts with the system’s certificate store and how environment variables should be set.

I don’t want to lead you on a wild-goose chase, but my belief is that you are almost there. The system now has the necessary certificates, but Python’s requests module isn’t using them (hence why curl works from inside the container while the worker still fails). The accepted answer to this SO post outlines the solution (which should apply to Ubuntu as a Debian-based distro).

I think that introducing the environment variable is easier than passing parameters (such as verify=) to the requests call, because that would require modifying the source code, which would be overwritten the next time you update the container. I also don’t think you want to do anything with certifi, because that would again interfere with the source code.
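A note on why the environment-variable route works without touching Specify’s code: requests consults REQUESTS_CA_BUNDLE at request time via Session.merge_environment_settings, so exporting the variable (or setting it from Python before the first request is issued) redirects verification to the system CA store. A quick local sketch (the bundle path is the Debian/Ubuntu default; the file does not need to exist for this demonstration):

```python
import os
import requests

# Remove any pre-existing overrides so the "before" case is deterministic.
os.environ.pop("REQUESTS_CA_BUNDLE", None)
os.environ.pop("CURL_CA_BUNDLE", None)

session = requests.Session()

# With no environment override, verification stays at the requests default
# (True, meaning the certifi bundle that ships with requests).
before = session.merge_environment_settings(
    "https://example.org", {}, None, True, None
)

# Point requests at the system store that update-ca-certificates maintains.
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/ca-certificates.crt"
after = session.merge_environment_settings(
    "https://example.org", {}, None, True, None
)

print(before["verify"])  # True
print(after["verify"])   # /etc/ssl/certs/ca-certificates.crt
```

The variable only has to be present in the process environment before the worker makes its first request, which is why setting it via os.environ in a settings file is enough.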

Hi @markp ,

Thank you so much for all your help and guidance over the past few weeks. I truly appreciate the time and effort you’ve dedicated to assisting me with this issue.

Based on your suggestion, I’ll reach out to the Specify staff for further support, as they’ll have deeper insights into how Specify interacts with the system’s certificate store and how the environment variables should be configured.

Your belief that I’m close to resolving this gives me a bit of hope. I agree that modifying the source code would not be ideal, and I’ll aim for a non-intrusive approach as you’ve suggested.

Thanks again for everything—you’ve been a tremendous help! :slight_smile:

FYI. @Grant & @alec.white

PS. This page was incredibly helpful in enhancing my understanding of the concepts behind SSL/TLS: SSL and SSL Certificates Explained For Beginners

Great news! It’s working now!

I’ll be documenting all the fixes soon, but here’s a quick summary of what I did to get Specify-7 working correctly with my SSL certificates.

Just for context, I’m working in a rootless Podman environment (not rootful Docker) on RHEL and using Specify-7 with SSO.

  • I added a sub_filter to my nginx config file to replace “http://” with “https://” in my web_asset_store.xml output file.
  • I built a new “specify7-service” image, added my leaf and CA certificates, and ran the ca-certificates command. I used this new image for my service and worker containers.
  • I added os.environ['REQUESTS_CA_BUNDLE'] = '/etc/ssl/certs/ca-certificates.crt' to my settings.py file just below the import os line. This file is pushed into the specify7 service container at runtime and is required for SSO configuration.
  • I also created a new settings_worker.py file and included it in the specify7 worker container at runtime. The file contains only two lines:
import os
os.environ['REQUESTS_CA_BUNDLE'] = '/etc/ssl/certs/ca-certificates.crt'

My two cents… it might be the rootless environment blocking the containers’ (SP-7 service and worker) access to my server CA. That said, shouldn’t os.environ['REQUESTS_CA_BUNDLE'] be set by default in the Specify service image? Could this be considered for a future update, to avoid having to add it manually? Thanks!

FYI. @Grant & @alec.white

Hi,
I wanted to follow up on my previous post. I’ve included my files below in case they might help someone else troubleshoot the issue I had encountered and fixed concerning SSL and the asset server configuration in a rootless Podman containerized environment.
Cheers

  1. I added a sub_filter to my nginx config file to replace “http://” with “https://” in my web_asset_store.xml output file.
server {
    listen 80; # ssl;
    server_name devsp.cfs.nrcan.gc.ca;

    root /usr/share/nginx;
    client_max_body_size 500M;

    # serve static files directly
    location /static/ {
        client_max_body_size 0;
        root /volumes;
        rewrite ^/static/config/(.*)$ /specify6/config/$1 break;
        rewrite ^/static/depository/(.*)$ /static-files/depository/$1 break;
        rewrite ^/static/(.*)$ /static-files/frontend-static/$1 break;
    }

    # proxy these urls to the asset server
    location ~ ^/(web_asset_store.xml) {
        client_max_body_size 0;
        resolver 127.0.0.11 valid=30s;
        set $backend "http://127.0.0.1:5050";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        sub_filter 'http://devsp.cfs.nrcan.gc.ca:443' 'https://devsp.cfs.nrcan.gc.ca';
        sub_filter_once off;
        sub_filter_types text/xml;
    }

    # proxy these urls to the asset server
    location ~ ^/(fileget|fileupload|filedelete|getmetadata|testkey) {
        client_max_body_size 0;
        resolver 127.0.0.11 valid=30s;
        set $backend "http://127.0.0.1:5050";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # proxy everything else to specify 7
    location / {
        client_max_body_size 400M;
        client_body_buffer_size 400M;
        client_body_timeout 120;
        resolver 127.0.0.11 valid=30s;
        set $backend "http://127.0.0.1:8000";
        proxy_pass $backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
  2. I built a new “specify7-service” image, added my leaf and CA certificates (root and intermediate), and ran the ca-certificates command. I now use this new image for my Specify-7 service and worker containers.
FROM specifyconsortium/specify7-service:v7

# Copy certificates into the container
# My *.cer file is already in PEM format. I can just copy it into the container and rename it *.crt
COPY ssl/devsp_cfs_nrcan_gc_ca.cer /usr/local/share/ca-certificates/devsp_cfs_nrcan_gc_ca.crt
COPY ssl/NRCAN-RootCA.crt /usr/local/share/ca-certificates/NRCAN-RootCA.crt
COPY ssl/NRCAN-SubCA.crt /usr/local/share/ca-certificates/NRCAN-SubCA.crt

# Update CA certificates
# Run the necessary commands with root permissions
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    chmod 644 /usr/local/share/ca-certificates/devsp_cfs_nrcan_gc_ca.crt && \
    chmod 644 /usr/local/share/ca-certificates/NRCAN-RootCA.crt && \
    chmod 644 /usr/local/share/ca-certificates/NRCAN-SubCA.crt && \
    update-ca-certificates && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
# Switch back to the default user
USER specify

The image was built and pushed to my private Docker Hub image repo.

docker build -t cfservice/specify7-service-v7-ssl:01 .
docker push cfservice/specify7-service-v7-ssl:01
  3. I added os.environ['REQUESTS_CA_BUNDLE'] = '/etc/ssl/certs/ca-certificates.crt' to my specify_settings.py file just below the import os line. This file is pushed into the specify7 service container at runtime and is required for SSO configuration.
podman run --detach --pod sp7_pod --name sp7_specify7 -u root -v specify6:/opt/Specify -v static-files:/volumes/static-files:Z -v ./specify_settings.py:/opt/specify7/settings/local_specify_settings.py:ro,Z --env-file='./specify.env' cfservice/specify7-service-v7-ssl:01
  4. I also created a new settings_worker.py file that contains only two lines.
import os
os.environ['REQUESTS_CA_BUNDLE'] = '/etc/ssl/certs/ca-certificates.crt'

This file is pushed into the specify7 worker container at runtime.

podman run --detach --pod sp7_pod --name sp7_specify7-worker -v specify6:/opt/Specify -v static-files:/volumes/static-files -v ./specify_settings_worker.py:/opt/specify7/settings/local_specify_settings.py:ro,Z --env-file='./specify.env' cfservice/specify7-service-v7-ssl:01 ve/bin/celery -A specifyweb worker -l INFO --concurrency=1

FYI: Sharing this for fellow Podman users—it might be very helpful!

Last week, we upgraded Podman from a 4.x version to 5.2.2, and I started encountering SSL issues again with SP-7. Fortunately, we found a solution.

Context

Starting with Podman 5.0, the default rootless networking application was changed to “pasta” for providing network connectivity to rootless containers. Unlike the previous default, slirp4netns, pasta behaves differently in some key ways. For example, pasta does not use Network Address Translation (NAT) by default. Instead, it copies the host address into the container, so both the host and container namespaces share the same IP address. As a result, attempting to connect to the host IP from within the container points to the container itself rather than the host. This behavior has caused confusion for many users who noticed a lack of connectivity between the host and container.

Solution

We resolved the issue by configuring the Pod to use the slirp4netns network instead of pasta. Everything works perfectly again now.

Example

podman pod create --name sp7_pod_invertebrate --publish 5002:80/TCP --network slirp4netns