Attachments showing production URLs in local server

Hi!!

We updated our local test server (the all-in-one Docker version) to 7.11.3 and it has run smoothly so far, except for attachments. It uses the web asset server v2, but when you open any attachment, the URL is not the test server's but the production one. The yml, the database, and the global parameters are all set to the local URL, and web_asset_store.xml has the proper content and is accessible via browser, so we don't know where to start looking to fix it.

If I take that assets1 URL and change the domain to the local domain, the attachment is there and is returned by the asset server, so it seems like somewhere in the config the default attachment URL is still set. Any tip?
Thanks!!!

Hi @miquelmAuupa,

In the docker-compose.yml file for the web asset server, there is a place to define the attachment server name and key:
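
As a rough sketch (placeholder values only, assuming the standard all-in-one compose layout; your host name and key will differ), that service looks like this:

  asset-server:
    restart: unless-stopped
    image: specifyconsortium/specify-asset-service
    init: true
    volumes:
      - "attachments:/home/specify/attachments"
    environment:
      - SERVER_NAME=your.asset.server.host   # host name only, no http:// or https://
      - SERVER_PORT=80
      - ATTACHMENT_KEY=your asset server access key
      - DEBUG_MODE=false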

The Specify 7 environment has similar variables: ASSET_SERVER_URL and ASSET_SERVER_KEY should be changed to your local asset server URL and key (matching those defined for the web asset server).

  specify7:
    restart: unless-stopped
    image: specifyconsortium/specify7-service:v7
    init: true
    volumes:
      - "static-files:/volumes/static-files"
    environment:
      - ASSET_SERVER_URL=http://host.docker.internal/web_asset_store.xml
      - ASSET_SERVER_KEY=your asset server access key

Local environments are not configured by default to use assets1.specifycloud.org unless explicitly told to, and the attachment key for that server is not public. Can you share your configuration files (after redacting sensitive credentials and keys)? Thank you!

Yes, it is weird because the configuration seems good.

{edited} = undisclosed data

{here our domain name} is the domain name we use for this server, e.g. https://specify.testserver.com

{domain name without http/https} follows the previous example, so it would be specify.testserver.com

Before the upgrade, it worked fine with this configuration; we only added CSRF_TRUSTED_ORIGINS and ALLOWED_HOSTS to our config.

services:

  mariadb:
    restart: unless-stopped
    image: mariadb:10.11
    command: --max_allowed_packet=1073741824
    ports:
      - "3306:3306"
    volumes:
      - "database:/var/lib/mysql"
      - "./seed-database:/docker-entrypoint-initdb.d"
    environment:
      - MYSQL_ROOT_PASSWORD={edited}
      - MYSQL_DATABASE=specify
      - MYSQL_USER={edited}
      - MYSQL_PASSWORD={edited}

  specify7:
    restart: unless-stopped
    image: specifyconsortium/specify7-service:v7
    init: true
    volumes:
      - "specify6:/opt/Specify:ro"
      - "static-files:/volumes/static-files"
    environment:
      - DATABASE_HOST=mariadb
      - DATABASE_PORT=3306
      - DATABASE_NAME=specify
      - MASTER_NAME={edited}
      - MASTER_PASSWORD={edited}
      - SECRET_KEY={edited}
      - ASSET_SERVER_URL=http://{here our domain name}/web_asset_store.xml
      - ASSET_SERVER_KEY={edited}
      - REPORT_RUNNER_HOST=report-runner
      - REPORT_RUNNER_PORT=8080
      - CELERY_BROKER_URL=redis://redis/0
      - CELERY_RESULT_BACKEND=redis://redis/1
      - LOG_LEVEL=WARNING
      - SP7_DEBUG=true
      - ALLOWED_HOSTS=*
      - CSRF_TRUSTED_ORIGINS=https://{our domain name}

  specify7-worker:
    restart: unless-stopped
    image: specifyconsortium/specify7-service:v7
    command: ve/bin/celery -A specifyweb worker -l INFO --concurrency=1
    init: true
    volumes:
      - "specify6:/opt/Specify:ro"
      - "static-files:/volumes/static-files"
    environment:
      - DATABASE_HOST=mariadb
      - DATABASE_PORT=3306
      - DATABASE_NAME=specify
      - MASTER_NAME={edited}
      - MASTER_PASSWORD={edited}
      - SECRET_KEY={edited}
      - ASSET_SERVER_URL=https://{here our domain name}/web_asset_store.xml
      - ASSET_SERVER_KEY={edited}
      - REPORT_RUNNER_HOST=report-runner
      - REPORT_RUNNER_PORT=8080
      - CELERY_BROKER_URL=redis://redis/0
      - CELERY_RESULT_BACKEND=redis://redis/1
      - LOG_LEVEL=WARNING
      - SP7_DEBUG=false
      - ALLOWED_HOSTS=*
      - CSRF_TRUSTED_ORIGINS=https://{our domain name}

  asset-server:
    restart: unless-stopped
    image: specifyconsortium/specify-asset-service
    init: true
    volumes:
      - "attachments:/home/specify/attachments"
    environment:
      - SERVER_NAME={domain name without http/https}
      - SERVER_PORT=80
      - ATTACHMENT_KEY={edited}
      - DEBUG_MODE=true

  specify6:
    image: specifyconsortium/specify6-service:6.8.03
    volumes:
      - "specify6:/volumes/Specify"

  nginx:
    restart: unless-stopped
    image: nginx
    ports:
      - "80:80"
    volumes:
      - "static-files:/volumes/static-files:ro"
      - "specify6:/volumes/specify6:ro"
      - "./nginx/specify.conf:/etc/nginx/conf.d/default.conf:ro"

  report-runner:
    restart: unless-stopped
    image: specifyconsortium/report-runner

  redis:
    restart: unless-stopped
    image: redis:6.0

volumes:
  attachments: # the asset-server's attachment files
  database: # the data directory for mariadb
  specify6: # provides Specify 6 files to Specify 7 and the web server
  static-files: # provides Specify 7 static files to the web server

Hi @miquelmAuupa,

One of our developers tried to recreate this using various setups and configurations. However, our default configuration and source code do not reference the assets1.specifycloud.org URL.

We suspect that the containers or images weren't rebuilt after the environment variables were changed, so they are likely still using the build cache from the previous production build. Can you see if rebuilding makes a difference?

docker compose down
docker compose up -d --build

If this doesn't solve it, you can run printenv in both the Specify 7 and asset server containers to make sure the variables match what is expected.
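
For example, assuming the service names from the compose file above, something like:

docker compose exec specify7 printenv | grep ASSET_SERVER
docker compose exec asset-server printenv | grep -E 'SERVER_NAME|ATTACHMENT_KEY'

should show your local URL and the matching key in both containers.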

Good Evening:

After checking some config files and XMLs, the production server assets1.XXXXX.org URLs were still set as the asset server; we fixed them, rebuilt the server, and it works fine now!
Checking the logs when Specify launches, it throws a warning about operations pending to be performed:

specify7-1 | Operations to perform:
specify7-1 | Apply all migrations: accounts, attachment_gw, auth, businessrules, contenttypes, notifications, patches, permissions, sessions, specify, workbench
specify7-1 | Running migrations:
specify7-1 | No migrations to apply.
specify7-1 | Your models in app(s): 'specify' have changes that are not yet reflected in a migration, and so won't be applied.
specify7-1 | Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.

Is that normal, or is it because there was a partial or unfinished migration?

Best regards!


Hi @miquelmAuupa,

This is normal at the moment, but that warning will no longer occur starting with the 7.12 update early next year (GitHub #7448). :smile:

Glad to hear everything is working!

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.