This means that you need to modify the database server configuration so that the maximum allowed packet size is increased; this is done at the DBMS level.
Resolving this issue requires an IT administrator, as it is caused by the current server configuration.
You can resolve this by increasing max_allowed_packet for the server, either by modifying the my.cnf file or by setting it globally at runtime:
In MariaDB, the following command raises the max_allowed_packet limit to 1 GiB globally and takes effect immediately (no restart required), although the value reverts on the next server restart:

SET GLOBAL max_allowed_packet=1073741824;

(SET GLOBAL requires a numeric value in bytes; size suffixes like 1024M are only accepted in configuration files and command-line options.)
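You can verify the change with a quick query; note that only connections opened after the change pick up the new value, so reconnect any open sessions first:

SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';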
For a permanent solution, modify your my.cnf file and set the value under the [mysqld] or [mariadb] heading (the [mysql] section only affects the command-line client, not the server):

[mysqld]
max_allowed_packet = 1024M
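Keep in mind that the my.cnf change only takes effect after the database server is restarted; on a typical systemd-managed host that is something like the following (assuming the service is named mariadb):

sudo systemctl restart mariadb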
You should now be able to validate/upload your large data sets in the WorkBench without any problems!
Let us know if you need any help with this process!
If these solutions do not solve your issue, we recommend looking at Stack Overflow or other sites that may have useful instructions for resolving this problem.
If so: if I just stop and delete my DB container… but keep the DB container volume, I imagine my packet modifications will be applied and I will not lose my data once I redeploy my DB container?
For security reasons I have to use Podman instead of Docker to deploy SP-7. I’ve had issues using docker-compose files with Podman… so I’m manually starting each container.
I was able to successfully configure the DB --max_allowed_packet parameter at the command line. Here is an example:
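Something like the following (only a sketch: the container name, volume name, and password are placeholders, and I’m assuming the official mariadb image; options placed after the image name are passed straight through to the mariadbd server):

podman run -d --name specify-db \
  -v specify-db-data:/var/lib/mysql \
  -e MARIADB_ROOT_PASSWORD=changeme \
  mariadb:10.11 --max_allowed_packet=1024M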