My Firefish instance was set up in November last year, so the database it runs is naturally not the latest: a PGroonga image built on PostgreSQL 12, a major version released five years ago. To avoid future compatibility and security issues, and to gain some performance, upgrading this database today is a must.
Upgrading a database running in Docker is quite different from upgrading other Docker containers. It’s not just a matter of pulling a new image and calling it a day. Since Postgres often has breaking changes with each major version, the persistent files generated by the old and new versions are not compatible. If you directly pull a new image and run it, the database won’t start. You must use certain methods to migrate the data.
Postgres has an official upgrade tool called pg_upgrade,1 but I won't be using it this time: my database has been running for over a year and is based on a version from five years ago, and I'm not sure the tool would handle that jump without issues. For this migration, the general idea is instead to export the data as SQL and import it into the new version.
Deployment Overview Before Upgrade
Here is some configuration information you need to know before the upgrade:
| Item | Configuration |
| --- | --- |
| Instance Deployment Method | Docker Compose |
| Database Container Name | firefish_db |
| Pre-upgrade Database Container Image | groonga/pgroonga:3.1.9-alpine-12-slim |
| Database User | example-firefish-user |
| Database Name | firefish |
The database part of the Docker Compose file is as follows:
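Based on the table above, the db service looks roughly like this (the password value and restart policy here are assumptions; the user, database, container name, image, and volume path match the configuration listed earlier):

```yaml
services:
  db:
    restart: unless-stopped
    image: groonga/pgroonga:3.1.9-alpine-12-slim
    container_name: firefish_db
    environment:
      POSTGRES_USER: example-firefish-user
      POSTGRES_DB: firefish
      POSTGRES_PASSWORD: example-password   # assumption: use your own value
    volumes:
      - ./db:/var/lib/postgresql/data
```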
Exporting the Current Database
Before upgrading the database, first take down the Firefish orchestration:
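With Docker Compose, that means running the following from the directory containing docker-compose.yml (older installs may use the standalone docker-compose binary instead):

```sh
# Stop and remove the Firefish containers; data on disk is kept.
docker compose down
```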
Start the database container alone and export the current database to SQL:
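A sketch of those two steps, assuming the service name db from the compose file and using pg_dumpall so that roles are included in the dump:

```sh
# Bring up only the database service.
docker compose up -d db

# Dump roles and all databases into backup.sql on the host.
# -T disables the pseudo-TTY so the output redirect stays clean.
docker compose exec -T db pg_dumpall -U example-firefish-user > backup.sql

# Stop the database again once the dump has finished.
docker compose stop db
```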
Since Firefish's database is relatively large, the export may take a long time; my own instance took over ten minutes.
Processing the Exported File
It might be due to a change in the authentication method in newer versions of Postgres, but if you import the previously exported backup.sql file into the new database as-is, it will overwrite the new database's authentication settings, causing Firefish to fail authentication when it later connects to the database. To avoid this issue, you need to process the current backup.sql file and extract only the firefish database portion instead of importing all the data.2
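One way to do the extraction, as a sketch following the sed/awk approach from the reference linked below: pg_dumpall marks the start of each database's section with a `\connect <name>` line, so the script keeps everything from `\connect firefish` up to (but not including) the next `\connect`. The toy backup.sql written at the top exists only to demonstrate the effect on a miniature dump; with a real dump already in place, that step is skipped and only the awk line matters:

```shell
#!/bin/sh
# If no backup.sql exists, write a toy one so the effect can be seen.
# With a real dump in the current directory, this line does nothing.
[ -f backup.sql ] || printf '%s\n' \
  '\connect postgres'  'SELECT 1;' \
  '\connect firefish'  'CREATE TABLE note (id integer);' \
  '\connect other_db'  'SELECT 2;' > backup.sql

# Keep only the firefish section: start keeping at "\connect firefish",
# stop keeping at the next "\connect" line.
awk '/^\\connect /{keep = ($2 == "firefish")} keep' backup.sql > upgrade.sql

cat upgrade.sql
```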
Create a shell script with the above content, save it as script.sh, and use it to process the current backup.sql:
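For example:

```sh
sh script.sh
```

Or mark it executable first with `chmod +x script.sh` and run `./script.sh`.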
If everything goes well, a file named upgrade.sql will appear in the current directory. Open it to check that the extraction worked correctly.
Importing Existing Data into the New Database
Modify docker-compose.yml:
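The updated db service might look like this sketch; the image tag is a placeholder, so pick the current PGroonga image for the Postgres major version you are targeting from Docker Hub. Note the changed volume path:

```yaml
services:
  db:
    restart: unless-stopped
    # Placeholder tag: choose the current PGroonga release for your
    # target Postgres major version from Docker Hub.
    image: groonga/pgroonga:<version>-alpine-<pg-major>-slim
    container_name: firefish_db
    environment:
      POSTGRES_USER: example-firefish-user
      POSTGRES_DB: firefish
      POSTGRES_PASSWORD: example-password   # must match the old password
    volumes:
      - ./database:/var/lib/postgresql/data   # new directory; old ./db is kept
```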
Notice that the database volume mapping has changed from ./db:/var/lib/postgresql/data to ./database:/var/lib/postgresql/data. This gives the new database a fresh data directory while preserving the old persistent data, so even if something goes wrong, you can always revert to the old database.
Pull and start the new container:
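Assuming the service is still named db:

```sh
# Fetch the new image, then start only the database service.
docker compose pull db
docker compose up -d db
```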
Import upgrade.sql into the new database:
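A sketch, assuming the user and database names from the table at the top:

```sh
# Feed upgrade.sql into psql inside the new container.
# -T disables the pseudo-TTY so stdin redirection works.
docker compose exec -T db psql -U example-firefish-user -d firefish < upgrade.sql
```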
Depending on the size of the database, the import process may also take a long time.
Finishing Up
Once the import is complete, start the entire orchestration:
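For example:

```sh
# Run in the foreground first to watch the logs; once everything looks
# healthy, stop with Ctrl+C and start again with -d.
docker compose up
```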
For the first startup after the import, it is recommended not to use the -d flag. Start in the foreground to make sure nothing goes wrong during startup, then restart with -d.
Finally, log in to the just-started Firefish instance and check for any data loss. If everything is fine, congratulations, you’re done! 🎉
Official documentation: https://www.postgresql.org/docs/current/pgupgrade.html ↩︎
Reference: https://thomasbandt.com/postgres-docker-major-version-upgrade ↩︎