This article lists the prerequisites, restrictions, and steps for upgrading an EPAS BigAnimal cluster to a new version using logical replication.
In this article, the older cluster to be upgraded is called the source cluster, and the cluster created with the more recent version is called the target cluster.
REQUIREMENTS
- The superuser role is needed and may be assigned to edb_admin or to a dedicated upgrade user.
It can be granted using the BigAnimal portal or by sending a request to BigAnimal support.
- All tables to be replicated must have a primary key or an identity column defined.
- The target BigAnimal cluster, with the newer EPAS version, should consist of a single node.
Before going live, the cluster should be changed to add additional nodes for High Availability.
- You can read our suggestions for different kinds of migration here:
EDB BigAnimal Migrations
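As a quick check for the primary-key requirement above, a catalog query along these lines (a sketch; adjust the schema filter to your environment) lists user tables that have no primary key and are therefore candidates to break replication of UPDATE and DELETE statements:

```sql
-- Run against the source database: list ordinary tables without a primary key.
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
      SELECT 1 FROM pg_constraint con
      WHERE con.conrelid = c.oid AND con.contype = 'p'
  );
```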
MAIN LIMITATIONS
- DDL statements are not replicated.
So, during replication, no schema changes can be performed on the primary (publisher) cluster.
- Large objects cannot be replicated.
- Sequence data is not replicated.
A final sequence update may be needed to set the right values.
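To see whether these limitations affect a database, the following catalog queries (a sketch using the standard Postgres views) count large objects and list the sequences whose values will need to be re-synchronized on the target before going live:

```sql
-- Count large objects, which logical replication will not carry over.
SELECT count(*) AS large_objects FROM pg_largeobject_metadata;

-- List sequences and their current values on the source.
SELECT schemaname, sequencename, last_value FROM pg_sequences;
```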
STEPS
- Collect cluster information, such as cluster parameters and settings, and create a new BigAnimal cluster with similar settings.
- Test the database and schema upgrade.
Dump the users, the database, and the schema structure from the source cluster and import them into the target cluster.
Check the logs and, if you find any errors, fix them before proceeding.
Errors may be due to the different versions and may concern data types, functions, procedures, and other database objects.
Also pay attention to the user grants.
- Test logical replication.
This means creating a publication in the source database and a subscription in the target database, initially loading all the data stored in the source cluster into the target cluster, and then establishing continuous replication.
Check for errors and fix them before proceeding.
Errors can be related to networking, data types, and other database features.
- Test the application against the target cluster.
Check whether any error is logged by the application and compare the query results.
Sometimes queries run against different versions may return different result sets, or the same result set ordered in a different way.
Pay attention also to the performance.
Running a benchmark may be useful at this stage.
- Perform the upgrade process in your production environment.
- Finally, test the application against the production cluster and go live if no issues are found.
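As one way to run the benchmark suggested above, pgbench can be pointed at both clusters and the reported transactions-per-second compared. This is only a sketch: the connection details and the <Scratch_DB> name are placeholders, and pgbench -i creates its own pgbench_* tables, so run it against a scratch database rather than your application data.

```shell
# Initialize the pgbench tables (scale factor 10) in a scratch database.
pgbench -h <Target_host> -p 5432 -U edb_admin -i -s 10 <Scratch_DB>

# Run a 60-second benchmark with 8 concurrent clients; repeat against the
# source cluster and compare the reported TPS figures.
pgbench -h <Target_host> -p 5432 -U edb_admin -c 8 -T 60 <Scratch_DB>
```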
SIMPLE EXAMPLE
- Create the target cluster, setting the parameters, maintenance windows, backups, and other features like extensions and libraries, to make the target cluster similar to the source one.
Pay special attention to the following parameters.
The default values should be fine, but they may need to be increased.
We have tested upgrades using the default parameter values without finding any issue.
max_replication_slots
max_wal_senders
max_logical_replication_workers
max_worker_processes
- Create a script to reset sequences after the data loads in the target, if needed.
You may use a select like the following against the source cluster (it reads the values from pg_sequences, since currval() is only defined after nextval() has been called in the same session):
SELECT 'ALTER SEQUENCE ' || schemaname || '.' || sequencename || ' RESTART WITH ' || (coalesce(last_value, 0) + 1) || ';' AS "Sequences"
FROM pg_sequences;
- Export globals, database, and schema structures from the source cluster.
You may use the following commands. Please make sure you use the EDB utilities; the community edition ones may not be compatible.
Globals (users,...)
pg_dumpall -g -h <Source cluster> -p <Source port> -U <Source_user> > globals.dump
Structure
pg_dump -s -h <Source cluster> -p <Source port> -U <Source_user> <DB_name> > DB.dump
- Import into the target cluster first the globals and then the database structure.
Fix any issues as described in the STEPS section.
First, create the database you want to upgrade in the target cluster.
CREATE DATABASE <DB_name>;
Then import
psql -h <Target_host> -p 5432 -U edb_admin <DB_name> -f globals.dump
psql -h <Target_host> -p 5432 -U edb_admin <DB_name> -f DB.dump
- Establish logical replication between the source and target clusters (you may replicate just one or more databases, or even just a few tables).
Create a publication for all tables in the source cluster
This is at database level
CREATE PUBLICATION <publication_name> FOR ALL TABLES;
Create a subscription for that publication from the target cluster:
CREATE SUBSCRIPTION <subscription_name> CONNECTION 'host=<Source_host> port=<Source_port> dbname=<Source_db_name> user=<Source_user> password=<Source_passwd>' PUBLICATION <publication_name>;
If the source and target cluster are hosted in the same BigAnimal project and region there should be no networking issue.
In case of any problem, please get in touch with BigAnimal Support.
- Test the initial load and the ongoing replication.
Fix the sequences by updating their values.
You may run against the target the output from point 2.
- Test the application and fix any issues.
- Repeat the operation in the production environment, test, and remember, before or just after the go-live, to add nodes to the target cluster.
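One way to verify that replication is established and caught up before the go-live is to query the standard Postgres statistics views (a sketch; these views are standard Postgres, not BigAnimal-specific):

```sql
-- On the target cluster: the subscription worker should be running and
-- its received LSN advancing.
SELECT subname, received_lsn, latest_end_lsn, latest_end_time
FROM pg_stat_subscription;

-- On the source cluster: the walsender for the subscription should show
-- a small (ideally zero) gap between sent and replayed LSNs.
SELECT application_name, state,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```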
To interrupt the logical replication between source and target cluster you may do the following:
Run against target cluster
DROP SUBSCRIPTION <subscription_name>;
Run against source cluster
DROP PUBLICATION <publication_name>;
FINAL ADVICE
For any issue, or if you need any suggestions, do not hesitate to get in touch with BigAnimal Support.