By default, 'max_connections' is set to 100 on a cluster, which is the PostgreSQL default. You can set this parameter when creating the cluster or update it after the cluster has been created.
What should I set max_connections to?
The article The Challenges of Setting max_connections and Why You Should Use a Connection Pooler is highly recommended reading, and BigAnimal's reserved connections must be taken into account.
BigAnimal reserved connections:
1. EDB Postgres for Kubernetes on BigAnimal reserves no more than 3 superuser connections for the following purposes:
- The readiness probe checks if the database is up and able to accept connections;
- The creation of specific roles required in PostgreSQL instances.
2. Some tools and extensions use superuser connections internally, for example:
- edb_wait_states
- pg_cron
- pg_failover_slots
- barman backup/restore
3. The streaming_replica connection on an HA cluster.
In summary, roughly 10 connections are reserved by BigAnimal. With the default 'max_connections' of 100, about 90 connections remain available for applications.
Do not grant superuser privileges to application users, since they can then consume the reserved connections, resulting in service degradation.
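To see how close you are to the limit, you can query pg_stat_activity on the cluster. This is a generic PostgreSQL sketch (not BigAnimal-specific), assuming you can connect with a role allowed to view other sessions:

```sql
-- Current connection count vs. the configured limit
SELECT count(*) AS in_use,
       current_setting('max_connections')::int AS max_connections
FROM pg_stat_activity;

-- Which roles hold the connections; superuser sessions eat into
-- the reserved slots described above
SELECT usename, count(*) AS sessions
FROM pg_stat_activity
GROUP BY usename
ORDER BY sessions DESC;
```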
When should I update max_connections?
A common reason to increase 'max_connections' is application requests failing with:
FATAL: sorry, too many clients already
Before raising the limit, there are a few items to review when you see this error:
- Check whether a connection pooler such as PgBouncer or Pgpool is enabled to multiplex client connections; PgBouncer is usually the right choice. BigAnimal supports EDB PgBouncer natively, and you can enable it from the BigAnimal UI. See Getting Started with PgBouncer on EDB BigAnimal.
- Review whether applications reuse connections, whether there are connection leaks or overly aggressive retries, and the application's connection pooling design;
- Review cluster resource utilization to check for CPU, memory, or I/O bottlenecks that prevent connections from being released;
- Review connection metrics to see whether there are too many idle or idle-in-transaction connections, and whether timeout settings can help;
- Review the HA cluster architecture to see whether all requests go to the primary node and whether they can be balanced. BigAnimal supports read-only workloads, which direct read-only operations exclusively to replicas.
Ideally, review all of the above regularly and update 'max_connections' before the application fails due to connection exhaustion.
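If the review shows sessions sitting idle inside transactions, a timeout is often more effective than raising the limit. As a sketch, the standard PostgreSQL parameter 'idle_in_transaction_session_timeout' can be scoped per database or per role; the database name and the 5-minute value below are examples, not recommendations:

```sql
-- Terminate sessions that stay idle inside an open transaction
-- for more than 5 minutes ('app_db' is a hypothetical database name)
ALTER DATABASE app_db SET idle_in_transaction_session_timeout = '5min';

-- Verify the setting in a new session on that database
SHOW idle_in_transaction_session_timeout;
```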
BigAnimal connections metrics:
The BigAnimal portal shows connection metrics on the Monitoring & Logging page.
BigAnimal exposes the connection metrics and offers a few monitoring options (see https://www.enterprisedb.com/docs/biganimal/latest/using_cluster/05_monitoring_and_logging/), so you can get detailed connection information, including the activity status:
- active - doing read, update, insert, commit, or commit wait on standby;
- idle - waiting for a client command;
- idle in transaction - inside a BEGIN block, waiting for the client to issue the next command or a commit/rollback.
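The same breakdown by state is available directly from pg_stat_activity; a minimal query, filtering to client sessions so background workers are excluded:

```sql
-- Count client sessions per activity state
SELECT state, count(*) AS sessions
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY sessions DESC;
```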
How can I update max_connections?
Please note that changing 'max_connections' requires a restart of the database server. To update 'max_connections', go to the Edit Cluster page in the BigAnimal portal:
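After the change, you can confirm the effective value on the cluster itself. This uses only standard PostgreSQL views; the pending_restart flag shows whether a configured value is still waiting for the restart to take effect:

```sql
-- The value the running server is using
SHOW max_connections;

-- Whether a changed value is waiting for a restart
SELECT name, setting, pending_restart
FROM pg_settings
WHERE name = 'max_connections';
```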
Replica cluster max_connections:
If you have a replica cluster, you must set 'max_connections' on the replica cluster to a value equal to or higher than on the primary cluster. The WAL tracks changes to this parameter on the primary. If the replica cluster detects that 'max_connections' on the primary cluster is higher than its own value, recovery is paused (a 'recovery has paused' message appears in the PostgreSQL logs).
To perform the update safely:
- If you want to increase 'max_connections', update the parameter on the replica cluster first, before applying the change to the primary cluster;
- If you want to decrease 'max_connections', do so on the primary cluster first, before applying the change to the replica cluster.
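To check whether a replica has hit this condition, you can query the standard PostgreSQL recovery functions on the replica; this is a generic sketch, not BigAnimal-specific:

```sql
-- On the replica: the limit it is running with
SHOW max_connections;

-- Confirm this node is a standby and whether WAL replay has paused
SELECT pg_is_in_recovery() AS is_standby,
       pg_is_wal_replay_paused() AS replay_paused;
```

If replay_paused is true after a primary-side increase, raise 'max_connections' on the replica and restart it so replay can resume.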