Pgbench is a simple program for running benchmark tests on PostgreSQL. It runs the same sequence of SQL commands over and over, possibly in multiple concurrent database sessions, and then calculates the average transaction rate (transactions per second). By default, pgbench tests a scenario that is loosely based on TPC-B, involving five SELECT, UPDATE, and INSERT commands per transaction. However, it is easy to test other cases by writing your own transaction script files.
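For example, a custom script can be passed with the -f option. Below is a minimal sketch (the file name bank.sql is just an example; the table and column names come from the default schema that pgbench -i creates, and :scale is pgbench's automatic scale-factor variable):
\set aid random(1, 100000 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
END;
It would then be run with something like:
pgbench -U postgres -p 5555 -f bank.sql -c 10 -t 1000 mydb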
Typical output from pgbench looks like:
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 10000/10000
latency average = 11.013 ms
latency stddev = 7.351 ms
initial connection time = 45.758 ms
tps = 896.967014 (without initial connection time)
Below is an example of using pgbench:
Initialize the pgbench database – this creates the sample tables in the database.
pgbench -U postgres -p 5555 -i -s 50 mydb
-i initialize the database. Creates a set of default tables.
-s scaling option, i.e. multiply the default number of rows by 50 (or whatever scaling factor you require).
This step will create 4 tables, which hold the test data rows and other metadata:
Schema | Name | Type | Owner
--------+------------------+-------+----------
public | pgbench_accounts | table | postgres
public | pgbench_branches | table | postgres
public | pgbench_history | table | postgres
public | pgbench_tellers | table | postgres
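With -s 50, pgbench_accounts is loaded with 5,000,000 rows (100,000 per scale unit), which you can spot-check with psql, for example:
psql -U postgres -p 5555 -d mydb -c "SELECT count(*) FROM pgbench_accounts;"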
Establish a baseline:
pgbench -U postgres -p 5555 -c 10 -j 2 -t 10000 mydb
-c number of clients
-j number of threads
-t number of transactions per client
With these values, each of the 10 clients runs 10,000 transactions, so: 10 x 10,000 = 100,000 transactions in total.
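Once the baseline is recorded, the same command can be re-run after each configuration change to compare TPS. A read-only variant is also useful for comparison; for example, the built-in select-only script is selected with -S:
pgbench -U postgres -p 5555 -c 10 -j 2 -t 10000 -S mydb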
How to run pgbench:
1- Use the psql client to connect to PostgreSQL.
2- Initialize pgbench to create sample tables.
pgbench -h [hostname] -p [Port] -U [user name] -i -s [n] [Database name]
3- Run pgbench:
pgbench -h [hostname] -p [Port] -U [user name] -c [n] -j [n] -t [n] [Database name]
Below is an example of connecting to BigAnimal from an Azure virtual machine and running pgbench:
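A sketch of such a run is shown here (the cluster hostname and database name are placeholders for your own BigAnimal connection details; edb_admin and port 5432 are only typical defaults and may differ in your cluster):
pgbench -h [BigAnimal cluster hostname] -p 5432 -U edb_admin -i -s 50 [Database name]
pgbench -h [BigAnimal cluster hostname] -p 5432 -U edb_admin -c 10 -j 2 -t 10000 [Database name]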
For more options, please check:
https://www.postgresql.org/docs/current/pgbench.html
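Two options worth noting are -T (run for a fixed duration in seconds instead of a fixed transaction count) and -P (print a progress report every n seconds). For example, a 60-second run reporting every 10 seconds might look like:
pgbench -U postgres -p 5555 -c 10 -j 2 -T 60 -P 10 mydb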
PG Collector:
PG Collector for PostgreSQL is a SQL script that gathers valuable database information and presents it in a consolidated HTML file. It provides insights into various aspects of the database, such as database size, configuration parameters, installed extensions, vacuum & statistics, etc.
How to run the PG Collector script with BigAnimal (pg_collector.sql)
1- You need psql to be able to connect to PostgreSQL and run the pg_collector.sql script.
2- Download pg_collector.sql to your laptop or to the host that you want to access the database from.
3- Log in to the database using psql:
psql -h [hostname or RDS endpoint] -p [Port] -d [Database name] -U [user name]
4- Run the pg_collector.sql script
\i pg_collector.sql
\q
or use the -f option in psql:
psql -h [hostname or RDS endpoint] -p [Port] -d [Database name] -U [user name] -f pg_collector.sql
5- Open the report using any internet browser
Download pg_collector.sql by cloning the Git repository or downloading the file from the link below.
https://github.com/awslabs/pg-collector
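For example, cloning the repository would look like this (pg_collector.sql should then be available in the cloned directory):
git clone https://github.com/awslabs/pg-collector.git
cd pg-collector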
Notes: The following error is due to changes in the SQL statements between PostgreSQL versions (the SQL scripts need to be updated accordingly for each PostgreSQL version).
HTML file: Download the HTML file and open it in a browser.