Technology makes us need less merit to achieve the same results.

good (?) reasons to use an entity-attribute-value design

© Laurenz Albe 2021

Customers have often asked me what I think of “Entity-attribute-value” (EAV) design. So I thought it would be a good idea to lay down my opinion in writing.

What is entity-attribute-value design?

The idea is not to create a table for each entity in the application. Rather, you store each attribute as a separate entry in an attribute table:

CREATE TABLE objects (
objectid bigint PRIMARY KEY
/* other object-level properties */
);

CREATE TABLE attstring (
objectid bigint
REFERENCES objects ON DELETE CASCADE NOT NULL,
attname text NOT NULL,
attval text,
PRIMARY KEY (objectid, attname)
);

CREATE TABLE attint (
objectid bigint
REFERENCES objects ON DELETE CASCADE NOT NULL,
attname text NOT NULL,
attval integer,
PRIMARY KEY (objectid, attname)
);

/* more tables for other data types */

The name of the model is derived from the “att...” tables, which have the three columns: “entity ID”, “attribute name” and “value”.

There are several variations of the basic theme, among them:

  • omit the objects table
  • add additional tables that define “object types”, so that each type can only have certain attributes

Why would anybody consider an entity-attribute-value design?

The principal argument I hear in support of the EAV design is flexibility. You can create new entity types without having to create a database table. Taken to the extreme, each entity can have different attributes.

I suspect that another reason for people to consider such a data model is that they are more familiar with key-value stores than with relational databases.

Performance considerations of entity-attribute-value design

In my opinion, EAV database design is the worst possible design when it comes to performance. You will never get good database performance with such a data model.

The only use case where EAV shines is when it is used as a key-value store.

INSERT

Inserting an entity will look like this:

INSERT INTO objects (objectid) VALUES (42);

INSERT INTO attstring (objectid, attname, attval)
VALUES (42, 'name', 'myobject');

INSERT INTO attint (objectid, attname, attval)
VALUES (42, 'start', 100),
(42, 'end', 1000);

That means that we insert four rows into three tables and have four index modifications. Also, the three statements will require three client-server round trips. You can save on the round trips by using CTEs to turn that into a single statement, or by using the new pipeline mode of libpq. Still, it will be much more expensive than inserting a single table row.
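For illustration, the three statements above could be combined into a single statement with data-modifying CTEs, roughly like this (a sketch using the same example values):

WITH object AS (
   INSERT INTO objects (objectid) VALUES (42)
), string_attributes AS (
   INSERT INTO attstring (objectid, attname, attval)
   VALUES (42, 'name', 'myobject')
)
INSERT INTO attint (objectid, attname, attval)
VALUES (42, 'start', 100),
       (42, 'end', 1000);

This saves the extra round trips, but the server still has to insert four rows and maintain four index entries.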

DELETE

If you use cascading delete, you could do that in a single statement:

DELETE FROM objects WHERE objectid = 42;

Still, you will end up deleting four table rows and modifying four indexes. That’s much more work than deleting a single table row.

UPDATE

A single-column update could actually be faster with the EAV design, because only one small table row is modified:

UPDATE attint
SET attval = 2000
WHERE objectid = 42 AND attname = 'end';

 

But if you have to modify several columns, you will need to run several UPDATE statements. That will be slower than if you only had to modify a single (albeit bigger) table row.

SELECT

Querying the attributes of an entity requires a join:

SELECT s.attval AS "name",
       ai1.attval AS "start",
       ai2.attval AS "end"
FROM objects AS o
   LEFT JOIN attstring AS s USING (objectid)
   LEFT JOIN attint AS ai1 USING (objectid)
   LEFT JOIN attint AS ai2 USING (objectid)
WHERE objectid = 42
  AND s.attname = 'name'
  AND ai1.attname = 'start'
  AND ai2.attname = 'end';

Alternatively, you could run three separate queries, one for each attribute. No matter how you do it, it will be less efficient than a single-row SELECT from a single table.

Single-column aggregates

As an example for a query that might be faster with the EAV model, consider a query that aggregates data from a single column:

SELECT sum(attval) AS total
FROM othertab
   JOIN attint USING (objectid)
WHERE othertab.col = 'x'
  AND attint.attname = 'attendants';

With a covering index on attint(objectid, attname) INCLUDE (attval), this could be quite a bit faster than aggregating a column from a wider table.
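Such a covering index could be created like this (PostgreSQL 11 or later; the INCLUDE clause stores attval in the index so the aggregate can be answered with an index-only scan):

CREATE INDEX attint_covering_idx
   ON attint (objectid, attname) INCLUDE (attval);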

More complicated queries

After these examples, it is clear that writing more complicated queries will be a pain with the EAV design. Imagine a simple join:

SELECT e1a1.attval AS person_name,
       e1a2.attval AS person_id,
       e2a1.attval AS address_street,
       e2a2.attval AS address_city
FROM attint AS e1a2
   JOIN attstring AS e1a1
      ON e1a2.objectid = e1a1.objectid
   LEFT JOIN attint AS e2a0
      ON e1a2.attval = e2a0.attval
   LEFT JOIN attstring AS e2a1
      ON e2a0.objectid = e2a1.objectid
   LEFT JOIN attstring AS e2a2
      ON e2a0.objectid = e2a2.objectid
WHERE e1a1.attname = 'name'
  AND e1a2.attname = 'persnr'
  AND e2a0.attname = 'persnr'
  AND e2a1.attname = 'street'
  AND e2a2.attname = 'city';

If you think that this query is hard to read, I agree with you. In a normal relational data model, the same operation could look like this:

SELECT person.name AS person_name,
       persnr AS person_id,
       address.street,
       address.city
FROM person
   LEFT JOIN address
      USING (persnr);

You can guess which query will perform better.

But we need an entity-attribute-value design for flexibility!

Relational data models are not famous for their flexibility. After all, that is the drive behind the NoSQL movement. However, there are good ways to deal with variable entities.

Creating tables on the fly

Nothing keeps you from running statements like CREATE TABLE and CREATE INDEX from your application. So if there is a limited number of entity types, and each type has a certain number of attributes, you can easily model that with a traditional relational model.

Certain problems remain:

  • A data model that grows on the fly may not end up being well designed. But that is no different with the entity-attribute-value design.
  • If the application has to create tables, it needs permission to do so. But today, when many applications create their own database tables anyway, few people will worry about that.

Creating tables on the fly will only work well if the set of attributes for each entity is well-defined. If that is not the case, we need a different approach.

Using JSON for a flexible data model

PostgreSQL has extensive JSON support that can be used to model entities with a variable number of attributes.

For that, you model the important and frequently occurring attributes as normal table columns. Then you add an additional column of type jsonb with a GIN index on it. This column contains the “rare attributes” of the entity as key-value pairs.
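A minimal sketch of such a table (all names here are made up for illustration):

CREATE TABLE product (
   productid  bigint PRIMARY KEY,
   name       text NOT NULL,
   price      numeric(10,2) NOT NULL,
   attributes jsonb            /* rarely used, entity-specific attributes */
);

CREATE INDEX product_attributes_gin ON product USING gin (attributes);

/* find entities that have a certain rare attribute */
SELECT productid, name
FROM product
WHERE attributes @> '{"color": "red"}';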

When using a model like this, you should take care that attributes

  • used in joins
  • on which you need a database constraint
  • that you want to use in a WHERE condition with an operator different from “=”

are modeled as regular table columns.

Conclusion

Avoid entity-attribute-value designs in your relational database. EAV causes bad performance, and there are other ways to have a flexible data model in PostgreSQL.

Source: Laurenz Albe @ cybertec-postgresql.com

Suppose that you want to be completely over-the-top paranoid about making sure that when you execute some particular SQL statement on your Postgres database, you’re doing it in the safest and least risky way?

For example, suppose it’s the production database behind your successful startup’s main commercial website. If anything causes queries to block or pause for even a few minutes, people will quickly be tweeting about how they can’t place orders, and it will hurt both your company’s revenue and reputation.

You know that it’s really important to save regular snapshots and keep a history of important metrics. You’re writing some of your own code to capture a few specific stats like the physical size of your most important application tables or maybe the number of outstanding orders over time. Maybe you’re writing some java code that gets scheduled by quartz, or maybe some python code that you’ll run with cron.

Or another situation might be that you’re planning to make an update to your schema – adding a new table, adding a new column to an existing table, modifying a constraint, etc. You plan to execute this change as an online operation during the weekly period of lowest activity on the system – maybe it’s very late Monday night, if you’re in an industry that’s busiest over weekends.

How can you make sure that your SQL is executed on the database in the safest possible way?

Here are a few ideas I’ve come up with:

  • Setting connect_timeout to something short, for example 2 seconds.
  • Setting lock_timeout to something appropriate. For example, 2ms on queries that shouldn’t be doing any locking. (I’ve seen entire systems brown-out because a “quick” DDL had to get in line behind an app transaction, and then all the new app transactions piled up behind the DDL that was waiting!)
  • Setting statement_timeout to something reasonable for the query you’re running – thus putting an upper bound on execution time. A session-level sketch of these timeout settings follows this list.
  • Using an appropriate client-side timeout, for cases when the server fails to kill the query using statement_timeout. For example, in Java the Statement class has native support for this.
  • When writing SQL, fully qualify names of tables and functions with the schema/namespace. (This can be a security feature; I have heard of attacks where someone manages to change the search_path for connections.)
  • Check at least one explain plan and make sure it’s doing what you would expect it to be doing, and that it seems likely to be the most efficient way to get the information you need.
  • Don’t use system views that join in unneeded data sources; go directly to the needed raw relation or raw function.
  • Access each data source exactly once, never more than once. In that single pass, get all data that will be needed. Analytic or window functions are very useful for avoiding self-joins.
  • Restrict the user to minimum needed privileges. For example, the pg_read_all_stats role on an otherwise unprivileged user might be useful.
  • Make sure your code has back-off logic for when failures or unexpected results are encountered.
  • Prevent connection pile-ups resulting from database slowdowns or hangs. For example, by using a dedicated client-side connection pool with dynamic sizing entirely disabled or with a small max pool size.
  • Run the query against a physical replica/hot standby (e.g. pulling a metric for the physical size of important tables) or logical copy (e.g. any query against application data), instead of running the query against the primary production database. (However, note that when hot_standby_feedback is enabled, long-running transactions on the PostgreSQL hot standby can still impact the primary system.)
  • For all DDL, carefully check the level of locking that it will require and test to get a feel for possible execution time. Watch out for table rewrites. Many DDLs that used to require a rewrite no longer do in current versions of PostgreSQL, but there are still a few out there. ALTER TABLE statements must be evaluated very carefully. Frankly ALTER TABLE is a bit notorious for being unclear about which incantations cause table rewrites and which ones don’t. (I have a friend who just tests every specific ALTER TABLE operation first on an empty table and then checks if pg_class changes show that a rewrite happened.)
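As a rough sketch of the timeout settings mentioned above (the values and the ALTER TABLE statement are purely illustrative; connect_timeout is a libpq connection parameter rather than a server setting):

-- connect_timeout is set on the connection, e.g.: psql "host=db.example.com dbname=shop connect_timeout=2"
SET lock_timeout = '2s';          -- give up quickly instead of queueing behind other lockers
SET statement_timeout = '30s';    -- upper bound on execution time

ALTER TABLE orders ADD COLUMN note text;   -- the statement you actually want to run

RESET lock_timeout;
RESET statement_timeout;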

 

What am I missing? What other ideas are out there for executing SQL in Postgres with a “paranoid” level of safety?

Source: Jeremy from ardentperf.com

Tables in PostgreSQL have an index for the primary key and separate row storage called the heap. Other databases integrate these together or support “index-organized tables”. In this arrangement, the primary key lookup process leads directly to the row data without a secondary fetch to get the full row and the requisite additional CPU and I/O utilization.

The CLUSTER command in PostgreSQL reorganizes a table according to an index to improve performance, but doesn’t really work for most real-world OLTP cases. It rewrites the entire table under an exclusive lock, blocking any reads or writes. PostgreSQL doesn’t maintain the clustered layout for new data, so this operation must be run periodically. So it is really only useful if you can take your database offline for long periods of time on a regular basis.
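For reference, the command takes the table and the index to order it by (the table and index names here are placeholders):

CLUSTER mytable USING mytable_pkey;   /* rewrites the table in index order under an exclusive lock */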

But more critically, index-organized tables save space as the index doesn’t require a separate copy of the row data. For tables with small rows that are mostly covered by the primary key, such as join tables, this can easily cut the table’s storage footprint in half.

Consider the following table which stores social “likes” for arbitrary objects:

CREATE TABLE likes (
 object_type INTEGER NOT NULL,
 object_id BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY,
 user_id BIGINT NOT NULL,
 created_at TIMESTAMP WITH TIME ZONE NOT NULL,
 PRIMARY KEY(object_type, object_id, user_id)
);

PostgreSQL will maintain an index for the primary key which is separate from the base table storage. This index will contain a full copy of the object_type, object_id, and user_id columns for every row. 20 out of 28 bytes in each row (~70%) will be duplicated. If PostgreSQL supported index-organized tables, it wouldn’t consume all that additional space.
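To see that duplication in practice, you could compare the size of the heap and of the primary-key index once the table holds some data (pg_table_size and pg_relation_size are standard PostgreSQL functions; likes_pkey is the default name of the primary-key index):

SELECT pg_size_pretty(pg_table_size('likes'))         AS heap_size,
       pg_size_pretty(pg_relation_size('likes_pkey')) AS pk_index_size;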

Source: medium.com/@rbranson/10-things-i-hate-about-postgresql-20dbab8c2791

Would you like to learn how to monitor a PostgreSQL server using Zabbix? In this tutorial, we are going to show you how to use Zabbix to monitor a PostgreSQL server.


Configure a PostgreSQL user for Zabbix

• IP – 192.168.15.11
• Operating System – Ubuntu 19.10
• Hostname – POSTGRESQL

Zabbix requires an account to access the PostgreSQL server and perform the database monitoring.

On the PostgreSQL database server, create an account for Zabbix.

Give monitoring privilege over all PostgreSQL databases to the Zabbix user.

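The original commands are not reproduced here; a minimal sketch of what they typically look like follows. The user name zbx_monitor and the password are only examples, and pg_monitor is the built-in PostgreSQL monitoring role available since PostgreSQL 10:

CREATE USER zbx_monitor WITH PASSWORD 'MyStrongPassword';
GRANT pg_monitor TO zbx_monitor;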

Take note of the PostgreSQL username and password created.

Locate and edit the pg_hba.conf configuration file.


Add the following lines at the beginning of this file.

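The tutorial’s exact entries are not included here. Entries along the following lines would let the zbx_monitor account connect locally without being asked for a password, which matches the test further below (this is only a sketch; use md5 or scram-sha-256 instead of trust if you prefer password authentication):

# TYPE  DATABASE  USER         ADDRESS        METHOD
host    all       zbx_monitor  127.0.0.1/32   trust
host    all       zbx_monitor  ::1/128        trust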

Here is our pg_hba.conf file content:


Restart the PostgreSQL service.

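On Ubuntu this is typically done with systemd; the service name may differ depending on how PostgreSQL was installed:

sudo systemctl restart postgresql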

Test if you are able to connect to the local PostgreSQL server using the zbx_monitor account.

The system should not ask for a password.

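One way to test is to connect with psql over TCP as the monitoring user (an illustrative invocation; with the trust entries sketched above, no password should be requested):

psql -h 127.0.0.1 -U zbx_monitor -d postgres -c "SELECT version();"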

You have finished the PostgreSQL service configuration.

Install the Zabbix Agent on the PostgreSQL Server

• IP – 192.168.15.11
• Operating System – Ubuntu 19.10
• Hostname – POSTGRESQL

Now, we need to install the Zabbix agent on the computer running the PostgreSQL service.

On the Linux console, use the following commands to install the required packages.


Download the Zabbix installation package.


Extract the Zabbix installation package, compile and install the Zabbix agent.


Find the location of the zabbix_agentd.conf file on your system.

Edit the zabbix_agentd.conf file.


Here is the original file, before our configuration.


Here is the new file with our configuration.


In our example, the Zabbix agent is configured to allow the connection from the Zabbix server 192.168.15.10.

The Zabbix server with the IP address 192.168.15.10 is allowed to request and receive information from this agent.

The Localhost, 127.0.0.1, is allowed to request and receive information from the agent.
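The relevant zabbix_agentd.conf directives would look roughly like this, using the addresses and hostname from this example (a sketch, not the tutorial’s literal file):

Server=192.168.15.10,127.0.0.1
ServerActive=192.168.15.10
Hostname=POSTGRESQL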

Create the following required directories.


Download the Zabbix source code from Github.


Copy the required files to monitor the PostgreSQL server from the Zabbix repository.


The Zabbix installation package comes with a service startup script.

Copy the startup script using the commands below.


You can now use the following commands to start the Zabbix agent service.


In our example, the Zabbix agent will monitor the PostgreSQL service installed on the localhost.

Restart the Zabbix Agent.


You have finished the Zabbix agent installation.

You can now use the Zabbix server dashboard to add this computer to the network monitoring service.

Zabbix Monitor PostgreSQL

Access the Zabbix server dashboard and add the Linux computer running PostgreSQL as a Host.

Open your browser and enter the IP address of your web server plus /zabbix.

In our example, the following URL was entered in the Browser:

• http://192.168.15.10/zabbix

On the login screen, use the default username and default password.

• Default Username: Admin
• Default Password: zabbix

zabbix login

After a successful login, you will be sent to the Zabbix Dashboard.

Zabbix 4.4 dashboard

On the dashboard screen, access the Configuration menu and select the Host option.

zabbix add host

On the top right of the screen, click on the Create host button.

Zabbix Create Host

Enter the following information:

• Host Name – Enter a Hostname to identify the Linux server running PostgreSQL.
• Visible Hostname – Repeat the hostname.
• Group – Select the name of a group to identify similar devices.
• Agent Interface – Enter the IP address of the Linux server.

Here is the original image, before our configuration.

zabbix linux - before

Here is the new image with our configuration.

zabbix monitor postgresql

Next, we need to associate the host with a specific network monitor template.

By default, Zabbix comes with a large variety of monitoring templates.

Access the Templates tab on the top of the screen.

Click on the Select button and locate the template named: Template DB PostgreSQL

zabbix monitoring postgresql

Click on the Add option.

Click on the Add button.

After a few minutes, you will be able to see the initial result on the Zabbix Dashboard.

In order to test your configuration, access the Monitoring menu and click on the Graphs option.

Zabbix Graphs Menu

On the top right of the screen, select the group named ALL.

Select your PostgreSQL computer hostname.

Select the graph named: PostgreSQL connections

monitor postgresql server

You should be able to see the graph named PostgreSQL connections.

postgresql monitor zabbix

Congratulations! You have configured the Zabbix server to monitor PostgreSQL.

Source: https://techexpert.tips/zabbix/monitor-postgresql-using-zabbix/

Many developers still install a database on their computer and work on it.
That can be a reasonable approach, but with Docker it only takes seconds to set up a database. I used PostgreSQL and Docker in my latest project and gained both efficiency and convenience. I will share my experience with PostgreSQL and Docker.
Let's start by finding the Docker image we will use. There are two easy ways to do this. I prefer to visit the Docker Hub website. Alternatively, you can use the command below; it will list results similar to what you see on Docker Hub.
docker search postgresql
I will use postgres, the official image that shows up in the search results. I have prepared a Docker command with the important and necessary parameters. You can easily customize and use this command.
docker run --name postgresqldb -e POSTGRES_USER=myusername -e POSTGRES_PASSWORD=mypassword -p 5432:5432 -v /data:/var/lib/postgresql/data -d postgres
It looks a little confusing, but don't worry. I will explain these parameters.
  • --name : I set a name for the container.
  • -e POSTGRES_USER : I set the database username.
  • -e POSTGRES_PASSWORD : I set the database password.
  • -p : Port 5432 is used by default, but you can change this according to your preference. The local port and the container port can be set separately: the first port is the local one, the second is the container's.
  • -v : This parameter is related to the Docker volume feature, which is important for not losing data. With it, I map a folder of my choice on the computer or server to the folder containing the database's data inside the container. When the container is stopped or restarted for any reason, it can easily be started again with the same data. The first path can be any folder you prefer; I set it to /data as an example. Please do not change the second path, because that is where the data lives by default.
  • -d : I run the container in detached mode (background mode). Without it, the Docker container stops when you press Ctrl + C or close the terminal. That is undesirable, because the container should keep running in the background.
  • postgres : Image name.
Let’s run the command together and see the result.
If you see a similar output, you can be sure that everything is fine. You can list the running Docker containers with the command below.
docker ps
You can also stop or start your container using the stop and start commands. In this example I set the container name to postgresqldb; run the commands with the name you chose.
docker start postgresqldb
docker stop postgresqldb
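To connect to the database once the container is up, one option is to run psql inside the container, using the container name and username chosen above:

docker exec -it postgresqldb psql -U myusername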
It may look like a lot because I explained every detail, but don't be put off: you can create a PostgreSQL database just by running a single Docker command.

 

Source: https://hackernoon.com/how-to-install-postgresql-with-docker-quickly-ki7g363m

I’ve installed 19.04, removed dnsmasq, and disabled systemd-resolved using

sudo systemctl disable systemd-resolved.service
sudo systemctl stop systemd-resolved

and want to use Network Manager to set my DNS servers manually. To configure a different DNS server, I went to the network icon on the bar and clicked Wired -> Wired Settings -> Configure -> IPV4, selected Automatic (DHCP), turned off automatic DNS, and entered my desired DNS servers. Even after rebooting, my /etc/resolv.conf file only gets 127.0.0.53, no matter what I do in the Network Manager configuration. Is there a way for the /etc/resolv.conf file to be configured by Network Manager?

I’ve found a solution.

First, disable systemd-resolved:

sudo systemctl disable systemd-resolved.service
sudo systemctl stop systemd-resolved

Remove the symlink for resolv.conf and create a file:

rm -rf /etc/resolv.conf
touch /etc/resolv.conf
chmod a+rw /etc/resolv.conf

Now change the configuration file for Network Manager to restore the expected result: vi /etc/NetworkManager/NetworkManager.conf:

[main]
plugins=ifupdown,keyfile
# add this line
dns=default

and leave the rest of the file as is. Restart Network Manager:

sudo /etc/init.d/network-manager restart

and configure your preferred DNS servers. Done!

https://askubuntu.com/questions/1159084/make-dns-follow-configuration-from-network-manager-in-19-04

Here are my tests; the original document is below:

yum install wal2json

create database daftest;
daftest=# CREATE TABLE test_table (
id char(10) NOT NULL,
code char(10),
PRIMARY KEY (id)
);


export PATH="$PATH:/usr/pgsql-12/bin"
pg_recvlogical
pg_recvlogical: error: no slot specified

pg_recvlogical -d daftest --slot test_slot --create-slot -P wal2json
pg_recvlogical -d daftest --slot test_slot --start -o pretty-print=1 -f - ## It will just sit there, as if waiting for something...

In a second terminal I run:
daftest=# insert into test_table (id,code) select 1, 'Hola';
INSERT 0 1

In the first terminal you see:

{
  "change": [
    {
      "kind": "insert",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["1 ", "Hola "]
    }
  ]
}

In the second: daftest=# update test_table set code = 'Chau' where id = '1';
UPDATE 1

In the first:
{
  "change": [
    {
      "kind": "update",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["1 ", "Chau "],
      "oldkeys": {
        "keynames": ["id"],
        "keytypes": ["character(10)"],
        "keyvalues": ["1 "]
      }
    }
  ]
}

In the second: daftest=# delete from test_table where id = '1';
DELETE 1

In the first:
{
  "change": [
    {
      "kind": "delete",
      "schema": "public",
      "table": "test_table",
      "oldkeys": {
        "keynames": ["id"],
        "keytypes": ["character(10)"],
        "keyvalues": ["1 "]
      }
    }
  ]
}

ctrl+C
pg_recvlogical -d daftest --slot test_slot --drop-slot

Logical Decoding Output Plug-in Installation for PostgreSQL

This document describes the database setup required for streaming data changes out of PostgreSQL. This comprises configuration applying to the database itself as well as the installation of the wal2json logical decoding output plug-in. The installation and the tests were performed in the following environment/configuration:

Similar steps need to be taken for other Postgres and OS versions and the Decoderbufs logical decoding plug-in which also is supported by Debezium.

As of Debezium 0.10, the connector supports PostgreSQL 10+ logical replication streaming using pgoutput. This means that a logical decoding output plug-in is no longer necessary and changes can be emitted directly from the replication stream by the connector.

Logical Decoding Plug-ins

Logical decoding is the process of extracting all persistent changes to a database’s tables into a coherent, easy to understand format which can be interpreted without detailed knowledge of the database’s internal state.

As of PostgreSQL 9.4, logical decoding is implemented by decoding the contents of the write-ahead log, which describe changes on a storage level, into an application-specific form such as a stream of tuples or SQL statements. In the context of logical replication, a slot represents a stream of changes that can be replayed to a client in the order they were made on the origin server. Each slot streams a sequence of changes from a single database. The output plug-ins transform the data from the write-ahead log’s internal representation into the format the consumer of a replication slot desires. Plug-ins are written in C, compiled, and installed on the machine which runs the PostgreSQL server, and they use a number of PostgreSQL specific APIs, as described by the PostgreSQL documentation.

Debezium’s PostgreSQL connector works with one of Debezium’s supported logical decoding plug-ins (wal2json or decoderbufs) to encode the changes in either JSON or Protobuf format.

For simplicity, Debezium also provides a Docker image based on a vanilla PostgreSQL server image on top of which it compiles and installs the plug-ins.

The Debezium logical decoding plug-ins have only been installed and tested on Linux machines. Windows and other platforms may require different installation steps.

Differences between Plug-ins

The plug-ins’ behaviour is not completely the same in all cases. So far, the following differences have been identified:

  • wal2json plug-in is not able to process quoted identifiers (issue)
  • wal2json plug-in does not emit events for tables without primary keys
  • wal2json plug-in does not support special values (NaN or infinity) for floating point types

All up-to-date differences are tracked in a test suite Java class.

More information about logical decoding and output plug-ins can be found in the PostgreSQL documentation and the plug-ins’ project pages.

Installation

In the current installation example, the wal2json output plug-in for logical decoding is used. The wal2json output plug-in produces a JSON object per transaction, with all of the new/old tuples available in the JSON object. The plug-in compilation and installation are performed by executing the related commands extracted from the Debezium Docker image file.

Before executing the commands, make sure that the user has the privileges to write the wal2json library to the PostgreSQL lib directory (in the test environment, the directory is /usr/pgsql-9.6/lib/). Also note that the installation process requires the PostgreSQL utility pg_config. Verify that the PATH environment variable is set so that the utility can be found; if not, update the PATH environment variable appropriately. For example, in the test environment:

export PATH="$PATH:/usr/pgsql-9.6/bin"
wal2json installation commands
$ git clone https://github.com/eulerto/wal2json -b master --single-branch \
&& cd wal2json \
&& git checkout d2b7fef021c46e0d429f2c1768de361069e58696 \
&& make && make install \
&& cd .. \
&& rm -rf wal2json
wal2json installation output
Cloning into 'wal2json'...
remote: Counting objects: 445, done.
remote: Total 445 (delta 0), reused 0 (delta 0), pack-reused 445
Receiving objects: 100% (445/445), 180.70 KiB | 0 bytes/s, done.
Resolving deltas: 100% (317/317), done.
Note: checking out 'd2b7fef021c46e0d429f2c1768de361069e58696'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at d2b7fef... Improve style
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC -I. -I./ -I/usr/pgsql-9.6/include/server -I/usr/pgsql-9.6/include/internal -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include  -c -o wal2json.o wal2json.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC -L/usr/pgsql-9.6/lib -Wl,--as-needed  -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/usr/pgsql-9.6/lib',--enable-new-dtags  -shared -o wal2json.so wal2json.o
/usr/bin/mkdir -p '/usr/pgsql-9.6/lib'
/usr/bin/install -c -m 755  wal2json.so '/usr/pgsql-9.6/lib/'

Installation on Fedora 30+

Debezium provides an RPM package for the Fedora operating system too. The package is always updated after a final Debezium release is done. To use the RPM in question, just issue the standard Fedora installation command:

$ sudo dnf -y install postgres-decoderbufs

The rest of the configuration is the same as described below for the wal2json plug-in.

PostgreSQL Server Configuration

Once the wal2json plug-in has been installed, the database server should be configured.

Setting up libraries, WAL and replication parameters

Add the following lines at the end of the postgresql.conf PostgreSQL configuration file in order to include the plug-in in the shared libraries and to adjust some WAL and streaming replication settings. The configuration is extracted from postgresql.conf.sample. You may need to modify it if, for example, you already load other libraries via shared_preload_libraries.

postgresql.conf configuration file parameter settings
############ REPLICATION ##############
# MODULES
shared_preload_libraries = 'wal2json'   # load the wal2json logical decoding plug-in at startup (use decoderbufs for protobuf; the plug-in names are set in the protobuf and wal2json Makefiles)

# REPLICATION
wal_level = logical                     # use logical decoding with the write-ahead log
max_wal_senders = 4                     # use a maximum of 4 separate processes for processing WAL changes
max_replication_slots = 4               # allow a maximum of 4 replication slots to be created for streaming WAL changes

Debezium needs the PostgreSQL WAL to be kept during Debezium outages. If your WAL retention is too small and the outages are too long, then Debezium will not be able to recover after a restart, as it will miss part of the data changes. The usual indicator is an error similar to this one thrown during startup: ERROR: requested WAL segment 000000010000000000000001 has already been removed.

When this happens, it is necessary to re-execute the snapshot of the database. We also recommend setting the parameter wal_keep_segments = 0. Please follow the official PostgreSQL documentation for fine-tuning WAL retention.

We strongly recommend reading and understanding the official documentation regarding the mechanics and configuration of the PostgreSQL write-ahead log.

Setting up replication permissions

Replication can only be performed by a database user that has appropriate permissions and only for a configured number of hosts. In order to give a user replication permissions, define a PostgreSQL role that has at least the REPLICATION and LOGIN permissions. For example:

CREATE ROLE name REPLICATION LOGIN;

Superusers have both of the above permissions by default.

Add the following lines at the end of the pg_hba.conf PostgreSQL configuration file, so as to configure the client authentication for the database replication. The PostgreSQL server should allow replication to take place between the server machine and the host on which the Debezium PostgreSQL connector is running.

Note that the authentication refers to the database superuser postgres. You may change this accordingly, if some other user with REPLICATION and LOGIN permissions has been created.

pg_hba.conf configuration file parameter settings
############ REPLICATION ##############
local   replication     postgres                          trust   # allow replication for postgres locally (i.e. on the server machine)
host    replication     postgres  127.0.0.1/32            trust   # allow postgres on localhost to receive replication changes over IPv4
host    replication     postgres  ::1/128                 trust   # allow postgres on localhost to receive replication changes over IPv6

See the PostgreSQL documentation for more information on network masks.

Database Test Environment Set-up

For testing purposes, a database named test and a table named test_table are created with the following DDL commands:

Database SQL commands for test database/table creation
CREATE DATABASE test;

CREATE TABLE test_table (
    id char(10) NOT NULL,
    code        char(10),
    PRIMARY KEY (id)
);

Decoding Output Plug-in Test

Test that wal2json is working properly by obtaining the test_table changes using pg_recvlogical, the PostgreSQL client application that controls logical decoding streams.

Before starting, make sure that you have logged in as a user with database replication permissions, as configured in a previous step. Otherwise, slot creation and streaming fail with the following error message:

pg_recvlogical: could not connect to server: FATAL:  no pg_hba.conf entry for replication connection from host "[local]", user "root", SSL off

In the test environment, the user with replication permission is postgres.

Also, make sure that the PATH environment variable is set so that pg_recvlogical can be found; if not, update it appropriately. For example, in the test environment:

export PATH="$PATH:/usr/pgsql-9.6/bin"
  • Create a slot named test_slot for the database named test, using the logical output plug-in wal2json
$ pg_recvlogical -d test --slot test_slot --create-slot -P wal2json
  • Begin streaming changes from the logical replication slot test_slot for the database test
$ pg_recvlogical -d test --slot test_slot --start -o pretty-print=1 -f -
  • Perform some basic DML operations on test_table to trigger INSERT/UPDATE/DELETE change events
Interactive PostgreSQL terminal, SQL commands
test=# INSERT INTO test_table (id, code) VALUES('id1', 'code1');
INSERT 0 1
test=# update test_table set code='code2' where id='id1';
UPDATE 1
test=# delete from test_table where id='id1';
DELETE 1

Upon the INSERT, UPDATE and DELETE events, the wal2json plug-in outputs the table changes as captured by pg_recvlogical.

Output for INSERT event
{
  "change": [
    {
      "kind": "insert",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["id1       ", "code1     "]
    }
  ]
}
Output for UPDATE event
{
  "change": [
    {
      "kind": "update",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["id1       ", "code2     "],
      "oldkeys": {
        "keynames": ["id"],
        "keytypes": ["character(10)"],
        "keyvalues": ["id1       "]
      }
    }
  ]
}
Output for DELETE event
{
  "change": [
    {
      "kind": "delete",
      "schema": "public",
      "table": "test_table",
      "oldkeys": {
        "keynames": ["id"],
        "keytypes": ["character(10)"],
        "keyvalues": ["id1       "]
      }
    }
  ]
}

Note that the REPLICA IDENTITY of the table test_table is set to DEFAULT.

When the test is finished, the slot test_slot for the database test can be removed by the following command:

$ pg_recvlogical -d test --slot test_slot --drop-slot

REPLICA IDENTITY is a PostgreSQL-specific table-level setting which determines the amount of information that is available to logical decoding in case of UPDATE and DELETE events.

There are 4 possible values for REPLICA IDENTITY:

  • DEFAULT: UPDATE and DELETE events will only contain the previous values for the primary key columns of a table
  • NOTHING: UPDATE and DELETE events will not contain any information about the previous value of any of the table columns
  • FULL: UPDATE and DELETE events will contain the previous values of all the table’s columns
  • INDEX index name: UPDATE and DELETE events will contain the previous values of the columns contained in the index definition named index name

You can modify and check the REPLICA IDENTITY for a table with the following commands:

ALTER TABLE test_table REPLICA IDENTITY FULL;
test=# \d+ test_table
                         Table "public.test_table"
 Column |     Type      | Modifiers | Storage  | Stats target | Description
 -------+---------------+-----------+----------+--------------+------------
 id     | character(10) | not null  | extended |              |
 code   | character(10) |           | extended |              |
Indexes:
    "test_table_pkey" PRIMARY KEY, btree (id)
Replica Identity: FULL

Here is the output of the wal2json plug-in for an UPDATE event with REPLICA IDENTITY set to FULL. Compare it with the respective output above, where REPLICA IDENTITY was set to DEFAULT.

Output for `UPDATE`
{
  "change": [
    {
      "kind": "update",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["id1       ", "code2     "],
      "oldkeys": {
        "keynames": ["id", "code"],
        "keytypes": ["character(10)", "character(10)"],
        "keyvalues": ["id1       ", "code1     "]
      }
    }
  ]
}


https://debezium.io/documentation/reference/0.10/postgres-plugins.html

A collection of goodies for the .psqlrc

--------------------------------------------------
-- 20200115 DAF added functionality
-- 20200108 DAF .psqlrc  initial version
--------------------------------------------------


-- Suppress the output of the following commands
\set QUIET ON


-- Prompts
-- Prompt1 / Prompt2 / Prompt3
-- %M : server
-- %m : host name
-- %> : port
-- %n : user
-- %/ : current database
-- %~ : like %/ but ~ for default database
-- %# : '#' if superuser, '>' otherwise
-- %R
-- %x
-- %[...%] : terminal control characters
-- Examples

\set PROMPT1 'u: %n db: %/ %# '
\set PROMPT2 '%# '
\set PROMPT3 ''


\timing on

-- Uppercase autocompleted keywords
\set COMP_KEYWORD_CASE upper

-- Line drawing style: unicode, ascii, etc.
--\pset linestyle ascii

-- Border of the result table
\pset border 0

-- unicode_border_linestyle: border drawing style for unicode style.
-- Values: single, double.
\pset unicode_border_linestyle single

-- unicode_column_linestyle: column drawing style for unicode style.
-- Values: single, double.
\pset unicode_column_linestyle single

-- unicode_header_linestyle: header drawing style for unicode style.
-- Values: single, double.
\pset unicode_header_linestyle single


-- columns: target width for wrapped format
-- Values: number
\pset columns 0

-- expanded: extended display
-- Values: auto, on, off
\x off


-- fieldsep_zero: set field separator to a zero byte.
--Values: no value.
--\pset fieldsep_zero 



-- recordsep_zero: set the line separator in unaligned output format to
-- zero byte.
-- Values: no values.
--\pset recordsep_zero

-- tableattr: attributes in the table tag for HTML format.
-- Values: string. Without value, will unset attributes.
--\pset tableattr TODO

-- title: table title for printed tables.
-- Values: string. Without value, title is unset.

-- tuples_only: can also be abbreviated with \t
-- Values: on, off. Without value, will toggle.
\pset tuples_only off


-- Disable the pager
\pset pager off

-- How NULLs will be printed
\pset null '[null]'

-- Name of the history file
\set HISTFILE ~/scripts/psqlhists/psql_history- :HOST - :PORT - :DBNAME - :USER

-- Number of commands to keep in the history
\set HISTSIZE 5000

-- No duplicates
\set HISTCONTROL ignoredups

-- Verbosity level
\set VERBOSITY verbose
\errverbose

-- Like @echo off, for printing messages
\set ECHO_HIDDEN ON

\set QUIET OFF

\echo '\nCurrent Host Server Date Time : '`date` '\n'

\echo '\t:activity                 :cache_hit              :nspsize                  :tablesize1'
\echo '\t:activity1                :conninfo               :nspsize_pgs              :total_index_size'
\echo '\t:activity2                :dbsize                 :ps                       :unused_indexes'
\echo '\t:activity96               :idletxn                :queries                  :uptime'
\echo '\t:activitytho              :index_size             :seq_scans                :uselesscol'
\echo '\t:                         :index_usage            :settings                 :vacuuminfo'
\echo '\t:autovacs                 :index_usage_adv        :show_slow_queries        :waits'
\echo '\t:backends                 :kill_old_transactions  :spcsize                  :wrapstats'
\echo '\t:bloat                    :locks                  :statrelfilepath          :wrapstats_raw'
\echo '\t:bloat2                   :locks1                 :statrelfilepath_agg      :wrapstats_rels'
\echo '\t:blocking                 :long_running_queries   :statrelspcfilepath_agg   :wrapstats_spc'
\echo '\t:buffer_cache_hit_ratio   :missing_indexes        :tablesize'                

\echo '\n\t:menu -- Help Menu'
\echo '\t\h-- Help with SQL commands'
\echo '\t\?-- Help with psql commands'

\echo '\nDevelopment queries:\n'
\echo '\t:sp-- Current Search Path'
\echo '\t:clear -- Clear screen'
\echo '\t:ll -- List\n'


\set sp 'SHOW search_path;'
\set clear '\\! clear;'
\set ll '\\! ls -lrt'

\set uptime 'select now() - pg_postmaster_start_time() AS uptime;'


\set menu '\\i ~/.psqlrc'



-- buffer cache hit ratio
\set buffer_cache_hit_ratio 'select datname, blks_hit::float/(blks_hit+blks_read) as hit_ratio from pg_stat_database where blks_read+blks_hit <> 0;'
\set cache_hit 'SELECT ''index hit rate'' AS name, (sum(idx_blks_hit)) / sum(idx_blks_hit + idx_blks_read) AS ratio FROM pg_statio_user_indexes UNION ALL SELECT ''cache hit rate'' AS name,       sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS ratio FROM pg_statio_user_tables;'

-- kill transactions that have been "idle in transaction" for more than 10 minutes
\set kill_old_transactions 'select count(pg_terminate_backend(pid)) as nb_killed_proc from pg_stat_activity where query = \'<IDLE> in transaction\' and current_timestamp - query_start > \'10 min\';'

-- running queries
\set queries  'select current_timestamp - query_start as runtime,pid,datname,usename,query from pg_stat_activity where query != \'\' order by 1 desc; '



\set autovacs 'WITH q0_0 AS  (SELECT relid,          CASE              WHEN coalesce(last_vacuum, last_analyze, last_autovacuum, last_autoanalyze) IS NULL THEN NULL              WHEN last_vacuum =                     (SELECT MAX(UNNEST)                      FROM                        (SELECT UNNEST(ARRAY[last_vacuum,                                             last_analyze,                                             last_autovacuum,                                             last_autoanalyze]))s0_0) THEN '' v ''              WHEN last_analyze =                     (SELECT MAX(UNNEST)                      FROM                        (SELECT UNNEST(ARRAY[last_vacuum,                                             last_analyze,                                             last_autovacuum,                                             last_autoanalyze]))s1_0) THEN '' z ''              WHEN last_autoanalyze =                     (SELECT MAX(UNNEST)                      FROM                        (SELECT UNNEST(ARRAY[last_vacuum,                                             last_analyze,                                             last_autovacuum,                                             last_autoanalyze]))s2_0) THEN ''az ''              WHEN last_autovacuum =                     (SELECT MAX(UNNEST)                      FROM                        (SELECT UNNEST(ARRAY[last_vacuum,                                             last_analyze,                                             last_autovacuum,                                             last_autoanalyze]))s3_0) THEN ''av ''          END AS last_stats_str,          CASE              WHEN coalesce(last_vacuum, last_analyze, last_autovacuum, last_autoanalyze) IS NULL THEN NULL              WHEN last_vacuum =                     (SELECT MAX(UNNEST)                      FROM                        (SELECT UNNEST(ARRAY[last_vacuum,                                             last_analyze,                                             last_autovacuum,                                             last_autoanalyze]))s0_0) THEN last_vacuum::TIMESTAMP              WHEN last_analyze =                     (SELECT MAX(UNNEST)                      FROM                        (SELECT UNNEST(ARRAY[last_vacuum,                                             last_analyze,                                             last_autovacuum,                                             last_autoanalyze]))s1_0) THEN last_analyze::TIMESTAMP              WHEN last_autoanalyze =                     (SELECT MAX(UNNEST)                      FROM                        (SELECT UNNEST(ARRAY[last_vacuum,                                             last_analyze,                                             last_autovacuum,                                             last_autoanalyze]))s2_0) THEN last_autoanalyze::TIMESTAMP              WHEN last_autovacuum =                     (SELECT MAX(UNNEST)                      FROM                        (SELECT UNNEST(ARRAY[last_vacuum,                                             last_analyze,                                             last_autovacuum,                                             last_autoanalyze]))s3_0) THEN last_autovacuum::TIMESTAMP          END AS last_stats_stamp   FROM pg_stat_user_tables),     q0 AS  (SELECT relid::regclass rel,          n_tup_ins ti,          n_tup_upd + n_tup_del tm,          n_live_tup tliv,          n_dead_tup nded,          n_mod_since_analyze tmod,          last_stats_str||last_stats_stamp AS last_stats,          autoanalyze_count naz,          autovacuum_count nav,          analyze_count nz,          vacuum_count nv   FROM pg_stat_user_tables   JOIN q0_0 USING (relid)   ORDER BY last_stats_stamp DESC nulls LAST,                                        schemaname,                                        relname) SELECT * FROM q0 ;'

\set bloat2 'SELECT tablename AS TABLE_NAME,       ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS table_bloat,       CASE           WHEN relpages < otta THEN ''0''           ELSE pg_size_pretty((bs*(sml.relpages-otta)::bigint)::bigint)       END AS table_waste,       iname AS index_name,       ROUND(CASE WHEN iotta=0             OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric END,1) AS index_bloat,       CASE           WHEN ipages < iotta THEN ''0''           ELSE pg_size_pretty((bs*(ipages-iotta))::bigint)       END AS index_waste FROM  (SELECT schemaname,          tablename,          cc.reltuples,          cc.relpages,          bs,          CEIL((cc.reltuples*((datahdr+ma- (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta,          COALESCE(c2.relname,''?'') AS iname,          COALESCE(c2.reltuples,0) AS ituples,          COALESCE(c2.relpages,0) AS ipages,          COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta   FROM     (SELECT ma,             bs,             schemaname,             tablename,             (datawidth+(hdr+ma-(CASE WHEN hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,             (maxfracsum*(nullhdr+ma-(CASE WHEN nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2      FROM        (SELECT schemaname,                tablename,                hdr,                ma,                bs,                SUM((1-null_frac)*avg_width) AS datawidth,                MAX(null_frac) AS maxfracsum,                hdr+           (SELECT 1+count(*)/8            FROM pg_stats s2            WHERE null_frac<>0              AND s2.schemaname = s.schemaname              AND s2.tablename = s.tablename) AS nullhdr         FROM pg_stats s,           (SELECT              (SELECT current_setting(''block_size'')::numeric) AS bs,                   CASE WHEN substring(v,12,3) IN (''8.0'',                                                   ''8.1'',                                                   ''8.2'') THEN 27 ELSE 23 END AS hdr,                                                                                 CASE WHEN v ~ ''mingw32'' THEN 8 ELSE 4 END AS ma            FROM              (SELECT version() AS v) AS foo) AS constants         GROUP BY 1,                  2,                  3,                  4,                  5) AS foo) AS rs   JOIN pg_class cc ON cc.relname = rs.tablename   JOIN pg_namespace nn ON cc.relnamespace = nn.oid   AND nn.nspname = rs.schemaname   AND nn.nspname <> ''information_schema''   LEFT JOIN pg_index i ON indrelid = cc.oid   LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid) AS sml ORDER BY CASE             WHEN relpages < otta THEN 0             ELSE bs*(sml.relpages-otta)::bigint         END DESC;'

\set bloat 'SELECT tablename AS TABLE_NAME,       ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS table_bloat,       CASE           WHEN relpages < otta THEN ''0''           ELSE pg_size_pretty((bs*(sml.relpages-otta)::bigint)::bigint)       END AS table_waste,       iname AS index_name,       ROUND(CASE WHEN iotta=0             OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric END,1) AS index_bloat,       CASE           WHEN ipages < iotta THEN ''0''           ELSE pg_size_pretty((bs*(ipages-iotta))::bigint)       END AS index_waste FROM  (SELECT schemaname,          tablename,          cc.reltuples,          cc.relpages,          bs,          CEIL((cc.reltuples*((datahdr+ma- (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta,          COALESCE(c2.relname,''?'') AS iname,          COALESCE(c2.reltuples,0) AS ituples,          COALESCE(c2.relpages,0) AS ipages,          COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta   FROM     (SELECT ma,             bs,             schemaname,             tablename,             (datawidth+(hdr+ma-(CASE WHEN hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,             (maxfracsum*(nullhdr+ma-(CASE WHEN nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2      FROM        (SELECT schemaname,                tablename,                hdr,                ma,                bs,                SUM((1-null_frac)*avg_width) AS datawidth,                MAX(null_frac) AS maxfracsum,                hdr+           (SELECT 1+count(*)/8            FROM pg_stats s2            WHERE null_frac<>0              AND s2.schemaname = s.schemaname              AND s2.tablename = s.tablename) AS nullhdr         FROM pg_stats s,           (SELECT              (SELECT current_setting(''block_size'')::numeric) AS bs,                   CASE WHEN substring(v,12,3) IN (''8.0'',                                                   ''8.1'',                                                   ''8.2'') THEN 27 ELSE 23 END AS hdr,                                                                                 CASE WHEN v ~ ''mingw32'' THEN 8 ELSE 4 END AS ma            FROM              (SELECT version() AS v) AS foo) AS constants         GROUP BY 1,                  2,                  3,                  4,                  5) AS foo) AS rs   JOIN pg_class cc ON cc.relname = rs.tablename   JOIN pg_namespace nn ON cc.relnamespace = nn.oid   AND nn.nspname = rs.schemaname   AND nn.nspname <> ''information_schema''   LEFT JOIN pg_index i ON indrelid = cc.oid   LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid) AS sml ORDER BY CASE             WHEN relpages < otta THEN 0             ELSE bs*(sml.relpages-otta)::bigint         END DESC;'

\set blocking 'SELECT bl.pid AS blocked_pid,       ka.query AS blocking_statement,       now() - ka.query_start AS blocking_duration,       kl.pid AS blocking_pid,       a.query AS blocked_statement,       now() - a.query_start AS blocked_duration FROM pg_catalog.pg_locks bl JOIN pg_catalog.pg_stat_activity a ON bl.pid = a.pid JOIN pg_catalog.pg_locks kl JOIN pg_catalog.pg_stat_activity ka ON kl.pid = ka.pid ON bl.transactionid = kl.transactionid AND bl.pid != kl.pid WHERE NOT bl.granted;'



\set idletxn 'SELECT datname,       pid,       (clock_timestamp() - backend_start) AS bk_age,       (clock_timestamp() - state_change) state_age,       (clock_timestamp() - xact_start) xact_age,       usename,       application_name app,       client_addr,       (STATE || CASE                     WHEN waiting THEN ''(w)''                     ELSE ''''                 END) AS STATE,       query FROM pg_stat_activity WHERE xact_start IS NOT NULL  AND pid <> pg_backend_pid() ORDER BY xact_start;'

\set index_size 'SELECT relname AS name,       pg_size_pretty(sum(relpages*1024)) AS SIZE FROM pg_class WHERE reltype=0 GROUP BY relname ORDER BY sum(relpages) DESC;'

\set index_usage_adv 'SELECT * FROM  (SELECT stat.relname AS TABLE,          stai.indexrelname AS INDEX,          CASE stai.idx_scan              WHEN 0 THEN ''Insufficient data''              ELSE (100 * stai.idx_scan / (stat.seq_scan + stai.idx_scan))::text || ''%''          END hit_rate,              CASE stat.idx_scan                  WHEN 0 THEN ''Insufficient data''                  ELSE (100 * stat.idx_scan / (stat.seq_scan + stat.idx_scan))::text || ''%''              END all_index_hit_rate,                  ARRAY     (SELECT pg_get_indexdef(idx.indexrelid, k + 1, TRUE)      FROM generate_subscripts(idx.indkey, 1) AS k      ORDER BY k) AS cols,                  stat.n_live_tup rows_in_table   FROM pg_stat_user_indexes AS stai   JOIN pg_stat_user_tables AS stat ON stai.relid = stat.relid   JOIN pg_index AS idx ON (idx.indexrelid = stai.indexrelid)) AS sub_inner ORDER BY rows_in_table DESC,         hit_rate ASC;'

\set index_usage 'SELECT relname,       CASE idx_scan           WHEN 0 THEN ''Insufficient data''           ELSE (100 * idx_scan / (seq_scan + idx_scan))::text       END percent_of_times_index_used,       n_live_tup rows_in_table FROM pg_stat_user_tables ORDER BY n_live_tup DESC;'

\set locks 'SELECT pg_stat_activity.pid,       pg_class.relname,       pg_locks.transactionid,       pg_locks.granted,       substring(pg_stat_activity.query                 FROM ''([^]*?){1,3}'') AS query_snippet,       age(now(),pg_stat_activity.query_start) AS "age" FROM pg_stat_activity,     pg_locks LEFT OUTER JOIN pg_class ON (pg_locks.relation = pg_class.oid) WHERE pg_stat_activity.query <> ''''  AND pg_locks.pid=pg_stat_activity.pid  AND pg_locks.mode = ''ExclusiveLock'' ORDER BY query_start;'


\set locks1  'SELECT bl.pid AS blocked_pid, a.usename AS blocked_user, kl.pid AS blocking_pid, ka.usename AS blocking_user, a.query AS blocked_statement FROM pg_catalog.pg_locks bl JOIN pg_catalog.pg_stat_activity a ON bl.pid = a.pid JOIN pg_catalog.pg_locks kl JOIN pg_catalog.pg_stat_activity ka ON kl.pid = ka.pid ON bl.transactionid = kl.transactionid AND bl.pid != kl.pid WHERE NOT bl.granted;'


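-- queries that have been running for more than 5 minutes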
\set long_running_queries 'SELECT pid, now() - pg_stat_activity.query_start AS duration, query AS query FROM pg_stat_activity WHERE pg_stat_activity.query <> ''''::text AND now() - pg_stat_activity.query_start > interval ''5 minutes'' ORDER BY now() - pg_stat_activity.query_start DESC;'

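-- tables with more sequential scans than index scans (possible missing index)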
\set missing_indexes 'SELECT relname, seq_scan-idx_scan AS too_much_seq, CASE WHEN seq_scan-idx_scan > 0 THEN ''Missing Index?'' ELSE ''OK'' END, pg_relation_size(relname::regclass) AS rel_size, seq_scan, idx_scan FROM pg_stat_all_tables WHERE schemaname=''public'' AND pg_relation_size(relname::regclass) > 80000 ORDER BY too_much_seq DESC;'

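-- per-schema size, broken down into table, index and TOAST pages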
\set nspsize_pgs 'SELECT * FROM (SELECT nspname, pg_size_pretty(sum(8192::bigint*nullif(tbl.relpages, 0))) "tables", pg_size_pretty(sum(8192::bigint*nullif(idx.relpages, 0))) "indexes", pg_size_pretty(sum(8192::bigint*nullif(tst.relpages, 0))) "toast", pg_size_pretty(sum(8192::bigint*(coalesce(tbl.relpages, 0) + coalesce(idx.relpages, 0) + coalesce(tst.relpages, 0)))) AS "total", concat(count(distinct tbl.oid) FILTER (where tbl.relkind=''r'')::text, ''/'', count(distinct tst.oid)::text, ''/'', count(distinct idx.oid)::text, ''/'', count(distinct tbl.oid) FILTER (where tbl.relkind=''m'')::text) AS "#r/t/i/m" FROM pg_class AS tbl JOIN pg_namespace AS nsp ON (tbl.relnamespace=nsp.oid) LEFT JOIN pg_class tst ON (tbl.reltoastrelid=tst.oid) LEFT JOIN pg_index ON (pg_index.indrelid=tbl.oid) LEFT JOIN pg_class idx ON pg_index.indexrelid=idx.oid WHERE (tbl.relkind = ''r'' OR tbl.relkind = ''m'') GROUP BY nspname ORDER BY sum(coalesce(tbl.relpages, 0)+coalesce(idx.relpages, 0)+coalesce(tst.relpages, 0)) DESC) _;'

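-- per-schema size summary based on pg_table_size() and pg_indexes_size()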
\set nspsize 'SELECT * FROM (SELECT nspname, pg_size_pretty(sum(pg_table_size(pg_class.oid))) "Schema Size", pg_size_pretty(sum(pg_indexes_size(pg_class.oid))) "Indexes", count(pg_class.oid) "Tables" FROM pg_class JOIN pg_namespace ON (pg_class.relnamespace=pg_namespace.oid) WHERE relkind = ''r'' or relkind = ''m'' GROUP BY nspname ORDER BY sum(pg_total_relation_size(pg_class.oid)) DESC) _;'

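-- currently active (non-idle) backends (the waiting column only exists before PostgreSQL 9.6)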
\set ps 'SELECT pid, application_name AS SOURCE, age(now(),query_start) AS running_for, waiting, query AS query FROM pg_stat_activity WHERE query <> '''' AND STATE <> ''idle'' AND pid <> pg_backend_pid() ORDER BY 3 DESC;'

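-- sequential scan count per table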
\set seq_scans 'SELECT relname AS name, seq_scan AS COUNT FROM pg_stat_user_tables ORDER BY seq_scan DESC;'


-- current server configuration settings
\set settings 'select name, setting, unit, context from pg_settings;'

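-- top 100 statements by total execution time; requires pg_stat_statements (the column is called total_exec_time from PostgreSQL 13 on)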
\set show_slow_queries 'SELECT (total_time / 1000 / 60) AS total_minutes, (total_time/calls) AS average_time, query FROM pg_stat_statements ORDER BY 1 DESC LIMIT 100;'

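-- data and index size per tablespace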
\set spcsize 'SELECT coalesce(spcname, ''[Default]'') "Tablespace", pg_size_pretty(sum(pg_relation_size(c.oid)) FILTER (WHERE relkind = ''r'' or relkind = ''m'')) "Data Size", pg_size_pretty(sum(pg_relation_size(c.oid)) FILTER (WHERE relkind = ''i'')) "Index Size", count(c.oid) "# Tables" FROM pg_class c LEFT JOIN pg_tablespace spc ON (c.reltablespace=spc.oid) WHERE relkind = ''r'' or relkind = ''m'' or relkind = ''i'' GROUP BY 1 /*ORDER BY sum(pg_total_relation_size(c.oid)) DESC;*/ ORDER BY 1;'

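-- on-disk file segments and file metadata per relation, via pg_stat_file() (usually needs superuser rights)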
\set statrelfilepath_agg 'select * from (select relid, nsp.nspname, c.relname, fpath, numsegs, tot_size, access_bnds, modification_bnds, change_bnds, creation_bnds, isdir from (SELECT c.oid relid, relpages / segsz.seg_size AS numsegs, pg_relation_filepath(c.oid) AS fpath, sum(size) tot_size, tstzrange(min(fstat.access), max(fstat.access), ''[]'') access_bnds, tstzrange(min(fstat.modification), max(fstat.modification), ''[]'') modification_bnds, tstzrange(min(fstat.change), max(fstat.change), ''[]'') change_bnds, tstzrange(min(fstat.creation), max(fstat.creation), ''[]'') creation_bnds, fstat.isdir FROM (SELECT setting::bigint seg_size FROM pg_settings WHERE name = ''segment_size'') AS segsz, pg_class c, generate_series(0::bigint, relpages / segsz.seg_size) segnum, concat(pg_relation_filepath(c.oid), coalesce(''.''||nullif(segnum, 0), '''')) _fpath, pg_stat_file(_fpath) fstat where relkind=''r'' group by relid, numsegs, fstat.isdir) _ JOIN pg_class c on c.oid=relid JOIN pg_namespace nsp ON c.relnamespace=nsp.oid) _;'

\set statrelfilepath 'SELECT c.oid relid, nsp.nspname, c.relname, segnum, fpath, fstat.size, fstat.access, fstat.modification, fstat.change, fstat.creation, fstat.isdir FROM (SELECT setting::bigint seg_size FROM pg_settings WHERE name = ''segment_size'') AS segsz, pg_class c JOIN pg_namespace nsp ON c.relnamespace=nsp.oid, generate_series(0::bigint, relpages / segsz.seg_size) segnum, concat(pg_relation_filepath(c.oid), coalesce(''.''||nullif(segnum, 0), '''')) fpath, pg_stat_file(fpath) fstat;'

\set statrelspcfilepath_agg 'SELECT relid, nspname, relname, spcname, fpath, numsegs, tot_size, access_bnds, modification_bnds, change_bnds, creation_bnds, isdir FROM (SELECT relid, nsp.nspname, c.relname, spc.spcname, fpath, numsegs, tot_size, access_bnds, modification_bnds, change_bnds, creation_bnds, isdir FROM (SELECT c.oid relid, 1 + relpages/segsz.seg_size AS numsegs, pg_relation_filepath(c.oid) AS fpath, sum(size) tot_size, tstzrange(min(fstat.access), max(fstat.access), ''[]'') access_bnds, tstzrange(min(fstat.modification), max(fstat.modification), ''[]'') modification_bnds, tstzrange(min(fstat.change), max(fstat.change), ''[]'') change_bnds, tstzrange(min(fstat.creation), max(fstat.creation), ''[]'') creation_bnds, fstat.isdir FROM (SELECT setting::bigint seg_size FROM pg_settings WHERE name = ''segment_size'') AS segsz, pg_class c, generate_series(0::bigint, relpages / segsz.seg_size) segnum, concat(pg_relation_filepath(c.oid), coalesce(''.''||nullif(segnum, 0), '''')) _fpath, pg_stat_file(_fpath) fstat where relkind=''r'' group by relid, numsegs, fstat.isdir) _ JOIN pg_class c ON c.oid=relid JOIN pg_namespace nsp ON c.relnamespace=nsp.oid JOIN pg_tablespace spc ON ((c.reltablespace=spc.oid) OR (spc.spcname=''pg_default'' and c.reltablespace=0))) _;'

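-- combined size of all indexes (assumes the default 8 kB block size)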
\set total_index_size 'SELECT pg_size_pretty(sum(relpages::bigint*8192)) AS size FROM pg_class WHERE reltype=0;'

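-- rarely used non-unique indexes, ordered by size per index scan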
\set unused_indexes 'SELECT schemaname || ''.'' || relname AS TABLE, indexrelname AS INDEX, pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size, idx_scan AS index_scans FROM pg_stat_user_indexes ui JOIN pg_index i ON ui.indexrelid = i.indexrelid WHERE NOT indisunique AND idx_scan < 50 AND pg_relation_size(relid) > 5 * 8192 ORDER BY pg_relation_size(i.indexrelid) / nullif(idx_scan, 0) DESC NULLS FIRST, pg_relation_size(i.indexrelid) DESC;'


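-- autovacuum and freeze status per table, using per-table reloptions where set and the global settings otherwise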
\set vacuuminfo '/*https://gist.githubusercontent.com/skehlet/36aad599171b25826e82/raw/b84501643ec16d05f8b307ad87bac884bc14fd52/gistfile1.sql*/WITH table_opts AS (SELECT pg_class.oid, relname, nspname, array_to_string(reloptions, '''') AS relopts FROM pg_class INNER JOIN pg_namespace ns ON relnamespace = ns.oid), storage_settings AS (SELECT oid, relname, nspname, CASE WHEN relopts LIKE ''%autovacuum_vacuum_threshold%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_threshold=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''autovacuum_vacuum_threshold'')::integer END AS autovacuum_vacuum_threshold, CASE WHEN relopts LIKE ''%autovacuum_vacuum_scale_factor%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_scale_factor=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_vacuum_scale_factor'')::real END AS autovacuum_vacuum_scale_factor, CASE WHEN relopts LIKE ''%autovacuum_freeze_min_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_min_age=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''vacuum_freeze_min_age'')::integer END AS autovacuum_freeze_min_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_table_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_table_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''vacuum_freeze_table_age'')::real END AS autovacuum_freeze_table_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_max_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_max_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_freeze_max_age'')::real END AS autovacuum_freeze_max_age FROM table_opts) SELECT storage_settings.nspname AS schema, storage_settings.relname AS table, to_char(pg_class.reltuples, ''9G999G999G999'') AS rowcount, to_char(psut.n_dead_tup, ''9G999G999G999'') AS dead_rowcount, to_char(autovacuum_vacuum_threshold + (autovacuum_vacuum_scale_factor::numeric * pg_class.reltuples), ''9G999G999G999'') AS autovacuum_threshold, CASE WHEN autovacuum_vacuum_threshold + (autovacuum_vacuum_scale_factor::numeric * pg_class.reltuples) < psut.n_dead_tup THEN ''yes'' END AS expect_autovacuum, age(relfrozenxid) as relfrozenxid_age, autovacuum_freeze_table_age, CASE WHEN age(relfrozenxid) > autovacuum_freeze_table_age THEN ''yes'' END AS next_autovacuum_will_be_a_freeze, autovacuum_freeze_max_age, ROUND(100.0 * age(relfrozenxid) / autovacuum_freeze_max_age::numeric, 1) || ''%'' AS "% til forced vacuum freeze" FROM pg_stat_user_tables psut INNER JOIN pg_class ON psut.relid = pg_class.oid INNER JOIN storage_settings ON pg_class.oid = storage_settings.oid ORDER BY storage_settings.relname;'

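-- sessions waiting on a lock (pre-9.6 only: relies on the removed waiting column)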
\set waits 'SELECT pg_stat_activity.pid,       pg_stat_activity.query,       pg_stat_activity.state,       now() - pg_stat_activity.query_start AS \"totaltime\", pg_stat_activity.backend_start FROM pg_stat_activity WHERE pg_stat_activity.query !~ \'%IDLE%\'::text  AND pg_stat_activity.waiting = TRUE;'

\set waits1 'SELECT pg_stat_activity.pid, pg_stat_activity.query, pg_stat_activity.state, now() - pg_stat_activity.query_start AS \"totaltime\", pg_stat_activity.backend_start FROM pg_stat_activity WHERE pg_stat_activity.query !~ \'%IDLE%\'::text AND pg_stat_activity.waiting = true;'


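-- transaction ID wraparound / freeze ("icing") statistics per table, in several variants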
\set wrapstats_raw '/*https://gist.githubusercontent.com/skehlet/36aad599171b25826e82/raw/b84501643ec16d05f8b307ad87bac884bc14fd52/gistfile1.sql*/select relid, spcid, spcname, nspid, nspname, reltuples, relpages, relsize_pgs, frozenage, icein from (SELECT storage_settings.oid AS relid, spc.oid spcid, spc.spcname, nsp.oid nspid, nsp.nspname, c.reltuples, c.relpages, 8192::bigint*c.relpages relsize_pgs, age(relfrozenxid) as frozenage, autovacuum_freeze_max_age - age(relfrozenxid) AS icein FROM pg_stat_user_tables psut JOIN pg_class c ON psut.relid = c.oid JOIN (SELECT oid, relname, nspname, CASE WHEN relopts LIKE ''%autovacuum_vacuum_threshold%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_threshold=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''autovacuum_vacuum_threshold'')::integer END AS autovacuum_vacuum_threshold, CASE WHEN relopts LIKE ''%autovacuum_vacuum_scale_factor%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_scale_factor=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_vacuum_scale_factor'')::real END AS autovacuum_vacuum_scale_factor, CASE WHEN relopts LIKE ''%autovacuum_freeze_min_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_min_age=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''vacuum_freeze_min_age'')::integer END AS autovacuum_freeze_min_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_table_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_table_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''vacuum_freeze_table_age'')::real END AS autovacuum_freeze_table_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_max_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_max_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_freeze_max_age'')::real END AS autovacuum_freeze_max_age FROM (SELECT pg_class.oid, relname, nspname, array_to_string(reloptions, '''') AS relopts FROM pg_class INNER JOIN pg_namespace ns ON relnamespace = ns.oid) table_opts) storage_settings ON c.oid = storage_settings.oid JOIN pg_tablespace spc on c.reltablespace=spc.oid or (c.reltablespace=0 and spc.spcname=''pg_default'') JOIN pg_namespace nsp on c.relnamespace=nsp.oid ORDER BY icein) _;'

\set wrapstats_rels '/*https://gist.githubusercontent.com/skehlet/36aad599171b25826e82/raw/b84501643ec16d05f8b307ad87bac884bc14fd52/gistfile1.sql*/WITH table_opts AS (SELECT pg_class.oid, relname, nspname, array_to_string(reloptions, '''') AS relopts FROM pg_class INNER JOIN pg_namespace ns ON relnamespace = ns.oid), storage_settings AS (SELECT oid, relname, nspname, CASE WHEN relopts LIKE ''%autovacuum_vacuum_threshold%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_threshold=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''autovacuum_vacuum_threshold'')::integer END AS autovacuum_vacuum_threshold, CASE WHEN relopts LIKE ''%autovacuum_vacuum_scale_factor%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_scale_factor=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_vacuum_scale_factor'')::real END AS autovacuum_vacuum_scale_factor, CASE WHEN relopts LIKE ''%autovacuum_freeze_min_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_min_age=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''vacuum_freeze_min_age'')::integer END AS autovacuum_freeze_min_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_table_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_table_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''vacuum_freeze_table_age'')::real END AS autovacuum_freeze_table_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_max_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_max_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_freeze_max_age'')::real END AS autovacuum_freeze_max_age FROM table_opts), _ as (SELECT storage_settings.oid::regclass AS "rel", to_char(pg_class.reltuples, ''9G999G999G999'') AS "Rows", to_char(pg_class.relpages, ''9G999G999'') AS "Pages", pg_size_pretty(pg_table_size(pg_class.oid)) AS "Size", pg_table_size(pg_class.oid) "size", age(relfrozenxid) as "icedXID", autovacuum_freeze_table_age "AV TblIceAge", CASE WHEN age(relfrozenxid) > autovacuum_freeze_table_age THEN ''AV'' WHEN age(relfrozenxid) > current_setting(''vacuum_freeze_table_age'')::integer THEN ''V'' END AS "Icing", autovacuum_freeze_max_age "IceMaxAV", autovacuum_freeze_max_age - age(relfrozenxid) AS "ice_in", ROUND(100.0 * age(relfrozenxid) / autovacuum_freeze_max_age::numeric, 1) || ''%'' AS "IceAV%" FROM pg_stat_user_tables psut INNER JOIN pg_class ON psut.relid = pg_class.oid INNER JOIN storage_settings ON pg_class.oid = storage_settings.oid ORDER BY ice_in) select "rel", "Pages", "Size", "icedXID", ice_in, "IceAV%" from _;'

\set wrapstats_spc '/*https://gist.githubusercontent.com/skehlet/36aad599171b25826e82/raw/b84501643ec16d05f8b307ad87bac884bc14fd52/gistfile1.sql*/WITH table_opts AS (SELECT pg_class.oid, relname, nspname, array_to_string(reloptions, '''') AS relopts FROM pg_class INNER JOIN pg_namespace ns ON relnamespace = ns.oid), storage_settings AS (SELECT oid, relname, nspname, CASE WHEN relopts LIKE ''%autovacuum_vacuum_threshold%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_threshold=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''autovacuum_vacuum_threshold'')::integer END AS autovacuum_vacuum_threshold, CASE WHEN relopts LIKE ''%autovacuum_vacuum_scale_factor%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_scale_factor=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_vacuum_scale_factor'')::real END AS autovacuum_vacuum_scale_factor, CASE WHEN relopts LIKE ''%autovacuum_freeze_min_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_min_age=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''vacuum_freeze_min_age'')::integer END AS autovacuum_freeze_min_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_table_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_table_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''vacuum_freeze_table_age'')::real END AS autovacuum_freeze_table_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_max_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_max_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_freeze_max_age'')::real END AS autovacuum_freeze_max_age FROM table_opts), _ as (SELECT storage_settings.oid::regclass AS "rel", spc.spcname, to_char(c.reltuples, ''9G999G999G999'') AS "Rows", to_char(c.relpages, ''9G999G999'') AS "Pages", pg_size_pretty(8192::bigint*c.relpages) AS "Size", 8192::bigint*c.relpages "size", age(relfrozenxid) as "icedXID", autovacuum_freeze_table_age "AV TblIceAge", CASE WHEN age(relfrozenxid) > autovacuum_freeze_table_age THEN ''AV'' WHEN age(relfrozenxid) > current_setting(''vacuum_freeze_table_age'')::integer THEN ''V'' END AS "Icing", autovacuum_freeze_max_age "IceMaxAV", autovacuum_freeze_max_age - age(relfrozenxid) AS "ice_in", ROUND(100.0 * age(relfrozenxid) / autovacuum_freeze_max_age::numeric, 1) || ''%'' AS "IceAV%" FROM pg_stat_user_tables psut JOIN pg_class c ON psut.relid = c.oid JOIN storage_settings ON c.oid = storage_settings.oid JOIN pg_tablespace spc on c.reltablespace=spc.oid or (c.reltablespace=0 and spc.spcname=''pg_default'') /*LEFT JOIN pg_locks ON c.oid=pg_locks.relation WHERE pg_locks.relation is null*/ ORDER BY ice_in) select "rel", spcname, "Pages", "Size", "icedXID", ice_in, "IceAV%" from _;'

\set wrapstats '/*https://gist.githubusercontent.com/skehlet/36aad599171b25826e82/raw/b84501643ec16d05f8b307ad87bac884bc14fd52/gistfile1.sql*/WITH table_opts AS (SELECT pg_class.oid, relname, nspname, array_to_string(reloptions, '''') AS relopts FROM pg_class INNER JOIN pg_namespace ns ON relnamespace = ns.oid), storage_settings AS (SELECT oid, relname, nspname, CASE WHEN relopts LIKE ''%autovacuum_vacuum_threshold%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_threshold=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''autovacuum_vacuum_threshold'')::integer END AS autovacuum_vacuum_threshold, CASE WHEN relopts LIKE ''%autovacuum_vacuum_scale_factor%'' THEN regexp_replace(relopts, ''.*autovacuum_vacuum_scale_factor=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_vacuum_scale_factor'')::real END AS autovacuum_vacuum_scale_factor, CASE WHEN relopts LIKE ''%autovacuum_freeze_min_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_min_age=([0-9.]+).*'', E''\\1'')::integer ELSE current_setting(''vacuum_freeze_min_age'')::integer END AS autovacuum_freeze_min_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_table_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_table_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''vacuum_freeze_table_age'')::real END AS autovacuum_freeze_table_age, CASE WHEN relopts LIKE ''%autovacuum_freeze_max_age%'' THEN regexp_replace(relopts, ''.*autovacuum_freeze_max_age=([0-9.]+).*'', E''\\1'')::real ELSE current_setting(''autovacuum_freeze_max_age'')::real END AS autovacuum_freeze_max_age FROM table_opts), _ as (SELECT storage_settings.nspname AS "Schema", storage_settings.relname AS "Table", to_char(pg_class.reltuples, ''9G999G999G999'') AS "Rows", to_char(pg_class.relpages, ''9G999G999'') AS "Pages", pg_size_pretty(pg_table_size(pg_class.oid)) AS "Size", age(relfrozenxid) as "icedXID", autovacuum_freeze_table_age "AV TblIceAge", CASE WHEN age(relfrozenxid) > autovacuum_freeze_table_age THEN ''AV'' WHEN age(relfrozenxid) > current_setting(''vacuum_freeze_table_age'')::integer THEN ''V'' END AS "Icing", autovacuum_freeze_max_age "IceMaxAV", autovacuum_freeze_max_age - age(relfrozenxid) AS "ice_in", ROUND(100.0 * age(relfrozenxid) / autovacuum_freeze_max_age::numeric, 1) || ''%'' AS "IceAV%" FROM pg_stat_user_tables psut INNER JOIN pg_class ON psut.relid = pg_class.oid INNER JOIN storage_settings ON pg_class.oid = storage_settings.oid ORDER BY ice_in) select "Schema", "Table", "Pages", "Size", "icedXID", ice_in, "IceAV%" from _;'



-- number of connected backends
\set backends 'SELECT datname, numbackends FROM pg_catalog.pg_stat_database;'

\set conninfo 'select usename, count(*) from pg_stat_activity group by usename;'

\set activity 'select datname, pid, usename, application_name,client_addr, client_hostname, client_port, state, left(query,20) query1 from pg_stat_activity;'

\set activity1 'WITH _0 AS (SELECT datname, pid, (clock_timestamp() - backend_start) AS bk_age, (clock_timestamp() - state_change) state_age, (clock_timestamp() - xact_start) xact_age,          (clock_timestamp() - query_start) query_age, usename, application_name app, client_addr, state, left(query,20) query1 FROM pg_stat_activity order by xact_start desc nulls last, query_start desc) SELECT * FROM _0 ;'

\set activity96 'WITH _0 AS (SELECT datname, pid, (clock_timestamp() - backend_start) AS bk_age, (clock_timestamp() - state_change) state_age, (clock_timestamp() - xact_start) xact_age,          (clock_timestamp() - query_start) query_age, usename, application_name app, client_addr, concat(STATE, CASE WHEN waiting THEN ''[WAIT]'' ELSE '''' END) AS STATE, regexp_replace(query, ''[ ]+'', '' '', ''g'') query1   FROM pg_stat_activity order by xact_start desc nulls last, query_start desc)SELECT * FROM _0; '

\set activity2 'WITH _0 AS (SELECT datname, pid, (clock_timestamp() - backend_start) AS bk_age, (clock_timestamp() - state_change) state_age, (clock_timestamp() - xact_start) xact_age,          (clock_timestamp() - query_start) query_age, usename, application_name app, client_addr, state, left(regexp_replace(query, ''[ ]+'', '' '', ''g''),20) query1   FROM pg_stat_activity order by xact_start desc nulls last, query_start desc) SELECT * FROM _0;'

--\set activitytho1 'WITH _0 AS  (SELECT datname, pid, (clock_timestamp() - backend_start) AS bk_age, (clock_timestamp() - state_change) state_age, (clock_timestamp() - xact_start) xact_age,          (clock_timestamp() - query_start) query_age, usename, application_name app, client_addr, state, left(regexp_replace(query, ''[ ]+'', '' '', ''g''),20) query1 FROM pg_stat_activity  WHERE pid <> pg_backend_pid()    AND state <> ''idle''  ORDER BY xact_start DESC NULLS LAST, query_start DESC)SELECT * FROM _0 ;'

\set activitytho 'WITH _0 AS (SELECT datname, pid, (clock_timestamp() - backend_start) AS bk_age, (clock_timestamp() - state_change) state_age, (clock_timestamp() - xact_start) xact_age, (clock_timestamp() - query_start) query_age, usename, application_name app, client_addr, state, left(query,20) query1 FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND state <> ''idle'' ORDER BY xact_start DESC NULLS LAST, query_start DESC) SELECT * FROM _0;'



\set dbsize 'SELECT datname, pg_size_pretty(pg_database_size(datname)) db_size FROM pg_database ORDER BY pg_database_size(datname) DESC;'

\set tablesize 'SELECT nspname || \'.\' || relname AS \"relation\", pg_size_pretty(pg_relation_size(C.oid)) AS "size" FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname NOT IN (\'pg_catalog\', \'information_schema\') ORDER BY pg_relation_size(C.oid) DESC LIMIT 40;'

\set tablesize1 'WITH _0 AS  (SELECT (nspname || \'.\' || relname) AS rel,    pg_size_pretty(pg_relation_size(C.oid)) AS "size"    FROM pg_class C    LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)    WHERE nspname NOT IN (\'pg_catalog\', \'information_schema\')   ORDER BY pg_relation_size(C.oid) DESC)SELECT * from _0 ;'


\set uselesscol 'SELECT nspname, relname, attname, typname, (stanullfrac*100)::int AS null_percent, case when stadistinct >= 0 then stadistinct else abs(stadistinct)*reltuples end AS \"distinct\", case 1 when stakind1 then stavalues1 when stakind2 then stavalues2 end AS \"values\" FROM pg_class c JOIN pg_namespace ns ON (ns.oid=relnamespace) JOIN pg_attribute ON (c.oid=attrelid) JOIN pg_type t ON (t.oid=atttypid) JOIN pg_statistic ON (c.oid=starelid AND staattnum=attnum) WHERE nspname NOT LIKE E\'pg\\\\_%\' AND nspname != \'information_schema\' AND relkind=\'r\' AND NOT attisdropped AND attstattarget != 0 AND reltuples >= 100 AND stadistinct BETWEEN 0 AND 1 ORDER BY nspname, relname, attname;'


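All of the entries above are ordinary psql variables. Assuming the \set lines live in your ~/.psqlrc, or in a file you load with \i (the file name below is only a placeholder), you can run any of the stored queries from the psql prompt by typing its name after a colon. Since the stored statements end in a semicolon, the colon reference alone is enough to execute them:

\i pgScrips.sql          -- placeholder file name; point it at your copy of these definitions
:settings
:long_running_queries
:unused_indexes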

More info in the Git repo: https://github.com/DiegoDAF/pgScrips