Title: PostgreSQL Logical Replication Explained
Site: www.postgresql.fastware.com

Logical replication allows fine-grained control over both data replication and security. In this blog I'll go through the fundamentals of logical replication and some of its use cases. My paper on the internals of logical replication was one of the 27 proposals selected from 120 CFP submissions. During the event, I covered the following topics.

Introduction

Logical replication is a method of replicating data changes from a publisher to a subscriber. The node where a publication is defined is referred to as the publisher; the node where a subscription is defined is referred to as the subscriber. Logical replication allows fine-grained control over both data replication and security.

Logical replication uses a publish-and-subscribe model, with one or more subscribers subscribing to one or more publications on a publisher node. Subscribers pull data from the publications they subscribe to and may subsequently re-publish that data, allowing cascading replication or more complex configurations.

Use cases

- Sending incremental changes in a single database, or a subset of a database, to subscribers as they occur.
- Firing triggers for individual changes as they arrive on the subscriber.
- Consolidating multiple databases into a single one (e.g., for analytical purposes).
- Replicating between different major versions of PostgreSQL.
- Replicating between PostgreSQL instances on different platforms (e.g., Linux to Windows).
- Giving access to replicated data to different groups of users.
- Sharing a subset of the database between multiple databases.

Architecture

Below, I illustrate how logical replication works in PostgreSQL 15.
I will refer back to this diagram later in this post.

Publication

Publications can be defined on the primary node whose changes should be replicated. A publication is a set of changes generated from a table or a group of tables, and might also be described as a change set or replication set. Each publication exists in only one database, and each table can be added to multiple publications if needed.

Publications may currently only contain tables and all tables in a schema. Publications can choose to limit the changes they produce to any combination of INSERT, UPDATE, DELETE, and TRUNCATE, similar to how triggers are fired by particular event types. By default, all operation types are replicated.

When a publication is created, the publication information is added to the pg_publication catalog table:

postgres=# CREATE PUBLICATION pub_alltables FOR ALL TABLES;
CREATE PUBLICATION
postgres=# SELECT * FROM pg_publication;
  oid  |    pubname    | pubowner | puballtables | pubinsert | pubupdate | pubdelete | pubtruncate | pubviaroot
-------+---------------+----------+--------------+-----------+-----------+-----------+-------------+------------
 16392 | pub_alltables |       10 | t            | t         | t         | t         | t           | f
(1 row)

Information about table publications is added to the pg_publication_rel catalog table:

postgres=# CREATE PUBLICATION pub_employee FOR TABLE employee;
CREATE PUBLICATION
postgres=# SELECT oid, prpubid, prrelid::regclass FROM pg_publication_rel;
  oid  | prpubid | prrelid
-------+---------+----------
 16407 |   16406 | employee
(1 row)

Information about schema publications is added to the pg_publication_namespace catalog table:

postgres=# CREATE PUBLICATION pub_sales_info FOR TABLES IN SCHEMA marketing, sales;
CREATE PUBLICATION
postgres=# SELECT oid, pnpubid, pnnspid::regnamespace FROM pg_publication_namespace;
  oid  | pnpubid |  pnnspid
-------+---------+-----------
 16410 |   16408 | marketing
 16411 |   16408 | sales
(2 rows)
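To make the operation filtering concrete, here is a minimal sketch of restricting which operation types a publication replicates (the publication and table names here are hypothetical, not from the examples above):

```sql
-- Replicate only INSERT and UPDATE operations for this table;
-- DELETE and TRUNCATE on the publisher will not be propagated.
CREATE PUBLICATION pub_ins_upd FOR TABLE employee
    WITH (publish = 'insert, update');
```

With this publication, rows deleted on the publisher remain on the subscriber, which can be useful for audit or archive targets.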
Subscription

A subscription is the downstream side of logical replication. It defines the connection to another database and the set of publications (one or more) to which it wants to subscribe. The subscriber database behaves in the same way as any other PostgreSQL instance, and can be used as a publisher for other databases by defining its own publications.

A subscriber node may have multiple subscriptions. It is possible to define multiple subscriptions between a single publisher-subscriber pair, in which case care must be taken to ensure that the subscribed publication objects don't overlap.

Each subscription receives changes via one replication slot. Additional replication slots may be required for the initial synchronization of pre-existing table data; these are dropped at the end of data synchronization.

When a subscription is created, the subscription information is added to the pg_subscription catalog table:

postgres=# CREATE SUBSCRIPTION sub_alltables CONNECTION 'dbname=postgres host=localhost port=5432' PUBLICATION pub_alltables;
NOTICE:  created replication slot "sub_alltables" on publisher
CREATE SUBSCRIPTION
postgres=# SELECT oid, subdbid, subname, subconninfo, subpublications FROM pg_subscription;
  oid  | subdbid |    subname    |               subconninfo                | subpublications
-------+---------+---------------+------------------------------------------+-----------------
 16393 |       5 | sub_alltables | dbname=postgres host=localhost port=5432 | {pub_alltables}
(1 row)

The subscriber will connect to the publisher and get the list of tables that the publisher is publishing.
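A subscription can also be managed after creation. A short sketch of common maintenance commands, using the subscription from the example above:

```sql
ALTER SUBSCRIPTION sub_alltables DISABLE;              -- stop the apply worker
ALTER SUBSCRIPTION sub_alltables ENABLE;               -- start it again
ALTER SUBSCRIPTION sub_alltables REFRESH PUBLICATION;  -- pick up tables newly
                                                       -- added to the publication
```

REFRESH PUBLICATION is worth remembering: tables added to a publication after the subscription was created are not synchronized until it is run.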
In our earlier example, we created pub_alltables to publish data of all tables. The publication relations are added to the pg_subscription_rel catalog table:

postgres=# SELECT srsubid, srrelid::regclass FROM pg_subscription_rel;
 srsubid |    srrelid
---------+----------------
   16399 | accounts
   16399 | accounts_roles
   16399 | roles
   16399 | department
   16399 | employee
(5 rows)

The subscriber connects to the publisher and creates a replication slot, whose information is available in pg_replication_slots:

postgres=# SELECT slot_name, plugin, slot_type, datoid, database, temporary, active, active_pid, restart_lsn, confirmed_flush_lsn FROM pg_replication_slots;
   slot_name   |  plugin  | slot_type | datoid | database | temporary | active | active_pid | restart_lsn | confirmed_flush_lsn
---------------+----------+-----------+--------+----------+-----------+--------+------------+-------------+---------------------
 sub_alltables | pgoutput | logical   |      5 | postgres | f         | t      |      24473 | 0/1550900   | 0/1550938
(1 row)

The subscriber also adds the subscription statistics to pg_stat_subscription:

postgres=# SELECT subid, subname, received_lsn FROM pg_stat_subscription;
 subid |    subname    | received_lsn
-------+---------------+--------------
 16399 | sub_alltables | 0/1550938
(1 row)

The initial part of the CREATE SUBSCRIPTION command completes and returns to the user. The remaining work is done in the background by the replication launcher, walsender, apply worker, and tablesync worker after the CREATE SUBSCRIPTION command has completed.

Processes

Replication launcher

This process is started by the postmaster during the start of the instance. It periodically checks the pg_subscription catalog table to see if any subscriptions have been added or enabled.
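On the publisher side, the same slot can be used to keep an eye on how far the subscriber has replayed. A monitoring sketch (run on the publisher; for a logical subscription, application_name defaults to the subscription name):

```sql
-- Estimate subscriber lag in bytes, per walsender connection.
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
FROM pg_stat_replication;
```

A steadily growing lag_bytes usually means the apply worker cannot keep up, or the subscription is disabled while the slot still retains WAL.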
The logical replication launcher uses the background worker infrastructure to start a logical replication worker for every enabled subscription:

vignesh 24438 /home/vignesh/postgres/inst/bin/postgres -D subscriber
vignesh 24439 postgres: checkpointer
vignesh 24440 postgres: background writer
vignesh 24442 postgres: walwriter
vignesh 24443 postgres: autovacuum launcher
vignesh 24444 postgres: logical replication launcher

Once the launcher process identifies that a new subscription has been created or enabled, it starts an apply worker process. The running apply worker can be seen in the process list:

vignesh 24438 /home/vignesh/postgres/inst/bin/postgres -D subscriber
vignesh 24439 postgres: checkpointer
vignesh 24440 postgres: background writer
vignesh 24442 postgres: walwriter
vignesh 24443 postgres: autovacuum launcher
vignesh 24444 postgres: logical replication launcher
vignesh 24472 postgres: logical replication apply worker for subscription 16399
vignesh 24473 postgres: walsender vignesh postgres 127.0.0.1(55020) START_REPLICATION

The above illustrates step 1 in the Architecture section above.

Apply worker

The apply worker iterates through the table list and launches tablesync workers to synchronize the tables. Each table is synchronized by one tablesync worker; multiple tablesync workers (one per table) run in parallel, up to the max_sync_workers_per_subscription configuration. The apply worker waits until each tablesync worker has copied the initial table data and set the table state to ready in pg_subscription_rel.
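The worker processes described above are bounded by a handful of subscriber-side settings. A sketch of the relevant postgresql.conf knobs; the values are illustrative, not recommendations:

```
max_logical_replication_workers = 4    # apply workers + tablesync workers, in total
max_sync_workers_per_subscription = 2  # parallel tablesync workers per subscription
max_worker_processes = 8               # overall pool; must accommodate the above
```

If these limits are too low, tablesync of many tables proceeds serially and initial synchronization takes correspondingly longer.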
postgres=# SELECT srsubid, srrelid::regclass, srsubstate, srsublsn FROM pg_subscription_rel;
 srsubid |    srrelid     | srsubstate | srsublsn
---------+----------------+------------+-----------
   16399 | accounts       | r          | 0/156B8D0
   16399 | accounts_roles | r          | 0/156B8D0
   16399 | department     | r          | 0/156B940
   16399 | employee       | r          | 0/156B940
   16399 | roles          | r          | 0/156B978
(5 rows)

The above illustrates step 2 in the Architecture section above.

Note: Currently, DDL operations are not supported by logical replication. Only DML changes are replicated.

Tablesync worker

The initial data synchronization is done separately for each table, by a separate tablesync worker. The tablesync worker creates a replication slot with the USE_SNAPSHOT option and copies the existing table data with the COPY command. It then requests the publisher to start replicating data, and synchronizes from the walsender until it reaches the synchronization LSN set by the apply worker.

The above illustrates step 3 in the Architecture section above.

Walsender

The walsender is started when the subscriber connects to the publisher and requests WAL. It reads the WAL record by record, and decodes each record to get the tuple data and size. The changes are queued into the reorder buffer queue, which collects the individual pieces of each transaction in the order they are written to the WAL. When a transaction completes, the walsender reassembles it and calls the output plugin with the changes. If the reorder buffer exceeds logical_decoding_work_mem, the largest transaction is found and evicted to disk; if streaming is enabled, that transaction's data is instead sent to the subscriber, but it is applied on the subscriber only after the transaction is committed on the publisher.
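The single-letter srsubstate codes above can be decoded in a query. A monitoring sketch, assuming the catalog state codes (i = initialize, d = data copy, f = finished copy, s = synchronized, r = ready):

```sql
SELECT srrelid::regclass AS table_name,
       CASE srsubstate
            WHEN 'i' THEN 'initialize'
            WHEN 'd' THEN 'data copy'
            WHEN 'f' THEN 'finished copy'
            WHEN 's' THEN 'synchronized'
            WHEN 'r' THEN 'ready'
       END AS state
FROM pg_subscription_rel;
```

Tables stuck in the data-copy state for a long time are a hint that a tablesync worker is struggling (or that the worker limits discussed earlier are exhausted).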
Once the transaction is committed, the walsender performs the following checks:

- Checks if this relation should be published (based on the ALL TABLES, TABLE list, or TABLES IN SCHEMA list specified in the publication).
- Checks if this operation should be published (based on what the user has specified for the publish option: insert/update/delete/truncate).
- Changes the published relation ID if publish_via_partition_root is set; in this case, the relation ID of the ancestor is sent.
- Checks if this row should be sent, based on the condition specified by the row filter.
- Checks if each column should be sent, based on the column list specified.

The walsender then updates the statistics like txn count, txn bytes, spill count, spill bytes, spill txns, strea
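The row-filter and column-list checks in the list above correspond to publication features added in PostgreSQL 15. A minimal sketch combining both (the table and column names here are hypothetical):

```sql
-- Publish only two columns, and only rows matching the filter.
CREATE PUBLICATION pub_sales_emp
    FOR TABLE employee (emp_id, emp_name)  -- column list: only these columns are sent
    WHERE (department = 'sales');          -- row filter: only matching rows are sent
```

Rows that do not satisfy the WHERE clause, and columns outside the list, never leave the publisher, which is what makes this useful for the security-oriented use cases mentioned at the start of the post.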