This section contains information about:
- How Datastream handles data that's pulled from a source PostgreSQL database
- The versions of PostgreSQL database that Datastream supports
- An overview of how to set up a source PostgreSQL database so that data can be streamed from it to a destination
- Known limitations for using PostgreSQL database as a source
Behavior
Datastream relies on the logical decoding feature of the source PostgreSQL database. Logical decoding exposes all changes committed to the database and allows these changes to be consumed and processed in a user-friendly format using an output plugin. Datastream uses the pgoutput plugin, which is the standard PostgreSQL logical decoding plugin for PostgreSQL 10 and later.
- All schemas or specific schemas from a given PostgreSQL source, as well as all tables from the schema or specific tables, can be selected.
- All historical data is replicated.
- All data manipulation language (DML) changes, such as inserts, updates, and deletes from the specified databases and tables, are replicated.
- Only committed changes are replicated.
- If you define a REPLICA IDENTITY on a table, Datastream treats the specified columns as primary keys.
- Datastream periodically sends heartbeat messages to the source database. As a result, logical decoding message events (op: "m") are inserted directly into the WAL file. These messages are required by Datastream to ensure source availability and to calculate freshness. We recommend taking that into consideration if other replication setups read from the same source database.
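The following statements are a minimal sketch of how a pgoutput logical decoding slot is created on a self-hosted source. The slot name datastream_slot is a placeholder, and managed services such as Cloud SQL enable logical decoding through their own flags rather than postgresql.conf.

```sql
-- Logical decoding requires wal_level = logical on the source.
SHOW wal_level;

-- Create a logical replication slot that uses the pgoutput plugin.
-- 'datastream_slot' is a hypothetical name; Datastream reads committed
-- changes from a slot like this one.
SELECT pg_create_logical_replication_slot('datastream_slot', 'pgoutput');

-- Inspect existing slots and the plugin each one uses.
SELECT slot_name, plugin, active FROM pg_replication_slots;
```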
Versions
Datastream supports PostgreSQL version 10 and later.
Datastream supports the following types of PostgreSQL database:
- Self-hosted PostgreSQL
- Cloud SQL for PostgreSQL
- AlloyDB for PostgreSQL
- AlloyDB Omni
- Amazon RDS for PostgreSQL
- Amazon Aurora PostgreSQL
Best practices
This section describes recommended best practices for configuring your PostgreSQL source for use with Datastream.
Use multiple streams to prevent head-of-line blocking
For PostgreSQL sources, Datastream uses a single logical replication slot for an entire stream. A large transaction or multiple updates on one high-volume table can delay data replication for all other tables in the same stream.
To prevent head-of-line blocking, create separate streams for different sets of tables. For example, you can create one stream for high-volume tables, and another stream for low-volume tables. This isolates high-churn tables and prevents them from delaying replication for other tables.
Recommendation: identify tables with exceptionally high write
(INSERT/UPDATE/DELETE) rates and place them in their own
dedicated Datastream stream with a separate replication slot.
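As an illustrative sketch (the publication, slot, and table names are hypothetical), one way to prepare separate source objects for two streams is to create one publication and one replication slot per set of tables:

```sql
-- Hypothetical objects for a stream dedicated to high-volume tables.
CREATE PUBLICATION pub_high_volume FOR TABLE public.orders, public.events;
SELECT pg_create_logical_replication_slot('slot_high_volume', 'pgoutput');

-- Hypothetical objects for a stream that carries the low-volume tables.
CREATE PUBLICATION pub_low_volume FOR TABLE public.customers, public.products;
SELECT pg_create_logical_replication_slot('slot_low_volume', 'pgoutput');
```

Each stream is then configured with its own publication and slot, so a backlog on the high-volume slot doesn't delay replication for the other stream.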
Avoid long-running transactions
Long-running transactions can lead to Write-Ahead Log (WAL) buildup. Because WAL is sequential, PostgreSQL can't discard older WAL segments until the long-running transaction completes, even as newer transactions are committed and consumed. This can increase the replication slot size and slow down logical decoding, because changes from long-running transactions that overlap with other transactions must be decoded repeatedly.
Recommendation: on the source database, configure the statement_timeout
and idle_in_transaction_session_timeout parameters to avoid long-running
transactions. For more information, see the
PostgreSQL documentation.
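For example, on a self-hosted instance where you have superuser access, the timeouts might be capped as follows. The values are placeholders; managed services such as Cloud SQL expose these settings as database flags instead.

```sql
-- Abort any single statement that runs longer than one hour.
ALTER SYSTEM SET statement_timeout = '1h';

-- Close sessions that sit idle inside an open transaction for 10 minutes.
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';

-- Reload the configuration so the new values take effect.
SELECT pg_reload_conf();
```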
Use table filtering when creating publications
If you're replicating changes from only a few tables, ensure that you create a
PUBLICATION that includes only those tables. When a publication
is scoped to specific tables, PostgreSQL efficiently persists changes only for
those tables in the replication slot. This helps reduce the size of the
replication slot and improves logical decoding performance.
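As a sketch (the publication and table names are hypothetical), a publication scoped to only the replicated tables might look like this:

```sql
-- Persist changes only for the tables that the stream actually needs.
CREATE PUBLICATION datastream_publication
    FOR TABLE inventory.orders, inventory.customers;

-- The table list can be adjusted later without recreating the publication.
ALTER PUBLICATION datastream_publication ADD TABLE inventory.shipments;
```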
Proactively manage replication slots
Datastream uses a logical replication slot on your PostgreSQL primary instance, which ensures that WAL files are retained until Datastream confirms that they've been processed. If a stream fails, or is paused or deleted, without the replication slot being dropped, PostgreSQL continues to retain WAL files indefinitely. This can fill up your database server disk and lead to a production outage.
Recommendation: Set up efficient alerting and monitor WAL disk usage on your source PostgreSQL server.
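A minimal monitoring sketch, assuming direct SQL access to the source, is to watch how much WAL each slot is retaining and to drop slots that no stream uses anymore (the slot name below is hypothetical):

```sql
-- How much WAL is each replication slot holding back?
SELECT slot_name,
       active,
       pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;

-- Drop a slot that is no longer used by any stream.
SELECT pg_drop_replication_slot('unused_datastream_slot');
```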
Configure replica identity correctly
The REPLICA IDENTITY setting tells PostgreSQL what data to write to the WAL
for UPDATE and DELETE events, allowing Datastream to identify
which rows were changed.
If you use BigQuery as a destination, avoid setting REPLICA IDENTITY
to FULL. Datastream uses the logged columns as a logical key for
BigQuery MERGE operations.
If REPLICA IDENTITY is set to FULL and a table has more than 16 columns,
this exceeds the BigQuery 16-column limit for primary keys in MERGE
operations and breaks the stream.
Recommendations (in order of preference):
- Best: use a primary key. The default setting of REPLICA IDENTITY DEFAULT automatically and efficiently uses the existing primary key.
- Good: if no primary key exists, create a UNIQUE NOT NULL index and set REPLICA IDENTITY USING INDEX INDEX_NAME, as sketched after this list.
- Least recommended: only use the REPLICA IDENTITY FULL setting on tables with no unique identifier. Be aware of the performance impact, the 16-column limit, and the restriction on supported data types for primary keys if replicating to BigQuery.
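The statements below are a rough sketch of the first two options; the table, column, and index names are hypothetical:

```sql
-- Preferred: the table has a primary key, so the default setting is enough.
ALTER TABLE app.orders REPLICA IDENTITY DEFAULT;

-- No primary key: build a unique index over NOT NULL columns and use it
-- as the replica identity.
ALTER TABLE app.order_items ALTER COLUMN order_id SET NOT NULL;
ALTER TABLE app.order_items ALTER COLUMN line_no SET NOT NULL;
CREATE UNIQUE INDEX order_items_identity_idx
    ON app.order_items (order_id, line_no);
ALTER TABLE app.order_items REPLICA IDENTITY USING INDEX order_items_identity_idx;
```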
Known limitations
Known limitations for using Datastream with a PostgreSQL database as a source include:
- Streams are limited to 10,000 tables.
- A table that has more than 500 million rows can't be backfilled unless the following conditions are met:
- The table has a unique B-tree index.
- The index doesn't include columns of the following types: DOUBLE, FLOAT, MONEY, REAL, JSON, JSONB, BYTEA, TXID, XML, composite data types, or geometric data types.
- None of the columns of the index are nullable.
- All columns of the index are in ascending order, or all columns of the index are in descending order.
- All columns of the index are included in the stream.
- Tables without primary keys must have a REPLICA IDENTITY. Otherwise, only INSERT events are replicated to the destination.
- Tables with primary keys can't have the REPLICA IDENTITY set to FULL or NOTHING. It has to be set to DEFAULT.
- Datastream can't replicate from a read replica instance, because PostgreSQL doesn't support logical decoding in read replicas.
- Not all changes to the source schema can be detected automatically, in which case data corruption may occur. The following schema changes may cause data corruption or failure to process the events downstream:
- Dropping columns.
- Adding columns to the middle of a table.
- Changing the data type of a column.
- Reordering columns.
- Dropping tables (relevant if the same table is then recreated with new data added).
- Datastream doesn't support columns of the geometric data types.
- Datastream doesn't support columns of the range data types.
- Datastream doesn't support arrays of unsupported data types, arrays of user-defined data types (including ENUM), or arrays of DATE, TIMESTAMP, or TIMESTAMP WITH TIME ZONE data types. Such columns are ignored.
- Datastream doesn't support replicating UPDATE events for rows which include TOAST values in columns that are part of the table's replica identity. Such events are discarded.
- Datastream doesn't support replicating rows which include JSON or JSONB values with more than 2950 nested objects. Events containing such JSON or JSONB values aren't replicated to the destination database.
- Datastream doesn't support replicating rows which include NaN values in NUMERIC(precision, scale) columns. The values in such columns are replaced with NULL values.
- Datastream doesn't support replicating columns of the hstore data type. The values in such columns are replaced with NULL values.
- Datastream doesn't support replicating non-ASCII records from a SQL_ASCII encoded source database. Such records are discarded.
- Datastream doesn't support replicating tables with Row-Level Security (RLS) policies defined. For information about how to bypass this limitation, see PostgreSQL source behavior and limitations.
- Datastream doesn't capture changes made to generated columns.
- Datastream might stop working or fail to capture new events when a PostgreSQL major version upgrade is performed on the database. We recommend that you drop the replication slots before the upgrade, upgrade the database, and then recreate the replication slots, as sketched below. If a stream fails, recover it by specifying the new replication slot name, and perform a backfill if data consistency is required.
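A minimal sketch of the drop-and-recreate steps, assuming a hypothetical slot name of datastream_slot:

```sql
-- Before the major version upgrade: drop the slot used by the stream.
SELECT pg_drop_replication_slot('datastream_slot');

-- After the upgrade completes: recreate the slot with the pgoutput plugin,
-- then update the stream configuration if the slot name changed.
SELECT pg_create_logical_replication_slot('datastream_slot', 'pgoutput');
```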
What's next
- Learn how to configure a PostgreSQL source for use with Datastream.