We are planning a significant migration from a PostgreSQL 17 cluster managed by repmgr and fronted by HAProxy, to a new PostgreSQL 18 cluster utilizing pg_auto_failover for automated High Availability. Our target environment is geo-distributed across multiple datacenters.
We are currently using logical replication to facilitate the data migration between the PG17 and PG18 clusters.
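For context, the migration path uses PostgreSQL's native logical replication; roughly the following (publication/subscription names and the DSN are placeholders, not our actual values):

```sql
-- Sketch of the PG17 → PG18 logical replication setup.

-- On the PostgreSQL 17 primary:
CREATE PUBLICATION pg18_migration FOR ALL TABLES;

-- On the PostgreSQL 18 primary (the schema must already exist there,
-- since logical replication does not copy DDL):
CREATE SUBSCRIPTION pg18_migration_sub
    CONNECTION 'host=pg17-primary.example port=5432 dbname=appdb user=repl'
    PUBLICATION pg18_migration;
```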
The Core Requirement: Application-Agnostic Read/Write Splitting
Our core constraint is that we have numerous legacy client applications whose database connection strings cannot be changed. We need a connection proxy that achieves the following goals while presenting a single endpoint (host:port):
- Read/Write (R/W) Splitting: Direct all write traffic (INSERT, UPDATE, DELETE, and DDL) to the primary node.
- Read Load Balancing: Distribute SELECT statements across all available synchronous secondary nodes to maximize resource utilization and improve read performance.
- HA Decoupling: The proxy must gracefully handle pg_auto_failover promotions/demotions without application interruption.
Architectural Challenge: Stateless Proxying and Tooling
We plan to deploy multiple proxy instances (behind a Docker Swarm/Kubernetes service), making them stateless and relying on the external orchestrator for the proxy's own HA. This means the chosen proxy must support running without its own internal clustering mechanisms enabled.
We are actively seeking advice on the optimal architectural approach for this scenario.
Questions for the Community:
Preferred R/W Splitting Architecture & Technology: Given the constraints of immutable client connection strings, the use of pg_auto_failover, and the requirement for a stateless proxy, what is the community's preferred architectural pattern and proxy technology for achieving reliable R/W splitting? Specifically, how do solutions that directly support R/W splitting (like Pgpool-II in stateless mode) compare against simpler poolers that require external routing logic (like PgBouncer)?
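For reference, the kind of stateless Pgpool-II setup we have in mind looks roughly like the fragment below. Hostnames are placeholders, parameter names are per recent Pgpool-II releases (please correct us if any are wrong for your version), and the watchdog is deliberately off so the orchestrator alone handles the proxy's HA:

```ini
# pgpool.conf sketch (placeholders, not a tested configuration)
backend_clustering_mode = 'streaming_replication'
listen_addresses = '*'
port = 5432

backend_hostname0 = 'pg-primary.example'   # to be rewritten by automation
backend_port0 = 5432
backend_weight0 = 0                        # weight 0: no SELECTs on primary

backend_hostname1 = 'pg-replica1.example'
backend_port1 = 5432
backend_weight1 = 1

load_balance_mode = on
statement_level_load_balance = on          # pick a backend per statement

use_watchdog = off                         # proxy HA delegated to the orchestrator
```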
Read Consistency vs. Load Balancing: What advanced configurations, health checks, or session management patterns do you use with your chosen proxy to ensure secondaries are fully utilized while maintaining strict read-after-write consistency?
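To make the consistency requirement concrete, one gating pattern we are weighing is LSN-based freshness checks: capture pg_current_wal_lsn() on the primary after a write, and only route that session's reads to a standby whose pg_last_wal_replay_lsn() has caught up. A minimal sketch of the comparison logic (the helper names are ours, not from any library):

```python
# Sketch: compare PostgreSQL LSNs to decide whether a standby has
# replayed past a client's last-write position (read-after-write gating).
# LSNs are "hi/lo" hex strings, e.g. "0/16B3748": hi is the upper
# 32 bits, lo the lower 32 bits of a 64-bit WAL position.

def lsn_to_int(lsn: str) -> int:
    """Convert a pg_lsn text value into a comparable 64-bit integer."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def standby_is_fresh(last_write_lsn: str, standby_replay_lsn: str) -> bool:
    """True if the standby has replayed at least up to the write LSN.

    In practice last_write_lsn would come from pg_current_wal_lsn() on
    the primary right after the write, and standby_replay_lsn from
    pg_last_wal_replay_lsn() on the candidate standby."""
    return lsn_to_int(standby_replay_lsn) >= lsn_to_int(last_write_lsn)

# Standby still behind the write position:
print(standby_is_fresh("0/16B3748", "0/16B3740"))  # False
# Standby well past it:
print(standby_is_fresh("0/16B3748", "1/0"))        # True
```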
Automation & Integration: What recommended tools or scripts are used to seamlessly monitor the state reported by the pg_auto_failover monitor and dynamically update the chosen proxy's backend configuration in real-time?
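On the automation side, the pattern we are considering is a sidecar that polls `pg_autoctl show state --json` and regenerates the proxy's backend list whenever roles change. A rough sketch of the parsing step follows; note that the JSON field names below are our assumption for illustration and should be checked against the actual output of your pg_autoctl version:

```python
import json

# Assumed (not documented-contract) shape of `pg_autoctl show state --json`:
sample_state = json.loads("""
[
  {"name": "node_1", "host": "pg1.dc1.example", "port": 5432,
   "reported_state": "primary"},
  {"name": "node_2", "host": "pg2.dc2.example", "port": 5432,
   "reported_state": "secondary"},
  {"name": "node_3", "host": "pg3.dc2.example", "port": 5432,
   "reported_state": "catchingup"}
]
""")

def split_backends(nodes):
    """Return (primary, read_replicas) as host:port tuples.

    Only nodes reporting 'secondary' are eligible for read load
    balancing; nodes in transitional states (e.g. 'catchingup')
    are excluded from both lists."""
    primaries = [(n["host"], n["port"]) for n in nodes
                 if n["reported_state"] == "primary"]
    replicas = [(n["host"], n["port"]) for n in nodes
                if n["reported_state"] == "secondary"]
    return (primaries[0] if primaries else None), replicas

primary, replicas = split_backends(sample_state)
print(primary)   # ('pg1.dc1.example', 5432)
print(replicas)  # [('pg2.dc2.example', 5432)]
```

From here the sidecar would render the proxy's backend configuration from these tuples and trigger a reload.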
Any architectural insights regarding the seamless interaction between pg_auto_failover and the chosen proxy's backend node detection would be greatly appreciated.