
PostgreSQL 9.4 : Log shipping Standby Server setup (Step-by-step)


Consider the following servers:
  • Master (Production site, 10.88.66.225): The actual production database server.
  • Storage Server (10.88.66.27): The Master writes archived WAL files to the Storage server, from where the Slave picks them up and restores them.
  • Slave (Hot Standby Server, 10.88.66.28): The Slave keeps restoring archived WAL files, so it is an exact replica of the production site. It can be used for read-only queries, and if the production server crashes, the Slave takes over operations.
Steps for Replication setup:
    • Install the PostgreSQL binaries on both the Master and Slave servers. Make sure the disk mount points for the data directory and tablespace directories are the same on both servers.
    • Set up passwordless SSH for the postgres user between the Master, Storage, and Slave servers, so that they can copy files across the network.
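The passwordless SSH step can be sketched as below. This is a minimal sketch, not the only way to do it: the key path is the default for the postgres user, the IPs are the ones from this setup, and the `ssh-copy-id` lines are left commented because they prompt for the remote password once and must be run interactively.

```shell
# Run as the postgres OS user on each server that needs to copy files.
KEYFILE="$HOME/.ssh/id_rsa"
mkdir -p "$(dirname "$KEYFILE")"
chmod 700 "$(dirname "$KEYFILE")"
# Generate a key pair only if one does not exist yet (no passphrase).
[ -f "$KEYFILE" ] || ssh-keygen -q -t rsa -b 2048 -N "" -f "$KEYFILE"
# Then push the public key to each peer (run once, interactively):
# ssh-copy-id postgres@10.88.66.27   # Storage
# ssh-copy-id postgres@10.88.66.28   # Slave
```

After this, `ssh postgres@10.88.66.27` from the Master should log in without a password prompt.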
    • Storage setup:
      • Mount disks and create the storage directory.
      • Grant read-write permission to the postgres user.
      • Export this storage directory over NFS and grant access to the Master and Slave.
      • Mount the shared directory on the Master and Slave.
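As a sketch, the NFS pieces might look like the following. The export path /storage/archive on the Storage server is a hypothetical name for illustration; the /mnt/nfs/archive mount point matches the archive_command and restore_command used later in this post.

```
# /etc/exports on the Storage server (10.88.66.27):
/storage/archive  10.88.66.225(rw,sync,no_root_squash)  10.88.66.28(rw,sync,no_root_squash)

# After editing, re-export on the Storage server:
#   exportfs -ra
# Then mount on both the Master and the Slave:
#   mkdir -p /mnt/nfs/archive
#   mount -t nfs 10.88.66.27:/storage/archive /mnt/nfs/archive
```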
    • Master Configurations:
      • Update the iptables rules to allow TCP connections on port 5432 (the postmaster service port).
      • Update /etc/sysconfig/selinux to disable SELinux.
      • Update postgresql.conf with the following parameters:
        listen_addresses = '*'
        wal_level = hot_standby
        archive_mode = on
        archive_command = 'test -s %p && gzip < %p > /mnt/nfs/archive/%f'
        archive_timeout = 60
        max_wal_senders = 3
        wal_keep_segments = 32

      • Update pg_hba.conf and add the following lines, substituting the actual client and Slave IP addresses:
        host all postgres RemoteIP trust
        host replication postgres SlaveIP trust

      • Restart the postmaster service. The database will now be in archive mode and will start copying WAL files to the archive location.
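To see what the archive_command above actually does, here is a hypothetical dry run under /tmp instead of the real WAL directory and NFS mount. PostgreSQL substitutes %p (path to the WAL segment) and %f (its file name) at archive time; the file and directory names below are made up for illustration.

```shell
ARCHIVE=/tmp/archive_demo                 # stands in for /mnt/nfs/archive
mkdir -p "$ARCHIVE"
p=/tmp/000000010000000000000001           # stands in for a WAL segment (%p)
f=$(basename "$p")                        # what PostgreSQL passes as %f
printf 'wal-data' > "$p"
# The same pipeline as archive_command: skip empty files, gzip into the archive.
test -s "$p" && gzip < "$p" > "$ARCHIVE/$f"
gunzip < "$ARCHIVE/$f"                    # prints the original contents
```

Note that the archived copy keeps the original WAL file name (no .gz suffix), which is why the restore_command on the Slave can ask for %f directly.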
    • Slave Configurations:
      • Mount the shared storage directory on the Slave.
      • Stop the postgres database.
      • Move or delete the default data directory contents created during installation. Make sure the data directory is empty, owned by the postgres user, and has permission mode 700.
      • Take a base backup from the Master node and restore it into the data directory:
        pg_basebackup -h 10.88.66.225 -U postgres -D /opt/PostgreSQL/9.4/data -R
      • Update postgresql.conf with the following values:
        listen_addresses = '*'
        hot_standby = on
      • Update pg_hba.conf and add the following lines, substituting the actual IP addresses:
        host all postgres RemoteIP trust
        host replication postgres targetIP trust
      • Update recovery.conf with the following values:
        standby_mode = 'on'
        recovery_target_timeline = 'latest'
        restore_command = 'test -s /mnt/nfs/archive/%f && gunzip < /mnt/nfs/archive/%f > %p'
        archive_cleanup_command = 'pg_archivecleanup /mnt/nfs/archive %r'
      • Start the postgres database. It will start restoring WAL files from the archive storage location.
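The restore_command can be dry-run the same way as the archive_command, again under /tmp with made-up file names. During recovery PostgreSQL substitutes %f (the WAL file it wants) and %p (the path it wants the file restored to).

```shell
ARCHIVE=/tmp/restore_demo                 # stands in for /mnt/nfs/archive
mkdir -p "$ARCHIVE"
f=000000010000000000000002                # what PostgreSQL passes as %f
p=/tmp/restored_wal                       # what PostgreSQL passes as %p
printf 'wal-data' | gzip > "$ARCHIVE/$f"  # what the Master's archive_command wrote
# The same pipeline as restore_command: restore only if the archived copy exists.
test -s "$ARCHIVE/$f" && gunzip < "$ARCHIVE/$f" > "$p"
cat "$p"                                  # prints the original contents
```

Once the Slave is up, `SELECT pg_is_in_recovery();` there should return `t`, and on the Master the pg_stat_archiver view (new in 9.4) shows how many WAL files have been archived and whether any archive attempts failed.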
