Upgrade PostgreSQL 9.1 to 9.3 - Step by Step

A PostgreSQL upgrade can be performed in either of the following two ways:
A. using pg_upgrade
B. using pg_dump and pg_restore
The detailed steps are explained below.
A)  pg_upgrade from 9.1 to 9.3
1. Take full system downtime and stop streaming replication.
2. Take a full dump of the database from the old 9.1 version, as a safety backup.
3. Install the new PostgreSQL 9.3 version under a new location, say /usr/lib/postgresql/9.3/.
4. Stop both (v9.1 and v9.3) database clusters:
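The safety backup in step 2 can be taken with pg_dumpall; the port, user, and output path below are assumptions to adjust for your setup:

```shell
# Full-cluster safety backup from the old 9.1 server before upgrading
# (port 5432, user postgres, and the backup path are assumed values)
/usr/lib/postgresql/9.1/bin/pg_dumpall -U postgres -p 5432 > /var/backups/pg91_full_backup.sql
```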
$/usr/lib/postgresql/9.1/bin/pg_ctl -D /var/lib/postgresql/9.1/main stop
$/usr/lib/postgresql/9.3/bin/pg_ctl -D /var/lib/postgresql/9.3/main stop
5. Run pg_upgrade
  • Please make sure that port 50432 is free and no other process is using it; pg_upgrade uses 50432 by default.
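A quick way to confirm the port is free (this sketch assumes a Linux host with ss from iproute2 available):

```shell
# Check whether anything is already listening on pg_upgrade's default port 50432
if ss -lnt 2>/dev/null | grep -q ':50432 '; then
    echo "port 50432 is in use"
else
    echo "port 50432 is free"
fi
```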
  • Command format
pg_upgrade -b oldbindir -B newbindir -d olddatadir -D newdatadir [option...]
  • Considering the old 9.1 and new 9.3 database directories:
    /usr/lib/postgresql/9.3/bin/pg_upgrade -b /usr/lib/postgresql/9.1/bin -B /usr/lib/postgresql/9.3/bin -d /var/lib/postgresql/9.1/main/ -D /var/lib/postgresql/9.3/main/ -O "-c config_file=/etc/postgresql/9.3/main/postgresql.conf" -o "-c config_file=/etc/postgresql/9.1/main/postgresql.conf"
6. Restore pg_hba.conf from old installation to new.
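For step 6, copying the file over is enough; the Debian/Ubuntu-style config paths below are assumptions:

```shell
# Carry the client authentication rules over to the new cluster
cp /etc/postgresql/9.1/main/pg_hba.conf /etc/postgresql/9.3/main/pg_hba.conf
```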
7. Start new v9.3 database.
8. Post-Upgrade processing: If any post-upgrade processing is required, pg_upgrade will issue warnings as it completes. It will also generate script files that must be run by the administrator. The script files will connect to each database that needs post-upgrade processing. Each script should be run using:
psql --username postgres --file script.sql postgres
9. Statistics: Because optimizer statistics are not transferred by pg_upgrade, you will be instructed to run a command to regenerate that information at the end of the upgrade. You might need to set connection parameters to match your new cluster.
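pg_upgrade generates an analyze_new_cluster.sh script for this; alternatively, statistics can be regenerated directly with vacuumdb (user and binary path below are assumptions):

```shell
# Rebuild optimizer statistics for all databases in the new 9.3 cluster
/usr/lib/postgresql/9.3/bin/vacuumdb --all --analyze-only -U postgres
```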
10. Configure streaming replication.
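On a standby rebuilt against the upgraded primary, replication in the 9.x series is driven by a recovery.conf file; a minimal sketch, where the primary host, port, and replication user are placeholders:

```shell
# Write a minimal recovery.conf for a 9.x standby (all connection values are placeholders)
cat > /tmp/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
EOF
grep standby_mode /tmp/recovery.conf
```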
B)  pg_dump from 9.1 and pg_restore into 9.3
1. Take full system downtime and stop streaming replication.
2. Take a full database dump from the old 9.1 server.
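Since step 6 uses pg_restore, the dump should be taken in custom format with pg_dump, one database at a time; the database name and output path below are assumptions:

```shell
# Custom-format dump of one database from the 9.1 server (required for pg_restore)
/usr/lib/postgresql/9.1/bin/pg_dump -U postgres -Fc mydb > /var/backups/mydb_91.dump
```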
3. Stop the 9.1 server.
4. Install the new version 9.3.
5. Create all required roles and databases.
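One way to recreate the roles is to dump the globals from 9.1 and replay them on 9.3; the port of the new server below is an assumption:

```shell
# Dump roles and other globals from 9.1, then replay them on the new 9.3 server
/usr/lib/postgresql/9.1/bin/pg_dumpall -g -U postgres > /var/backups/globals_91.sql
/usr/lib/postgresql/9.3/bin/psql -U postgres -p 5433 -f /var/backups/globals_91.sql
```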
6. Restore the 9.1 database dump on 9.3.
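Continuing the sketch above (database name, port, and dump path remain assumptions):

```shell
# Create the target database on 9.3, then restore the custom-format dump into it
/usr/lib/postgresql/9.3/bin/createdb -U postgres -p 5433 mydb
/usr/lib/postgresql/9.3/bin/pg_restore -U postgres -p 5433 -d mydb /var/backups/mydb_91.dump
```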
7. Restore pg_hba.conf from the old installation to the new one.
pg_upgrade is able to carry over all required roles and permissions exactly as on the old database server, so no manual intervention is required in that case, whereas with dump-and-restore all databases, user roles, and permissions need to be recreated manually. Hence we recommend using pg_upgrade.
Before doing the upgrade in production, we recommend trying it on a test setup first. That will also give you an idea of the time required to perform the upgrade.
