To vacuum and analyze the full database schema, you can use the following shell script:
--------------------------------------------------------------------------------------------------------------------------
#!/bin/sh
rm -f /tmp/vacuum_analyze_MyData.sql
dbname="mydata"
username="mydatadatamart"
namespace="mydatadatamart"

# Vacuum only those tables which are fragmented over 30%.
# Analyze tables only if they have not already been analyzed.
# Process the CMF and Groups master tables every time, as they are master tables.
Tables=`psql $dbname $username << EOF
SELECT 'VACUUM ANALYZE VERBOSE ' || tablename || ' ;' AS TableList
FROM (
    SELECT *,
           n_dead_tup > av_threshold AS "av_needed",
           CASE WHEN reltuples > 0
                THEN round(100.0 * n_dead_tup / (reltuples))
                ELSE 0
           END AS pct_dead
    FROM (SELECT C.relname AS TableName,
                 pg_stat_get_tuples_inserted(C.oid) AS n_tup_ins,
                 pg_stat_get_tuples_updated(
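The bloat check at the heart of the script can also be run as a standalone query against the standard pg_stat_user_tables view. This is a sketch, not part of the script above: the 30% idea and the column names (relname, n_live_tup, n_dead_tup) are standard PostgreSQL, but the pct_dead alias and ordering are illustrative.

```sql
-- List user tables by the share of dead tuples (highest first).
-- NULLIF avoids division by zero on empty tables.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 1) AS pct_dead
FROM pg_stat_user_tables
WHERE n_dead_tup > 0
ORDER BY pct_dead DESC NULLS LAST;
```

Tables near the top of this list (e.g. above 30% dead tuples) are the ones the script would target with VACUUM ANALYZE.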
1. Tool: pg_stat_statements

   a. Enabling: to enable statement logging, add the following entries to postgresql.conf and restart the database:

      shared_preload_libraries = 'pg_stat_statements'
      pg_stat_statements.max = 10000
      pg_stat_statements.track = all

      Then log in to the database as a superuser and issue the following command:

      CREATE EXTENSION pg_stat_statements;

   b. Monitoring: the following SQL statement shows the latest SQL statements executed on the database server:

      SELECT * FROM pg_stat_statements;

   c. Space & recommendation: since pg_stat_statements.max is set to 10000, Postgres keeps the latest 10,000 SQL statements and flushes older ones, so the view's size remains under control. This can be enabled on a production system, as it has little performance cost.

2. Tool: auto_explain

   a. Enabling: to enable execution-plan logging, add the following entries to postgresql.conf and restart the database:

      shared_preload_libraries = 'auto_explain'
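In practice, `SELECT * FROM pg_stat_statements` (item 1b above) is usually narrowed down to the most expensive statements. The query below is a sketch using standard pg_stat_statements columns; note that total_time/mean_time were renamed total_exec_time/mean_exec_time in PostgreSQL 13, so adjust the column names to your server version.

```sql
-- Top 10 statements by cumulative execution time (PostgreSQL 13+ column names).
SELECT query,
       calls,
       total_exec_time,
       mean_exec_time,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Sorting by calls instead highlights chatty statements, and pg_stat_statements_reset() clears the view to start a fresh measurement window.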