
Enable Row Movement in ASM (Auto Shrinking in Oracle)


Well, like all other Oracle shops, we too were facing the classical problem of data fragmentation, on an OLTP system handling 2,500+ concurrent users, with a daily archival process deleting around 3.5K+ master rows plus their data in 60 child tables that are 2 to 7 times bigger in proportion.
So many concurrent operations plus the daily data deletion were causing data fragmentation and index badness, and we had to run a shrinking activity periodically. That started my investigation: how can we do this automatically? Can Oracle take care of data fragmentation by shrinking on its own? Can't DML operations de-fragment the underlying data?

I have ASM as the underlying data storage. After investigating and talking with Tom Kyte, I found that keeping row movement enabled is the solution Oracle has provided in version 9i and above.
Enabling row movement simply gives Oracle permission to change a row's address (ROWID) when an Insert/Update DML operation executes. Oracle checks whether it can move the current rows into blocks higher up in the segment wherever space is available, shrinking the data automatically; it skips the move if it would be too expensive or no space is available in those blocks. This approach of Oracle's looks safe to me!
If so, then why is row movement disabled by default? Tom says: "In versions older than Oracle 9i this feature wasn't there, and the ROWID used to be constant across the life of a row. From 9i onwards we are able to move rows online. But changing something that is a default is not good, and it would be misleading."
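For reference, enabling row movement is a single DDL statement; a minimal sketch against one of the test tables described below (once enabled, an explicit online shrink also becomes possible from 10g onwards, provided the segment lives in an ASSM tablespace):

-- Allow Oracle to change ROWIDs for this table (9i and above).
ALTER TABLE BigTable_ERM ENABLE ROW MOVEMENT;

-- With row movement enabled, a manual online shrink is also allowed
-- (10g onwards, ASSM tablespace required):
ALTER TABLE BigTable_ERM SHRINK SPACE CASCADE;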

Before applying the change on production, I had to test it and prove to my DBA that it would solve the problem and at the very least not add any negative effect… a big challenge!

To prove this I ran the following test:
1. Created two big tables with the same data (9,228,649 rows) and the same indexes:
a. BigTable_DRM : with row movement disabled
b. BigTable_ERM : with row movement enabled
2. Inserted the same data and captured the segment stats before any DML operations (using IndexAndTablefragmentationStats.sql).
3. Fired 4 parallel sessions as follows, operating on the same key values in both tables and performing concurrent inserts and deletes (somewhat like what we have in production; a sketch of these loops follows the list).
a. Session 1 : delete data from BigTable_DRM in chunks of 1,000 key values; 600,000 key values and 170K+ rows in total.
b. Session 2 : delete data from BigTable_ERM in chunks of 1,000 key values; 600,000 key values and 170K+ rows in total.
c. Session 3 : insert data into BigTable_DRM, 1 to 10 rows in bulk each time in a loop; 600,000 key values and 170K+ rows in total.
d. Session 4 : insert data into BigTable_ERM, 1 to 10 rows in bulk each time in a loop; 600,000 key values and 170K+ rows in total.
So this deletes and inserts the same number of rows concurrently.
4. I ran 2 such passes and gathered segment stats using IndexAndTablefragmentationStats.sql after each pass.
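For illustration, here is a minimal sketch of the loops each session ran (key_value and payload are hypothetical column names; the real harness used the 1,000-key chunking described above):

-- Sessions 1 and 2: delete in chunks of 1,000 key values
-- (run against BigTable_DRM and BigTable_ERM respectively).
BEGIN
   FOR i IN 1 .. 600 LOOP
      DELETE FROM BigTable_ERM
       WHERE key_value BETWEEN (i - 1) * 1000 + 1 AND i * 1000;
      COMMIT;
   END LOOP;
END;
/

-- Sessions 3 and 4: insert 1 to 10 rows in bulk on each iteration.
BEGIN
   FOR i IN 1 .. 600000 LOOP
      INSERT INTO BigTable_ERM (key_value, payload)
         SELECT i, 'some data'
           FROM DUAL
        CONNECT BY LEVEL <= MOD(i, 10) + 1;
      IF MOD(i, 1000) = 0 THEN
         COMMIT;
      END IF;
   END LOOP;
   COMMIT;
END;
/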

Summary of the test:
1. Average rows per block in the tables before and after the DML operations:

Option                        Before any change   After pass 1   After pass 2
With Disabled Row Movement    75.958              75.5825        50.501
With Enabled Row Movement     75.958              76.663         52.544

2. Index Stats :
a. All indexes show the same badness in both cases.

As per my test and analysis above:
With row movement enabled, the average number of rows per block is higher than with it disabled. With more concurrent operations in a real-time system, I believe this difference will be considerable. If we keep row movement enabled, it may take care of automatic segment shrinking and should leave less data fragmentation in ASM.
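Whether the setting is actually on can always be confirmed from the data dictionary, for example:

SELECT table_name, row_movement
  FROM user_tables
 WHERE table_name IN ('BIGTABLE_ERM', 'BIGTABLE_DRM');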
Let’s see how it goes in the live production environment…
IndexAndTablefragmentationStats.sql

--=======================================================================
-- Santosh Kangane
-- Script : 1. Get table fragmentation statistics
--          2. Get index fragmentation statistics
--=======================================================================
-- Table Stats  
--=======================================================================
--1) Row Distribution : rows per (file, block) pair; low counts indicate sparse blocks
SELECT fno, bno, COUNT (*) rows_per_block
  FROM (SELECT DBMS_ROWID.rowid_relative_fno (ROWID) AS fno,
               DBMS_ROWID.rowid_block_number (ROWID) AS bno
          FROM BigTable_ERM) -- BigTable_DRM
 GROUP BY fno, bno;
--2) Summary : rows-per-block distribution in 20 buckets (NTILE), from the emptiest to the fullest blocks
--=======================================================================
select
    twentieth,
    min(rows_per_block)            min_rows,
    max(rows_per_block)            max_rows,
    sum(block_ct)                tot_blocks,
    sum(row_total)                tot_rows,
    round(sum(row_total)/sum(block_ct),2)    avg_rows
from
    (
    select
        ntile(20) over (order by rows_per_block) twentieth,
        rows_per_block,
        count(*)            block_ct,
        rows_per_block * count(*)    row_total
    from
        (
        select
            fno, bno, count(*) rows_per_block
        from
            (
            select
                dbms_rowid.rowid_relative_fno(rowid)    as fno,
                dbms_rowid.rowid_block_number(rowid)    as bno
            from
                BigTable_ERM -- BigTable_DRM
            )
        group by
            fno, bno
        )
    group by
        rows_per_block
    order by
        rows_per_block
    )
group by
    twentieth
order by
    twentieth; 
--=======================================================================
--                   Index Stats
--=======================================================================
-- Validate indexes.
-- For each of the following indexes, run the VALIDATE and then execute the
-- reporting SQL below, noting the stats each time (INDEX_STATS holds data
-- only for the most recently validated index).
validate index BigTable_ERM_Index1;
validate index BigTable_ERM_Index2;
validate index BigTable_ERM_Index3;
validate index BigTable_DRM_Index1;
validate index BigTable_DRM_Index2;
validate index BigTable_DRM_Index3;
-- Report index fragmentation statistics (badness = deleted leaf rows / total
-- leaf rows; the +0.00001 guards against division by zero).
select  
  name as index_name,  
  to_char(del_lf_rows,'999,999,990')  as leaf_rows_deleted,  
  to_char(lf_rows-del_lf_rows,'999,999,990') as leaf_rows_in_use,  
  to_char(del_lf_rows/(lf_rows+0.00001),'999,990.999') as index_badness  
from  
  index_stats;
-- Report detailed index statistics.
select 
  name as index_name,
  to_char(height,     '999,999,990') as  height, 
  to_char(blocks,     '999,999,990')     blocks, 
  to_char(del_lf_rows,'999,999,990')     del_lf_rows, 
  to_char(del_lf_rows_len,'999,999,990') del_lf_rows_len, 
  to_char(distinct_keys,'999,999,990')   distinct_keys, 
  to_char(most_repeated_key,'999,999,990') most_repeated_key, 
  to_char(btree_space,'999,999,990')       btree_space, 
  to_char(used_space,'999,999,990')        used_space, 
  to_char(pct_used,'990')          pct_used, 
  to_char(rows_per_key,'999,999,990')      rows_per_key, 
  to_char(blks_gets_per_access,'999,999,990') blks_gets_per_access, 
  to_char(lf_rows,    '999,999,990') as lf_rows, 
  to_char(br_rows,    '999,999,990') as br_rows, 
  to_char(lf_blks,    '999,999,990') as lf_blks, 
  to_char(br_blks,    '999,999,990') as br_blks, 
  to_char(lf_rows_len,'999,999,990') as lf_rows_len, 
  to_char(br_rows_len,'999,999,990') as br_rows_len, 
  to_char(lf_blk_len, '999,999,990') as lf_blk_len, 
  to_char(br_blk_len, '999,999,990') as br_blk_len 
from 
  index_stats;  
--=======================================================================
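The validate-then-report cycle above can also be scripted; a hedged sketch (the loop and output format are my own, not part of the original script):

SET SERVEROUTPUT ON
DECLARE
   v_badness NUMBER;
BEGIN
   FOR idx IN (SELECT index_name
                 FROM user_indexes
                WHERE table_name IN ('BIGTABLE_ERM', 'BIGTABLE_DRM'))
   LOOP
      -- ANALYZE ... VALIDATE STRUCTURE is the PL/SQL-friendly equivalent of
      -- VALIDATE INDEX; it refreshes INDEX_STATS for the current session.
      EXECUTE IMMEDIATE 'ANALYZE INDEX ' || idx.index_name
                        || ' VALIDATE STRUCTURE';
      SELECT del_lf_rows / (lf_rows + 0.00001)
        INTO v_badness
        FROM index_stats;
      DBMS_OUTPUT.put_line(RPAD(idx.index_name, 32)
                           || ' badness: ' || ROUND(v_badness, 3));
   END LOOP;
END;
/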
