Like many other Oracle shops, we face the classic problem of data fragmentation on an OLTP system handling 2,500+ concurrent users. A daily archival process deletes around 3,500+ master rows plus their data in 60 child tables, which are roughly 2 to 7 times bigger.
So much concurrent activity combined with the daily deletes was causing data fragmentation and index badness, and we had to run a shrink activity periodically.
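For reference, a typical manual shrink cycle in Oracle looks roughly like the following (the object names are illustrative, not our real schema); note that an online shrink itself already requires row movement to be enabled on the table:

-- Illustrative manual shrink cycle (object names are examples only)
ALTER TABLE BigTable ENABLE ROW MOVEMENT;
ALTER TABLE BigTable SHRINK SPACE CASCADE;   -- compacts the table and its dependent segments
ALTER INDEX BigTable_Index1 SHRINK SPACE;    -- or ALTER INDEX ... REBUILD ONLINE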
That got me investigating: how can we make this automatic? Can Oracle take care of data fragmentation by shrinking automatically? Can't DML operations de-fragment the underlying data themselves?
Our underlying storage is ASM. After some investigation and discussion with Tom Kyte, I found that keeping row movement enabled is the solution Oracle has provided in version 9i and above.
Enabling row movement simply gives Oracle permission to change a row's address (ROWID) when an INSERT/UPDATE DML operation executes. Oracle will check whether it can move current rows into upper blocks wherever space is available and shrink the data automatically, and it will skip the move if it is too expensive or no space is available in upper blocks. This approach looks safe to me!
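Enabling it is a one-line change per table, and the setting can be checked in the data dictionary (the table name below is just the one from my test):

-- Enable row movement (table name from the test below; adjust for your own tables)
ALTER TABLE BigTable_ERM ENABLE ROW MOVEMENT;

-- Verify the setting
SELECT table_name, row_movement
  FROM user_tables
 WHERE table_name = 'BIGTABLE_ERM';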
If so, then why is row movement disabled by default? Tom's answer (paraphrasing): "In versions older than Oracle 9i this feature wasn't there and the ROWID used to be constant across the life of a row. From 9i onward we are able to move rows online, but changing a long-standing default is not good and is misleading."
Before applying the change in production, I had to test it and prove to my DBA that it will solve the problem and, at the very least, will not add any negative effect... Big challenge!
To prove this I ran the following test:
1. Created two big tables with the same data (9,228,649 rows) and the same indexes:
   a. BigTable_DRM : row movement disabled
   b. BigTable_ERM : row movement enabled
2. Inserted the same data and captured the segment stats before any DML operations (using IndexAndTablefragmentationStats.sql).
3. Fired 4 parallel sessions as follows, operating on the same key values in both tables and performing concurrent inserts and deletes on the same table, somewhat like we have in production (a sketch of the chunked delete and bulk insert loops appears after this list):
   a. Session 1 : delete data from BigTable_DRM in chunks of 1,000 key values; 600,000 key values and 170K+ rows in total.
   b. Session 2 : delete data from BigTable_ERM in chunks of 1,000 key values; 600,000 key values and 170K+ rows in total.
   c. Session 3 : insert data into BigTable_DRM, 1 to 10 rows in bulk per loop iteration; 600,000 key values and 170K+ rows in total.
   d. Session 4 : insert data into BigTable_ERM, 1 to 10 rows in bulk per loop iteration; 600,000 key values and 170K+ rows in total.
   So the same number of rows is deleted and inserted concurrently.
4. I ran 2 such passes and gathered segment stats using IndexAndTablefragmentationStats.sql after each pass.
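A minimal PL/SQL sketch of the two session types. The column names (key_value, payload), loop bounds, and row-generation logic are placeholders, not the real production harness:

-- Sketch only: delete session, removing rows in chunks of 1,000 key values
BEGIN
  FOR chunk IN 1 .. 600 LOOP
    DELETE FROM BigTable_ERM                  -- BigTable_DRM in the other delete session
     WHERE key_value BETWEEN (chunk - 1) * 1000 + 1 AND chunk * 1000;
    COMMIT;                                   -- commit per chunk
  END LOOP;
END;
/

-- Sketch only: insert session, adding 1 to 10 rows in bulk per iteration
BEGIN
  FOR k IN 1 .. 600000 LOOP
    INSERT INTO BigTable_ERM (key_value, payload)
      SELECT k, RPAD('x', 100, 'x')
        FROM dual
      CONNECT BY LEVEL <= TRUNC(DBMS_RANDOM.VALUE(1, 11));
    COMMIT;
  END LOOP;
END;
/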
Summary of the test:
1. Average rows per block in the table before and after the DML operations:
Option                     | Before any change | After pass 1 | After pass 2
With row movement disabled | 75.958            | 75.5825      | 50.501
With row movement enabled  | 75.958            | 76.663       | 52.544
2. Index stats:
   a. All indexes show the same badness in both cases.
As per my test and analysis above, with row movement enabled the average number of rows per block is higher than with it disabled. With more concurrent operations in a real-time system, I believe this difference will be considerable. If we keep row movement enabled, it may take care of automatic segment shrinking and should leave less data fragmentation in ASM.
Let's see how it goes in the live production environment...
IndexAndTablefragmentationStats.sql
========================================================================
-- Santosh Kangane
-- Script : 1. Get Table fragmentation statistics
-- 2. index fragmentation statistics
--=======================================================================
-- Table Stats
--=======================================================================
--1) Row Distribution :
SELECT fno, bno, COUNT (*) rows_per_block
FROM (SELECT DBMS_ROWID.rowid_relative_fno (ROWID) AS fno,
DBMS_ROWID.rowid_block_number (ROWID) AS bno
FROM BigTable_ERM) -- BigTable_DRM
GROUP BY fno, bno;
--2) Summary :
--=======================================================================
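-- Buckets blocks into 20 groups (NTILE) by rows-per-block to show the distribution of
-- block density; avg_rows is the figure quoted in the summary table above.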
select
twentieth,
min(rows_per_block) min_rows,
max(rows_per_block) max_rows,
sum(block_ct) tot_blocks,
sum(row_total) tot_rows,
round(sum(row_total)/sum(block_ct),2) avg_rows
from
(
select
ntile(20) over (order by rows_per_block) twentieth,
rows_per_block,
count(*) block_ct,
rows_per_block * count(*) row_total
from
(
select
fno, bno, count(*) rows_per_block
from
(
select
dbms_rowid.rowid_relative_fno(rowid) as fno,
dbms_rowid.rowid_block_number(rowid) as bno
from
BigTable_ERM -- BigTable_DRM
)
group by
fno, bno
)
group by
rows_per_block
order by
rows_per_block
)
group by
twentieth
order by
twentieth;
--=======================================================================
-- Index Stats
--=======================================================================
-- Validate indexes
-- For each of the following indexes, run VALIDATE INDEX and then execute the queries below and note the stats
validate index BigTable_ERM_Index1;
validate index BigTable_ERM_Index2;
validate index BigTable_ERM_Index3;
validate index BigTable_DRM_Index1;
validate index BigTable_DRM_Index2;
validate index BigTable_DRM_Index3;
-- Reports index fragmentation statistics
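-- index_badness = deleted leaf rows / total leaf rows (closer to 0 is better).
-- Note: INDEX_STATS holds only the row from the most recent VALIDATE INDEX
-- (ANALYZE INDEX ... VALIDATE STRUCTURE) in the current session, so query it after each VALIDATE.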
select
name as index_name,
to_char(del_lf_rows,'999,999,990') as leaf_rows_deleted,
to_char(lf_rows-del_lf_rows,'999,999,990') as leaf_rows_in_use,
to_char(del_lf_rows/(lf_rows+0.00001),'999,990.999') as index_badness
from
index_stats;
--Report index statistics.
select
name as index_name,
to_char(height, '999,999,990') as height,
to_char(blocks, '999,999,990') blocks,
to_char(del_lf_rows,'999,999,990') del_lf_rows,
to_char(del_lf_rows_len,'999,999,990') del_lf_rows_len,
to_char(distinct_keys,'999,999,990') distinct_keys,
to_char(most_repeated_key,'999,999,990') most_repeated_key,
to_char(btree_space,'999,999,990') btree_space,
to_char(used_space,'999,999,990') used_space,
to_char(pct_used,'990') pct_used,
to_char(rows_per_key,'999,999,990') rows_per_key,
to_char(blks_gets_per_access,'999,999,990') blks_gets_per_access,
to_char(lf_rows, '999,999,990') as lf_rows,
to_char(br_rows, '999,999,990') as br_rows,
to_char(lf_blks, '999,999,990') as lf_blks,
to_char(br_blks, '999,999,990') as br_blks,
to_char(lf_rows_len,'999,999,990') as lf_rows_len,
to_char(br_rows_len,'999,999,990') as br_rows_len,
to_char(lf_blk_len, '999,999,990') as lf_blk_len,
to_char(br_blk_len, '999,999,990') as br_blk_len
from
index_stats;
--=======================================================================