
Timestamp Drift in Oracle RAC


To track data changes in an Oracle database, I'm planning to use a timestamp-based approach. During implementation, Rajesh drew my attention to an interesting case:
with 2 nodes in Oracle RAC, the two nodes can return different timestamps even for concurrent requests. The difference can be up to 2 minutes, since each timestamp depends on that machine's clock. Oracle RAC keeps the timestamps in sync with the help of NTP (Network Time Protocol), which runs every 15 minutes (http://www.oracledatabase12g.com/wp-content/uploads/html/RAC-Frequently%20Asked%20Questions.htm#A10074)
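To see why this matters for change tracking, here is a toy simulation in Python (the 90-second skew and the timestamps are hypothetical, chosen within the ~2-minute drift described above) of how a poller using one node's clock as its upper bound can silently skip a row committed on the other node:

```python
from datetime import datetime, timedelta

# Hypothetical skew: node B's clock lags node A's by 90 seconds.
skew = timedelta(seconds=90)

# The poller on node A recorded this upper bound on its previous run,
# using node A's own clock.
prev_upper_bound = datetime(2024, 1, 1, 12, 0, 0)

# At the same real instant, a transaction commits on node B; the row is
# stamped with node B's (lagging) clock.
row_timestamp = prev_upper_bound - skew  # 11:58:30 by node B's clock

# The next poll selects rows with timestamp > prev_upper_bound,
# so this row is never picked up.
row_missed = not (row_timestamp > prev_upper_bound)
print(row_missed)  # True
```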

Possible solutions to this problem:
         1.  Use the System Change Number (SCN): a sequence number allocated by Oracle to keep track of changes. Oracle keeps all nodes and data changes in sync by allocating a unique SCN, and uses it for backup and restore purposes. By default, the SCN is recorded at block level on commit; to track changes at row level, you have to enable ROWDEPENDENCIES on the table. This is the most reliable approach for keeping track of concurrent transactions. However, I couldn't use it, because the SCN changes if the DBA does an Export/Import for backup and restore, or opens the database with RESETLOGS. So I had to find another approach.
         2.   The approach I took:
-          I save the last successfully processed timestamp in a table, say TimeStamp1.
-          When the process starts the next time:
o   DiffInMinutes = Minutes(current system timestamp – TimeStamp1) – 5 (or any number > 2 that is sufficient to avoid the time-drift issue).
o   TimeStamp2 = TimeStamp1 + DiffInMinutes.
o   Finally, select the changed data from the tables between TimeStamp1 and TimeStamp2.
Now, here:
-          I'm not bounding the upper limit with the current SYSTIMESTAMP (which is node-specific); I'm just adding an offset to the last processed timestamp saved in the table, which is common to all nodes. So I don't have to worry about which node is updating the data and which node is reading it.
-          Selecting only data older than 5 minutes (or any number > 2) should avoid interference from uncommitted online transactions (expecting every DML transaction to finish within 5 minutes). A calculated risk: if a DML transaction runs for more than 5 minutes, users will throw the system away :)
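The window calculation above can be sketched in Python (a minimal sketch; the function name `next_window` is mine, while TimeStamp1, the minute difference, and the 5-minute safety margin come straight from the steps above):

```python
from datetime import datetime, timedelta

def next_window(timestamp1, now, safety_minutes=5):
    """Compute the upper bound TimeStamp2 for the next polling run.

    timestamp1:     last successfully processed timestamp (stored in a
                    table, so it is common to all RAC nodes).
    now:            current system timestamp of whichever node polls.
    safety_minutes: margin (> 2) covering both inter-node clock drift
                    and in-flight uncommitted DML.
    """
    diff_in_minutes = int((now - timestamp1).total_seconds() // 60) - safety_minutes
    if diff_in_minutes <= 0:
        return None  # window too small; nothing safe to process yet
    return timestamp1 + timedelta(minutes=diff_in_minutes)

# Example: last processed at 12:00, polling node's clock says 12:30.
ts1 = datetime(2024, 1, 1, 12, 0, 0)
ts2 = next_window(ts1, datetime(2024, 1, 1, 12, 30, 0))
print(ts2)  # 2024-01-01 12:25:00 -- select changed rows between ts1 and ts2
```

Because the lower bound is always the stored TimeStamp1 rather than any node's clock, a row stamped by a lagging node still falls inside some future window instead of being skipped.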
