Save Exceptions Vs DML Error Logs


Hi Friends,

Recently, for one of the features in my app, I was working on a bulk data operation. To handle bulk data movement with DML exception handling, Oracle provides us two built-in options:
1. Save Exceptions: http://rwijk.blogspot.com/2007/11/save-exceptions.html
2. DML Error Logs: http://www.orafaq.com/node/76

So, I was evaluating both approaches; here is what I have come up with so far.

Performance: Save Exceptions vs. DML Error Logs
a. Save Exceptions works better than DML error logs for large data volumes.
- You can keep better control over the number of rows processed in each iteration by configuring the BULK COLLECT row-count limit (see the sketch after this list).
- This helps keep resources free by committing after each logical interval.
b. For smaller data volumes we can go with the DML error log option.
- Using DML error logging with direct-path inserts gives optimal performance compared to processing row by row in a loop.
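Here is a minimal sketch of the Save Exceptions approach. The tables SRC_ORDERS and TGT_ORDERS are hypothetical names with compatible structures, and the LIMIT of 1000 is just an illustrative batch size:

DECLARE
  CURSOR c_src IS SELECT * FROM src_orders;            -- hypothetical source table
  TYPE t_rows IS TABLE OF src_orders%ROWTYPE;
  l_rows       t_rows;
  bulk_errors  EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);          -- ORA-24381: error(s) in array DML
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;   -- row-count limit controls each iteration
    EXIT WHEN l_rows.COUNT = 0;
    BEGIN
      FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
        INSERT INTO tgt_orders VALUES l_rows(i);       -- hypothetical target table
    EXCEPTION
      WHEN bulk_errors THEN
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP   -- report failed rows, keep processing
          DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                               ' failed: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    END;
    COMMIT;                                            -- commit after each logical interval
  END LOOP;
  CLOSE c_src;
END;
/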

When to use Save Exceptions and when to use DML Error Logs? Why?
The master of Oracle, Thomas Kyte (Tom), says:
“When you HAVE to do row/row (also known as slow by slow) processing. When you have to read data, procedurally process data, insert data. Then use save exceptions otherwise use a single sql statement”
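For the "single sql statement" path, here is a minimal sketch of DML error logging with a direct-path insert, again using the hypothetical SRC_ORDERS/TGT_ORDERS tables and an error log table TGT_ORDERS_ERR (created as shown in the next section). REJECT LIMIT UNLIMITED lets the statement run past bad rows instead of failing on the first error:

INSERT /*+ APPEND */ INTO tgt_orders               -- direct-path insert
SELECT * FROM src_orders
LOG ERRORS INTO tgt_orders_err ('nightly load')    -- tag identifies this run in the error table
REJECT LIMIT UNLIMITED;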

Why can’t we use a GTT as an Error Log table?
One more question that came up: with DML error logging, can we use an Oracle global temporary table as the error log table? (As I don’t want to expose the errors to other sessions.)
After a small investigation, I found the answer is NO. Oracle internally uses an autonomous transaction to log the errors into the error table, and those rows will not be visible to your session. Hence we need to use a real (permanent) table only.
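Creating that real error log table is a one-liner with the DBMS_ERRLOG package; the names TGT_ORDERS and TGT_ORDERS_ERR are the same hypothetical tables used above:

BEGIN
  -- Builds a permanent heap table holding the ORA error columns plus copies of
  -- the target table's columns; if the name is omitted, the default is ERR$_TGT_ORDERS.
  DBMS_ERRLOG.CREATE_ERROR_LOG(
    dml_table_name     => 'TGT_ORDERS',
    err_log_table_name => 'TGT_ORDERS_ERR');
END;
/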

