
Distributed Transactions in Oracle (over Oracle DBLink)

While fetching data from one Oracle server to another over a DBLink, I came across the following facts about Oracle distributed transactions:

Security issue:
-          We cannot create a public synonym for a remote object accessed over a private DBLink of another database.
-          It is allowed to create a private synonym for a remote object, but you cannot grant access on this synonym to any other schema.
If you try to grant it to another schema, Oracle raises an error:

             [ORA-02021: DDL operations are not allowed on a remote database]

“All in all, you can access remote objects over a private DBLink only from the schema where the DBLink is created.”
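A minimal sketch of this restriction (all names here — the link ora2_link, the remote table emp, and the schema app_user — are hypothetical):

```sql
-- Private DBLink in the current schema (hypothetical credentials/TNS alias)
CREATE DATABASE LINK ora2_link
  CONNECT TO remote_user IDENTIFIED BY remote_pwd
  USING 'ORA2';

-- Creating a private synonym for the remote object works:
CREATE SYNONYM emp_remote FOR remote_user.emp@ora2_link;

-- But granting access on it to another schema fails:
GRANT SELECT ON emp_remote TO app_user;
-- ORA-02021: DDL operations are not allowed on a remote database
```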

Fetching the Ref Cursor at the remote site:
                Let’s say we have two sites involved in a distributed transaction, Server1 and Server2. A ref cursor opened in a procedure on Server1 cannot be fetched at the Server2 site. If we try to fetch this cursor, Oracle raises an exception:
[ORA-02055: distributed update operation failed; rollback required
 ORA-24338: statement handle not executed]

                                “We cannot use the Ref Cursor over a DBLink.” The workarounds are:
1.       Use a PL/SQL table (collection). OR
2.       Grant SELECT and use a SELECT statement over the DBLink from the initiator site instead of opening the cursor.
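A sketch of workaround 2 (the link name server1_link and the table remote_user.emp are hypothetical): instead of having Server1 open a ref cursor and fetching it on Server2, the initiator site simply selects over the DBLink itself.

```sql
DECLARE
  CURSOR c_emp IS
    SELECT empno, ename
      FROM remote_user.emp@server1_link;  -- plain SELECT over the DBLink
BEGIN
  -- The cursor is opened and fetched entirely at the initiator site,
  -- so no ref cursor ever crosses the DBLink.
  FOR r IN c_emp LOOP
    DBMS_OUTPUT.PUT_LINE(r.empno || ' ' || r.ename);
  END LOOP;
END;
/
```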

Transaction issue:
                If the remotely called procedure/function has an OUT or IN OUT argument, we cannot use COMMIT in the remote procedure.
Oracle raises an exception:

[ORA-02064: distributed operation not supported
 ORA-06512: at "DBA.PR_DATATRANFER", line 332
 ORA-06512: at "LIVE.PR_DOWNLOADEDDATA", line 74
 ORA-06512: at line 8]

1.       Use PRAGMA AUTONOMOUS_TRANSACTION (if possible).
2.       Simplify your transaction and check whether the transaction initiator site can take care of the commit/rollback of the transaction.
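A sketch of workaround 1, assuming hypothetical procedure and table names (pr_log_transfer, transfer_log): the remote procedure commits inside its own autonomous transaction, so the COMMIT does not touch the enclosing distributed transaction.

```sql
CREATE OR REPLACE PROCEDURE pr_log_transfer(p_msg IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs in its own transaction context
BEGIN
  INSERT INTO transfer_log (log_time, msg)
  VALUES (SYSDATE, p_msg);
  COMMIT;  -- commits only the autonomous transaction,
           -- not the caller's distributed transaction
END;
/
```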
Use of a Global Temporary Table in a distributed transaction:
                In your distributed transaction, if the remote server procedure/function uses a GTT (ON COMMIT DELETE/PRESERVE ROWS), Oracle does not release the locks on the temporary table even after a COMMIT or ROLLBACK. When you run the same procedure again, Oracle raises an exception:

[ORA-14450: attempt to access a transactional temp table already in use
 ORA-06512: at "DBA.PR_DATATRANFER", line 319
 ORA-06512: at "LIVE.PR_DOWNLOADEDDATA", line 74
 ORA-06512: at line 8]
The only alternative to release the existing locks is to disconnect and re-connect the session.
The Oracle documentation also states that you cannot use a GTT in distributed transactions.
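A sketch of the scenario that leads to ORA-14450 (the names gtt_stage and server2_link are hypothetical):

```sql
-- On the remote server: a transaction-scoped GTT
CREATE GLOBAL TEMPORARY TABLE gtt_stage (id NUMBER)
  ON COMMIT DELETE ROWS;

-- From the initiator site, touching the GTT over the DBLink makes it
-- part of the distributed transaction:
INSERT INTO gtt_stage@server2_link VALUES (1);
COMMIT;
-- The lock on the GTT may not be released in the remote session;
-- re-running the same call can then fail with:
-- ORA-14450: attempt to access a transactional temp table already in use
```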


