
Software Fault Tolerance


For critical OLTP applications, system-wide downtime is an absolute no-no. So is exposure to software faults, especially bugs in an application that is still under development. How do you let users perform UAT on a new version while they carry on with their routine operations?

A common solution, used by almost every social web application, is the pilot release: let only a selected region of users test the new system. But is that possible in database-heavy operational systems such as banking, supply chain management, income tax returns, or accounting, where the database is the heart of the business and you cannot afford to lose even a single byte of data?
Here is the small approach we used for a supply chain management system and rolled out seamlessly across all of India...

We did this on an Oracle database, and it can be extended to almost any other database with little change, using whatever that database offers as an equivalent of Oracle's schema concept.
So here it is...
1. All tables are created in a single set of schemas that holds only data tables and no operational logic. Of course, any table-level constraints and triggers go into this set of schemas as well.
2. You then have two sets of schemas that hold only code (operational logic) and application config (static tables acting as app config settings).
a. One set of schemas holds, say, Version 1.0 and the other holds Version 2.0.
b. With every new release you replace the oldest version in place:
Version 3.0 -> Version 1.0, Version 4.0 -> Version 2.0, and so on...
3. This works well because you always have one older version (ideally stable) and one newer version (which may have issues and should be tested).
4. Allow only a subset of users onto the new version to evaluate it. Even if there are errors/faults, those users can fall back to the old stable version, and the admin gets a window to fix the issues without affecting the complete user base (a possible routing mechanism is sketched after this list).
5. The most valuable feature: all the data stays in one place no matter which application version you use, which gives you portability across versions. A sketch of the schema layout follows.
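
To make the layout concrete, here is a minimal sketch on Oracle. All names in it (APP_DATA, APP_CODE_V1, APP_CODE_V2, the ORDERS table, the SHIP_ORDER procedure) are hypothetical, made up for illustration; the original system's objects were of course different.

-- APP_DATA holds only tables (plus their constraints/triggers);
-- APP_CODE_V1 and APP_CODE_V2 each hold one version of the code.
CREATE USER app_data    IDENTIFIED BY data_pwd;
CREATE USER app_code_v1 IDENTIFIED BY v1_pwd;
CREATE USER app_code_v2 IDENTIFIED BY v2_pwd;

-- A sample data table, created once, in the data schema only.
CREATE TABLE app_data.orders (
  order_id NUMBER       PRIMARY KEY,
  status   VARCHAR2(20) NOT NULL
);

-- Every code schema gets DML rights on the shared data...
GRANT SELECT, INSERT, UPDATE, DELETE ON app_data.orders TO app_code_v1;
GRANT SELECT, INSERT, UPDATE, DELETE ON app_data.orders TO app_code_v2;

-- ...and a private synonym so version code can refer to plain ORDERS.
CREATE SYNONYM app_code_v1.orders FOR app_data.orders;
CREATE SYNONYM app_code_v2.orders FOR app_data.orders;

-- Operational logic exists once per version; the data never moves.
CREATE OR REPLACE PROCEDURE app_code_v1.ship_order(p_order_id IN NUMBER) AS
BEGIN
  UPDATE orders SET status = 'SHIPPED' WHERE order_id = p_order_id;
END ship_order;
/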
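And one possible way to route users between versions (an assumption on our side for illustration; your login flow may differ): keep a mapping table in the data schema and, at login, point the session's unqualified name resolution at the assigned code schema.

-- Hypothetical routing table, also kept in the data schema.
CREATE TABLE app_data.user_version_map (
  username    VARCHAR2(30) PRIMARY KEY,
  code_schema VARCHAR2(30) NOT NULL  -- 'APP_CODE_V1' or 'APP_CODE_V2'
);

-- On login, the application reads the user's assigned schema and runs:
ALTER SESSION SET CURRENT_SCHEMA = app_code_v2;

-- Rolling a pilot user back to the stable version is then just:
UPDATE app_data.user_version_map
   SET code_schema = 'APP_CODE_V1'
 WHERE username = 'PILOT_USER_01';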

Challenges:
1. Negative changes (removing a column/table, or changing a column's data type) are not allowed, because the old version might still be using the object.
2. Be careful with “INSERT INTO <<TABLE>> SELECT *” kinds of SQL. SELECT * shouldn't be used anyway, and the example after this list shows how it breaks here.
3. The setup is complex, as you need a network of synonyms and grants across the data and code schemas.
4. The release process eventually becomes complex as well.
5. Be careful when you make design/deployment changes.
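
A small illustration of challenge 2, reusing the hypothetical tables from the sketch above (ORDERS_ARCHIVE is likewise made up for this example):

-- Version 2.0 adds a column; version 1.0 keeps running against the table.
ALTER TABLE app_data.orders ADD (priority NUMBER DEFAULT 0);

-- Old code copying rows by position now fails with
-- ORA-00913: too many values, because the archive table
-- still has the pre-2.0 shape.
INSERT INTO orders_archive SELECT * FROM orders;

-- Naming columns explicitly keeps both versions working.
INSERT INTO orders_archive (order_id, status)
SELECT order_id, status FROM orders;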

With proper deployment and design processes in place, this can be achieved quite easily. It definitely helps with fault management and with reducing system-wide downtime.

