
Some Facts and Figures of WCF

SOAP Message in WCF:
1. The maximum size of a SOAP message in WCF is 9,223,372,036,854,775,807 bytes, including metadata.
2. Of that, up to 2,147,483,647 bytes can be used for actual user data.
3. With the default settings, WCF accepts only 65,536 bytes.
4. We can change this by setting maxReceivedMessageSize in the client's app.config file (see the config sketch after this list).
5. So the selection of data types in a Data Contract or DataTable matters a lot!
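A minimal app.config sketch for point 4, assuming a basicHttpBinding client endpoint; the binding name "LargeMessageBinding", the address, and the contract name are made up for illustration:

<!-- Client app.config: raise the 64 KB default so larger SOAP messages are accepted.
     "LargeMessageBinding", the address, and IMyService are illustrative assumptions. -->
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="LargeMessageBinding"
               maxReceivedMessageSize="2147483647"
               maxBufferSize="2147483647">
        <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" />
      </binding>
    </basicHttpBinding>
  </bindings>
  <client>
    <endpoint address="http://localhost:8080/MyService"
              binding="basicHttpBinding"
              bindingConfiguration="LargeMessageBinding"
              contract="IMyService" />
  </client>
</system.serviceModel>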
Data Contract:
1. By default, WCF will serialize at most 65,536 items in an object graph (the maxItemsInObjectGraph quota).
2. We can raise this, up to a maximum of 2,147,483,647.
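A minimal sketch of raising that quota on the service side; the behavior name "LargeGraphBehavior" is an assumption used for illustration:

<!-- Service app.config: raise the DataContractSerializer object-graph quota.
     "LargeGraphBehavior" is an illustrative behavior name. -->
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="LargeGraphBehavior">
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>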

Tracing WCF service messages:
1. Enable tracing for the System.ServiceModel source in the service's config file (a sketch follows this list).
2. Run the SvcTraceViewer.exe utility from the Visual Studio Command Prompt.
3. Open the generated log file in SvcTraceViewer.
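A minimal tracing sketch for step 1; the log file path C:\logs\Traces.svclog is an assumption chosen for illustration:

<!-- Service config: write System.ServiceModel activity traces to a .svclog file
     that SvcTraceViewer.exe can open. The output path is an illustrative assumption. -->
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="xmlTraceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\logs\Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>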

Scalability of WCF:
1. Max Concurrent Calls (default = 16; max = 2,147,483,647) [per message]:
The maximum number of messages that can actively be processed. Increase this value if you want your service to handle a larger message load.
2. Max Concurrent Instances (default = Int32.MaxValue; max = 2,147,483,647):
The maximum number of InstanceContext objects in a service that can execute at one time. What this setting translates into depends on the InstanceContextMode set in the ServiceBehaviorAttribute on the service.
a. If it is set to "PerSession", this represents the maximum number of sessions.
b. If it is set to "PerCall", this is the maximum number of concurrent calls.
c. If it is set to "Single", this value no longer has any practical meaning.
Note: when a message comes into the service and the maximum number of InstanceContext objects already exists, the message waits until an existing InstanceContext is closed.
3. Max Concurrent Sessions (default = 10; max = 2,147,483,647) [per channel]:
The maximum number of sessions that a service can accept at one time. This setting only affects session-enabled channels. Once this threshold is reached, no new channels are accepted by the service.
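A sketch of how these three throttles are usually configured together; the behavior name "ThrottledBehavior" and the values are illustrative assumptions, not recommendations:

<!-- serviceThrottling maps directly to the three limits described above.
     "ThrottledBehavior" and the values shown are illustrative assumptions. -->
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="ThrottledBehavior">
        <serviceThrottling maxConcurrentCalls="100"
                           maxConcurrentInstances="100"
                           maxConcurrentSessions="100" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>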

Advantages of WCF:

A) On the Windows platform, comparing other technologies with WCF:

1. WCF is 25% to 50% faster than ASP.NET Web Services.
2. It is roughly 25% faster than .NET Remoting.
3. The most significant gain is over WSE, where WCF is almost 4x faster.
4. These figures come from the MSDN article "A Performance Comparison of WCF with Existing Distributed Communication Technologies".

B) WCF provides built-in reliable messaging support (see the binding sketch after this list).

C) For authentication, authorization, and user identities, WCF provides multiple options, along with a high level of security (also shown in the binding sketch after this list).

D)     WCF provides multiple hosting options:
·         Windows Forms applications
·         Console applications
·         Windows services
·         Web applications (ASP.NET) hosted on Internet Information Services (IIS)
·         WCF services inside IIS 7.0 and the Windows Process Activation Service (WAS) on Windows Vista or Windows Server 2008

E) Managing database operations through a WCF service provides centralized access for all clients.
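A minimal sketch tying points B and C together: a wsHttpBinding with a reliable session and message security enabled. The binding name "ReliableSecureBinding", the service/contract names, and the address are assumptions for illustration:

<!-- Service app.config: reliable messaging (point B) plus message security
     with Windows credentials (point C). Names and address are illustrative. -->
<system.serviceModel>
  <bindings>
    <wsHttpBinding>
      <binding name="ReliableSecureBinding">
        <reliableSession enabled="true" ordered="true" />
        <security mode="Message">
          <message clientCredentialType="Windows" />
        </security>
      </binding>
    </wsHttpBinding>
  </bindings>
  <services>
    <service name="MyService">
      <endpoint address="http://localhost:8080/MyService"
                binding="wsHttpBinding"
                bindingConfiguration="ReliableSecureBinding"
                contract="IMyService" />
    </service>
  </services>
</system.serviceModel>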
