Facade Pattern

  • The Facade pattern falls into the structural pattern category.
  • A facade sits on top of a group of subsystems and lets clients communicate with them in a unified manner.
  • The facade takes care of the orchestration and presents a simplified interface to the client.


Example:

In a typical online shopping cart, when a customer places an order the following things happen:

  • Get Product Details
  • Make Payment
  • Print Invoice


The above example can be implemented using the Facade pattern as follows:

  1. The main class (clsOrder) is exposed to the client and internally takes care of calling the other related classes (clsProduct, clsPayment, clsInvoice).
  2. The clsOrder class acts as the facade class: it takes care of the orchestration by calling the other related classes internally, as sketched below.
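
A minimal C# sketch of this design (the class and method names follow the example above; the method bodies are assumed placeholders):

// Subsystem classes, each responsible for one part of order processing
public class clsProduct
{
    public void GetProductDetails(int productId)
    {
        // look up the product (placeholder)
    }
}

public class clsPayment
{
    public void MakePayment(decimal amount)
    {
        // process the payment (placeholder)
    }
}

public class clsInvoice
{
    public void PrintInvoice(int orderId)
    {
        // print the invoice (placeholder)
    }
}

// Facade: the only class the client talks to
public class clsOrder
{
    private readonly clsProduct product = new clsProduct();
    private readonly clsPayment payment = new clsPayment();
    private readonly clsInvoice invoice = new clsInvoice();

    // Orchestrates the subsystem calls behind one simple method
    public void PlaceOrder(int productId, decimal amount, int orderId)
    {
        product.GetProductDetails(productId);
        payment.MakePayment(amount);
        invoice.PrintInvoice(orderId);
    }
}

The client then needs just one call: new clsOrder().PlaceOrder(productId, amount, orderId);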


Singleton Pattern

The Singleton pattern falls into the creational pattern category.

When it is used

  • Used when we want only one object to be shared between all clients.
  • Clients should not be able to create the object directly.
  • The object should be created only once, and the same object should then service all client requests.

How to Implement

There are several ways of implementing the singleton pattern in C#, with trade-offs around thread safety, lazy initialization, and performance. All of these implementations share some common characteristics:

  • A single constructor, which is private and parameterless. This prevents other classes from instantiating it. It also prevents subclassing: if a singleton can be subclassed, and each of those subclasses can create an instance, the pattern is violated.
  • The class is sealed. Strictly this is optional given the private constructor, but it may help the JIT to optimize things more.
  • A static variable which holds a reference to the single created instance.
  • A public static means of getting the reference to the single created instance, creating one if necessary.

Singleton implementation without thread safety

public sealed class SingletonImplementation
{
    // static variable which holds the single instance at any given time
    private static SingletonImplementation theInstance;

    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    // static method used to create and return the instance of the class
    public static SingletonImplementation GetInstance()
    {
        if (theInstance == null)
        {
            theInstance = new SingletonImplementation();
        }
        return theInstance;
    }
}

The above code is a bad implementation, as it’s not thread-safe, and we should avoid it. Two different threads could both evaluate the test if (theInstance == null), find it to be true, and both create instances, which violates the singleton pattern. Note that in fact the instance may already have been created before the expression is evaluated, but the memory model doesn’t guarantee that the new value of theInstance will be seen by other threads unless suitable memory barriers have been passed.

Singleton implementation with thread safety

public sealed class SingletonImplementation
{
    // static variable which holds the single instance at any given time
    private static SingletonImplementation theInstance;

    // dedicated object used only for locking
    private static readonly object padlock = new object();

    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    // static method used to create and return the instance of the class
    public static SingletonImplementation GetInstance()
    {
        lock (padlock)
        {
            if (theInstance == null)
            {
                theInstance = new SingletonImplementation();
            }
            return theInstance;
        }
    }
}

The thread takes out a lock on a shared object, and then checks whether or not the instance has been created before creating the instance. This takes care of the memory barrier issue (as locking makes sure that all reads occur logically after the lock acquire, and unlocking makes sure that all writes occur logically before the lock release) and ensures that only one thread will create an instance. Unfortunately, performance suffers as a lock is acquired every time the instance is requested.

Singleton implementation with thread safety without using locks – not quite as lazy

public sealed class SingletonImplementation
{
    // static variable which holds the single instance; initialized by the type initializer
    private static readonly SingletonImplementation theInstance = new SingletonImplementation();

    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    // Explicit static constructor to tell the C# compiler not to mark the type as beforefieldinit
    static SingletonImplementation()
    {
    }

    // static method used to create and return the instance of the class
    public static SingletonImplementation GetInstance()
    {
        return theInstance;
    }
}

As you can see, this really is extremely simple – but why is it thread-safe and how lazy is it? Well, static constructors in C# are specified to execute only when an instance of the class is created or a static member is referenced, and to execute only once per AppDomain. Given that this check for the type being newly constructed needs to be executed whatever else happens, it will be faster than adding extra checking as in the previous examples. There are a couple of wrinkles, however:

  • It’s not as lazy as the other implementations. In particular, if you have static members other than GetInstance, the first reference to those members will involve creating the instance.
  • There are complications if one static constructor invokes another which invokes the first again.
  • The laziness of type initializers is only guaranteed by .NET when the type isn’t marked with a special flag called beforefieldinit. Unfortunately, the C# compiler marks all types which don’t have a static constructor (i.e. a block which looks like a constructor but is marked static) as beforefieldinit. The beforefieldinit flag also affects performance, as discussed under “Performance vs laziness” below.
Singleton implementation with thread safety without using locks – fully lazy instantiation

public sealed class SingletonImplementation
{
    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    public static SingletonImplementation Instance { get { return Nested.theInstance; } }

    private class Nested
    {
        // Explicit static constructor to tell the C# compiler not to mark the type as beforefieldinit
        static Nested()
        {
        }

        internal static readonly SingletonImplementation theInstance = new SingletonImplementation();
    }
}

Here, instantiation is triggered by the first reference to the static member of the nested class, which only occurs in the Instance property. This means the implementation is fully lazy, but has all the performance benefits of the previous ones. Note that although nested classes have access to the enclosing class’s private members, the reverse is not true, hence the need for theInstance to be internal here. That doesn’t raise any other problems, though, as the nested class itself is private. The code is a bit more complicated in order to make the instantiation lazy.

Singleton implementation with thread safety without using locks – using .NET 4’s Lazy&lt;T&gt; type

If you’re using .NET 4 (or higher), you can use the System.Lazy&lt;T&gt; type to make the laziness really simple. All you need to do is pass a delegate to the constructor which calls the Singleton constructor – which is done most easily with a lambda expression.

public sealed class SingletonImplementation
{
    // static variable which lazily holds the single instance
    private static readonly Lazy<SingletonImplementation> lazy =
        new Lazy<SingletonImplementation>(() => new SingletonImplementation());

    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    // static method used to create and return the instance of the class
    public static SingletonImplementation GetInstance()
    {
        return lazy.Value;
    }

    // example instance method exposed by the singleton
    public int Addition(int i, int j)
    {
        return i + j;
    }
}

It’s simple and performs well. It also allows you to check whether or not the instance has been created yet with the IsValueCreated property if you need that.

Performance vs laziness

In many cases, there won’t be any actual requirement of full laziness – unless your class initialization does something particularly time-consuming, or has some side-effect elsewhere, it’s probably fine to leave out the explicit static constructor shown above. This can increase performance as it allows the JIT compiler to make a single check (for instance at the start of a method) to ensure that the type has been initialized, and then assume it from then on. If your singleton instance is referenced within a relatively tight loop, this can make a (relatively) significant performance difference. You should decide whether or not fully lazy instantiation is required, and document this decision appropriately within the class.

Locking is good practice and gives us thread safety, but it comes at a cost in performance.

Abstract Factory Pattern

  • This is a rarely used pattern. It makes sense only when the project already uses Factory patterns.
  • Abstract Factory expands on the basic Factory pattern.
  • Abstract Factory helps us unite similar factory classes behind one unified interface, which leads to a more simplified interface for the client.


Example:


Steps:

  • We have a common interface or abstract class.


  • The concrete classes implement this interface or abstract class.


  • Then new factory classes are created which take care of object creation, and an abstract factory class takes the decision of returning the requested factory object. This abstract factory class is the one the client uses.


  • The client calls the static provider to obtain an abstract factory instead of creating the factory classes directly, as sketched below.

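Since the original example figures are not reproduced here, the following is a minimal C# sketch of the steps above (all type names are assumed for illustration):

// Step 1: a common interface for all products
public interface IShape
{
    void Draw();
}

// Step 2: concrete classes implement the interface
public class Circle : IShape
{
    public void Draw() { /* draw a circle */ }
}

public class Square : IShape
{
    public void Draw() { /* draw a square */ }
}

// Step 3: factory classes take care of object creation...
public interface IShapeFactory
{
    IShape CreateShape();
}

public class CircleFactory : IShapeFactory
{
    public IShape CreateShape() { return new Circle(); }
}

public class SquareFactory : IShapeFactory
{
    public IShape CreateShape() { return new Square(); }
}

// ...and a static provider decides which factory to return,
// so the client never creates the concrete classes with new itself
public static class ShapeFactoryProvider
{
    public static IShapeFactory GetFactory(string kind)
    {
        if (kind == "circle")
        {
            return new CircleFactory();
        }
        return new SquareFactory();
    }
}

The client then writes something like: IShapeFactory factory = ShapeFactoryProvider.GetFactory("circle"); IShape shape = factory.CreateShape(); shape.Draw();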

Factory Pattern

Introduction

  • One of the most widely used creational patterns is the Factory.
  • This pattern is aptly named, as it calls for the use of a specialized object solely to create other objects, much like a real-world factory.
  • We need the Factory pattern:
    • To eliminate the use of the new keyword on the client to create objects.
    • To keep the client unaware of the concrete classes.

Logical Model

As with other design patterns, there are countless variations of the Factory pattern, although most variants use the same set of primary actors: a client, a factory, and a product. A client is an object that requires an instance of another object (the product) for some purpose. Rather than creating the product instance directly, the client delegates this responsibility to the factory. Once invoked, the factory creates a new instance of the product and passes it back to the client. Put simply, the client uses the factory to create an instance of the product. Figure 1 shows the logical relationship between these elements of the pattern.


Figure 1. Factory pattern logical model

The factory completely abstracts the creation and initialization of the product from the client. This indirection enables the client to focus on its discrete role in the application without concerning itself with the details of how the product is created. Thus, as the product implementation changes over time, the client remains unchanged.

While this indirection is a tangible benefit, the most important aspect of this pattern is the fact that the client is abstracted from both the type of product and the type of factory used to create the product. Presuming that the product interface is invariant, this enables the factory to create any product type it deems appropriate. Furthermore, presuming that the factory interface is invariant, the entire factory along with the associated products it creates can be replaced in a wholesale fashion. Both of these radical modifications can occur without any changes to the client.

Physical Model

Most implementations of the Factory pattern use two abstract classes, Factory and Product, to provide the invariant interfaces discussed in the logical model. Although we could utilize pure interfaces for this purpose, the use of abstract classes enables us to place common instance behavior in the abstract class. In addition, abstract classes offer versioning benefits over pure interfaces.

Factory Pattern Code
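
The original code listing is not reproduced here; what follows is a minimal C# sketch of the physical model described above (all type names are assumed):

// Abstract product: the invariant interface the client programs against
public abstract class Product
{
    public abstract string Name { get; }
}

// Abstract factory: the invariant creation interface
public abstract class Factory
{
    // Common instance behavior lives in the abstract class
    public Product GetProduct()
    {
        Product product = CreateProduct();
        // any initialization shared by all products could go here
        return product;
    }

    protected abstract Product CreateProduct();
}

// One concrete product/factory pair
public class ConcreteProduct : Product
{
    public override string Name { get { return "ConcreteProduct"; } }
}

public class ConcreteFactory : Factory
{
    protected override Product CreateProduct()
    {
        return new ConcreteProduct();
    }
}

The client depends only on the abstract types: Factory factory = new ConcreteFactory(); Product product = factory.GetProduct();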

SQL SERVER INDEXES

Definition

Indexes are one of the biggest determinants of database performance. Indexes allow you to speed up query performance on commonly used columns and improve the overall processing speed of your database.

Types

Microsoft SQL Server supports two types of indexes:

Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index.

Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself.

When and Where

Each database table can have only one clustered index. If a PRIMARY KEY constraint is created for a database table and no clustered index currently exists for that table, SQL Server automatically creates a clustered index on the primary key. SQL Server supports a maximum of 249 non-clustered indexes per table. However, it’s important to keep in mind that non-clustered indexes slow down the data modification and insertion process, so indexes should be kept to a minimum.

We can get potential performance benefits only through the judicious use of indexes on database tables. Indexes allow you to speed up query performance on commonly used columns and improve the overall processing speed of your database. But unnecessary indexes can adversely affect the performance of the application.

One of the hardest tasks is the selection of appropriate columns for clustered or non-clustered indexes. When deciding what type of index to create, you should identify the data type of the column(s) and the data they store. You must also consider which query types will run and how frequently they will be executed. We can use the following guidelines (see the T-SQL examples after this list):

  • When a table has a primary key, SQL Server automatically (by default) creates a unique clustered index on the column(s) comprising the key. The uniqueness of the primary key is enforced through this unique index.
  • Columns that contain a relatively high degree of duplicate values and that are accessed in sequence are good candidates for clustered indexes, because SQL Server physically reorders the data rows and stores the values in either ascending (the default) or descending order so they can be quickly retrieved.
  • Columns that are searched in a range are good candidates for clustered indexes.
  • Columns that contain many unique values are good candidates for non-clustered indexes.
  • Consider creating non-clustered indexes on any columns that are frequently referenced in the WHERE clauses of SQL statements.
  • Columns referenced by JOIN and GROUP BY operations are good candidates for non-clustered indexes.
  • We can also consider creating non-clustered indexes that cover all of the columns used by certain frequently issued queries. These queries are referred to as “covered queries” and experience excellent performance gains.
  • When creating foreign key relationships, it’s a good idea to create a non-clustered index on the foreign key column if you’re planning to use it frequently in joins.
  • Tables that have a clustered index maintain the relationship between the data pages via a linked list (i.e., leaf and non-leaf levels). Conversely, if no clustered index exists on a table, SQL Server stores the data pages in a heap.
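
For example (table and column names are hypothetical), such indexes are created with T-SQL like this:

-- Clustered index: physically orders the table rows by OrderID
CREATE CLUSTERED INDEX IX_Orders_OrderID ON dbo.Orders (OrderID)

-- Non-clustered index on a column frequently used in WHERE clauses and joins
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)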

Data pages

When an index is created, SQL Server creates index pages containing pointers that assist in finding the data row that holds the information of interest. When the index is established, a fillfactor is set. The purpose of a fillfactor is to designate the percentage of each page that is filled upon index creation. Over time, the free space is consumed as modifications occur, which causes page splits. Page splits degrade the performance of the indexes, and thus of the queries that use them, by causing the data storage to become fragmented. The index’s fillfactor is set at the time of the index’s creation and isn’t dynamically maintained.

To update the fillfactor in the pages, we can drop and recreate the indexes and reset the fillfactor. Keep in mind that this negatively impacts concurrent database activity and should be used judiciously in production systems. DBCC INDEXDEFRAG and DBCC DBREINDEX are statements that defragment both clustered and non-clustered indexes. INDEXDEFRAG is an online operation (i.e., it does not block other table activity, such as a query), whereas DBREINDEX physically rebuilds the index(es). In most cases, rebuilding an index achieves greater defragmentation, but it comes at the cost of blocking concurrent activity on that table. INDEXDEFRAG can take longer to complete when large fragmented indexes are present, because it executes in small transactional blocks.
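
For example (database, table, and index names are hypothetical):

-- Defragments the index online, without blocking queries against the table
DBCC INDEXDEFRAG ('MyShop', 'dbo.Orders', 'IX_Orders_CustomerID')

-- Physically rebuilds all indexes on the table, resetting the fillfactor to 80
DBCC DBREINDEX ('dbo.Orders', '', 80)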

The Fillfactor

When you perform one of these actions, the database engine can return indexed data more efficiently. Fillfactor tweaking is beyond the scope of this article, but it should be employed with a careful eye toward the intended usage of the table for which the index is created.

SQL Server dynamically chooses which indexes to use during the execution of a query. To make this choice, SQL Server uses statistics about the distribution of keys in each index to determine which indexes it will use to process the query. It is essential to consider that these statistics can become out of date during the course of normal database activity (e.g., inserts, deletes, and updates on the table). To check the current state of your indexes, you can execute DBCC SHOWCONTIG. When you determine that your statistics are out of date, you should run the UPDATE STATISTICS statement on the table to allow SQL Server to refresh its information about the indexes.
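
For example (table name hypothetical):

-- Reports fragmentation information for the table and its indexes
DBCC SHOWCONTIG ('dbo.Orders')

-- Refreshes the distribution statistics the optimizer relies on
UPDATE STATISTICS dbo.Orders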

Establish a plan

SQL Server provides a utility that simplifies and automates the maintenance of a database, including the indexes. This tool is called the Database Maintenance Plan Wizard (DMPW). If you run this wizard, you’ll see that you can schedule the statistics on the indexes in the database to be updated as a regularly scheduled job, alleviating the burden of manually rebuilding your indexes. Another (mutually exclusive) choice in the DMPW is to reorganize data and data pages, which effectively drops and recreates your indexes with a specified fillfactor.

Index Tuning Wizard

SQL Server provides a wonderful facility known as the Index Tuning Wizard which greatly enhances the index selection process. To use this tool, first use SQL Profiler to capture a trace of the activity for which you wish to optimize performance. You may wish to run the trace for an extended period of time to capture a wide range of activity. Then, using Enterprise Manager, start the Index Tuning Wizard and instruct it to recommend indexes based upon the captured trace. It will not only suggest appropriate columns for queries but also provide you with an estimate of the performance increase you’ll experience after making those changes!

Difference between TRUNCATE and DELETE commands in SQL-Server

  • TRUNCATE is a DDL command, whereas DELETE is a DML command.
  • TRUNCATE is much faster than DELETE.
  • When you issue a DELETE, all the affected data is first copied into the transaction log, and then the delete operation is performed. That’s why, if you issue a ROLLBACK after deleting from a table, you can get the data back (the system restores it from the log). All of this takes time.
  • When you issue a TRUNCATE, it removes the data directly, without copying each row into the transaction log. That’s why TRUNCATE is faster; once you truncate, you can’t get the data back.
  • A DELETE can be rolled back. A TRUNCATE issued outside an explicit transaction cannot; it removes the records permanently.
  • TRUNCATE does not fire triggers, whereas DML commands like DELETE do.
  • You can’t use conditions (a WHERE clause) with TRUNCATE, but with DELETE you can, as the example below shows.
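
For example (table name hypothetical):

-- DELETE is logged row by row, can be filtered, and fires triggers
DELETE FROM dbo.OrderHistory WHERE OrderDate < '2000-01-01'

-- TRUNCATE removes all rows at once; no WHERE clause is allowed
TRUNCATE TABLE dbo.OrderHistory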

A brief introduction to UML

Unified Modeling Language (UML) is a standardized modeling language enabling developers to specify, visualize, construct, and document the artifacts of a software system. This makes those artifacts easier to communicate, review, and maintain. UML is an important part of object-oriented software development. It uses graphic notation to create visual models of software systems.

Some of the important components of UML are:

Use Case Diagrams

  • One of the most important, practically compulsory diagrams.
  • It answers “What will the system do?”, without concerning itself with how or where.
  • Used in the requirements phase of the SDLC.

Actors

  • The “who” of the system: the end users of the system.
  • There are two types of actors: primary and secondary.
  • Primary actors participate actively and initiate the use case.
  • Secondary actors participate passively in the use case.

Use Case: Represents an action; use cases are named with strong verbs.

Use Case Table: Represents the details of a Use Case diagram in textual format. The basic content is as follows:

  • Use Case ID
  • Use Case Name
  • Description
  • Primary Actors
  • Trigger
  • Pre-condition
  • Assumption
  • Failed End conditions / action on them
  • Main Scenarios / action on them
  • Alternate Scenario / Action on them
  • Success Scenarios
  • Note and Open Issues

Use Case Relationship: There are two types of relationships, Include and Extend.
Include: represents invocation of one use case by another (like one function calling another function); a has-a relationship. Depicted by a dotted arrow labelled <<include>>.
Extend: extends an existing use case. It works the same as the base use case, except that some new functionality is inserted into the extended use case; a parent-child, is-a relationship. Depicted by a dotted arrow labelled <<extend>>.

Class Diagrams

  • A class is a prototype which helps us create objects.
  • Classes define the static part of the project: the blueprint.
  • Three Sections:
    • Class Name
    • Properties/Attributes
    • Methods
  • + indicates public
  • # indicates protected
  • – indicates private
  • Multiplicity
    • 0..* → zero or many instances
    • 1..* → at least one or more instances
  • Aggregation (see the notes at the end for a definition): the whole object can exist without the aggregated object. Depicted by an empty diamond arrow.
  • Composition (see the notes at the end for a definition): the whole object does not exist without the composed object. Depicted by a dark diamond arrow.
  • Generalization and Specialization: a parent-child relationship. The arrow points from the specialization to the generalization (i.e., from child to parent).
  • Interface: represented as <<interface name>>. The attributes section is blank, as an interface does not contain attributes of its own.
  • Abstract Class: represented by the class name with {abstract}.


Object Diagram

  • Objects are created from classes; they contain live data.
  • They give a pictorial representation of the class diagram at a point in time.
  • They should be drawn only to represent complicated relationships between objects.


Sequence Diagram

  • They show interaction between objects over a specific time period.
  • The message flow is shown vertically in a waterfall manner, i.e., it starts at the top and flows to the bottom.


  • Requests are shown with solid dark arrows, and returns are shown with dotted arrows.
  • A vertical rectangle shows the lifetime of the object.
  • There can be different kinds of messages in a Sequence diagram:
    • Synchronous messages: messages where the caller waits for the response. Represented by a dark arrow with a bold, filled head.
    • Asynchronous messages: messages where the caller does not wait for the response; it does something else and picks the response up when it arrives. Represented by a dark arrow with a thin head.
    • Recursive messages: an object/function calling itself, like factorial.


  • Message Constraints: represent a condition that must hold while executing the message. Shown in [] brackets.


  • Message Branching: shows conditional statement execution.


Collaboration Diagram

  • They show the same information as a Sequence diagram, but in a different way.
  • In a Sequence diagram we pay more attention to time and sequence.
  • In a Collaboration diagram we pay more attention to the interaction of messages between the objects.
  • Objects are represented by rectangles.
  • Messages are represented by an arrow and sequence number.
  • Conditional statements are denoted by [] brackets.


A for-each (iteration) in a collaboration is represented by () brackets.


Activity Diagram

  • Used to capture complicated process flows in a system.
  • Used especially to explain complex pseudo-code.
  • The start of an activity diagram is denoted by a dark circle.
  • The end of an activity is denoted by a dark circle inside a white circle.
  • Activities are represented by ovals.
  • Decision making is shown by diamond shapes.
  • Decisions are written inside [] brackets.
  • Parallel processing is indicated by dark solid bars.
  • Swimlanes are used to indicate responsibility, using partitions.


State Chart Diagram

  • This is a rarely used diagram type. It depicts how an object responds to events.
  • If the project has scenarios where the object goes through a lot of complicated states and transitions then these diagrams should be used.
  • If we do not have such scenarios then Sequence, collaboration or activity would be enough.
  • There are 3 main things in State Chart Diagram:
    • Action: moves an object from one state to another.
    • Event: triggers the action.
    • Guard: a condition that determines which action is triggered.


  • The start of the diagram is denoted by a dark circle.
  • End is denoted by a dark circle inside a white circle.
  • States are represented by ovals.
  • Events are written to the left of the “/” and actions on the right-hand side.


Component Diagrams

  • Somewhat the same as a package diagram.
  • A package diagram shows logical grouping, while a component diagram shows interactions.


  • Components are represented by Rectangles.
  • Stereotypes (<< >>) are used to represent the name of the component.

 

Deployment Diagram

  • They show a static view of how the software and hardware of the application fit together.
  • They show which components are installed on which hardware.
  • Hardware is shown as a solid box.


  • We can have more than one component on a single piece of hardware.

 

Stereo Type

  • These are not diagrams, but a way to define variations on existing UML elements.
  • They can be used in any diagram using “<< >>” signs.

 

UML Flow


 

What are the must-have diagrams

Any standard technical document should have at least the following diagrams:

  • Use case Diagram
  • Sequence Diagram
  • Class Diagram
  • Deployment Diagram
  • Component Diagram

These diagrams will make the document more relevant and helpful to the technical teams in both the immediate and the long term.

Some SQL-Server Performance Enhancement Tips

  • Choose the Appropriate Data Types
  • Use Triggers Cautiously
  • Use views and stored procedures instead of heavy queries.
It reduces network traffic, because the client sends the server only the stored procedure or view name (in certain cases heavy-duty query text can degrade performance by up to 70%) instead of the text of a large, heavy-duty query. This also facilitates permission management, as you can restrict user access to table columns.
  • Use constraints instead of triggers
Constraints are much more efficient than triggers and can boost performance. So use constraints instead of triggers, whenever possible.

  • Use the UNION ALL statement instead of UNION, whenever possible.
The UNION ALL statement is much faster than UNION, because UNION ALL does not look for duplicate rows, whereas UNION does, whether or not any exist.
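
For example (table names hypothetical), when duplicates are impossible or acceptable:

SELECT City FROM dbo.Customers
UNION ALL
SELECT City FROM dbo.Suppliers
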
  • Try to avoid using the DISTINCT clause, whenever possible.
Because using the DISTINCT clause will result in some performance degradation, you should use this clause only when it is necessary.
  • Try to avoid the HAVING clause, whenever possible.

The HAVING clause is used to restrict the result set returned by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your SELECT statement so that it contains only WHERE and GROUP BY clauses, without a HAVING clause. This can improve the performance of your query.

  • If you need to return a table’s total row count, you can use an alternative to the SELECT COUNT(*) statement.
The SELECT COUNT(*) statement performs a full table scan to return the total row count, which can take a very long time on a large table. There is another way to determine the total row count: the sysindexes system table has a rows column which contains the total row count for each table in your database.

 

So, instead of

                    SELECT COUNT(*) FROM table_name

you can use the following select statement:

                    SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name')

This can make such queries several times faster.

  • Use the SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by a T-SQL statement. This can reduce network traffic, because the client will not receive those messages.
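
A minimal sketch (procedure and table names are hypothetical):

CREATE PROCEDURE dbo.GetOrders
AS
SET NOCOUNT ON -- suppress the "(n row(s) affected)" messages
SELECT OrderID, OrderDate FROM dbo.Orders
GO
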
  • Use the WHERE clause.
This yields good performance benefits, because SQL Server returns to the client only the particular rows, not all rows from the table(s). This can reduce network traffic and boost the overall performance of the query.
  • Use the select statements with TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows. This can improve the performance of your queries because smaller result set will be returned. This can also reduce the traffic between the server and the clients.
  • Return only the particular columns you need, not all columns / avoid using “SELECT *”.
This gives you good performance benefits, because SQL Server returns to the client only the particular columns, not all of the table’s columns. This can reduce network traffic and boost the overall performance of the query.
  •  Create Index where appropriate
Table indexing will boost the performance of the queries a lot. SQL Server can perform a table scan, or it can use an index scan. When performing a table scan, SQL Server starts at the beginning of the table, goes row by row in the table, and extracts the rows that meet the criteria of the query. When SQL Server uses an index, it finds the storage location of the rows needed by the query and extracts only the needed rows.
Avoid creating indexes on small tables, since it will take more time for SQL Server to perform an index scan than a simple table scan. If a table has many transactions happening in it (INSERT/UPDATE/DELETE), keep the number of indexes minimal. For each transaction, the indexes on the table are reorganized by SQL Server, which reduces performance.
Index Tuning Wizard, which is available in the SQL Server Enterprise Manager, is a good tool to create optimized indexes. You can find it in Tools->Wizards->Management->Index Tuning Wizard.
  •  Avoid using Temp Tables inside stored procedures
If temporary tables are used inside stored procedures, SQL Server may not reuse the execution plan each time the stored procedure is called.  So this will reduce performance.
  • Use table variables instead of temporary tables.
Table variables require fewer locking and logging resources than temporary tables, so table variables should be used whenever possible (they are available from SQL Server 2000 onward).
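
For example (table and column names hypothetical):

-- A table variable needs less locking and logging than a #temp table
DECLARE @RecentOrders TABLE (OrderID int, OrderDate datetime)

INSERT INTO @RecentOrders (OrderID, OrderDate)
SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate > '2000-01-01'

SELECT OrderID, OrderDate FROM @RecentOrders
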
  • Avoid Using Cursors 

SQL Server cursors can result in some performance degradation in comparison with set-based SELECT statements. Try to use a correlated sub-query or derived tables if you need to perform row-by-row operations.

  • Use Joins Appropriately
One of the best ways to boost JOIN performance is to limit how many rows need to be joined. This is especially beneficial for the outer table in a JOIN. Return only those rows that absolutely need to be joined, and no more.
 
If you perform regular joins between two or more tables in your queries, performance will be optimized if each of the joined columns has their own indexes. This includes adding indexes to the columns in each table used to join the tables. Generally speaking, a clustered key is better than a non-clustered key for optimum JOIN performance.
 
If you have two or more tables that are frequently joined together, then the columns used for the joins on all of those tables should have an appropriate index. If the columns used for the joins are not naturally compact, then consider adding compact surrogate keys to the tables in order to reduce the size of the keys, thus decreasing read I/O during the join process and increasing overall performance.
 
JOIN performance has a lot to do with how many rows you can stuff in a data page. For example, let’s say you want to JOIN two tables. Most likely, one of these two tables will be smaller than the other, and SQL-Server will most likely select the smaller of the two tables to be the inner table of the JOIN. When this happens, SQL Server tries to put the relevant contents of this table into the buffer cache for faster performance. If there is not enough room to put all the relevant data into cache, then SQL Server will have to use additional resources in order to get data into and out of the cache as the JOIN is performed. If all of the data can be cached, the performance of the JOIN will be faster than if it is not. This comes back to the original statement, that the number of rows in a table can affect JOIN performance. In other words, if a table has no wasted space, it is much more likely to get all of the relevant inner table data into cache, boosting speed. The moral to this story is to try to get as much data stuffed into a data page as possible. This can be done through the use of a high fill-factor, rebuilding indexes often to get rid of empty space, and to optimize data types and widths when creating columns in tables.
 
Keep in mind that when you create foreign keys, an index is not automatically created at the same time. If you ever plan to join a table to the table with the foreign key, using the foreign key as the linking column, then you should consider adding an index to the foreign key column. An index on a foreign key column can substantially boost the performance of many joins.
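
For example, using the Northwind tables referenced later in this section:

-- Foreign key columns get no index automatically; add one for join performance
CREATE NONCLUSTERED INDEX IX_OrderDetails_OrderID ON [Order Details] (OrderID)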
 
Avoid joining tables based on columns with few unique values. If the columns used for joining aren’t mostly unique, then the SQL Server optimizer may not be able to use an existing index to speed up the join. Ideally, for best performance, joins should be done on columns that have unique indexes. For best join performance, the indexes on the columns being joined should ideally be numeric data types, not CHAR, VARCHAR, or other non-numeric data types; the overhead is lower and join performance is faster. For maximum performance when joining two or more tables, the indexes on the columns to be joined should have the same data type and, ideally, the same width. This also means that you shouldn’t mix non-Unicode and Unicode data types (e.g., VARCHAR and NVARCHAR) when using SQL Server 7.0 or later. If SQL Server has to implicitly convert the data types to perform the join, this not only slows the joining process, but it also could mean that SQL Server may not use available indexes, performing a table scan instead.
 
When you create joins using Transact-SQL, you can choose between two different types of syntax: either ANSI or Microsoft. ANSI refers to the ANSI standard for writing joins, and Microsoft refers to the old Microsoft style of writing joins. For example:
 
ANSI JOIN Syntax
SELECT fname, lname, department
FROM names INNER JOIN departments ON names.employeeid = departments.employeeid
 
Former Microsoft JOIN Syntax
SELECT fname, lname, department 
FROM names, departments
WHERE names.employeeid = departments.employeeid
 
If written correctly, either format will produce identical results. But that is a big if. The older Microsoft join syntax lends itself to mistakes because the syntax is a little less obvious. On the other hand, the ANSI syntax is very explicit and there is little chance you can make a mistake.
 
If you have to regularly join four or more tables to get the recordset you need, consider denormalizing the tables so that the number of joined tables is reduced. Often, by adding one or two columns from one table to another, the number of joins can be reduced, boosting performance.
 
If your join is slow, and currently includes hints, remove the hints to see if the optimizer can do a better job on the join optimization than you can. This is especially important if your application has been upgraded from version 6.5 to 7.0, or from 7.0 to 2000.
 
One of the best ways to boost JOIN performance is to ensure that the JOINed tables include an appropriate WHERE clause to minimize the number of rows that need to be JOINed.
For example, I have seen many developers perform a simple JOIN on two tables, which is not all that unusual. The problem is that each table may contain over a million rows each. Instead of just JOINing the tables, appropriate restrictive clauses needed to be added to the WHERE clause of each table in order to reduce the total number of rows to be JOINed. This simple step can really boost the performance of a JOIN of two large tables.
In the SELECT statement that creates your JOIN, don’t use an * (asterisk) to return all of the columns in both tables. This is bad form for a couple of reasons. First, you should only return those columns you need, as the less data you return, the faster your query will run. It would be rare that you would need all of the columns in all of the tables you have joined. Second, you will be returning two copies of each column used in your JOIN condition, which ends up returning way more data than you need, hurting performance.
 
Take a look at these two queries:
USE Northwind
SELECT *  FROM Orders INNER JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
 
And
 
USE Northwind
SELECT Orders.OrderID, Orders.OrderDate, [Order Details].UnitPrice, [Order Details].Quantity,  [Order Details].Discount
FROM Orders INNER JOIN [Order Details]  ON Orders.OrderID = [Order Details].OrderID
 
Both of these queries perform essentially the same function. The problem with the first one is that it returns not only too many columns (they aren’t all needed by the application), but the OrderID column is returned twice, which doesn’t provide any useful benefits. Both of these problems contribute to unnecessary server overhead, hurting performance. The moral of this story is never to use the * in your joins.
 
While high index selectivity is generally an important factor that the Query Optimizer uses to determine whether or not to use an index, there is one special case where indexes with low selectivity can be useful speeding up SQL Server. This is in the case of indexes on foreign keys. Whether an index on a foreign key has either high or low selectivity, an index on a foreign key can be used by the Query Optimizer to perform a merge join on the tables in question. A merge join occurs when a row from each table is taken and then they are compared to see if they match the specified join criteria. If the tables being joined have the appropriate indexes (no matter the selectivity), a merge join can be performed, which is often much faster than a join to a table with a foreign key that does not have an index.
 
For very large joins, consider placing the tables to be joined in separate physical files in the same filegroup. This allows SQL Server to spawn a separate thread for each file being accessed, boosting performance.
 
Don’t use CROSS JOINS, unless this is the only way to accomplish your goal. What some inexperienced developers do is to join two tables using a CROSS JOIN, and then they use either the DISTINCT or the GROUP BY clauses to “clean up” the mess they have created. This, as you might imagine, can be a huge waste of SQL Server resources.
 
If you have the choice of using a JOIN or a subquery to perform the same task, generally the JOIN (often an OUTER JOIN) is faster. But this is not always the case. For example, if the returned data is going to be small, or if there are no indexes on the joined columns, then a subquery may indeed be faster. The only way to really know for sure is to try both methods and then look at their query plans. If this operation is run often, you should seriously consider writing the code both ways, and then selecting the most efficient code. Joins should be used instead of sub-selects when the sub-select contains aggregate functions.
 
If you have a query with many joins, one alternative to de-normalizing a table to boost performance is to use an Indexed View to pre-join the tables. An Indexed View, which is only available from SQL-Server 2000 Enterprise Edition, allows you to create a view that is actually a physical object that has its own clustered index. Whenever a base table of the Indexed View is updated, the Indexed View is also updated. As you can imagine, this can potentially reduce INSERT, UPDATE, and DELETE performance on the base tables. You will have to perform tests, comparing the pros and cons of performance in order to determine whether or not using an Indexed View to avoid joins in query is worth the extra performance cost caused by using them.
 
If you have a query that uses a LEFT OUTER JOIN, check it carefully to be sure that is the type of join you really want to use. As you may know, a LEFT OUTER JOIN is used to create a result set that includes all of the rows from the left table specified in the clause, not just the ones in which the joined columns match. In addition, when a row in the left table has no matching rows in the right table, the result set row contains NULL values for all the selected columns coming from the right table. If this is what you want, then use this type of join. The problem is that in the real world, a LEFT OUTER JOIN is rarely needed, and many developers use them by mistake. While you may end up with the data you want, you may also end up with more than the data you want, which contributes to unnecessary overhead and poor performance. Because of this, always closely examine why you are using a LEFT OUTER JOIN in your queries, and only use them if they are exactly what you need. Otherwise, use a JOIN that is more appropriate to your needs.
 
If you are having difficulty tuning the performance of a poorly performing query that has one or more JOINs, check to see if the query plan created by the query optimizer is using a hash join. When the query optimizer is asked to join two tables that don’t have appropriate indexes, it will often perform a hash join. A hash join is resource intensive (especially CPU and I/O) and can slow the performance of your join. If the query in question is run often, you should consider adding appropriate indexes. For example, if you are joining column1 in table1 to column5 in table2, then column1 in table1 and column5 in table2 need to have indexes. Once indexes are added to the appropriate columns used in the joins in your query, the query optimizer will most likely be able to use these indexes, performing a nested-loop join instead of a hash join, and performance will improve.

ASP.NET State Management techniques

State management is the process by which you maintain state and page information over multiple requests for the same or different pages. As is true for any HTTP-based technology, Web Forms pages are stateless, which means that they do not automatically indicate whether the requests in a sequence are all from the same client or even whether a single browser instance is still actively viewing a page or site. Furthermore, pages are destroyed and recreated with each round trip to the server; therefore page information will not exist beyond the life cycle of a single page.

Unlike a client-server application, there is no automatic storage of information from the previous browser page. Therefore the ASP.Net developer has to take steps to save important information from the post-back to the server.

In ASP.NET we have four methods of client-side state management:

  1. Query String
  2. Cookies
  3. Hidden Fields
  4. View State

and three methods of server-side state management:

  1. Application state
  2. Session state
  3. Database

Client-Side State Management

The information is stored on the client side (in the browser) without using server resources. This makes the whole process faster, and the implementation is very easy too.

QueryString
In this technique, the page information is carried in the URL: the URL is followed by a question mark (?) and key-value pairs. The main disadvantage is that we can store only a limited amount of data.

Cookies
A cookie is a small amount of data stored either in a text file on the client’s file system or in-memory in the client browser session. Cookies are mainly used for tracking data settings. The main disadvantage is that users can disable cookies in their browsers.

Hidden Fields
Hidden fields are HTML controls that are not visible to the user. They are the best way to store small amounts of frequently changed data on the client. The disadvantage is that a hidden field can be tampered with: the information in the hidden field can be seen if the page output source is viewed directly, creating a potential security issue.

ViewState
Web Forms pages provide the ViewState property as a built-in structure for automatically retaining values between multiple requests for the same page. View state is maintained as a hidden field in the page. ViewState has advantages the other three methods don’t have; one of the most important is its ability to support structured data, which means control values are maintainable across page postbacks. The main disadvantage is that we can’t persist data across multiple pages.

Server-Side State Management

In server-side state management the data is stored on the server, so security is much higher.

Application State
The Application object provides a mechanism for storing data that is accessible to all code running within the Web application. The ideal data to put into application state variables is data that is shared by multiple sessions and does not change often. Because it is visible to the entire application, you need to use the Lock and UnLock pair to avoid conflicting updates.
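
A minimal sketch of such guarded access inside a page or handler (the "VisitorCount" key is hypothetical, and is assumed to have been initialized, e.g. in Application_Start):

// Lock/UnLock prevent two requests from updating the value at the same time
Application.Lock();
Application["VisitorCount"] = (int)Application["VisitorCount"] + 1;
Application.UnLock();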

Session State
ASP.NET provides a session state, available as the HttpSessionState class, as a method of storing session-specific information that is visible in the session only. Each active ASP.NET session is identified and tracked using a 120-bit SessionID string containing URL-legal ASCII characters. SessionID values are generated using an algorithm that guarantees uniqueness so that sessions do not collide, and SessionID’s randomness makes it harder to guess the session ID of an existing session.
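
A minimal sketch of storing and reading a session-scoped value (the key name is hypothetical):

// Store a value for this user's session, then read it back on a later request
Session["UserName"] = "alice";
string userName = (string)Session["UserName"];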

Database

In some cases, you may wish to use database support to maintain state on your Web site. Typically, database support is used in conjunction with cookies or session state. For example, it is quite common for an e-commerce Web site to maintain state information using a relational database for the following reasons:

  • Security
  • Personalization
  • Consistency
  • Data mining

Aggregation & Composition in OOPs

  • In real projects, aggregation and composition both define a HAS-A relationship.
  • Aggregation differs from ordinary composition in that it does not imply ownership.
  • In composition, when the owning object is destroyed, so are the contained objects.
  • In aggregation, this is not necessarily true.
For example, a university owns various departments (e.g., physics, chemistry), and each department has a number of professors.

Now, if for some reason the university closes, the departments will no longer exist, but the professors in those departments will. Therefore, a university can be seen as a composition of departments, whereas a department has an aggregation of professors.

  • Aggregation represents a whole/part relationship without strong ownership.
  • Composition represents an even stronger form of ownership. With composition, we get a coincident lifetime of part with whole. The composite object has sole responsibility for the disposition of its parts in terms of creation and destruction. In implementation terms, the composite is responsible for memory allocation and de-allocation. An object may be part of only one composite at a time. If the composite is destroyed, it must either destroy all its parts or else give responsibility for them to some other object. A composite object can be designed with the knowledge that no other object will destroy its parts.
  • Composition – one class owns another class; the part does not outlive the whole.
  • Aggregation – one class has (uses) another class without owning it.
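
A minimal C# sketch of the university example above (the type names follow the example; the details are assumed):

using System.Collections.Generic;

public class Professor
{
}

// Aggregation: the department uses professors it does not own;
// the professors can outlive the department
public class Department
{
    private readonly List<Professor> professors;

    public Department(List<Professor> professors)
    {
        this.professors = professors; // passed in, not created here
    }
}

// Composition: the university creates and owns its departments;
// when the university is destroyed, so are the departments
public class University
{
    private readonly List<Department> departments = new List<Department>();

    public University(List<Professor> faculty)
    {
        departments.Add(new Department(faculty)); // created (and owned) here
    }
}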