Facade Pattern
- The Facade pattern falls into the structural pattern category.
- The facade sits on top of a group of subsystems and allows the client to communicate with them in a unified manner.
- The facade takes care of the orchestration and presents a simplified interface to the client.
Example:
In a typical online shopping cart, when a customer places an order the following things happen:
- Get Product Details
- Make Payment
- Print Invoice
The above example can be implemented using Facade pattern as follows:
- The main class (clsOrder) is exposed to the client and internally takes care of calling the other related classes (clsProduct, clsPayment, clsInvoice).
- The clsOrder class acts as the facade class, handling the orchestration by calling the related classes internally, as shown in the sketch below.
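A minimal C# sketch of this arrangement; the method signatures and parameters are assumptions for illustration, not a prescribed implementation.

// Subsystem classes: each handles one part of the order flow
public class clsProduct
{
    public void GetProductDetails(int productId)
    {
        // look up product information (details assumed)
    }
}
public class clsPayment
{
    public void MakePayment(decimal amount)
    {
        // charge the customer (details assumed)
    }
}
public class clsInvoice
{
    public void PrintInvoice(int orderId)
    {
        // generate and print the invoice (details assumed)
    }
}
// Facade: the only class the client talks to
public class clsOrder
{
    private readonly clsProduct product = new clsProduct();
    private readonly clsPayment payment = new clsPayment();
    private readonly clsInvoice invoice = new clsInvoice();

    // Orchestrates the subsystem calls behind one method
    public void PlaceOrder(int productId, decimal amount, int orderId)
    {
        product.GetProductDetails(productId);
        payment.MakePayment(amount);
        invoice.PrintInvoice(orderId);
    }
}
// Client code only needs the facade:
// new clsOrder().PlaceOrder(1, 99.0m, 1001);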
Singleton Pattern
The Singleton pattern falls into the creational pattern category.
When it is used
- Used when we want a single object to be shared between all clients.
- Clients should not be able to create the object themselves.
- The object should be created only once, and the same object should then service all client requests.
How to Implement
There are several ways of implementing the singleton pattern in C#, differing in thread safety, lazy initialization and performance. But all of these implementations share some common characteristics:
- A single constructor, which is private and parameter-less. This prevents other classes from instantiating it. It also prevents sub-classing – if a singleton can be sub-classed and if each of those subclasses can create an instance, the pattern is violated.
- The class is sealed. Strictly speaking this is optional, but it may help the JIT compiler optimize things a little.
- A static variable which holds a reference to the single created instance.
- A public static means of getting the reference to the single created instance, creating one if necessary.
Singleton implementation without thread safety
public sealed class SingletonImplementation
{
    // static variable which holds the single instance at any given time
    private static SingletonImplementation theInstance;

    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    // static method used to create and return the instance of the class
    public static SingletonImplementation GetInstance()
    {
        if (theInstance == null)
        {
            theInstance = new SingletonImplementation();
        }
        return theInstance;
    }
}
The above code is a bad implementation because it is not thread-safe and should be avoided. Two different threads could both evaluate the test if (theInstance == null), find it to be true, and each create an instance, which violates the singleton pattern. Note that the instance may in fact already have been created before the expression is evaluated, but the memory model doesn't guarantee that the new value of theInstance will be seen by other threads unless suitable memory barriers have been passed.
Singleton implementation with thread safety
public class SingletonImplementation
{
    // static variable which holds the single instance at any given time
    private static SingletonImplementation theInstance;

    // private object used only for locking
    private static readonly object padlock = new object();

    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    // static method used to create and return the instance of the class
    public static SingletonImplementation GetInstance()
    {
        // lock on a private shared object rather than on the type itself,
        // so external code cannot take the same lock
        lock (padlock)
        {
            if (theInstance == null)
            {
                theInstance = new SingletonImplementation();
            }
            return theInstance;
        }
    }
}
The thread takes out a lock on a shared object, and then checks whether or not the instance has been created before creating the instance. This takes care of the memory barrier issue (as locking makes sure that all reads occur logically after the lock acquire, and unlocking makes sure that all writes occur logically before the lock release) and ensures that only one thread will create an instance. Unfortunately, performance suffers as a lock is acquired every time the instance is requested.
Singleton implementation with thread safety without using locks (not quite as lazy)
public class SingletonImplementation
{
    // static variable which holds the single instance at any given time
    private static readonly SingletonImplementation theInstance = new SingletonImplementation();

    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    // Explicit static constructor to tell the C# compiler not to mark the type as beforefieldinit
    static SingletonImplementation()
    {
    }

    // static method used to create and return the instance of the class
    public static SingletonImplementation GetInstance()
    {
        return theInstance;
    }
}
As you can see, this really is extremely simple. But why is it thread-safe, and how lazy is it? Well, static constructors in C# are specified to execute only when an instance of the class is created or a static member is referenced, and to execute only once per AppDomain. Given that this check for the type being newly constructed needs to be executed whatever else happens, it will be faster than adding extra checking as in the previous examples. There are a couple of wrinkles, however:
- It’s not as lazy as the other implementations. In particular, if you have static members other than Instance, the first reference to those members will involve creating the instance.
- There are complications if one static constructor invokes another which invokes the first again.
- The laziness of type initializers is only guaranteed by .NET when the type isn't marked with a special flag called beforefieldinit. Unfortunately, the C# compiler marks all types which don't have a static constructor (i.e. a block which looks like a constructor but is marked static) as beforefieldinit. Removing the beforefieldinit flag in this way also has a small performance cost, discussed under "Performance vs laziness" below.
Singleton implementation with thread safety without using locks (fully lazy instantiation)
public class SingletonImplementation
{
    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    // the public entry point; touching it triggers initialization of the nested class
    public static SingletonImplementation Instance { get { return Nested.instance; } }

    private class Nested
    {
        // Explicit static constructor to tell the C# compiler not to mark the type as beforefieldinit
        static Nested()
        {
        }

        internal static readonly SingletonImplementation instance = new SingletonImplementation();
    }
}
Here, instantiation is triggered by the first reference to the static member of the nested class, which only occurs in Instance. This means the implementation is fully lazy, but has all the performance benefits of the previous ones. Note that although nested classes have access to the enclosing class's private members, the reverse is not true, hence the need for instance to be internal here. That doesn't raise any other problems, though, as the nested class itself is private. The code is a bit more complicated, but only in order to make the instantiation fully lazy.
Singleton implementation with thread safety without using locks (using .NET 4's Lazy<T> type)
If you're using .NET 4 (or higher), you can use the System.Lazy<T> type to make the laziness really simple. All you need to do is pass a delegate to the constructor which calls the Singleton constructor, which is done most easily with a lambda expression.
public class SingletonImplementation
{
    // static variable which holds the single instance at any given time
    private static readonly Lazy<SingletonImplementation> lazy =
        new Lazy<SingletonImplementation>(() => new SingletonImplementation());

    // constructor is private so we cannot create an instance of the class directly
    private SingletonImplementation()
    {
    }

    // static method used to create and return the instance of the class
    public static SingletonImplementation GetInstance()
    {
        return lazy.Value;
    }

    // sample instance method to show the singleton being used
    public int Addition(int i, int j)
    {
        return i + j;
    }
}
It’s simple and performs well. It also allows you to check whether or not the instance has been created yet with the IsValueCreated property if you need that.
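For example (a quick usage sketch; the IsInstanceCreated property is an assumed addition that simply exposes lazy.IsValueCreated, not part of the class above):

// client code
SingletonImplementation single = SingletonImplementation.GetInstance();
int sum = single.Addition(2, 3);   // returns 5

// inside SingletonImplementation, creation could be checked by exposing the property:
// public static bool IsInstanceCreated { get { return lazy.IsValueCreated; } }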
Performance vs laziness
In many cases, there won’t be any actual requirement of full laziness – unless your class initialization does something particularly time-consuming, or has some side-effect elsewhere, it’s probably fine to leave out the explicit static constructor shown above. This can increase performance as it allows the JIT compiler to make a single check (for instance at the start of a method) to ensure that the type has been initialized, and then assume it from then on. If your singleton instance is referenced within a relatively tight loop, this can make a (relatively) significant performance difference. You should decide whether or not fully lazy instantiation is required, and document this decision appropriately within the class.
Locking gives us thread safety and is good practice, but it comes at the cost of performance because the lock is taken on every access.
Abstract Factory Patterns
- This pattern is used relatively rarely, and it only makes sense when the project already uses Factory patterns.
- Abstract factory expands on the basic factory pattern.
- The abstract factory unites similar factory classes behind one unified interface, which gives the client a simpler interface.
Example:
Steps:
- We have a common interface or abstract class for the products.
- The concrete classes implement this interface or abstract class.
- A factory class is then created for each family of classes, taking care of object creation. An abstract factory class (or static helper) decides which concrete factory to return. This abstract factory is what the client uses.
- The client asks the abstract factory for the required factory instead of instantiating the concrete factory classes directly, as sketched below:
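A minimal C# sketch of these steps; all of the type names (IProduct, Book, Pen, IProductFactory, BookFactory, PenFactory, ProductAbstractFactory) are assumptions for illustration only.

// Common interface implemented by the concrete products
public interface IProduct
{
    string GetName();
}
public class Book : IProduct
{
    public string GetName() { return "Book"; }
}
public class Pen : IProduct
{
    public string GetName() { return "Pen"; }
}

// One factory per product family
public interface IProductFactory
{
    IProduct Create();
}
public class BookFactory : IProductFactory
{
    public IProduct Create() { return new Book(); }
}
public class PenFactory : IProductFactory
{
    public IProduct Create() { return new Pen(); }
}

// The abstract factory decides which concrete factory to hand back
public static class ProductAbstractFactory
{
    public static IProductFactory GetFactory(string type)
    {
        if (type == "Book")
        {
            return new BookFactory();
        }
        return new PenFactory();
    }
}

// Client code never touches the concrete classes:
// IProduct product = ProductAbstractFactory.GetFactory("Book").Create();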
Factory Pattern
Introduction
- One of the most widely used creational patterns is the Factory.
- This pattern is aptly named, as it calls for the use of a specialized object solely to create other objects, much like a real-world factory.
- We need Factory pattern:
- To eliminate the use of the new keyword on the client when creating concrete objects.
- The client is not aware of the concrete classes.
Logical Model
As with other design patterns, there are countless variations of the Factory pattern, although most variants typically use the same set of primary actors: a client, a factory, and a product. A client is an object that requires an instance of another object (the product) for some purpose. Rather than creating the product instance directly, the client delegates this responsibility to the factory. Once invoked, the factory creates a new instance of the product and passes it back to the client. Put simply, the client uses the factory to create an instance of the product. Figure 1 shows the logical relationship between these elements of the pattern.
Figure 1. Factory pattern logical model
The factory completely abstracts the creation and initialization of the product from the client. This indirection enables the client to focus on its discrete role in the application without concerning itself with the details of how the product is created. Thus, as the product implementation changes over time, the client remains unchanged.
While this indirection is a tangible benefit, the most important aspect of this pattern is the fact that the client is abstracted from both the type of product and the type of factory used to create the product. Presuming that the product interface is invariant, this enables the factory to create any product type it deems appropriate. Furthermore, presuming that the factory interface is invariant, the entire factory along with the associated products it creates can be replaced in a wholesale fashion. Both of these radical modifications can occur without any changes to the client.
Physical Model
Most implementations of the Factory pattern use two abstract classes, Factory and Product, to provide the invariant interfaces discussed in the logical model. Although we could utilize pure interfaces for this purpose, the use of abstract classes enables us to place common instance behavior in the abstract class. In addition, abstract classes offer versioning benefits over pure interfaces.
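A minimal C# sketch of this physical model; the concrete type names (ConcreteProduct, ConcreteFactory) are assumptions for illustration.

// Invariant product interface, implemented as an abstract class
public abstract class Product
{
    public abstract string Describe();
}
public class ConcreteProduct : Product
{
    public override string Describe() { return "A concrete product"; }
}

// Invariant factory interface, also an abstract class
public abstract class Factory
{
    public abstract Product CreateProduct();
}
public class ConcreteFactory : Factory
{
    public override Product CreateProduct()
    {
        // all creation and initialization details are hidden from the client
        return new ConcreteProduct();
    }
}

// Client code depends only on the abstract Factory and Product types:
// Factory factory = new ConcreteFactory();
// Product product = factory.CreateProduct();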
SQL SERVER INDEXES
Definition
Indexes are one of the biggest determinants of database performance. They allow you to speed up queries on commonly used columns and improve the overall processing speed of your database.
Types
Microsoft SQL Server supports two types of indexes:
Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index.
Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself.
When and Where
Each database table can have only one clustered index. If a PRIMARY KEY constraint is created for a database table and no clustered index currently exists for that table, SQL Server automatically creates a clustered index on the primary key. SQL Server supports a maximum of 249 non-clustered indexes per table. However, it’s important to keep in mind that non-clustered indexes slow down the data modification and insertion process, so indexes should be kept to a minimum.
We get potential performance benefits only through the judicious use of indexes on database tables; unnecessary indexes can adversely affect the performance of the application.
One of the hardest tasks is the selection of appropriate columns for clustered or non-clustered indexes. When deciding what type of index to create, you should identify the data type of the column(s) and the data they store. You must also consider what query types will run and how frequently they will be executed. The following guidelines can help:
- When tables use primary keys, SQL Server automatically (by default) creates a unique clustered index on the column(s) comprising the key. Clearly, the uniqueness of the primary key is enforced through the creation of this unique index.
- Columns that contain a relatively high degree of duplicate values and that are accessed in sequence are good candidates for clustered indexes because SQL Server physically reorders the data rows and stores the values in either ascending (the default) or descending order so they can be quickly retrieved.
- Columns that are searched in a range are good candidates for clustered indexes.
- Columns that contain many unique values are good candidates for non-clustered indexes.
- Consider creating non-clustered indexes on any columns that are frequently referenced in the WHERE clauses of SQL statements.
- Columns referenced by JOIN and GROUP BY operations are good candidates for non-clustered indexes.
- We can also consider creating non-clustered indexes that cover all of the columns used by certain frequently issued queries. These queries are referred to as “covered queries” and experience excellent performance gains.
- When creating foreign key relationships, it’s a good idea to create a non-clustered index on the foreign key column if you’re planning on using it frequently in joins.
Tables that have a clustered index maintain the relationship between the data pages via a linked list (e.g., leaf and non-leaf level). Conversely, if no clustered index exists on a table, SQL Server will store the data pages in a heap.
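For example, a minimal T-SQL sketch of the two index types (the table and column names are hypothetical):

-- Clustered index: created implicitly here through the primary key
CREATE TABLE dbo.Orders
(
    OrderID int NOT NULL CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
    CustomerID int NOT NULL,
    OrderDate datetime NOT NULL
)

-- Non-clustered index on a column frequently used in WHERE clauses and joins
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)

-- Composite non-clustered index that "covers" queries filtering on OrderDate
-- and returning CustomerID
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate, CustomerID)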
Data pages
When an index is created, SQL Server creates index pages containing pointers that assist in finding the data row with the information of interest. When the index is established, a fillfactor is set. The purpose of the fillfactor is to designate the percentage of each page that is filled upon index creation. Over time, the free space is consumed as modifications occur, which causes page splits. Page splits degrade the performance of the indexes, and thus of the queries that use them, by fragmenting the data storage. The index's fillfactor is set at the time of the index's creation and isn't dynamically maintained.
To update the fillfactor in the data pages, we can drop and recreate the indexes and reset the fillfactor. Keep in mind that this will negatively impact concurrent database activity and should be used judiciously in production systems. DBCC INDEXDEFRAG and DBCC DBREINDEX are statements that defragment both clustered and non-clustered indexes. INDEXDEFRAG is an online operation (i.e., it does not block other table activity, such as a query), whereas DBREINDEX physically rebuilds the index(es). In most cases, rebuilding an index achieves greater defragmentation, but it comes at the cost of blocking concurrent activity on that table. INDEXDEFRAG can take longer to complete when large fragmented indexes are present because it executes in small transactional blocks.
The Fillfactor
When you rebuild or defragment an index in one of these ways, the database engine can return indexed data more efficiently. Fillfactor tweaking is beyond the scope of this article, but should be employed with a careful eye toward the intended usage of the table for which the index is created.
SQL Server dynamically chooses which indexes to use during the execution of a query. To make this choice, SQL Server uses statistics about the distribution of keys in each index to determine which indexes it will use to process the query. It is essential to remember that these statistics can become out of date during the course of normal database activity (e.g., inserts, deletes, and updates on the table). To check the current state of your statistics, you can execute DBCC SHOW_STATISTICS (DBCC SHOWCONTIG, by contrast, reports index fragmentation). When you determine that your statistics are out of date, you should run the UPDATE STATISTICS statement on the table to allow SQL Server to refresh its information about the indexes.
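For example (the database, table and index names here are hypothetical):

-- Defragment an index online; other queries can continue to run
DBCC INDEXDEFRAG ('MyDatabase', 'dbo.Orders', 'IX_Orders_CustomerID')

-- Physically rebuild all indexes on the table with a fill factor of 80
DBCC DBREINDEX ('dbo.Orders', '', 80)

-- Refresh out-of-date distribution statistics on the table
UPDATE STATISTICS dbo.Orders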
Establish a plan
SQL Server provides a utility that simplifies and automates the maintenance of a database, including the indexes. This tool is called the Database Maintenance Plan Wizard (DMPW). If you run this wizard, you can schedule the statistics on the indexes in the database to be updated as a regularly scheduled job, alleviating the burden of manually rebuilding your indexes. Another (mutually exclusive) choice in the DMPW is to reorganize data and index pages, which effectively drops and recreates your indexes with a specified fill factor.
Index Tuning Wizard
SQL Server provides a wonderful facility known as the Index Tuning Wizard which greatly enhances the index selection process. To use this tool, first use SQL Profiler to capture a trace of the activity for which you wish to optimize performance. You may wish to run the trace for an extended period of time to capture a wide range of activity. Then, using Enterprise Manager, start the Index Tuning Wizard and instruct it to recommend indexes based upon the captured trace. It will not only suggest appropriate columns for queries but also provide you with an estimate of the performance increase you’ll experience after making those changes!
Difference between TRUNCATE and DELETE commands in SQL-Server
- TRUNCATE is a DDL command whereas DELETE is a DML command.
- TRUNCATE is much faster than DELETE.
- When you issue a DELETE, every deleted row is written to the transaction log before it is removed, which is why the operation takes time. Because the rows are logged individually, a ROLLBACK can restore them.
- TRUNCATE removes all rows by deallocating the table's data pages and logs only those page deallocations rather than individual rows, which is why it is so much faster. Once the truncation is committed, the data cannot be recovered.
- In SQL Server, a TRUNCATE issued inside an explicit transaction can still be rolled back before it is committed; with DELETE you get full row-level logging and can roll back in the usual way.
- TRUNCATE does not fire triggers, whereas DML commands such as DELETE do fire triggers.
- You can't use conditions (a WHERE clause) with TRUNCATE, but with DELETE you can filter the affected rows using a WHERE clause, as in the example below.
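For example, against a hypothetical Employee table:

-- Removes only the matching rows; each deleted row is logged and DELETE triggers fire
DELETE FROM Employee WHERE DepartmentID = 10

-- Removes all rows by deallocating the data pages; no WHERE clause is allowed
TRUNCATE TABLE Employee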
A brief introduction to UML
Unified Modeling Language (UML) is a standardized modeling language that enables developers to specify, visualize, construct and document the artifacts of a software system, making those artifacts easier to communicate, review and maintain. UML is an important part of object-oriented software development. It uses graphic notation to create visual models of software systems.
Some of the important components of UML are:
Use Case Diagrams
- One of the most important, compulsory diagrams.
- It answers "What will the system do?", without concerning itself with how or where.
- Used in the requirement phase of the SDLC.
Actors
- Actors are the users of the system (whoever interacts with the system / the end users).
- There are two types of actors: primary and secondary.
- Primary: active participants; they initiate the use case.
- Secondary: participate passively in the use case.
Use Case: Represents an action, named with a strong verb.
Use Case Table: Represent the details of Use Case diagram in textual format. The basic content is as follows:
- Use Case ID
- Use Case Name
- Description
- Primary Actors
- Trigger
- Pre-condition
- Assumption
- Failed End conditions / action on them
- Main Scenarios / action on them
- Alternate Scenario / Action on them
- Success Scenarios
- Note and Open Issues
Use Case Relationship: There are two types of relationships, Include and Extend.
Include: represents invocation of one use case by another (like one function calling another function); a has-a relationship. Depicted by a dashed arrow labelled <<include>>.
Extend: extends an existing use case; it works the same as the base use case, except that some new functionality is inserted in the extending use case. Depicted by a dashed arrow labelled <<extend>>, pointing from the extending use case to the base use case.
Class Diagrams
- A class is a blueprint (prototype) which helps us create objects.
- Class diagrams define the static part of the project; they are its blueprint.
- Three Sections:
- Class Name
- Properties/ Attributes
- Methods
- + indicates Public
- # indicates protected
- – indicates private.
- Multiplicity
- 0..* indicates zero or many instances
- 1..* indicates at least one (one or more) instances
- Aggregation (see the notes at the end of this document for a definition): the part can exist independently of the whole. Depicted by an arrow with an empty (hollow) diamond.
- Composition (see the notes at the end of this document for a definition): the part cannot exist without the whole. Depicted by an arrow with a dark (filled) diamond.
- Generalization and Specialization, Parent-Child relationship. The arrow points from Specialization to Generalization (i.e. from child to Parent).
- Interface: represented with the <<interface>> stereotype and the interface name. The attribute compartment is left blank because an interface does not contain attributes of its own.
- Abstract Class, represented by class name {abstract}.
Object Diagram
- Objects are created from Classes, they contain live data.
- They give a pictorial representation of the class diagram at a particular point in time.
- They should only be drawn to represent complicated relationships between objects.
Sequence Diagram
- They show interaction between objects over a specific time period.
- The message flow is shown vertically in a waterfall manner, i.e. it starts at the top and flows to the bottom.
- Requests are shown with dark arrows and returns with dotted arrows.
- A vertical rectangle (activation bar) shows the lifetime of the object.
- There can be different kinds of messages in a sequence diagram.
- Synchronous message: the caller waits for the response. Represented by a dark arrow with a bold, filled arrowhead.
- Asynchronous message: the caller does not wait for the response; it does something else and picks up the result when it arrives. Represented by a dark arrow with a thin arrowhead.
- Recursive Messages: object/Function calling itself like factorial.
- Message Constraints: Represent the condition while executing the message. Shown by [] braces.
- Message Branching: It shows a conditional statement execution.
Collaboration Diagram
- They show the same information as Sequence diagram, but in a different way.
- In a sequence diagram we pay more attention to time and ordering.
- In a collaboration diagram we pay more attention to the interaction of messages between the objects.
- Objects are represented by rectangles.
- Messages are represented by an arrow and sequence number.
- Conditional statements are denoted by [] brackets.
Iteration ("for each") in a collaboration diagram is represented by () brackets.
Activity Diagram
- Are used to capture complicated process flows in a system.
- Used especially to explain complex pseudo code.
- Start of an activity diagram is by a dark circle.
- End of activity is denoted by a dark circle inside a white circle.
- Activities are represented by ovals.
- Decision making by diamond shapes.
- Decision in side [] brackets.
- Parallel processing (fork and join) is indicated by dark solid bars.
- Swimlanes are used to indicate responsibility by partitioning the diagram.
State Chart Diagram
- These are rarely used diagrams. They depict how an object responds to events.
- If the project has scenarios where the object goes through a lot of complicated states and transitions then these diagrams should be used.
- If we do not have such scenarios then Sequence, collaboration or activity would be enough.
- There are 3 main things in State Chart Diagram:
- Action: what moves the object from one state to another.
- Event: triggers the action.
- Guard: a condition that determines which action is triggered.
- Start of diagram is by a dark circle.
- End is denoted by a dark circle inside a white circle.
- States are represented by rounded rectangles.
- Events are written to the left of the "/" and actions to the right.
Component Diagrams
- Somewhat similar to a package diagram.
- A package diagram shows logical grouping, while a component diagram shows interactions.
- Components are represented by rectangles.
- Stereotypes (<< >>) are used to label the name of the component.
Deployment Diagram
- They show a static view of how the software and hardware of the application are deployed.
- They show what components are installed on the hardware.
- Hardware is shown with the solid box.
- We can have more than one component on a single hardware node.
Stereo Type
- Stereotypes are not diagrams, but a way to define variations on existing UML elements.
- They can be used in any diagram, written between "<< >>" signs.
UML Flow
What are the must-have diagrams?
Any standard technical document should have at least the following diagrams:
- Use case Diagram
- Sequence Diagram
- Class Diagram
- Deployment Diagram
- Component Diagram
These diagrams will make the document more relevant and helpful to the technical teams, both immediately and in the long term.
Some SQL-Server Performance Enhancement Tips
- Choose the Appropriate Data Types
- Use Triggers Cautiously
- Use views and stored procedures instead of heavy queries.
- Use constraints instead of triggers
- Use UNION ALL statement instead of UNION, whenever possible.
- Try to avoid using the DISTINCT clause, whenever possible.
- Try to avoid the HAVING clause, whenever possible.
The HAVING clause is used to restrict the result set returned by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your SELECT statement so that it contains only WHERE and GROUP BY clauses without a HAVING clause, which can improve the performance of your query. If you need to return a table's total row count, there is an alternative to the SELECT COUNT(*) statement; both ways are shown below.
The SELECT COUNT(*) statement performs a full table scan to return the total row count, which can take a long time for a large table. Alternatively, you can use the sysindexes system table: its rows column contains the total row count for each table in your database.
SELECT COUNT(*) FROM table_name
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name')
So you can improve the speed of such queries several times over.
- Use the SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by a T-SQL statement. This can reduce network traffic, because your client will not receive that message.
- Use the WHERE clause.
- Use SELECT statements with the TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows. This can improve the performance of your queries because a smaller result set is returned, which also reduces the traffic between the server and the clients.
- Return only the columns you actually need from the table, not all of them; avoid using SELECT *.
- Create Index where appropriate
- Avoid using Temp Tables inside stored procedures
- Use table variables instead of temporary tables.
- Avoid Using Cursors
SQL Server cursors can result in some performance degradation in comparison with set-based SELECT statements. Try to use a correlated sub-query or derived tables if you need to perform row-by-row operations, as sketched below.
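As a sketch of the idea (the OrderTotal column and the aggregation are assumptions for illustration), a single set-based UPDATE with a derived table can replace a cursor that loops over the rows one at a time:

-- Instead of opening a cursor over Orders and updating each row inside a loop,
-- let one set-based statement do the work:
UPDATE o
SET    o.OrderTotal = t.LineTotal
FROM   Orders AS o
       INNER JOIN (SELECT OrderID, SUM(Quantity * UnitPrice) AS LineTotal
                   FROM [Order Details]
                   GROUP BY OrderID) AS t
       ON o.OrderID = t.OrderID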
- Use Joins Appropriately
-- Put the join condition in the ON clause; there is no need to repeat it in WHERE:
SELECT *
FROM names INNER JOIN departments ON names.employeeid = departments.employeeid

SELECT *
FROM Orders INNER JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
ASP.NET State Management techniques
State management is the process by which you maintain state and page information over multiple requests for the same or different pages. As is true for any HTTP-based technology, Web Forms pages are stateless, which means that they do not automatically indicate whether the requests in a sequence are all from the same client or even whether a single browser instance is still actively viewing a page or site. Furthermore, pages are destroyed and recreated with each round trip to the server; therefore page information will not exist beyond the life cycle of a single page.
Unlike a client-server application, there is no automatic storage of information from the previous browser page. Therefore the ASP.Net developer has to take steps to save important information from the post-back to the server.
In ASP.NET we have 4 methods of client state management.
- Query String
- Cookies
- Hidden Fields
- View State
and 3 methods for server state management
- Application state
- Session state
- Database
Client-Side State Management
The information is stored on the client side (in the browser) without using server resources. This makes the whole process faster, and the implementation is very easy too.
QueryString
In this technique, the page information is appended to the URL after a question mark (?) as key-value pairs. The main disadvantage is that only a limited amount of data can be stored this way.
Cookies
A cookie is a small amount of data stored either in a text file on the client's file system or in memory in the client browser session. Cookies are mainly used for tracking data settings. The main disadvantage is that users can disable cookies in their browsers.
Hidden Fields
Hidden fields are HTML controls that are not visible to the user. They are the best way to store small amounts of frequently changed data on the client. The disadvantage is that hidden fields can be tampered with; the information in a hidden field can be seen by viewing the page source directly, creating a potential security issue.
ViewState
Web Forms pages provide the ViewState property as a built-in structure for automatically retaining values between multiple requests for the same page. View state is maintained as a hidden field in the page. ViewState has advantages the other three methods don't have; one of the most important is its ability to support structured data, which means that control values are maintained across page postbacks. The main disadvantage is that data cannot be persisted across multiple pages.
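A minimal Web Forms code-behind sketch of the four client-side techniques (the page name, the key names and the hfToken hidden field are assumptions for illustration):

using System;
using System.Web.UI;

public partial class StateDemo : Page
{
    // hfToken is assumed to be declared in the .aspx markup as:
    // <asp:HiddenField ID="hfToken" runat="server" />
    protected void Page_Load(object sender, EventArgs e)
    {
        // Query string: read a value passed in the URL, e.g. StateDemo.aspx?productId=5
        string productId = Request.QueryString["productId"];

        // Cookie: write a small value to the client; it comes back on later requests
        Response.Cookies["theme"].Value = "dark";

        // Hidden field: the page carries this value in the rendered HTML itself
        hfToken.Value = productId ?? "none";

        // ViewState: survives postbacks of the same page, including structured data
        ViewState["VisitCount"] = 1 + (int)(ViewState["VisitCount"] ?? 0);
    }
}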
Server-Side State Management
The data in Server-side state management will be stored in the server. So the security will be very high.
Application State
The Application object provides a mechanism for storing data that is accessible to all code running within the Web application. The ideal data to put into application state variables is data that is shared by multiple sessions and does not change often. Because application state is visible to the entire application, you need to use the Lock and UnLock pair to avoid conflicting updates.
Session State
ASP.NET provides a session state, available as the HttpSessionState class, as a method of storing session-specific information that is visible in the session only. Each active ASP.NET session is identified and tracked using a 120-bit SessionID string containing URL-legal ASCII characters. SessionID values are generated using an algorithm that guarantees uniqueness so that sessions do not collide, and SessionID’s randomness makes it harder to guess the session ID of an existing session.
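A minimal sketch of using these two server-side stores from a page's code-behind (the key names and values are assumptions):

using System;
using System.Web.UI;

public partial class ServerStateDemo : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Application state: shared by every session, so lock around the update
        Application.Lock();
        Application["HitCount"] = 1 + (int)(Application["HitCount"] ?? 0);
        Application.UnLock();

        // Session state: visible only to the current user's session
        Session["UserName"] = "jsmith";
        string userName = (string)Session["UserName"];
        Response.Write("Hello " + userName);
    }
}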
Database
In some cases, you may wish to use database support to maintain state on your Web site. Typically, database support is used in conjunction with cookies or session state. For example, it is quite common for an e-commerce Web site to maintain state information using a relational database for the following reasons:
- Security
- Personalization
- Consistency
- Data mining
Aggregation & Composition in OOPs
- Aggregation and composition in a real project both define a HAS-A relationship.
- Aggregation differs from ordinary composition in that it does not imply ownership.
- In composition, when the owning object is destroyed, so are the contained objects.
- In aggregation, this is not necessarily true.
If, for some reason, the university closes, its departments will no longer exist, but the professors in those departments will continue to exist. Therefore, a university can be seen as a composition of departments, whereas a department is an aggregation of professors.
- Aggregation represents ownership or a whole/part relationship.
- The composition represents an even stronger form of ownership. With composition, we get a coincident lifetime of part with the whole. The composite object has sole responsibility for the disposition of its parts in terms of creation and destruction. In implementation terms, the composite is responsible for memory allocation and de-allocation. An object may be part of only one composite at a time. If the composite is destroyed, it must either destroy all its parts or else give responsibility for them to some other object. A composite object can be designed with the knowledge that no other object will destroy its parts.
- Composition: one class owns another class and controls its lifetime.
- Aggregation: one class holds a reference to another class that can exist independently, as sketched below.
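A short C# sketch of the university example above (the class and member names are assumptions for illustration):

using System.Collections.Generic;

public class Professor
{
    public string Name { get; set; }
}

public class Department
{
    // Aggregation: the department only holds references to professors;
    // the professors are created elsewhere and outlive the department
    private readonly List<Professor> professors = new List<Professor>();

    public void AddProfessor(Professor professor)
    {
        professors.Add(professor);
    }
}

public class University
{
    // Composition: the university creates and owns its departments;
    // when the university is destroyed, its departments go with it
    private readonly List<Department> departments = new List<Department>();

    public Department CreateDepartment()
    {
        Department department = new Department();
        departments.Add(department);
        return department;
    }
}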