Part 2: Why is mobile development a priority for any business?

This is part 2 of a multi-part series on mobile development. The first introductory blog can be found here. I will keep updating the introduction page whenever I add a new post in this series. Keep visiting!!!

Mobile application development has become a critical function as enterprises look to generate revenue and improve the customer experience through mobile apps.

As the demand for mobile apps grows, so does the development queue. According to a study by Opinion Matters, 85% of companies have a mobile backlog of between one and 20 applications, with half having a backlog of between 10 and 20 apps.

You can’t afford to have your competitive differentiator sitting in the development queue. If you know exactly what you want, it can be convenient to just outsource the work for a price, and simply build its cost into your budget. But developing a mobile application is not a one-time effort. Hiring a freelance developer or marketing agency to deliver a ready-to-ship mobile app is often a costly temporary fix, with long-term implications that are often overlooked.

According to MGI Research, most mobile apps will experience, in a two-year time frame, at least four major update cycles stemming from operating system and device updates. This means that buyers often find themselves in an unexpected “money pit” because they need to keep engaging with the original developer to fix things so their app remains compatible with each new wave of mobile operating systems and devices. Not to mention an inevitable, growing list of desired feature additions and functional tweaks.

In the past few years, mobile app development has become a booming industry. Currently, it is estimated that there are 2.3 million mobile app developers devoted to keeping up with industry demand. In fact, according to Apple, in 2013 1.25 million apps were registered in the Apple App Store, accounting for 50 billion downloads and $5 billion paid to developers. With industry numbers like these, it soon becomes clear that mobile app development is a key factor for business success.

The Biggest Benefits of Mobile Apps for Businesses

With the growing number of people accessing the Internet via smartphones and tablets, mobile app development has the unique ability to reach a large number of potential consumers. According to the Pew Research Internet Project, an estimated 67 percent of U.S. smartphone owners use their smartphones to access the Internet on a daily basis. Recent studies also suggest that by 2017 app downloads will have grown to 200 billion, and the subsequent mobile app revenues will have increased to $63.5 billion. The reason behind these exceptional numbers lies in the continued growth of smartphone and tablet sales.
Not only have sales of smartphones and tablets increased, but the number of mobile apps installed has also grown exponentially. The Pew Research Internet Project indicates that approximately 50 percent of all smartphone users have mobile apps installed; of this percentage, two-thirds are regular mobile app users. These statistics show that mobile apps offer a unique opportunity to engage an entirely new type of customer, one who is constantly connected to the Internet and the global commerce space. In essence, a mobile app allows you to have millions of new customers at your fingertips. All that is left for you to do is develop an effective app and reap the benefits of your labors.

Be with your customers and build loyalty

As more and more people spend more time on their smartphones, it is very important for businesses to reach customers where they are, and having a mobile app gives you an edge over others. Mobile apps work to consistently increase customer loyalty, especially in the retail sector.

Reinforce your Brand

Mobile apps offer a unique opportunity for brand reinforcement through a new channel. Customers are encouraged to download the free branded app, where they can customize preferences to fit their specific needs.

Increase your Visibility

In 2013, there were over 50 billion mobile app downloads from the Apple store. A mobile app is like your website in the app store, and increasingly people search for what they need there. Having your own app gives you additional visibility, projects an up-to-date image to your target audience, and builds credibility.

Increase your Accessibility

Smartphone and tablet users are constantly on the go; this means that they don't always have time to sign into a mobile website. And mobile websites are designed for readability and navigation, NOT for process management. Mobile apps give users easy, functional access to the information, products, services, and processes they need in real time, and are optimized for hands-on interaction.

Increase Sell-through

Recent analysis suggests that mobile app users spend more time on a company's mobile app than they spend on the company's mobile website.

Revenue

More satisfied customers ultimately help you earn more revenue from mobile sources.


Stay tuned for more…

Part 1: Mobile Development Series - Introduction

Hi Friends,
My team and I have been working on mobile development for a long time now. Over the last 5 years, I have worked with different styles of mobile development using different technologies. Currently I have a team fully dedicated to developing native apps using Xamarin.Forms. We target iOS, Android, and Windows devices. It's fun as well as very challenging. Making sure that the same code runs smoothly on all platforms and their versions is definitely no cakewalk, but that is where all the fun and challenges come from.
We have learned a lot, and the idea here is to share some of these learnings with everyone out there. Maybe the solutions to issues we faced will help others too. Many of my team members will also help me cover these topics. I hope the content helps you learn as well as solve your own issues.

The blog series parts are as follows:

Part 15: Caching data using Passive requests

This is part 15 of a multi-part series on web performance improvements. The first introductory blog can be found here. In the previous part we discussed how the location of script files impacts your web page's performance. In this part we will discuss active and passive Ajax requests and how caching them can help us improve the performance of web pages.

Caching data can improve the performance of a web page significantly. A key factor in whether the user is kept waiting is whether the Ajax requests are passive or active. The advantage of caching is realized only if the data is present in the cache when it is requested. In most cases, however, the data is not yet available in the cache when the call is made, as shown in Fig 1 below, so we do not get the performance advantage of cached data.


Active requests are made based on the user's current actions (as depicted in Fig 1). An example is finding all the email messages that match the user's search criteria. Even though active Ajax requests are asynchronous, the user may still be kept waiting for the response. It is true that the user won't have to endure a complete page reload, and the UI is still responsive while the user waits. Nevertheless, the user is most likely sitting, waiting for the search results to be displayed before taking any further action.

This data is cached after the first request, so when the same data is requested a second time, we get the performance advantage of the data being served from the cache, as shown in Fig 2.


In many applications the development team can predict the user's behavior pattern within a particular application. In that case they can preload the data based on the prediction, eliminating the wait when making active Ajax requests. To improve performance, it's important to optimize these requests.

Passive requests are made in anticipation of a future need. For example, in a web-based email client, a passive request might be used to download the user’s address book before it’s actually needed. By loading it passively, the client makes sure the address book is already in its cache when the user needs to address an email message.
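
To make this concrete, here is a minimal sketch of a passive request in plain browser JavaScript, assuming a hypothetical /addressbook.json endpoint served with cache-friendly headers:

// After the page loads, wait briefly so rendering isn't disturbed,
// then issue the passive request to warm the browser's cache.
window.onload = function () {
  setTimeout(prefetchAddressBook, 1000);
};

function prefetchAddressBook() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/addressbook.json", true); // asynchronous
  xhr.onload = function () {
    // Nothing to do with the response yet; the point is that a later
    // active request for the same URL can be answered from the cache.
  };
  xhr.send();
}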


The techniques for optimizing active Ajax requests are equally applicable to passive Ajax requests, but since active requests have a greater impact on the user experience, you should start with them. Make sure Ajax requests follow the performance guidelines, especially having a far future Expires header.
To improve the performance of Ajax requests, we need to follow all the previously studied rules, such as:
  • Make the responses cacheable (this is the most important way to improve active Ajax request performance; see the sketch after this list).
  • Gzip Components
  • Reduce DNS Lookups
  • Minify JavaScript
  • Avoid Redirects
  • Reconfigure or remove ETags
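
As referenced in the first rule above, here is a minimal sketch of serving a cacheable Ajax response with a far future Expires header, using Node's built-in http module (the endpoint, port, and dates are hypothetical):

var http = require("http");

http.createServer(function (req, res) {
  if (req.url === "/addressbook.json") {
    res.writeHead(200, {
      "Content-Type": "application/json",
      // A far future Expires header lets the browser reuse the cached
      // response instead of re-requesting it on later page views.
      "Expires": "Thu, 31 Dec 2037 23:59:59 GMT",
      "Cache-Control": "public, max-age=315360000"
    });
    res.end(JSON.stringify({ contacts: [] }));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);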





Part 14: Put script (js) files at the bottom of the HTML page

This is part 14 of a multi-part series on web performance improvements. The first introductory blog can be found here. In the previous part we discussed the pros and cons of putting style-sheets at the top of the HTML page. In this part we will see why we should always prefer to put script (js) files at the bottom of the HTML document.

Whenever a user requests a webpage, the biggest impact on response time is the number of components in the page. Each component generates an HTTP request when the cache is empty, and sometimes even when the cache is primed. Though the browser performs HTTP requests in parallel, it cannot download them all at once. The explanation goes back to the HTTP/1.1 specification, which suggests that browsers download two components in parallel per hostname. Many web pages download all their components from a single hostname. The following figure shows a generic request-response pattern.

If a web page evenly distributed its components across two hostnames, the overall response time would be about twice as fast. Most browsers follow the two-per-hostname guideline by default, but users can override this behavior. Internet Explorer, for example, stores the value in the Registry. In Firefox you can modify it via the network.http.max-persistent-connections-per-server setting in the about:config page. It's interesting to note that for HTTP/1.0, Firefox's default is to download eight components in parallel per hostname. Most web sites today use HTTP/1.1, but the idea of increasing parallel downloads beyond two per hostname may backfire. Instead of relying on users to modify their browser settings, frontend engineers can simply use CNAMEs (DNS aliases) to split their components across multiple hostnames, as sketched below. But maximizing parallel downloads can cost dearly: depending on bandwidth and CPU speed, too many parallel downloads can degrade performance. Research shows that splitting components across two hostnames leads to better performance than using 1, 4, or 10 hostnames.
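
A minimal sketch of such hostname splitting (static1 and static2 are hypothetical CNAMEs that resolve to the same server):

<!-- Spreading components across two aliases roughly doubles parallel downloads. -->
<link rel="stylesheet" href="http://static1.example.com/css/site.css" type="text/css">
<img src="http://static1.example.com/images/logo.gif" alt="logo">
<img src="http://static2.example.com/images/banner.gif" alt="banner">
<script src="http://static2.example.com/js/site.js"></script>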

Scripts Block Downloads

The benefits of downloading components in parallel are clear. However, while the browser is downloading a script, it disables parallel downloads: it won't start any other downloads, even on different hostnames. Some reasons for this behaviour are as follows:

  • The script may use document.write to alter the page content, so the browser waits to make sure the page is laid out appropriately.
  • To guarantee that the scripts are executed in the proper order. If multiple scripts were downloaded in parallel, there’s no guarantee the responses would arrive in the order specified. For example, if the last script was smaller than scripts that appear earlier on the page, it might return first. If there were dependencies between the scripts, executing them out of order would result in JavaScript errors.

Issues with scripts at the Top

If scripts are placed at the top of a web page, the following issues arise, which hurt both the performance of the page and the user experience.

  • Everything in the page is below the script, and the entire page is blocked from rendering and downloading until the script is loaded.
  • All components below the script are blocked from being downloaded.
  • Because this entire page is blocked from rendering, it results in the blank white screen phenomenon. Progressive rendering is critical for a good user experience, but slow scripts delay the user’s feedback.
  • The reduction of parallelized downloads delays how quickly images are displayed in the page.

Scripts at Bottom

The best place to put scripts is at the bottom of the page. The page contents aren't blocked from rendering, and the viewable components in the page are downloaded as early as possible. All the issues caused by scripts at the top can be solved by putting scripts at the bottom, as sketched below.
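
A minimal sketch of this layout (file names are hypothetical):

<html>
<head>
  <title>Example</title>
  <!-- Style-sheets stay at the top so the page renders progressively. -->
  <link rel="stylesheet" href="styles.css" type="text/css">
</head>
<body>
  <div id="content">Visible content renders without waiting on scripts.</div>
  <!-- Scripts go last so they block neither rendering nor downloads. -->
  <script src="script1.js"></script>
  <script src="script2.js"></script>
</body>
</html>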

Other practical scenarios

It is possible for a script to take longer than expected, and for the user's bandwidth to affect its response time. Having multiple scripts in your page compounds the problem. In some situations it's not easy to move scripts to the bottom. If, for example, a script uses document.write to insert part of the page's content, it can't be moved lower in the page. There might also be scoping issues. In many cases, there are ways to work around these situations.

An alternative is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, in Firefox, even deferred scripts block rendering and parallel downloads. In Internet Explorer, components lower in the page are downloaded slightly later. If a script can be deferred, it can also be moved to the bottom of the page. That’s the best thing to do to speed up your web pages.
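
For reference, a sketch of how the DEFER attribute is declared (menu.js is a hypothetical file name):

<!-- A hint to the browser that this script contains no document.write, -->
<!-- so rendering can continue while it loads. -->
<script src="menu.js" defer></script>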

Part 13: Style-sheets at the Top of the HTML page

This is part 13 of a multi-part series on web performance improvements. The first introductory blog can be found here. In the previous part we discussed the performance benefit of Gzipping JavaScript and style-sheets. In this part we will discuss the importance of putting style-sheets at the top of the HTML document.

Consider a situation where a page has a lot of data to load. While the data is loading, the page looks blank because it has not yet been downloaded from the server. This confuses the user, as nothing appears to be happening; they may feel the page is not responding and close the browser tab. In reality the page is not hung, it is just downloading the required data. This is a classic case where we need to keep the user informed by rendering the page progressively.

The importance of giving users visual feedback has been well researched and documented. When we talk about performance, we need to make sure the page loads progressively; in other words, the browser should display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. Progress indicators have three main advantages:

  • It reassures the user that the system has not crashed and is working on the request.
  • It indicates approximately how long the user is expected to wait, thus allowing the user to do other activities during long waits.
  • It provides something for the user to look at, thus making the wait less painful. This latter advantage should not be underestimated and is one reason for recommending a graphic progress bar instead of just stating the expected remaining time in numbers.

In our case the HTML page is the progress indicator. When the browser loads the page progressively, the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.

The problem with putting style-sheets near the bottom of the document is that it prohibits progressive rendering in many browsers. Browsers block rendering to avoid having to redraw elements of the page if their styles change. This rule has less to do with the actual time needed to load the page's components and more to do with how the browser reacts to their order. Ironically, the page that feels slower is the one that loads the visible components faster: the browser delays showing any visible components while it and the user wait for the style-sheet at the bottom.


CSS at the Bottom

Putting style-sheets near the end of the document can delay page loading. The page is completely blank until all the content blasts onto the screen at once. Progressive rendering has been thwarted. This is a bad user experience because there is no visual feedback to reassure the user that her request is being handled correctly. Instead, the user is left to wonder whether anything is happening. That’s the moment when a user abandons your web site and navigates to your competitor.


CSS at the Top

To avoid the blank white screen, move the style-sheet to the top in the document’s HEAD. If we do this, no matter how the page is loaded—whether in a new window, as a reload, or as a home page—the page renders progressively.

There are two ways you can include a style-sheet in your document: the LINK tag and the @import rule.

  • LINK tag: <link rel="stylesheet" href="styles1.css" type="text/css">
  • @import rule, placed inside a STYLE block: @import url("styles2.css");

A STYLE block can contain multiple @import rules, but @import rules must precede all other rules. If this is overlooked, the style-sheet isn't loaded from the @import rule. Using the @import rule also causes an unexpected ordering in how the components are downloaded. The figure below shows the HTTP traffic for all three examples. Each page contains 8 HTTP requests:

  • 5 Images
  • 1 HTML
  • 2 style-sheets

fig1: Page with CSS at bottom

fig2: Page with CSS at Top using Link

fig3: Page with CSS at Top using @Import

The components in the pages in fig1 and fig2 are downloaded in the order in which they appear in the document. However, even though the page in fig3 has the style-sheet at the top in the document HEAD, the style-sheet is downloaded last because it uses @import. As a result, it suffers from the blank white screen problem, just like the page in fig1.

For this reason, the LINK tag is the preferred way. Beyond the easier syntax, there are also performance benefits to using LINK instead of @import: the @import rule causes the blank white screen phenomenon even when used in the document HEAD.

Part 12: Gzip Components like scripts, style-sheets etc

This is part 12 of a multi-part series on web performance improvements. The first introductory blog can be found here. In the previous part we discussed the impact of duplicate scripts on web page performance and how to eliminate such mistakes, which can sometimes hurt performance badly. In this part we will look at compressing components with Gzip.


What is Gzip

Gzip reduces the size of the HTTP response, which reduces web page response time. If an HTTP request results in a smaller response, the transfer time decreases because fewer data packets travel from server to client. The benefit is even greater at slower bandwidths.

How Compression Works

Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.

  • Accept-Encoding: gzip, deflate

If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.

  • Content-Encoding: gzip


What to compress and what not?

  • Servers choose which file types to Gzip based on their compression configuration.
  • Many web sites Gzip their HTML documents.
  • It's also worthwhile to Gzip your scripts and style-sheets, even though they are already minified.
  • Image and PDF files should not be Gzipped because they are already compressed. Trying to Gzip them not only wastes CPU resources, it can also potentially increase file sizes.
  • Gzipping has its own disadvantages. It takes additional CPU cycles on the server to carry out the compression and on the client to decompress the Gzipped file. To determine whether the benefits outweigh the costs, you would have to consider the size of the response, the bandwidth of the connection, and the Internet distance between the client and the server. This information isn't generally available, and even if it were, there would be too many variables to take into consideration.
  • Generally, it's worth Gzipping any file greater than 1 or 2K. The mod_gzip_minimum_file_size directive controls the minimum file size you're willing to compress; the default value is 500 bytes. A configuration sketch follows below.
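
A minimal configuration sketch, assuming a Node/Express server with the compression middleware (on Apache, the mod_gzip/mod_deflate directives mentioned above play the same role):

// npm install express compression
var express = require("express");
var compression = require("compression");

var app = express();

// Gzip responses, but only those larger than 1K; below that the CPU
// cost and Gzip header overhead can outweigh the transfer savings.
app.use(compression({ threshold: 1024 }));

app.get("/", function (req, res) {
  res.type("html").send("<html><body>...</body></html>");
});

app.listen(8080);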


Edge Cases

The page can easily break if either the client or the server makes a mistake: sending Gzipped content to a client that can't understand it, forgetting to declare a compressed response as gzip-encoded, etc. Mistakes don't happen often, but there are edge cases to take into consideration.

Approximately 90% of today’s Internet traffic travels through browsers that claim to support Gzip. If a browser says it supports Gzip you can generally trust it. There are other known problems, but they occur on browsers that represent less than 1% of Internet traffic. A safe approach is to serve compressed content only for browsers that are proven to support it, such as Internet Explorer 6.0 and later and Mozilla 5.0 and later. This is called a browser whitelist approach.

Part 11: Remove Duplicate Scripts

This is part 11 of a multi-part series on web performance improvements. The first introductory blog can be found here. In the previous part we discussed the impact of minifying JavaScript on performance. In this part we will discuss how duplicate script references can negatively impact the performance of web pages.

As we know, whenever a script is referenced inside an HTML page, a call is made to the server to download the referenced script file. This has a direct impact on page performance, so we need to be careful not only to avoid referencing unnecessary files in the web page but also to avoid duplicate references.

In today's development scenario, large distributed teams work on the same application or module, sharing the same source code. It is quite possible for the same script file to be referenced more than once by different team members, resulting in unnecessary calls to the server for the same file.

Another reason is the large number of script references in web pages. Instead of going through all of them to check whether a file is already included, a team member may just include it again. Not everyone realizes the impact of this shortcut, but it will surely hurt the performance of the web page.

Whatever the reason, it impacts page performance in the following ways.

  • Unnecessary HTTP requests are made to download the duplicate scripts. This may not be the case in Firefox, but it definitely is in Internet Explorer. In Internet Explorer, if an external script is included twice and is not cacheable, the browser generates two HTTP requests during page loading. This won't be an issue if we add a far future Expires header to scripts; but if we don't, and we make the mistake of including the script twice, there will be an extra HTTP request. Downloading scripts has a negative impact on response times. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page.
  • In addition to generating unnecessary HTTP requests in Internet Explorer, time is wasted evaluating the script multiple times. This redundant JavaScript execution happens in both Firefox and Internet Explorer, regardless of whether the script is cacheable. The problem of superfluous downloads and evaluations occurs for each additional instance of a script in the page.


Avoiding Duplicate Scripts

To avoid accidentally including the same script more than once, we should implement a script management module in our templating system. Ideally, every script should be included exactly once, using a SCRIPT tag in the HTML page:

<script src="http://my_script1.0.17.js"></script>
<script src="http://your_script1.0.17.js"></script>
<script src="http://his_script1.0.17.js"></script>
<script src="http://her_script1.0.17.js"></script>

Team members should be educated to take precautionary measures to avoid duplicate script references, and code reviews should include checking for such instances. Better still, the check can be automated, as sketched below.
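
A minimal sketch of such a script-management helper in client-side JavaScript (function and file names are hypothetical; a real templating system would perform this check on the server while generating the page):

// Track which script URLs have already been inserted into this page.
var insertedScripts = {};

function insertScript(url) {
  if (insertedScripts[url]) {
    return; // already on the page; skip the duplicate request
  }
  insertedScripts[url] = true;
  var elem = document.createElement("script");
  elem.src = url;
  document.getElementsByTagName("head")[0].appendChild(elem);
}

// Repeated calls for the same file issue only one HTTP request.
insertScript("http://my_script1.0.17.js");
insertScript("http://my_script1.0.17.js"); // ignored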

Part 10: Minify JavaScript and Obfuscation of Code

This is part 10 of a multi-part series on web performance improvements. The first introductory blog can be found here. In the previous part we discussed the pros and cons of including JavaScript and CSS inline versus externally. In this part we will discuss how minifying JavaScript and obfuscating code help us improve the performance of a web page.

The idea behind minification and obfuscation is very simple. Let's consider the following two examples. In the first, the client requests a 3.4 MB file; in the second, a 2 MB file. With even a little knowledge of web applications, anyone can tell that the second download will take less time, assuming all environmental factors are similar.

This makes it clear: smaller files = less time.


Minification is the practice of removing unnecessary characters from code to reduce its size, which results in improved load time. In the minification process, all comments and unneeded whitespace characters (space, newline, and tab) are removed. In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced.

Obfuscation is a technique for optimizing source code. Like minification, it removes comments and whitespace, but on top of that it also munges the code: function and variable names are converted into smaller strings, making the code more compact as well as harder to read. This is typically done to make the code more difficult to reverse-engineer, but munging also helps performance because it reduces the code size beyond what minification achieves.
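
To make the difference concrete, here is a small illustrative fragment (hypothetical code) and roughly what minification and munging would produce:

// Original source:
function computeTotalPrice(itemPrice, itemQuantity) {
  // Apply a fixed 10% tax to the subtotal.
  var subtotal = itemPrice * itemQuantity;
  return subtotal * 1.10;
}

// Minified (comments and whitespace removed):
// function computeTotalPrice(itemPrice,itemQuantity){var subtotal=itemPrice*itemQuantity;return subtotal*1.10;}

// Obfuscated/munged (names shortened as well):
// function a(b,c){var d=b*c;return d*1.10;}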


Minification or Obfuscation

What should we choose to optimize code: minification or obfuscation?

  • Minification is a safe and fairly straightforward process. Obfuscation, on the other hand, is more complex.
  • Because obfuscation is more complex, there’s a higher probability of introducing errors into the code as a result of the obfuscation process itself.
  • Since obfuscators change JavaScript symbols, any symbols that should not be changed (for example, API functions) have to be tagged so that the obfuscator leaves them unaltered.
  • Obfuscated code is more difficult to read. This makes debugging issues in a production environment harder.


Tools

The most popular tool for minifying JavaScript code is JSMin, developed by Douglas Crockford. The JSMin source code is available in C, C#, Java, JavaScript, Perl, PHP, Python, and Ruby. The tool of choice is less clear in the area of JavaScript obfuscation; Dojo Compressor (ShrinkSafe) is one of the best.


Example

The following image shows two scripts of different sizes, 50K and 401K, and the effect of minification and obfuscation on them.

We can conclude the following from the figure above:

  • The smaller script (50K) was reduced to 13K and 12K by minification and obfuscation, respectively. Its download time dropped to 481ms and 471ms, compared to 581ms for the original file.
  • The bigger script (401K) was reduced to 131K and 128K by minification and obfuscation, respectively, a significant reduction in size. Its download time dropped to 769ms and 755ms, compared to 1112ms for the original file.
  • Obfuscation gives better results in both cases: it reduces the size further than minification and thus takes less time to download.
  • Minifying scripts reduces response times without carrying the risks that come with obfuscation.


The best way out

There are a couple of other ways to squeeze waste out of your JavaScript.

  • Inline Scripts
  • Inline JavaScript blocks should also be minified, though this practice is less common on today's web sites. In practice, minifying inline scripts is easier than minifying external scripts: whatever page generation platform you use, there is a version of JSMin that can be integrated with it, and once that is in place, all inlined JavaScript can be minified before being echoed to the HTML document.

  • Gzip and Minification
  • Using gzip typically reduces the size of a file by 70%, so gzip compression decreases file sizes more than minification does. Interestingly, obfuscation plus gzip performs about the same as minification plus gzip, which is another reason to just stick with minification and avoid the additional risks of obfuscation. Gzip compression has the biggest impact, but minification further reduces file sizes, and as the use and size of JavaScript grow, so will the savings gained by minifying your JavaScript code.


Minifying CSS

The savings from minifying CSS are typically less than the savings from minifying JavaScript because CSS generally has fewer comments and less whitespace than JavaScript. The greatest potential for size savings comes from optimizing CSS: merging identical classes, removing unused classes, etc. The best solution might be one that removes comments and whitespace and performs straightforward optimizations, such as using abbreviations ("#606" instead of "#660066") and removing unnecessary units ("0" instead of "0px"). A small sketch follows.
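
A minimal sketch of those optimizations on a hypothetical rule:

/* Before: */
.button {
  color: #660066;
  margin: 0px; /* the unit is unnecessary for zero values */
}

/* After (color abbreviated, unit dropped, whitespace removed): */
/* .button{color:#606;margin:0} */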

Part 9: Externally reference JavaScript and CSS

This is part 9 of a multi-part series on web performance improvements. The first introductory blog can be found here. In the previous part we discussed the impact of the HTTP request cycle on performance and how we can reduce it. In this part we will discuss the role of JavaScript and CSS in the page request cycle and their impact on performance.

JavaScript and CSS can be included in a page in two ways: inline or externally. Each has its own pros and cons and impacts the performance of the page. Let's go through the pros and cons of both first.


Inline vs. External

Let's try to understand using an example. Consider two pages, as shown in the image below. Page 1 has all the HTML, CSS, and JavaScript in a single file (inline), while Page 2 references its CSS and JavaScript externally. So in the case of Page 2 there are 1 HTML, 1 CSS, and 4 JavaScript files.

Although the total amount of data downloaded is the same, the inline example is faster than the external example. This is primarily because the external example suffers from the overhead of multiple HTTP requests. The external example benefits from the style-sheet and scripts being downloaded in parallel, but the difference of 1 HTTP request compared to 6 is what makes the inline example faster.

So, theoretically, inline is faster than external. Despite these results, using external files in the real world generally produces faster pages. This is due to the browser caching external files. HTML documents, at least those containing dynamic content, are typically not configured to be cached; when the HTML document is not cached, the inline JavaScript and CSS are downloaded every time it is requested. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.

The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using the following metrics.

  • Page Views

    Two typical cases are:

    • If a typical user visits the website once per month, it's likely that between visits any external JavaScript and CSS files have been purged from the browser's cache, even if the components have a far future Expires header. The fewer page views per user, the stronger the argument for inlining JavaScript and CSS.
    • On the other hand, if a typical user has many page views, the browser is more likely to have external components in its cache. The benefit of serving JavaScript and CSS using external files grows along with the number of page views per user per month or page views per user per session.
  • Empty Cache vs. Primed Cache

    Knowing the potential for users to cache external components is critical to comparing inlining versus external files. The percentage of page views with a primed cache is higher than the percentage of unique users with a primed cache because many users perform multiple page views per session. Users may show up once during the day with an empty cache, but make several subsequent page views with a primed cache. These metrics vary depending on the type of web site. Knowing these statistics helps in estimating the potential benefit of using external files versus inlining. If the nature of your site results in higher primed cache rates for your users, the benefit of using external files is greater. If a primed cache is less likely, inlining becomes a better choice.

  • Component Reuse

    If every page on your site uses the same JavaScript and CSS, using external files will result in a high reuse rate for these components. Using external files becomes more advantageous in this situation because the JavaScript and CSS components are already in the browser’s cache while users navigate across pages. Ultimately, your decision about the boundaries for JavaScript and CSS external files affects the degree of component reuse. If you can find a balance that results in a high reuse rate, the argument is stronger for deploying your JavaScript and CSS as external files. If the reuse rate is low, inlining might make more sense.


Inline/External, how to decide?

In analyzing the tradeoff between inlining and using external files, the key is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. The three metrics above (page views, empty cache vs. primed cache, and component reuse) can help us determine the best option; the right answer for any specific web site depends on them. Many web sites fall in the middle of these metrics: they get 5–15 page views per user per month, with 2–5 page views per user per session; 40–60% of unique users per day have a primed cache; and 75–85% of page views per day are performed with a primed cache. There's a fair amount of JavaScript and CSS reuse across pages, resulting in a handful of files that cover every major page type. For sites with these metrics, the best solution is generally to deploy the JavaScript and CSS as external files. This is demonstrated by the example where the external components can be cached by the browser: loading that page repeatedly and comparing the results to those of the first example, "Inlined JS and CSS," shows that using external files with a far future Expires header is the fastest approach.


Special Case: Home Pages

There are some exceptional cases where we go against these metrics. One such exception is the home page, where inlining is preferable. A home page is the URL chosen as the browser's default page, such as Facebook's home page, http://www.facebook.com. Let's look at the three metrics from the perspective of home pages:

  • Page views
  • Home pages have a high number of page views per month: ideally, whenever the browser is opened, the home page is visited. However, often the home page is visited only once per session.

  • Empty cache vs. primed cache
  • The primed cache percentage might be lower than on other sites. Many users, for security reasons, elect to clear the cache every time they close the browser; the next time they open the browser, this generates an empty cache page view of the home page.

  • Component reuse
  • The reuse rate is low. Many home pages are the only page a user visits on the site, so there is really no reuse.

Analyzing these metrics, there's an inclination toward inlining over using external files. Home pages have one more factor that tips the scale toward inlining: a high demand for responsiveness, even in the empty cache scenario. If a company decides to launch a campaign encouraging users to set their home pages to the company's site, the last thing it wants is a slow home page. For the campaign to succeed, the page must be fast.


The balancing act

Most scenarios, when viewed purely from a performance standpoint, point to inlining. But in real applications it is still inefficient to add all that JavaScript and CSS to the page and forgo the browser's cache. So what is the solution? How do we decide what is good for performance? The following two techniques allow us to gain the benefits of inlining as well as of caching external files.

  • Post-Onload Download

    Some home pages typically have only one page view per session, but that's not the case for all of them. For home pages that are the first of many page views, we want to inline the JavaScript and CSS for the home page itself, but leverage external files for all secondary page views. This is accomplished by dynamically downloading the external components in the home page after it has completely loaded (via the onload event). This places the external files in the browser's cache in anticipation of the user continuing on to other pages.

    The post-onload download JavaScript code associates the doOnload function with the document’s onload event. After a one-second delay to make sure the page is completely rendered, the appropriate JavaScript and CSS files are downloaded. This is done by creating the appropriate DOM elements (script and link, respectively) and assigning the specific URL:

    function doOnload() {
      // Wait one second after onload to make sure the page is completely rendered.
      setTimeout(downloadComponents, 1000);
    }
    window.onload = doOnload;

    // Download external components dynamically using JavaScript.
    function downloadComponents() {
      downloadJS("http://stevesouders.com/hpws/testsma.js");
      downloadCSS("http://stevesouders.com/hpws/testsm.css");
    }

    // Download a script dynamically.
    function downloadJS(url) {
      var elem = document.createElement("script");
      elem.src = url;
      document.body.appendChild(elem);
    }

    // Download a style-sheet dynamically.
    function downloadCSS(url) {
      var elem = document.createElement("link");
      elem.rel = "stylesheet"; // note: the rel value must be "stylesheet"
      elem.type = "text/css";
      elem.href = url;
      document.body.appendChild(elem);
    }

    In these pages, the JavaScript and CSS are loaded into the page twice: inline, then external. For this to work, the code has to deal with double definition. Scripts, for example, can define but not execute functions. CSS that uses relative metrics may be problematic if applied twice. Inserting these components into an invisible IFrame is a more advanced approach that avoids these problems.

  • Dynamic Inlining

    If a home page server knew whether a component was in the browser’s cache, it could make the optimal decision about whether to inline or use external files. Although there is no way for a server to see what’s in the browser’s cache, cookies can be used as an indicator. By returning a session-based cookie with the component, the home page server can make a decision about inlining based on the absence or presence of the cookie. If the cookie is absent, the JavaScript or CSS is inlined. If the cookie is present, it’s likely the external component is in the browser’s cache and external files are used.

    Since every user starts off without the cookie, there has to be a way to bootstrap the process. This is accomplished by using the post-onload download technique from the previous example. The first time a user visits the page, the server sees that the cookie is absent and it generates a page that inlines the components. The server then adds JavaScript to dynamically download the external files (and set a cookie) after the page has loaded. The next time the page is visited, the server sees the cookie and generates a page that uses external files.

    The beauty of this approach is how forgiving it is. If there’s a mismatch between the state of the cookie and the state of the cache, the page still works. It’s not as optimized as it could be. The session-based cookie technique errs on the side of inlining even though the components are in the browser’s cache—if the user reopens the browser, the session-based cookie is absent but the components may still be cached. Changing the cookie from session-based to short-lived (hours or days) addresses this issue, but moves toward erring on the side of using external files when they’re not truly in the browser’s cache. Either way, the page still works, and across all users there is an improvement in response times by more intelligently choosing between inlining versus using external files.
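
    A minimal server-side sketch of this cookie test (assuming a Node/Express stack; file names, cookie name, and port are hypothetical):

    var express = require("express");
    var fs = require("fs");

    var app = express();
    app.use(express.static(__dirname)); // serves the external /app.js

    app.get("/", function (req, res) {
      // If the session cookie is present, the external file is likely cached.
      var primed = (req.headers.cookie || "").indexOf("primed=1") !== -1;
      if (primed) {
        // Use the external file: the browser should have it cached.
        res.send('<html><body>...<script src="/app.js"></script></body></html>');
      } else {
        // First visit: inline the JavaScript, set the cookie, and download
        // the external file after onload to prime the cache.
        var js = fs.readFileSync("app.js", "utf8");
        res.set("Set-Cookie", "primed=1");
        res.send("<html><body>...<script>" + js + "</script>" +
                 "<script>window.onload = function () {" +
                 "  setTimeout(function () {" +
                 '    var e = document.createElement("script");' +
                 '    e.src = "/app.js";' +
                 "    document.body.appendChild(e);" +
                 "  }, 1000);" +
                 "};</script></body></html>");
      }
    });

    app.listen(8080);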

Application Performance Series - Part 7: Reduce HTTP request cycles as much as possible

This is part 7 of a multi-part series on web performance improvements. The first introductory blog can be found here. In the previous part we discussed the negative impact of redirects on web application performance. In this part we will discuss how reducing the number of HTTP requests between client and server helps improve performance.

Whenever a user requests a webpage, a number of trips are made between the client browser and the server to get the HTML and the components required to render the page. For example, in the figure below there are a number of calls, one per component required by the page. On most sites, only 10-20% of the average time is spent retrieving the requested HTML document; the remaining 80-90% is spent making HTTP requests for all the components (images, scripts, style-sheets, Flash, etc.) referenced in the HTML document.

Fig: Request-Response Flow

There is a separate call for each image, CSS file, .js file, or any other component. These many round trips from browser to server increase the page rendering time. Thus, a simple way to improve response time is to reduce the number of components, which in turn reduces the number of HTTP requests.

What can we do to reduce calls to the server?

Reducing the number of components needed to render the page is key. Techniques like image maps, CSS sprites, inline images, and combined scripts and style-sheets help us reduce the number of HTTP requests.

1. Use Image Maps whenever possible

On many sites, images are effectively used as a medium of communication: even the hyperlinks, buttons, and navigation bars are composed of images, and they have become an integral part of any modern website. In its simplest form, a hyperlink associates the destination URL with some text; a prettier alternative is to associate the hyperlink with an image. Let's take the example of the following navigation bar, and suppose it is formed of 5 different images, as shown below.

When the page is rendered, there will be 5 calls to the server (1 for each image), which hurts the performance of the website. If you use multiple hyperlinked images in this way, image maps may be a way to reduce the number of HTTP requests without changing the page's look and feel. An image map allows you to associate multiple URLs with a single image; the destination URL is chosen based on where the user clicks on the image. The HTML for converting the navigation bar in the figure above to an image map shows how the MAP tag is used:

<img usemap="#map1" border=0 src="/images/imagemap.gif">
<map name="map1">
<area shape="rect" coords="0,0,25,25" href="home.html" title="Home">
<area shape="rect" coords="30,0,60,25" href="gifts.html" title="Gifts">
<area shape="rect" coords="69,0,101,25" href="cart.html" title="Cart">
<area shape="rect" coords="100,0,131,25" href="settings.html" title="Settings">
<area shape="rect" coords="136,0,167,25" href="help.html" title="Help">
</map>

Limitations

  • Defining the area coordinates of the image map is tedious and error-prone.
  • It is next to impossible to define exact coordinates for any shape other than rectangles.
  • Creating image maps via DHTML won't work in Internet Explorer.

Though image maps have limitations, if multiple images are used in a navigation bar or other hyperlinks, switching to an image map is an easy way to speed up the page.

2. Use CSS Sprites

CSS sprites are an alternative to image maps that also combine images, and they're much more flexible. In CSS sprites, multiple images are combined into a single image.

Let's take the previous example with CSS sprites. The five links are contained in a DIV named navbar. Each link wraps a SPAN that uses a single background image, backgroundbg.gif, as defined in the #navbar span rule. Each SPAN has a different class that specifies the offset into the CSS sprite using the background-position property (a sketch of the corresponding markup follows the CSS below):

#navbar span {
  width:31px;
  height:31px;
  display:inline;
  float:left;
  background-image:url(/images/backgroundbg.gif);
}
.home { background-position:0 0; margin-right:4px; margin-left: 4px;}
.gifts { background-position:-32px 0; margin-right:4px;}
.cart { background-position:-64px 0; margin-right:4px;}
.settings { background-position:-96px 0; margin-right:4px;}
.help { background-position:-128px 0; margin-right:0px;}
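
For completeness, a minimal sketch of the corresponding markup, reusing the link targets from the earlier image map example:

<div id="navbar">
  <a href="home.html"><span class="home"></span></a>
  <a href="gifts.html"><span class="gifts"></span></a>
  <a href="cart.html"><span class="cart"></span></a>
  <a href="settings.html"><span class="settings"></span></a>
  <a href="help.html"><span class="help"></span></a>
</div>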

  • In the case of an image map the images must be contiguous; this constraint doesn't apply to CSS sprites.
  • They reduce HTTP requests by combining images and are more flexible than image maps.
  • An added benefit is reduced download size. People generally assume the combined image will be larger than the sum of the separate images because it has additional area used for spacing. In fact, the combined image tends to be smaller than the sum of the separate images, as a result of reducing image overhead (color tables, formatting information, etc.).

If there are a lot of images in any web page for backgrounds, buttons, navigation-bars, links, etc., CSS sprites are an elegant solution that results in clean markup, fewer images to deal with, and faster response times.

3. Combine scripts and style-sheets (whenever possible)

A site without JavaScript!!! The thought itself is scary in today's web world. JavaScript and CSS are integral parts of website development, and one cannot imagine a beautiful site without them. Developers need to choose whether to "inline" their JavaScript and CSS or include it from external script and style-sheet files. In general, using external scripts and style-sheets is better for performance. However, if we follow the recommended modular approach of breaking code into many small files, performance decreases, because each file results in an additional HTTP request.

So to increase performance, if possible, multiple scripts should be combined into a single script, and multiple style-sheets should be combined into a single style-sheet. In the ideal situation, there would be no more than one script and one style-sheet in each page.

For developers who have been trained to write modular code whether in JavaScript or some other programming language, this suggestion of combining everything into a single file seems like a step backward, and indeed it would be bad in your development environment to combine all your JavaScript into a single file. One page might need script1, script2, and script3, while another page needs script1, script3 and script4. The solution is to follow the model of compiled languages and keep the JavaScript modular while putting in place a build process for generating a target file from a set of specified modules.

It's easy to imagine a build process that combines scripts and style-sheets by simply concatenating the appropriate files into a single file. Combining files is easy; the difficult part can be the growth in the number of combinations. If you have a lot of pages with different module requirements, the number of combinations can be large. A minimal sketch of such a build step follows.
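
A minimal sketch of such a build step using Node (module and bundle names are hypothetical):

// build.js: concatenate the modules each page needs into one target file.
var fs = require("fs");

function bundle(modules, target) {
  var combined = modules
    .map(function (file) { return fs.readFileSync(file, "utf8"); })
    .join("\n;\n"); // a separating semicolon guards against files that lack one
  fs.writeFileSync(target, combined);
}

// One page needs script1, script2, and script3; another needs script1, script3, and script4.
bundle(["script1.js", "script2.js", "script3.js"], "page-a.bundle.js");
bundle(["script1.js", "script3.js", "script4.js"], "page-b.bundle.js");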