Monday, April 23, 2012

Factors Affecting the Performance of a Web Application

Although software development constantly strives toward improvement, it has
long been understood that no application will ever be 100% perfect. An
application's performance, in turn, can only be judged against its
performance objectives.
Performance problems affect all types of systems, regardless of whether they
are client/server or Web application systems. It is imperative to understand
the factors affecting system performance before embarking on the task of
handling them.
Generally speaking, the factors affecting performance may be divided into
two broad categories: project-management-oriented and technical.
Project Management Factors Affecting Performance
In the modern Software Development Life Cycle (SDLC), the main phases are
subject to time constraints in order to address ever-growing competition.
This gives rise to the following project management issues:
➤ Shorter coding time during development may lead to a lower-quality product
due to a lack of attention to performance.
➤ Information missed because of the rapid pace may invalidate the
performance objectives.
➤ Inconsistent internal designs may surface after product deployment, for
example, cluttered object models or convoluted screen-navigation sequences.
➤ A higher probability of violating coding standards results in unoptimized
code that may consume too many resources.
➤ Modules may not be reusable in future projects because of
project-specific design.
➤ Modules may not be designed for scalability.
➤ The system may collapse under a sudden increase in user load.

Technical Factors Affecting Performance
While project-management-related issues have a great impact on the output,
technical problems can severely affect the application's overall
performance. Problems may stem from the choice of technology platform: a
platform designed for a specific purpose may not perform well under
different conditions.
Usually, however, technical problems arise from developers' negligence
regarding performance. A common practice among many developers is not to
optimize code at the development stage. Such code may needlessly consume
scarce system resources such as memory and processor time. These coding
practices may lead to severe performance bottlenecks such as:
➤ memory leaks
➤ array bound errors
➤ inefficient buffering
➤ too many processing cycles
➤ large numbers of HTTP transactions
➤ too many file transfers between memory and disk
➤ inefficient session state management
➤ thread contention under high concurrent user load
➤ poor architecture sizing for peak load
➤ inefficient SQL statements
➤ lack of proper indexing on the database tables
➤ inappropriate configuration of the servers
These problems are difficult to trace once the code is packaged for
deployment, and tracking them down requires special tools and methodologies.
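
As a hedged illustration of the first bottleneck above, the following Java sketch shows one of the most common leak patterns: a static, unbounded cache that keeps every entry reachable for the lifetime of the application. The class and method names are hypothetical, not from any particular product.

    import java.util.HashMap;
    import java.util.Map;

    public class ReportCache {

        // Anti-pattern: a static, unbounded map. Entries are never evicted,
        // so every cached report stays reachable and the heap grows until
        // the JVM eventually throws OutOfMemoryError under sustained load.
        private static final Map<String, byte[]> CACHE = new HashMap<>();

        public static byte[] getReport(String reportId) {
            // computeIfAbsent caches the rendered report forever.
            return CACHE.computeIfAbsent(reportId, ReportCache::render);
        }

        private static byte[] render(String reportId) {
            // Placeholder for an expensive rendering step.
            return new byte[1024 * 1024]; // roughly 1 MB per distinct report ID
        }
    }

A bounded cache (for example, an LRU map built on LinkedHashMap with removeEldestEntry, or a caching library with an explicit maximum size) avoids this class of leak.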
Another cluster of technical factors affecting performance relates to
security. An application's performance and its security are commonly at
odds, since adding layers of security (SSL, private/public key encryption,
and so on) is computationally intensive.
Network-related issues must also be taken into account, especially with
regard to Web applications. They may come from various sources, such as:
➤ An aging or unoptimized network infrastructure
➤ Slow Web site connections, which increase network traffic and hence
degrade response times
➤ Imbalanced load across servers, which affects performance

Thursday, April 19, 2012

Test Strategy vs. Test Plan

Test Strategy
A Test Strategy document is a high-level document, normally developed by the project manager. It defines the testing approach to be used to achieve the testing objectives, and it is normally derived from the Business Requirement Specification document.
The Test Strategy document is a static document, meaning that it is not updated often. It sets the standards for testing processes and activities, and other documents, such as the Test Plan, draw their contents from the standards set in the Test Strategy document.
Some companies include the test approach or strategy inside the Test Plan, which is fine, and it is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a number of separate Test Plans, one for each phase or level of testing.
Components of the Test Strategy document
  • Scope and Objectives
  • Business issues
  • Roles and responsibilities
  • Communication and status reporting
  • Test deliverables
  • Industry standards to follow
  • Test automation and tools
  • Testing measurements and metrics
  • Risks and mitigation
  • Defect reporting and tracking
  • Change and configuration management
  • Training plan
Test Plan
The Test Plan document, on the other hand, is derived from the Product Description, the Software Requirement Specification (SRS), or Use Case documents.
The Test Plan document is usually prepared by the Test Lead or Test Manager, and the focus of the document is to describe what to test, how to test, when to test, and who will do which tests.
It is not uncommon to have one Master Test Plan as a common document across the test phases, with each test phase having its own Test Plan document.
There is much debate as to whether the Test Plan should also be a static document, like the Test Strategy mentioned above, or whether it should be updated often to reflect changes in the direction of the project and its activities.
My own personal view is that when a testing phase starts and the Test Manager is “controlling” the activities, the test plan should be updated to reflect any deviation from the original plan. After all, planning and control are continuous activities in the formal test process. A typical Test Plan contains the following sections:
  • Test Plan ID
  • Introduction
  • Test items
  • Features to be tested
  • Features not to be tested
  • Test techniques
  • Testing tasks
  • Suspension criteria
  • Feature pass/fail criteria
  • Test environment (Entry criteria, Exit criteria)
  • Test deliverables
  • Staff and training needs
  • Responsibilities
  • Schedule
This is a standard approach to preparing Test Plan and Test Strategy documents, but practices can vary from company to company.

Case Studies – Identifying Performance-testing Objectives



Case Study 1

Scenario

A 40-year-old financial services company with 3,000 employees is implementing its annual Enterprise Resource Planning (ERP) software upgrade, including new production hardware. The last upgrade resulted in disappointing performance and many months of tuning during production.

Performance Objectives

The performance-testing effort was based on the following overall performance objectives:
  • Ensure that the new production hardware is no slower than the previous release.
  • Determine configuration settings for the new production hardware.
  • Tune customizations. 

Performance Budget/Constraints

The following budget limitations constrained the performance-testing effort:
  • No server should have sustained processor utilization above 80 percent under any anticipated load. (Threshold; a minimal monitoring sketch follows this list.)
  • No single requested report is permitted to lock more than 20 MB of RAM and 15-percent processor utilization on the Data Cube Server.
  • No combination of requested reports is permitted to lock more than 100 MB of RAM and 50-percent processor utilization on the Data Cube Server at one time.
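
As a purely illustrative sketch of how the 80-percent threshold above might be watched (assuming the check runs on the monitored server's own JVM; the polling interval and breach window are invented, not from the case study):

    import com.sun.management.OperatingSystemMXBean;
    import java.lang.management.ManagementFactory;

    public class CpuBudgetMonitor {

        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os = (OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();

            final double budget = 0.80; // the 80-percent utilization threshold
            int consecutiveBreaches = 0;

            while (true) {
                // System-wide CPU load in the range 0.0-1.0 (JDK 14+;
                // older JDKs expose the same value as getSystemCpuLoad()).
                double load = os.getCpuLoad();
                if (load >= 0) { // a negative value means "not available"
                    consecutiveBreaches = load > budget ? consecutiveBreaches + 1 : 0;
                    if (consecutiveBreaches >= 12) { // about one minute at 5-second samples
                        System.err.printf("Sustained CPU above %.0f%%: %.1f%%%n",
                                budget * 100, load * 100);
                    }
                }
                Thread.sleep(5_000);
            }
        }
    }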

Performance-Testing Objectives

The following priority objectives focused the performance testing:
  • Verify that there is no performance degradation over the previous release.
  • Verify the ideal configuration for the application in terms of response time, throughput, and resource utilization.
  • Resolve existing performance inadequacy with the Data Cube Server.

Questions

The following questions helped to determine relevant testing objectives:
  • What is the reason for deciding to test performance? 
  • In terms of performance, what issues concern you most in relation to the upgrade?
  • Why are you concerned about the Data Cube Server?

Case Study 2

Scenario

A financial institution with 4,000 users distributed among the central headquarters and several branch offices is experiencing performance problems with business applications that deal with loan processing.
Six major business operations have been affected by problems related to slowness as well as high resource consumption and error rates identified by the company’s IT group. The consumption issue is due to high processor usage in the database, while the errors are related to database queries with exceptions.

Performance Objectives

The performance-testing effort was based on the following overall performance objectives:
  • The system must support all users in the central headquarters and branch offices who use the system during peak business hours.
  • System backups must complete within the smallest possible timeframe.
  • Database queries should be optimal, resulting in processor utilization no higher than 50-75 percent during normal and peak business activities.

Performance Budget/Constraints

The following budget limitations constrained the performance-testing effort:
  • No server should have sustained processor utilization above 75 percent under any anticipated load (normal and peak) when users in headquarters and branch offices are using the system. (Threshold)
  • When system backups are being performed, the response times of business operations should not exceed, by more than 8 percent, the response times experienced when no backup is under way.
  • Response times for all business operations during normal and peak load should not exceed 6 seconds.
  • No errors that could result in the loss of user-submitted loan applications are allowable during database transaction activity.

Performance-Testing Objectives

The following priority objectives focused the performance testing:
  • Help to optimize the loan-processing applications to ensure that the system meets stated business requirements.
  • Test for 100-percent coverage of all six business processes affected by the loan-processing applications.
  • Target database queries that were confirmed to be extremely sub-optimal, with improper hints and nested sub-query hashing (an illustrative query rewrite follows this list).
  • Help to remove superfluous database queries in order to minimize transactional cost.
  • Tests should monitor for relevant component metrics: end-user response time, error rate, database transactions per second, and overall processor, memory, network, and disk status for the database server.
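
As an illustrative sketch of the query-tuning objective, the constants below contrast a correlated sub-query of the kind described above with an equivalent join. The schema (loans, branches) and column names are invented for illustration; the real queries in the case study are unknown.

    public class QueryRewriteExample {
        // Hypothetical schema: loans(loan_id, amount, branch_id, region),
        // branches(branch_id PRIMARY KEY, region).

        // Sub-optimal: the correlated sub-query is re-evaluated per loan row.
        static final String SLOW =
            "SELECT l.loan_id, l.amount FROM loans l " +
            "WHERE l.branch_id IN (SELECT b.branch_id FROM branches b " +
            "WHERE b.region = l.region)";

        // Equivalent join (branch_id is unique in branches), which lets the
        // optimizer use an index on branches(branch_id, region) instead of
        // re-running the inner query once per outer row.
        static final String FAST =
            "SELECT l.loan_id, l.amount FROM loans l " +
            "JOIN branches b ON b.branch_id = l.branch_id AND b.region = l.region";

        public static void main(String[] args) {
            System.out.println(SLOW);
            System.out.println(FAST);
        }
    }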

Questions

The following questions helped to determine relevant testing objectives:
  • What is the reason for deciding to test performance? 
  • In terms of performance, what issues concern you most in relation to the queries that may be causing processor bottlenecks and transactional errors?
  • What business cases related to the queries might be causing processor and transactional errors?
  • What database backup operations might affect performance during business operations?
  • What are the timeframes for back-up procedures that might affect business operations, and what are the most critical scenarios involved in the time frame?
  • How many users are there and where are they located (headquarters, branch offices) during times of critical business operations?
These questions helped performance testers identify the most important concerns in order to help prioritize testing efforts. The questions also helped determine what information to include in conversations and reports.

Case Study 3

Scenario

A Web site is responsible for conducting online surveys with 2 million users in a one-hour timeframe. The site infrastructure was built with wide area network (WAN) links all over the world. The site administrators want to test the site’s performance to ensure that it can sustain 2 million user visits in one hour.

Performance Objectives

The performance-testing effort was based on the following overall performance objectives:
  • The Web site must be able to support a peak load of 2 million user visits in a one-hour timeframe.
  • Survey submissions should not be compromised due to application errors.

Performance Budget/Constraints

The following budget limitations constrained the performance-testing effort:
  • No server can have sustained processor utilization above 75 percent under any anticipated load (normal and peak) during submission of surveys (2 million at peak load).
  • Response times for all survey submissions must not exceed 8 seconds during normal and peak loads.
  • No survey submissions can be lost due to application errors.

Performance-Testing Objectives

The following priority objectives focused the performance testing:
  • Simulate a single scripted user transaction, with 2 million total virtual users in one hour distributed between two datacenters and 1 million active users at each data center (a minimal load-driver sketch follows this list).
  • Simulate the peak load of 2 million user visits in a one-hour period.
  • Test for 100-percent coverage of all survey types.
  • Monitor for relevant component metrics: end-user response time, error rate, database transactions per second, and overall processor, memory, network and disk status for the database server.
  • Test the error rate to determine the reliability metrics of the survey system.
  • Test by using firewall and load-balancing configurations.
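
A minimal sketch of how such a load simulation might be driven, using only the JDK's built-in HttpClient and a thread pool. The URL is hypothetical, and the user counts are deliberately scaled far down from the 2 million target; a real test of this size would use a dedicated load-testing tool distributed across both datacenters.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    public class SurveyLoadSketch {
        static final int VIRTUAL_USERS = 100;   // scaled-down stand-in
        static final int REQUESTS_PER_USER = 10;
        static final String SURVEY_URL = "http://example.com/survey/submit"; // hypothetical

        public static void main(String[] args) throws InterruptedException {
            HttpClient client = HttpClient.newHttpClient();
            ExecutorService pool = Executors.newFixedThreadPool(VIRTUAL_USERS);
            AtomicInteger errors = new AtomicInteger();
            AtomicInteger overBudget = new AtomicInteger(); // responses over 8 seconds

            HttpRequest request = HttpRequest.newBuilder(URI.create(SURVEY_URL))
                    .timeout(Duration.ofSeconds(30))
                    .POST(HttpRequest.BodyPublishers.ofString("answer=yes"))
                    .build();

            for (int u = 0; u < VIRTUAL_USERS; u++) {
                pool.submit(() -> {
                    for (int i = 0; i < REQUESTS_PER_USER; i++) {
                        long start = System.nanoTime();
                        try {
                            HttpResponse<Void> rsp =
                                    client.send(request, HttpResponse.BodyHandlers.discarding());
                            if (rsp.statusCode() >= 400) errors.incrementAndGet();
                        } catch (Exception e) {
                            errors.incrementAndGet(); // a lost submission counts as an error
                        }
                        long millis = (System.nanoTime() - start) / 1_000_000;
                        if (millis > 8_000) overBudget.incrementAndGet(); // 8-second budget
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            System.out.printf("errors=%d, over-budget=%d%n", errors.get(), overBudget.get());
        }
    }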

Questions

The following questions helped to determine relevant testing objectives:
  • What is the reason for deciding to test performance?
  • In terms of performance, what issues concern you most in relation to survey submissions that might cause data loss or user abandonment due to slow response time?
  • What types of submissions need to be simulated for surveys related to business requirements?
  • Where are the users located geographically when submitting the surveys?

Important aspects of a performance test project


For a performance testing project to be successful, both the approach to testing performance and the testing itself must be relevant to the context of the project. Without an understanding of the project context, performance testing is bound to focus on only those items that the performance tester or test team assumes to be important, as opposed to those that truly are important, frequently leading to wasted time, frustration, and conflicts.
The project context is nothing more than those things that are, or may become, relevant to achieving project success. This may include, but is not limited to:
  • The overall vision or intent of the project
  • Performance testing objectives
  • Performance success criteria
  • The development life cycle
  • The project schedule
  • The project budget
  • Available tools and environments
  • The skill set of the performance tester and the team
  • The priority of detected performance concerns
  • The business impact of deploying an application that performs poorly
Some examples of items that may be relevant to the performance-testing effort in your project context include:
  • Project vision.  Before beginning performance testing, ensure that you understand the current project vision. The project vision is the foundation for determining what performance testing is necessary and valuable. Revisit the vision regularly, as it has the potential to change as well.
  • Purpose of the system.  Understand the purpose of the application or system you are testing. This will help you identify the highest-priority performance characteristics on which you should focus your testing. You will need to know the system’s intent, the actual hardware and software architecture deployed, and the characteristics of the typical end user.
  • Customer or user expectations.  Keep customer or user expectations in mind when planning performance testing. Remember that customer or user satisfaction is based on expectations, not simply compliance with explicitly stated requirements.
  • Business drivers.  Understand the business drivers – such as business needs or opportunities – that are constrained to some degree by budget, schedule, and/or resources. It is important to meet your business requirements on time and within the available budget.
  • Reasons for testing performance.  Understand the reasons for conducting performance testing very early in the project. Failing to do so might lead to ineffective performance testing. These reasons often go beyond a list of performance acceptance criteria and are bound to change or shift priority as the project progresses, so revisit them regularly as you and your team learn more about the application, its performance, and the customer or user.
  • Value that performance testing brings to the project.  Understand the value that performance testing is expected to bring to the project by translating the project- and business-level objectives into specific, identifiable, and manageable performance testing activities. Coordinate and prioritize these activities to determine which performance testing activities are likely to add value.
  • Project management and staffing.  Understand the team’s organization, operation, and communication techniques in order to conduct performance testing effectively.
  • Process.  Understand your team’s process and interpret how that process applies to performance testing. If the team’s process documentation does not address performance testing directly, extrapolate the document to include performance testing to the best of your ability, and then get the revised document approved by the project manager and/or process engineer.
  • Compliance criteria.  Understand the regulatory requirements related to your project. Obtain compliance documents to ensure that you have the specific language and context of any statement related to testing, as this information is critical to determining compliance tests and ensuring a compliant product. Also understand that the nature of performance testing makes it virtually impossible to follow the same processes that have been developed for functional testing.
  • Project schedule.  Be aware of the project start and end dates, the hardware and environment availability dates, the flow of builds and releases, and any checkpoints and milestones in the project schedule.

Wednesday, April 18, 2012

Difference between an Application Server and a Web Server?

Taking a big step back, a Web server serves pages for viewing in a Web browser, while an application server provides methods that client applications can call. A little more precisely, you can say that:
A Web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols.


Let's examine each in more detail.

The Web server

A Web server handles the HTTP protocol. When the Web server receives an HTTP request, it responds with an HTTP response, such as sending back an HTML page. To process a request, a Web server may respond with a static HTML page or image, send a redirect, or delegate the dynamic response generation to some other program such as CGI scripts, JSPs (JavaServer Pages), servlets, ASPs (Active Server Pages), server-side JavaScripts, or some other server-side technology. Whatever their purpose, such server-side programs generate a response, most often in HTML, for viewing in a Web browser.
Understand that a Web server's delegation model is fairly simple. When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server doesn't provide any functionality beyond simply providing an environment in which the server-side program can execute and pass back the generated responses. The server-side program usually provides for itself such functions as transaction processing, database connectivity, and messaging.
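
A minimal servlet sketch makes the delegation concrete (the class is hypothetical; the javax.servlet API shown is the standard one of the J2EE era this article describes). The Web server matches the request to the servlet and hands it over; generating the response is entirely the servlet's job:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The Web server (servlet container) routes matching requests here;
    // beyond that routing it provides no application functionality.
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/html");
            resp.getWriter().println(
                "<html><body><h1>Hello from a server-side program</h1></body></html>");
        }
    }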
While a Web server may not itself support transactions or database connection pooling, it may employ various strategies for fault tolerance and scalability, such as load balancing, caching, and clustering, features that are often erroneously assumed to be reserved for application servers.

The application server

An application server, by contrast, exposes business logic to client applications through various protocols, possibly including HTTP. While a Web server mainly deals with sending HTML for display in a Web browser, an application server provides access to business logic for use by client application programs. The application program can use this logic just as it would call a method on an object (or a function in the procedural world).
Such application server clients can include GUIs (graphical user interfaces) running on a PC, a Web server, or even other application servers. The information traveling back and forth between an application server and its client is not restricted to simple display markup. Instead, the information is program logic. Since the logic takes the form of data and method calls rather than static HTML, the client can employ the exposed business logic however it wants.
In most cases, the server exposes this business logic through a component API, such as the EJB (Enterprise JavaBean) component model found on J2EE (Java 2 Platform, Enterprise Edition) application servers. Moreover, the application server manages its own resources. Such gate-keeping duties include security, transaction processing, resource pooling, and messaging. Like a Web server, an application server may also employ various scalability and fault-tolerance techniques.
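
In code terms, such a component API might look like the following remote business interface, a hedged sketch with invented names rather than any real product's API:

    import java.math.BigDecimal;

    // Hypothetical remote business interface exposed by an application server.
    // Clients (a servlet, a desktop GUI, another server) call these methods as
    // if they were local; the container supplies security, transactions,
    // pooling, and remoting behind the scenes.
    public interface PricingService {
        BigDecimal currentPrice(String productId);
        boolean inStock(String productId);
    }

On an EJB server this contract would typically be declared as a remote interface; the essential point is that it consists of methods and data, not display markup.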

An example

As an example, consider an online store that provides real-time pricing and availability information. Most likely, the site will provide a form with which you can choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways. I'll show you one scenario that doesn't use an application server and another that does. Seeing how these scenarios differ will help you to see the application server's function.

Scenario 1: Web server without an application server

In the first scenario, a Web server alone provides the online store's functionality. The Web server takes your request, then passes it to a server-side program able to handle the request. The server-side program looks up the pricing information from a database or a flat file. Once the data is retrieved, the server-side program uses it to formulate the HTML response, and the Web server sends that response back to your Web browser.
To summarize, a Web server simply processes HTTP requests by responding with HTML pages.
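
A sketch of Scenario 1, with invented names: lookup logic and HTML generation live in the same server-side program, so the pricing logic cannot be reused outside this one page.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PriceLookupServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String productId = req.getParameter("productId");
            // Lookup and presentation are welded together: the pricing
            // logic exists only inside this HTML-producing method.
            String price = lookUpPrice(productId);
            resp.setContentType("text/html");
            resp.getWriter().println(
                "<html><body>Price for " + productId + ": " + price + "</body></html>");
        }

        private String lookUpPrice(String productId) {
            return "9.99"; // placeholder for a real database or flat-file lookup
        }
    }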

Scenario 2: Web server with an application server

Scenario 2 resembles Scenario 1 in that the Web server still delegates the response generation to a script. However, you can now put the business logic for the pricing lookup onto an application server. With that change, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server's lookup service. The script can then use the service's result when the script generates its HTML response.
In this scenario, the application server serves the business logic for looking up a product's pricing information. That functionality doesn't say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server's lookup service, the service simply looks up the information and returns it to the client.
By separating the pricing logic from the HTML response-generating code, the pricing logic becomes far more reusable between applications. A second client, such as a cash register, could also call the same service as a clerk checks out a customer. In contrast, in Scenario 1 the pricing lookup service is not reusable because the information is embedded within the HTML page. To summarize, in Scenario 2's model, the Web server handles HTTP requests by replying with an HTML page while the application server serves application logic by processing pricing and availability requests.
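
In Scenario 2 the same page shrinks to a thin presentation layer that calls the application server's lookup service; reusing the hypothetical PricingService interface sketched earlier, a cash-register client could call the identical service:

    import java.io.IOException;
    import java.math.BigDecimal;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PricePageServlet extends HttpServlet {
        // In a real J2EE deployment this reference would be obtained via a
        // JNDI lookup or injection; here it is assumed to be wired in.
        private PricingService pricingService;

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String productId = req.getParameter("productId");
            // The business logic runs on the application server; this
            // servlet only turns the returned data into HTML.
            BigDecimal price = pricingService.currentPrice(productId);
            resp.setContentType("text/html");
            resp.getWriter().println(
                "<html><body>Price for " + productId + ": " + price + "</body></html>");
        }
    }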

Note:

Recently, XML Web services have blurred the line between application servers and Web servers. By passing an XML payload to a Web server, the Web server can now process the data and respond much as application servers have in the past.
Additionally, most application servers also contain a Web server, meaning you can consider a Web server a subset of an application server. While application servers contain Web server functionality, developers rarely deploy application servers in that capacity. Instead, when needed, they often deploy standalone Web servers in tandem with application servers. Such a separation of functionality aids performance (simple Web requests won't impact application server performance), deployment configuration (dedicated Web servers, clustering, and so on), and allows for best-of-breed product selection.