Wednesday, June 22, 2011

Moving from BlackBox to WhiteBox testing as a career option



Webinar: Moving from BlackBox to WhiteBox testing as a career option
This is a good video that throws some light on how to move from BlackBox testing to WhiteBox testing.
This video was presented by Ganesh Sahai, Quality Manager at Adobe Systems India Pvt. Ltd.

Wednesday, June 1, 2011

Modernizing manual testing with the help of HP Sprinter software


Manual testing practices have to be modernized.
Traditional manual testing has changed very little over the years. While organizations are increasingly trying to leverage automated testing, manual testing still accounts for about 75% of all functional testing. However, innovation hasn't come into play yet, and manual testers find themselves struggling to keep up with the fast pace of change dictated by today's world, using outdated methods and tools.

Manual testing should keep up with global market shifts.
We live in an “instant-on” world—a world where everything and everybody wants to be always connected and interact immediately with information and products. In order to deliver this instant-on experience, software development and delivery have changed dramatically, with more and more organizations adopting agile development methodologies, supporting a wider array of environments and configurations distributed across the globe. In conjunction with these market trends, organizations are expected to deliver faster than ever. On top of this growing rate of change, agile practices require accelerated testing and necessitate more collaboration. Relying on common detailed test design and planning no longer works. Unfortunately, if not revised, traditional manual testing practices will not allow testers to catch up with this growing demand to accelerate software delivery.

HP Sprinter software revolutionizes manual testing.
HP Sprinter, a new HP manual-testing solution, focuses on simplifying and expediting manual testing and increasing team collaboration in order to allow testers to answer the demand of today's rapidly evolving world. HP Sprinter minimizes repetitive tasks, allowing quality assurance (QA) teams to test multiple environments and configurations in parallel. HP Sprinter automatically documents defects and, according to customers, has been shown to boost overall manual test productivity by 50–70% (customer testimonials are available through TechValidate: see http://techvalidate.com/portals/hp-sprinter). HP Sprinter enables QA teams to accelerate the delivery of innovative technologies and solutions, supporting their business's instant-on quest.

Before HP Sprinter, testers were required to print test steps, input data manually from external spreadsheets, take notes during test runs, and type results manually into the testing suite. This tedious manual process was time-consuming and error-prone. HP Sprinter changes everything by being an integral part of HP Application Lifecycle Management (HP ALM) software and HP Quality Center software. HP Sprinter automatically loads test cases, test steps and expected results directly from HP Quality Center in a convenient tab-navigation view available at all times while the tester works on the application under test (AUT). At the end of each test run, test results and all relevant documentation are automatically saved to HP ALM or HP Quality Center, providing testers and other stakeholders with full traceability of test results to test cases, requirements and defects over a single system of record.

Facilitate agile testing.  
One of the most valuable capabilities that HP Sprinter introduces is the ability to log and document test coverage automatically, freeing testers from the verbose documentation they were required to maintain during test runs. HP Sprinter logs all tester actions and stores them in HP ALM in three formats: a video recording, a screen capture of all user actions and a textual description. One of the unique features of HP Sprinter, an innovative patent-pending technology, is the ability to translate user actions into a textual description, providing testers and developers with the most detailed test description generated automatically as testers perform their test. This intelligent auto-documentation can be attached to a new defect with a click of a button, facilitating better understanding of defects by developers and accelerating defect remediation. In addition, testers can utilize this description to create a formal test out of an informal test following the execution of a test run. The visibility of the test results to multiple teams, whether distributed or not, increases team collaboration and facilitates agile and exploratory testing practices. Agile methodologies promise faster development and time-to-market; on the other hand, they raise a number of challenges:

• Testing is expected to happen continuously as part of each development iteration or cycle and not only at the end of development. This requires accelerating the testing pace and testing in parallel with development.
• Testers should be able to test using less documentation in more informal ways but still be able to communicate a comprehensive defect description to allow fast remediation.
• Testers need to test partial applications and features instead of a complete final version of the application.
• User acceptance tests and unit testing should happen more frequently and quickly while still presenting test coverage in a clear way.

Embrace exploratory testing.  
These challenges lead to a growing need to implement practices such as exploratory testing. The main idea is to have testers use their creativity and experience after engaging with relevant stakeholders and understanding the business goals of the application and its main use cases. With a deeper understanding testers are able to rapidly explore the application to uncover defects. The main benefit is time-saving, both in performing the tests as well as in reduced test design and documentation. HP Sprinter facilitates exploratory testing by allowing comprehensive defect communication without requiring the user to document or rely on his memory and by allowing teams to fully understand test coverage, including exact timeline of user actions.

Minimize repetitive tedious work with data injection and mirror testing.
HP has introduced automation to manual testing to speed up and simplify repetitive work as much as possible. HP Sprinter provides a unique capability called "data injection", aimed at injecting data automatically from an external spreadsheet into the application under test. This capability frees testers from the burden of manually inputting data into the application in data-driven tests. The "mirror-testing" capability saves testers precious testing resources and time by cloning the test across different environments and configurations: the tester runs the test only once while HP Sprinter replicates it on up to four machines at the same time, providing a great ROI.
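To make the "data injection" idea concrete, here is a minimal sketch in Python of the underlying data-driven pattern: rows from an external spreadsheet (modelled here as inline CSV text) drive a single test procedure, so the tester never retypes values. This illustrates the concept only, not HP Sprinter's actual implementation; fill_form() is a hypothetical stand-in for whatever pushes values into the application under test.

import csv
import io

# Spreadsheet data, inlined so the sketch is self-contained.
SPREADSHEET = "username,amount\nalice,100\nbob,250\n"

def fill_form(row: dict) -> None:
    # Hypothetical placeholder: in a real run this would enter the
    # values into the application under test (AUT).
    print(f"entering {row} into the AUT")

# One pass of the same test procedure per data row - no manual retyping.
for row in csv.DictReader(io.StringIO(SPREADSHEET)):
    fill_form(row)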

Simplify testing with annotation and usability testing.
Another useful capability is the array of smart annotation tools allowing testers to easily and quickly take screen captures and annotate them, measure the on-screen distance between UI components, check the alignment of UI components, and more. As always, this data can easily be attached to defects to allow developers to better understand the defect description.

Design your test cases via HP Business Process Testing (HP BPT) software to allow reuse of test components.
While HP Sprinter is more about test execution, HP offers organizations a way to ease manual test authoring by sharing test-case components between multiple test cases, utilizing HP BPT for test design. Testers and business analysts are able to easily duplicate test-case components from one test case to another, quickly populating updates and adjustments to these components via a user-friendly drag-and-drop UI, saving time spent on test design and test-case maintenance.

Modernize manual testing.
By introducing HP Sprinter, HP offers a modern way to capture, plan and execute functional manual testing. The above describes HP's new approach to software testing, an approach that increases business agility and supports the modernization of testing practices, allowing organizations to keep pace with the rapidly increasing rate of change and volume of testing that modern application delivery requires.

Thursday, May 19, 2011

Question pattern of Microsoft's manual testing paper

It will comprise four sections:
1. Analytical ability
2. Testing
3. RDBMS
4. C/C++/.NET/Java (one is mandatory)

1. Analytical ability: It will contain aptitude questions on figures, the Cartesian product, ranks and averages (a quick Cartesian product refresher follows this list). For example:
Q. If ghjikl is written as "hijknld", then how can oiundf be written?
Q. If a likes b and b likes d, then does a like d? (questions of this type)

2. Testing: Prepare for ISTQB-type questions.
3. RDBMS: Basic questions on SELECT queries, DELETE queries, normalization, denormalization, keys, views, etc.
4. C++: General questions on predicting output, compile-time and run-time errors, abstract classes, etc.

And best of luck if you are going to appear!
Himanshu Jain
9899180227
http://www.facebook.com/jainhimanshu1986


Wednesday, May 18, 2011

What is a Test Incident Report?

Test Incident Report: a document detailing, for any test that failed, the actual versus expected result and other information intended to throw light on why the test failed. This document is deliberately named an incident report and not a fault report. The reason is that a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run wrongly, or an inconsistency in the requirements meaning that more than one interpretation could be made. The report consists of all details of the incident, such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an assessment of the impact of the incident upon testing.


Format:

Date: ___________
Project: ____________
Programmer: __________________
Tester: _________________
Program/Module: _______________________
Build/Revision/Release: _______________
Software Environment: _________________
Hardware Environment: _________________
Number of Occurrences: _______
Severity: _________
Priority: __________
Detailed Description: ___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
___________________________________________________
Assigned To: ___________________
Incident Resolution: ____________________________________________________
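If the team tracks incidents in a lightweight tool rather than on paper, the same fields map naturally onto a small record type. Here is a minimal sketch in Python; the class and field names simply mirror the format above and are illustrative, not a prescribed schema:

from dataclasses import dataclass

@dataclass
class TestIncidentReport:
    # Fields mirror the paper format above.
    date: str
    project: str
    programmer: str
    tester: str
    module: str
    build: str
    software_env: str
    hardware_env: str
    occurrences: int
    severity: str
    priority: str
    description: str
    assigned_to: str = ""
    resolution: str = ""

incident = TestIncidentReport(
    date="2011-05-18", project="Billing", programmer="Dev A", tester="QA B",
    module="Invoice", build="1.0.3", software_env="Windows XP SP3",
    hardware_env="x86, 2 GB RAM", occurrences=2, severity="High", priority="P1",
    description="Expected invoice total 100, actual 90 on the discount path",
)
print(incident)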

Himanshu Jain

Tuesday, May 17, 2011

What is Function Point Analysis (FPA)?

FPA is a method to determine the functional size of an information system or project. The functional size may be used for different purposes, for example budgeting. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.
 
FPA carries out the following steps to determine the size of an information system or system development project:
• Step 1: Identify the functions of the system that are relevant to the user
• Step 2: Determine the functional complexity of each function
• Step 3: Calculate the unadjusted function point count of the system
• Step 4: Rate the general requirements for the system using the 14 general system characteristics
• Step 5: Calculate the adjusted function point count of the system
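As a worked illustration of steps 3-5, here is the conventional IFPUG-style calculation sketched in Python. The unadjusted count and the characteristic ratings are made-up numbers; the formula VAF = 0.65 + 0.01 x (sum of the 14 GSC ratings) is the standard value adjustment factor.

# Step 3: unadjusted function point count (sum of weighted functions).
unadjusted_fp = 100

# Step 4: rate the 14 general system characteristics, each from 0 to 5.
gsc_ratings = [3, 4, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]  # illustrative

# Value adjustment factor ranges from 0.65 (all zeros) to 1.35 (all fives).
vaf = 0.65 + 0.01 * sum(gsc_ratings)

# Step 5: adjusted function point count.
adjusted_fp = unadjusted_fp * vaf
print(adjusted_fp)  # 100 * (0.65 + 0.42) = 107.0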

Monday, March 28, 2011

Some Questions to prepare for amdocs interview.

Hi,
I encountered some good questions in the Amdocs online test, which is conducted by MeritTrac.
It is an online test comprising 6 papers of mostly 15-20 questions each.
Each section has a specific time limit, e.g., 20 minutes for 15 questions.
Papers
The mathematical and logical reasoning papers will cover:
- Data sufficiency
- Puzzles
- Blood relations
- Venn Diagrams
- Profit & Loss
- Reading comprehension (RC)
- Percentage
- Ratio
- Cubes and Dice

1. Mathematical ability: This paper will test your mathematical ability.
Some of the questions which I remember are:
a. In a class, 60 students gave the A paper, 80 gave B, 120 gave A+B, 45 gave B+C, 15 gave A+C, and 30 gave A+B+C.
On the basis of this you have to give 5 answers (Venn diagram).
b. A cube is cut into 64 smaller pieces, and two or three of its faces are painted black.
The questions were related to this.

Logical
1. 0 is represented by * and 1 is represented by $, and numbers are written in binary using these symbols
(e.g., 4 is shown as $** and 3 as $$).
Once you identify which pattern represents which number, you can answer all the questions (see the sketch after this list).
2. Check whether numbers are alike or not,
e.g., 5670.239875, 5670.236875 and 5670.239875.
3. There were also questions like: + means *, - means /, * means + and / means -;
work out the calculations accordingly.
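Assuming the usual binary reading of that */$ puzzle (question 1 above), here is a quick Python sketch: write the number in binary, then map 1 to $ and 0 to *.

def encode(n: int) -> str:
    # bin(5) == '0b101'; strip the '0b' prefix, then map 1 -> '$' and 0 -> '*'.
    return bin(n)[2:].replace("1", "$").replace("0", "*")

print(encode(4))  # $**
print(encode(3))  # $$
print(encode(5))  # $*$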

There were also questions on data sufficiency.
For example, a condition was given for a suitable candidate to be interviewed;
you will be given a few candidates and have to tell which one will be selected, or whether the data is inadequate.

The third paper is comprehension, of the type we usually do in English papers,
e.g., a passage on the components of a computer and how to connect them.
Direction: Please read carefully and answer all questions.

The fourth paper is Testing/C/Java. Some of the questions are:
1. Read up on the testing workbench.
2. You will be given four figures of the spiral model; find the correct one.
3. Read questions on the testing life cycle.
4. Peer review, inspection and walkthrough.

The fifth paper is SQL.
Read up on stored procedures and the ACID properties.
Extract "Him" from "Himanshu" using SUBSTR (see the sketch below).
Know the query/command to connect to a remote host.
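For the SUBSTR question, here is a minimal sketch using Python's built-in sqlite3 module; SQLite's SUBSTR(string, start, length) has the same shape as Oracle's, with 1-based positions:

import sqlite3

conn = sqlite3.connect(":memory:")
# Take 3 characters starting at position 1 (SQL string positions are 1-based).
row = conn.execute("SELECT SUBSTR('Himanshu', 1, 3)").fetchone()
print(row[0])  # Him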


The sixth paper is Unix.
Learn basic commands like ls, grep, pipes, chown, chgrp, gunzip, the vi editor and the sort command,
and details such as the key to go to the end of a line in the vi editor (the $ key).

Tuesday, March 22, 2011

Basics of QTP

Wednesday, February 16, 2011

TMMi (Test Maturity Model Integration)

The Test Maturity Model Integration ("TMMi") is a guideline and reference framework for test process improvement. Such a framework is often called a "model", that is, a generalized description of how an activity, in this case testing, should be done. TMMi can be used to complement Capability Maturity Model Integration ("CMMI"), the Carnegie Mellon Software Engineering Institute's wider process improvement approach (see http://sei.cmu.edu/cmmi), or independently. Applying TMMi to evaluate and improve an organization's test process should increase test productivity and therefore product quality. In achieving this it benefits testers by promoting education, sufficient resourcing and tight integration of testing with development.

Like CMMI, TMMi defines maturity levels, process areas, improvement goals and practices. An organization that has not implemented TMMi is assumed to be at maturity level 1. Being at level 2, called "Managed", requires the practices most testers would consider basic and essential to any test project: decision on approach, production of plans and application of techniques. I call it "the project-oriented level".

The goals and practices required by level 3, "Defined", invoke a test organization, professional testers (that is, people whose main role is testing and who are trained to perform it), earlier and more strategic test planning, non-functional testing and reviews. These practices are deployed across the organization, not just at the project level. I think of level 3 as the one where testing has become institutionalized: that is, defined, managed and organized. To achieve that, testers are involved in development projects at or near their commencement.


Version 3.1 of TMMi, launched at EuroSTAR in December 2010, defines its top levels: 4, "Measured", and 5, "Optimization".

TMMi level 4: Measured. This is the level where testing becomes self-aware. The Test Measurement process area requires that the technical, managerial and operational resources achieved to reach level 3 are used to put in place an organization-wide programme capable of measuring the effectiveness and productivity of testing, in order to assess productivity and monitor improvement. Analysis of the measurements taken is used to support (i) taking decisions based on fact and (ii) predicting future test performance and cost. Rather than being simply necessary to detect defects, testing at this level is evaluation: everything that is done to check the quality of all work products, throughout the software lifecycle. That quality is understood quantitatively, supporting the achievement of specified quality needs, attributes and metrics. Work products are evaluated against these quantitative criteria, and management is informed and driven by that evaluation throughout the lifecycle. All of these practices are covered in the Product Quality Evaluation process area. The Advanced Peer Reviews process area is introduced and builds on the review practices from level 3. Peer reviews evaluate product quality early in the life cycle. The findings and measurement results are the basis of the strategy, planning and implementation of dynamic testing of subsequent work products.
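As one concrete illustration of the kind of measurement a level-4 programme might collect, Defect Detection Percentage compares defects found by testing with those that escaped to production. Note that TMMi does not prescribe this exact formula; it is simply a widely used test-effectiveness metric, sketched here in Python:

# Defect Detection Percentage (DDP): a common test-effectiveness metric.
found_in_test = 45        # defects found by the test team
found_after_release = 5   # defects that escaped to production
ddp = found_in_test / (found_in_test + found_after_release)
print(f"DDP: {ddp:.0%}")  # DDP: 90%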

TMMi Level 5: Optimization. When the improvement goals at levels 2, 3 and 4 have been achieved, testing is defined completely and measured accurately, enabling its cost and effectiveness to be controlled. At level 5 the measurements become statistical and the control detailed enough to be used to fine-tune the process and achieve continuous further improvement: testing becomes self-optimizing. Improvement is defined as that which helps to achieve the organization's business objectives. The basis for improvement is a quantitative understanding of the causes of variation inherent to the process; incremental and innovative change is applied to address those causes, increasing predictability. An optimizing process is also supported as much as possible by automation and is able to support technology transfer and test process component reuse. To achieve such a process a permanent group, formed of appropriately skilled and trained people, is formally established. Some organizations call this the Test Process Group or TPG: it relates to the test organization defined at TMMi level 3, but now takes on responsibility for the practices introduced at level 5: establishing and applying a procedure to identify process enhancements, developing and maintaining a library of reusable process assets, and evaluating and selecting new test methods and tools. Level 5 introduces a new process area, Defect Prevention. Defects are analyzed to identify their causes, and action is taken, comprising change to the test and/or other processes as necessary, to prevent the introduction of similar and related defects in future. By including these practices, at level 5 the objective of testing becomes to prevent defects. This and the other process areas introduced at level 5, Test Process Optimization and Quality Control, are interdependent and cyclic: Defect Prevention assists product and process Quality Control, which contributes to Test Process Optimization, which in turn feeds into Defect Prevention and Quality Control. All three process areas are, in turn, supported by the continuing practices within the process areas established at the lower levels.

Friday, January 21, 2011

Testing is better than development

Pick-up any recruitment newspaper of the week and you will find a list of companies advertising for test engineers. This was not the case a few years ago. Today the role of a test engineer has found new respect in the product development process, playing a strategic role in moving product quality upstream.

This article aims to define the evolving job profile of a tester: from what was earlier a low-profile, unchallenging job to today, where the tester is considered a key part of the entire development cycle.

A typical software product development cycle consists of requirements analysis, design, coding, testing, bug fixing and the stabilization phase.

Testing, as a part of this process, should ideally be used across the development cycle to help identify and ascertain the correctness, completeness and quality of the ensuing product. Even the best product is likely to fail if its software component is not tested throughout the development stage.

Testers today promote the customer's point of view throughout the product cycle, from the first nascent product vision to the eventual product release and ongoing maintenance. Testers are human meters of product quality and should examine a software product, evaluate it, and discover if the product satisfies the customer's requirements. A good tester should be a good engineer as well, and should be perceived as the ‘developer's eyes to overall improved quality and functionality.’

Testing therefore, has to evolve as a career – with companies educating and training their test teams about the new challenges and opportunities in the testing profession.

Consider a scenario where a company has focused on developing only end-user testing and bug-finding skills in its employees. It measures the productivity of a tester by the number and quality of bugs he finds. Now consider another scenario, where a company has in addition focused on developing coding and design skills in its test workforce. In this company the tester finds the initial few bugs, figures out that the bug stems from a pattern of coding mistakes, and then challenges the developer to fix the issue throughout the product code, which saves the organization a lot of time. This tester can also contribute to finding bugs early in the cycle by reviewing code and design throughout the product development stage. This frees the tester to focus on more interesting scenarios and on finding more complex bugs. This way the overall product quality benefits and the tester also feels more challenged.

It is very obvious that the second company will be in a better position to create successful products and do a better-quality job given similar timeframes and resources. For their test roles, successful product companies of tomorrow need to hire people with good engineering skills and to focus on developing their coding and design skills. This will enable these companies to move up the value chain and attract good engineers to the test discipline.

Typically, more than 50 percent of development time is spent in testing: working closely with software design engineers and program managers to understand product requirements, design appropriate test plans and cases, verify features and functionalities, and then identify bugs through systematic testing. In the course of their work, test professionals also identify key engineering-efficiency, usability and business-improvement opportunities, as well as potential future projects.

In a successful product development company both developer and tester need to develop a deep focus on technology, with the tester in addition needing to develop deep customer empathy. While a developer is committed to building a successful product, a tester tries to minimize the risk of failure and to improve the software by detecting defects, with the goal of a zero-defect product.

Testing can be a great profession when people do not limit themselves to just finding bugs but also work to prevent them. In order to prevent bugs it is very important that a test engineer develops his coding, design and engineering skills in addition to his testing, customer-understanding and process skills.

The vast majority of companies do not treat their test teams on par with developers, in terms of either growth or salary. As a result a test person hits a glass ceiling once he starts heading a big QA team, whereas his dev counterparts grow to head up businesses. The trend, however, is now changing, albeit in a few companies, where testers have grown to manage businesses.

Test engineers, however, cannot grow in their careers just by companies elevating their job profile. Test engineers also need to change their mindset and be motivated to produce a quality software product. A tester should not be caught up in the assumption, held by many, that testing has a lesser job status than development. Instead he should focus on building the right skills, developing deep product knowledge and delivering products which customers love. Once a test engineer does that, he can grow to head product engineering business units.

In the new internet and wireless age, with the increasing complexity of modern software development projects, there will be a resurgence of demand for software testing careers, with plenty of room for sharp, motivated people. All things considered, the future looks bright for those who are in, or planning to enter, this exciting field of software testing.

Sunday, January 16, 2011

What is manual testing?

It will not surprise you to know that manual testing is the oldest form of software testing. It may surprise you, however, that despite the rise of software test automation solutions, manual testing still accounts for at least 80% of all testing carried out today.

Manual testing requires the tester to perform manual test operations on the test application without the help of test automation software. Manual testing can be a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, un-opinionated and skilful.
Manual testing is carried out by a variety of people, from developers and QA through to business analysts and end users, and helps discover defects related to usability testing and user interface testing. While performing manual tests, the software application can be validated as to whether it meets the various standards defined for effective and efficient usage and accessibility. For example, the standard location of the OK button on a screen might be on the left and the CANCEL button on the right.

During manual testing you might discover that on some screens this is not the case. This is a new defect related to the usability of the screen. In addition, there could be many cases where the user interface is not displayed correctly on screen and the basic functionality of the program is not correct.
There are a number of advantages to manually testing everything: the entire surface of the product can be covered (albeit superficially); when something unexpected happens it is easily followed up; little planning is needed; and there are no technology issues, lengthy set-up issues or ongoing maintenance required to keep the test cases up to date with changes in the application.

However, all is not necessarily rosy in the manual testing garden…

Repetitive manual testing can be difficult to perform on large software applications or with large data sets. It just gets too complicated. Testing in subsets of the whole application can help ease this burden, but it will still be complicated and arduous.
A manual tester would ideally perform the following steps for a manual test (a small sketch of steps 5-6 follows this list):
1. Understand the business/functional requirement.
2. Prepare the test environment.
3. Execute test case(s) manually.
4. Verify the results.
5. Record the result (pass/fail) and record any new defects uncovered during the test execution.
6. Make a summary report of the passed/failed test cases.
7. Publish the report.
8. If any issue is reopened, identify the relevant test steps and run them again to ensure the issue is fixed.
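As a small sketch of steps 5-6 (recording results and producing a summary report), here is a minimal Python example; the record type and field names are illustrative, not a prescribed format:

from dataclasses import dataclass

@dataclass
class TestResult:
    case_id: str
    passed: bool
    notes: str = ""

# Step 5: record pass/fail and any new defects found per test case.
results = [
    TestResult("TC-01", True),
    TestResult("TC-02", False, "OK button misplaced on settings screen"),
]

# Step 6: summarize the passed/failed test cases.
passed = sum(r.passed for r in results)
print(f"Summary: {passed}/{len(results)} passed")
for r in results:
    print(f"{r.case_id}: {'PASS' if r.passed else 'FAIL'} {r.notes}".rstrip())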