
Friday, May 30, 2008

Testing Stop Process - VI

If error data is broken into the distinct testing phases of the life cycle (e.g., unit, system, integration), the projected error curve using the SATC model closely fits the rate at which errors are found in each phase.
Some points need to be clarified about the SATC error trend model. The formulation of the SATC equation is the direct result of assuming that at any instant of time, the rate of discovery of errors is proportional to the number of errors remaining in the software and to the resources applied to finding errors. Additional conditions needed for the SATC trending model to be valid are:
1. The code being tested is not being substantially altered during the testing process, especially through the addition or rework of large amounts of code.
2. All errors found are reported.
3. All of the software is tested, and testing of the software is uniform throughout the time of the testing activity.
Condition 1 is present to ensure that the total number of errors is a relatively stable number throughout the testing activity. Conditions 2 and 3 are present to ensure that the estimate of the total number of errors is in fact an estimate of the total errors present in the software at the start of testing - no new errors are introduced during testing. If testing is not "uniform" then the rate of error discovery will not necessarily be proportional to the number of errors remaining in the software and so the equation will not be an appropriate model for errors found. No attempt will be made here to make precise the meaning of the word "uniform".
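As a sketch of the reasoning behind that formulation (the notation below is ours, not quoted from the SATC): let N(t) be the cumulative number of errors found by time t, A the total number of errors present at the start of testing, r(t) the testing resources applied at time t, and b a proportionality constant. The assumption that the discovery rate is proportional to both the remaining errors and the applied resources then gives

\frac{dN}{dt} = b\, r(t)\, \bigl(A - N(t)\bigr),
\qquad
N(t) = A\left(1 - e^{-b \int_0^t r(s)\,ds}\right),

which, with constant resources r(t) = r, reduces to the familiar exponential form N(t) = A(1 - e^{-brt}). Replacing the constant in the exponent with the integral of a time-varying testing intensity is the modification of the basic Musa form described elsewhere in this series.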

Testing Stop Process - V


On another project, different estimates of the total number of errors were obtained when estimates were made over different testing time intervals. That is, the agreement between the trend model and the error data was inconsistent across different time intervals. Through subsequent discussion with the project manager it was learned that the rate of error reporting by the project went from approximately 100% during integration testing to 40% during acceptance testing. Furthermore, there was a significant amount of code rework, and testing of the software involved a sequential strategy of completely testing a single functional area before proceeding to test the next functional area of the code. Thus, the instability of the estimates of the total errors was a useful indicator that there had been a significant change in the project's testing or reporting process. Figure 3 shows the results for this project. Note the change in slope of the reported number of errors occurring around 150 days. The data curve flattens at the right end of the curve due to a pause in testing, rather than a lack of error detection. This project is still undergoing testing.


Figure 3: Cumulative S/W Errors for Project B - Flight S/W

Testing Stop Process - IV

On most of the projects, there was good conformity between the trend model and the reported error data. More importantly, estimates of the total number of errors and the error discovery parameter, made fairly early in the testing activity, seemed to provide reliable indicators of the total number of errors actually found and the time it took to find future errors. Figure 2 shows the relationship between reported errors and the SATC trend model for one project. The graph represents data available at the conclusion of the project. This close fit was also found on other projects when sufficient data was available.


Figure 2: Cumulative Software Errors for Project A

Testing Stop Process - III



The SATC has so far examined and modeled error data from a limited number of projects. Generally, only the date on which an error was entered into the error tracking system was available, not the date of discovery of the error. No useful data was available on human or computer resources expended for testing. What is needed for the most accurate model is the total time expended for testing, even if the times are approximate. Using the sum of reported times to find/fix individual errors did not produce any reasonable correlation with the required resource function. Some indirect attempts to estimate resource usage, however, led to some very good fits. On one project, errors were reported along with the name of the person who found the error. Resource usage for testing was estimated as follows: a person was estimated to be working on the testing effort over a period beginning with the first error that they reported and ending with the last error that they reported. The percentage of time that each person worked during that period was assumed to be an unknown constant that did not differ from person to person. Using this technique led to a resource curve that closely resembled the Rayleigh curve (Figure 1).
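A minimal sketch of that resource-estimation technique, using an invented list of (tester, report date) pairs rather than any real SATC project data:

from datetime import date

# Hypothetical error reports: (tester name, date the error was reported)
reports = [
    ("alice", date(2008, 1, 7)), ("bob", date(2008, 1, 9)),
    ("alice", date(2008, 2, 20)), ("carol", date(2008, 2, 25)),
    ("bob", date(2008, 3, 30)), ("carol", date(2008, 4, 18)),
]

# Each tester is assumed active from their first to their last reported error,
# at the same (unknown, constant) fraction of full time.
active = {}
for person, day in reports:
    first, last = active.get(person, (day, day))
    active[person] = (min(first, day), max(last, day))

# Estimated staffing level on a given day: number of testers active that day.
def staffing(day):
    return sum(first <= day <= last for first, last in active.values())

print(staffing(date(2008, 2, 1)))   # e.g. 2 testers assumed active on this day

Plotting staffing() over the testing period gives the kind of resource curve referred to above, up to the unknown constant utilization factor.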



Figure 1: Test Resource Levels for Project A

Testing Stop Process - II

The Software Assurance Technology Center (SATC) in the Systems Reliability and Safety Office at Goddard Space Flight Center (GSFC) is investigating the use of software error data as an indicator of testing status. Items of interest for determining the status of testing include projections of the number of errors remaining in the software and the expected amount of time to find some percentage of the remaining errors. To project the number of errors remaining in software, one needs an estimate of the total number of errors in the software at the start of testing and a count of the errors found and corrected throughout testing. There are a number of models that reasonably fit the rate at which errors are found in software; the most commonly used is referred to in this paper as the Musa model. This model is not easily applicable at GSFC, however, due to the availability and the quality of the error data.
At GSFC, useful error data is not easy to obtain for projects not in the Software Engineering Laboratory. Of the projects studied by the SATC, only a few had an organized accounting scheme for tracking errors, but they often did not have a consistent format for recording errors. Some projects recorded the errors that were found but did not record any information about the resources applied to testing. The error data frequently contained the date of entry of the error data rather than the actual date of error discovery. In order to use traditional models such as the Musa model for estimating the cumulative number of errors, one needs fairly precise data on the time of discovery of errors and the level of resources applied to testing. Real-world software projects are generally not very accommodating when it comes to either accuracy or completeness of error data. The models developed by the SATC to perform trending and prediction on error data attempt to compensate for these shortcomings in the quantity and availability of project data.
In order to compensate for the quality of the error data, the SATC developed software error trending models using two techniques, each based on the basic Musa model but with the constant in the exponential term replaced by a function of time that describes the 'intensity' of the testing effort. The shape and the parameters of this function can be estimated using measures such as CPU time or staff hours devoted to testing. The first technique involves fitting cumulative error data to the modified Musa model using a least-squares fit that is based on gradient methods. This technique requires data on errors found and the number of staff hours devoted to testing each week of the testing activity. The second technique uses a Kalman filter to estimate both the total number of errors in the software and the level of testing being performed. This technique requires error data and initial estimates of the total number of errors and the initial amount of effort applied to testing.
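A rough sketch of the first technique under assumed data: the weekly staff hours and cumulative error counts below are invented for illustration, and scipy's general-purpose curve_fit stands in for whatever gradient-based least-squares routine the SATC actually used.

import numpy as np
from scipy.optimize import curve_fit

# Assumed weekly data: staff hours devoted to testing and cumulative errors found.
staff_hours = np.array([40, 60, 80, 90, 90, 80, 60, 50, 40, 30], float)
cum_errors  = np.array([15, 33, 51, 66, 76, 83, 86, 89, 91, 92], float)

# Cumulative testing effort up to each week plays the role of the integrated
# testing "intensity" in the modified Musa-style model.
effort = np.cumsum(staff_hours)

def model(cum_effort, total_errors, discovery_rate):
    return total_errors * (1.0 - np.exp(-discovery_rate * cum_effort))

params, _ = curve_fit(model, effort, cum_errors, p0=[100.0, 0.001])
total_est, rate_est = params
print(f"estimated total errors: {total_est:.1f}, "
      f"estimated remaining: {total_est - cum_errors[-1]:.1f}")

With data like this, the fit yields an estimate of the total number of errors and hence of the errors still remaining, which is exactly the quantity the trending model is meant to project.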

Testing Stop Process

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:
* Deadlines (release deadlines, testing deadlines)

* Test cases completed with certain percentages passed

* Test budget depleted

* Coverage of code/functionality/requirements reaches a specified point

* The rate at which bugs are being found is too small

* Beta or Alpha Testing period ends

* The risk in the project is under the acceptable limit.

Practically, we feel that the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with a given amount of testing done. The risk can be measured by formal risk analysis, but for a short-duration, low-budget, low-resource project, risk can be gauged by simply measuring the following (a rough heuristic combining them is sketched after the list):
* Measuring Test Coverage.

* Number of test cycles.

* Number of high priority bugs.
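As a rough illustration only, with thresholds invented here and which would have to be set per project, such a lightweight stop/continue check could look like this:

# Hypothetical stop-testing heuristic for a small project; all thresholds are assumed.
def ok_to_stop(coverage_pct, test_cycles_run, open_high_priority_bugs):
    return (coverage_pct >= 90               # enough code/requirements exercised
            and test_cycles_run >= 3         # at least a few full test cycles completed
            and open_high_priority_bugs == 0)

print(ok_to_stop(coverage_pct=93, test_cycles_run=4, open_high_priority_bugs=0))  # True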

Operations and Maintenance

Corrections, modifications and extensions are bound to occur even for small programs and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should exist. Modifications must be made to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.

Programming/Construction

Here the main testing points are:
- Check the code for consistency with design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.
- Perform the Testing process in an organized and systematic manner with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which resulted in success previously) must be rerun and recorded.
- Ask a colleague for assistance - Some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to the party, who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.
- Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.
- Apply Stress to the Program - Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.
- Test one at a time - Pieces of code, individual modules and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation (insertion of some code into the program solely to measure various program characteristics) can be useful here. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.
- Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.
The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
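A small sketch of the difference between statement and branch coverage; the function and the tests are invented for illustration:

def discount(price, is_member):
    total = price
    if is_member:
        total = price * 0.9   # member-discount branch
    return total

# This single test executes every statement, so statement coverage is satisfied...
assert discount(100, True) == 90.0

# ...but only this second test exercises the "false" exit of the branch,
# which branch coverage additionally requires.
assert discount(100, False) == 100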
The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.

Design

The design document aids in programming, communication, and error analysis and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution i.e. what the program will do and how it will be done.
The design document should contain:
* Principal data structures.
* Functions, algorithms, heuristics or special techniques used for processing.
* The program organization, how it will be modularized and categorized into external and internal interfaces.
* Any additional information.
Here the testing activities should consist of:
- Analysis of design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling and data structures should specially be checked for inconsistencies.
- Analysis of design to check whether it satisfies the requirements - check whether both requirements and design document contain the same form, format, units used for input and output and also that all functions listed in the requirement document have been included in the design document. Selected test data which is generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.
- Generation of test data based on the design - The tests generated should cover the structure as well as the internal functions of the design like the data structures, algorithm, functions, heuristics and general program structure etc. Standard extreme and special values should be included and expected output should be recorded in the test data.
- Re-examination and refinement of the test data set generated at the requirements analysis phase.
The first two steps should also be performed by a colleague, not only the designer/developer.

Requirements Analysis

The following test activities should be performed during this stage:
1.1 Invest in analysis at the beginning of the project - Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation.
The requirements statement should record the following information and decisions:
a. Program function - what the program must do.
b. The form, format, data types and units for input.
c. The form, format, data types and units for output.
d. How exceptions, errors and deviations are to be handled.
e. For scientific computations, the numerical method or at least the required accuracy of the solution.
f. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).
Deciding the above issues is one of the activities related to testing that should be performed during this stage.
1.2 Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner and for each class a representative element should be included in the test data.
In addition, the following should also be included in the data set:
(1) boundary values
(2) any non-extreme input values that would require special handling.
The output domain should be treated similarly.
Invalid input requires the same analysis as valid input.
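For example, suppose a hypothetical program accepts an integer age in the range 0 to 120 (the function name and range are assumed purely for illustration). Partitioning the input domain and adding boundary values might give a test set like this:

# Hypothetical input domain: an integer age, valid range 0..120.
def accepts_age(age):
    return 0 <= age <= 120

test_data = [
    # one representative element from each equivalence class
    (35, True),    # typical valid value
    (-50, False),  # class of values below the range
    (200, False),  # class of values above the range
    # boundary values of the valid range
    (0, True), (120, True), (-1, False), (121, False),
]

for value, expected in test_data:
    assert accepts_age(value) == expected, value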
1.3 The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.

Testing Activities in Each Phase

The following testing activities should be performed during the phases:
1. Requirements Analysis
- Determine correctness
- Generate functional test data.
2. Design
- Determine correctness and consistency
- Generate structural and functional test data.
3. Programming/Construction
- Determine correctness and consistency
- Generate structural and functional test data
- Apply test data
- Refine test data
4. Operation and Maintenance
- Retest

Testing Start Process

Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived, and their correctness and consistency should be monitored throughout the development process.
If we divide the lifecycle of software development into “Requirements Analysis”, “Design”, “Programming/Construction” and “Operation and Maintenance”, then testing should accompany each of the above phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity. Rather testing should be involved throughout the SDLC in order to bring out a quality product.

Introduction of Software Testing

Testing is a process used to help identify the correctness, completeness and quality of developed computer software. Even so, testing can never completely establish the correctness of computer software.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following a rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing is generally taken to mean the dynamic analysis of the product: putting the product through its paces.
The quality of the application can and normally does vary widely from system to system but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria.
Testing helps in verifying and validating that the software is working as it is intended to work. This involves using static and dynamic methodologies to test the application.
Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.

Software Testing Fundamentals

Testing objectives include:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects; it can only show that software defects are present.

Thursday, May 29, 2008

Documenting your software test project

The test project is the technical effort of estimating work, planning the test scope and strategy, effectively managing test execution, and reporting on status and risk. Every project involves tradeoffs between features, time, quality and costs. It's the test manager's responsibility to manage the details of the test project: providing stakeholders with estimates of the amount of work required, actively managing the test effort, and providing information about product quality. This tip focuses on the documents that might help when managing the testing project. We outline some common test planning artifacts that one might find useful. These include the following:
Test strategy document
Test plan document
Test estimates
Test project plan
Other documents based on your context
Test strategy document
We have used the test strategy document to define the strategic plan for what aspects of the system we plan to test, our approach to testing, and to capture the details around constraints, assumptions and risks for our testing.
The test strategy document might capture the following information:
Scope of the testing
Quality criteria
Feature context
Technical architecture
Operating environment
Interaction with other features and systems
Other content items as appropriate to your project
Testing dependencies, assumptions and constraints
Testing objectives
Testing approach
Types of testing
Testing methods and procedures
Testing tools
Test data
Traceability
Other content items as appropriate to your project
Deliverables
Resources
Timelines
Risks and contingencies
As you develop the test strategy document, the creation of the document and working through the process of gathering information can help you assess and determine the strategy. The test strategy document can also be useful to convey the strategy to stakeholders to gain their agreement to your testing approach.
Test plan document
We use the test plan document to define the goals and objectives of testing within the scope of an iteration, or if the project is small, for the entire project. For smaller projects, our test plan and strategy document may be the same document.
The test plan document might capture the following information:
Preparations
Staffing
Test coverage
Any testing requirements (technical or otherwise)
Test environments
Entry criteria
Exit criteria
Delegation of responsibilities
Facility acquisition
Task planning
Scheduling
Documentation on coordination and collaboration with other teams
Risks and issues that may impact testing
Specific deliverables of the test project
The purpose is to outline and communicate the details of the testing effort for a specific period of time. It can be used to direct and guide the test effort. It may or may not contain the details found in the test project plan or test estimates (described below). Whether or not those documents are separate or contained under the plan again depends on the size of the project and the context in which we are working.
Test estimates and detailed plans
We sometimes find we need to do formal estimating when we plan our projects. To do this, we like to use bottom-up estimation. Working backwards from our test objectives, we can define all the tasks necessary to complete the objective. When finished, we have a work breakdown structure defined for many of our objectives. We can then take the various work breakdown structures and estimate the work for each of the tasks.
Once we have the estimates for the work, we lay out those estimates against a high-level schedule to understand what our resource needs are going to be. At the end of this process we have estimates in terms of work effort (hours, days, etc.) and resources (a number of people for a period of time).
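A trivial worked example of that arithmetic, with task names, hours and team size invented purely for illustration:

# Hypothetical work breakdown for one test objective; estimates are in hours.
tasks = {
    "write test cases": 24,
    "build test data": 16,
    "execute test cycle 1": 40,
    "execute test cycle 2": 32,
    "report results": 8,
}

total_hours = sum(tasks.values())            # 120 hours of work effort
testers = 2                                  # assumed resource level
weeks_needed = total_hours / (testers * 40)  # about 1.5 calendar weeks at 40 h/week
print(total_hours, weeks_needed)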
Depending on the project, this information might be transferred into a formal project plan (most likely using Microsoft Project). This allows us to integrate the various work breakdown structures, consolidate the estimates, assign the resources planned and track the changes to the plan over time as the project unfolds. Maintaining a project plan always has the potential to be a distraction, so be sure that you need that kind of formality or that you truly find it helpful before you make the investment.
Other documents based on your context
Depending on the context you're working in, there might be other documents you need or want to produce. These can include models, lists, checklists, standards, templates, process documents, charters or contracts. For a specific example, Mike once included a site map for a Web site as part of his test planning documents, including details around each of the pages.
There's really no limit to what might be helpful when planning your project. From a documentation perspective, we suggest finding a balance between what's required, what's helpful in driving your understanding of the work, and what's helpful in communicating the testing to the various stakeholders on the project.

Wednesday, May 28, 2008

API Testing

Before delving into the subject of API testing, we should understand what API, or Application Programming Interface, means. An API is a collection of software functions and procedures, called API calls, that can be executed by other software applications. API testing is mostly used for a system that has a collection of APIs that need to be tested. The system could be system software, application software or libraries.
API testing is different from other testing types because a GUI is rarely involved. Even though a GUI is not involved, you still need to set up the initial environment, invoke the API with the required set of parameters and then analyze the result. Setting up the initial environment becomes complex precisely because there is no GUI: with a GUI it is very easy to set up the initial conditions, and in most cases you can tell at a glance whether the system is ready or not. With an API this is not the case; you need some other way to make sure the system is ready for testing. This setup can be divided further into test environment setup and application setup.
Things like configuring the database or starting the server belong to test environment setup. On the other hand, creating an object before calling a non-static member of a class falls under application-specific setup. Initial conditions in API testing also involve creating the conditions under which the API will be called: the API may be called directly, or it may be called because of some event or in response to some exception. The output of an API could be data or a status, or the call may simply wait for some other call to complete in an asynchronous environment. Most API test cases fall into categories based on what the API produces (a small sketch of the first two categories follows the list):
Return value based on input condition: These are relatively simple to test, as the input can be defined and the result validated against the expected return value. For example, it is very easy to write test cases for an int add(int a, int b) kind of API: you can pass different combinations of int a and int b and validate the results against known values.

Does not return anything: For cases like these you will need some mechanism to check the behavior of the API on the system. For example, if you need to write test cases for a delete(ListElement) function, you would probably validate the size of the list and the absence of the element from the list.
Trigger some other API/event/interrupt: If the API triggers some event or raises some interrupt, then you need to listen for those events and interrupts. Your test suite should call the appropriate API, and the assertions should be on the events and interrupt listeners.
Update data structure: This category is similar to APIs that do not return anything. Updating a data structure will have some effect on the system, and that effect should be validated. If you have another means of accessing the data structure, it should be used to verify that the data structure was updated.
Modify certain resources: If the API call modifies some resources, for example updating a database, changing the registry, or killing some process, then it should be validated by accessing those resources.
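A small sketch of the first two categories, using invented add() and delete() functions rather than any particular product's API:

import unittest

# Invented APIs under test, standing in for the examples above.
def add(a, b):
    return a + b

def delete(items, element):
    items.remove(element)          # updates the list in place, returns nothing

class ApiTests(unittest.TestCase):
    def test_add_returns_value_for_input_combinations(self):
        # "Return value based on input condition": validate against known results.
        for a, b, expected in [(1, 2, 3), (-1, 1, 0), (0, 0, 0)]:
            self.assertEqual(add(a, b), expected)

    def test_delete_checked_through_its_effect_on_the_system(self):
        # "Does not return anything": assert on the list's size and contents instead.
        items = [1, 2, 3]
        delete(items, 2)
        self.assertEqual(len(items), 2)
        self.assertNotIn(2, items)

if __name__ == "__main__":
    unittest.main()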
You should not confuse API testing with unit testing; API testing is not unit testing. Unit testing is owned by the development team and API testing by the QE team. API testing is mostly black-box testing, whereas unit testing is essentially white-box testing. Unit test cases are typically designed by the developers, and their scope is limited to the unit under test. In API testing, test cases are designed by the QE team, and their scope is not limited to any specific unit; it normally covers the complete system. The main challenges of API testing can be divided into the following categories:
  • Parameter Selection
  • Parameter combination
  • Call sequencing

Tuesday, May 27, 2008

An introduction to Agile testing


Friends, I was thinking of blogging but could not figure out how to go about it. Some ideas are coming to my mind, though, and I want to share them with the whole testing world.


You must have heard about Agile testing; it is a brand new technique.



Let's start by setting the philosophical groundwork:
First, you want to test as early as you possibly can because the potential impact of a defect rises exponentially over time (this isn't always true, but it's something to be concerned about). In fact, many agile developers prefer a test-first approach.
Second, you want to test as often as possible, and more importantly, as effectively as possible, to increase the chance that you'll find defects. Although this increases your costs in the short term, studies have shown that greater investment in testing reduces the total cost of ownership of a system due to improved quality.
Third, you want to do just enough testing for your situation: Commercial banking software requires a greater investment in testing than membership administration software for your local Girl Scouts group.
Fourth, pair testing, just like pair programming and modeling with others, is an exceptionally good idea. My general philosophy is that software development is a lot like swimming—it's very dangerous to do it alone.
Testing Throughout the Lifecycle
Figure 1 presents a high-level view of the agile lifecycle for the purpose of testing. Agile projects go through an often short Initiation phase (Iteration 0) where we set the foundation for the project; a Construction phase where we develop the system in an evolutionary (iterative and incremental) manner; an End Game phase where we transition our system into production; and a Production phase where we operate the system and support users. Don't fear the serial boogeyman: the Initiation phase is not a requirements phase, nor is the End Game a testing phase.

Test activities during the agile lifecycle.
Testing activities vary throughout the lifecycle. During Iteration 0, you perform initial setup tasks. This includes identifying the people who will be on the external "investigative" testing team, identifying and potentially installing your testing tools, and starting to schedule scarce resources such as a usability-testing lab if required. If your project has a deadline, you likely want to identify the date by which your project must enter the End Game. The good news is that you'll discover that increased testing during construction iterations enables you to do less testing during the End Game.
A significant amount of testing occurs during construction iterations—remember, agilists test often, test early, and usually test first. This is confirmatory testing against the stakeholder's current intent and is typically milestone-based at the unit level. This is a great start, but it's not the entire testing picture (which is why we also need investigative testing that is risk-based at more of an integration level). Regardless of the style, your true goal should be to test, not to plan to test, and certainly not to write comprehensive documentation about how you intend to hopefully test at some point. Agilists still do planning, and we still write documentation, but our focus is on high-value activities such as actual testing.
During the End Game, you may be required to perform final testing efforts for the release, including full system and acceptance testing. This is true if you are legislated to do so (common in life-critical situations such as medical software development) or if your organization has defined service-level agreements with customers who require it. Luckily, if you've tested effectively during the construction iterations, your final testing efforts will prove to be straightforward and quick. If you're counting on doing any form of "serious testing" during the End Game, then you're likely in trouble because your team won't have sufficient time to act on any defects that you do find.
Testing During a Construction Iteration
The majority of testing occurs during construction iterations on agile projects. Your testing effort, just like your system, evolves throughout construction. Figure 2 depicts two construction iterations, indicating that there is confirmatory testing performed by the team, and in parallel, investigative testing efforts ideally performed by an independent test team (I've adopted the terms "confirmatory" and "investigative" testing from Michael Bolton, a thought leader within the testing community). Although it isn't always possible to have an independent test team, particularly for small projects, it is highly desirable. Confirmatory testing focuses on verifying that the system fulfills the intent of the stakeholders as described to the team to date, whereas investigative testing strives to discover problems that the development team didn't consider.

There are two aspects to confirmatory testing: agile acceptance testing and developer testing, both of which are automated to enable continuous regression testing throughout the lifecycle. Confirmatory testing is the agile equivalent of testing to the specification, and in fact, we consider acceptance tests to be the primary part of the requirements specification and our developer tests to be the primary part of the design specification. Both of these concepts are applications of the agile practice of single sourcing information whenever possible.
Agile acceptance testing is a mix of traditional functional testing and traditional acceptance testing because the development team and their stakeholders are doing it collaboratively. Developer testing is a mix of traditional unit testing and traditional class/component/service integration testing. Developer testing strives to verify both the application code and the database schema (for more information, see my article "Ensuring Database Quality"; www.ddj.com/architect). Your goal is to look for coding errors, perform at least coverage testing if not full path testing, and to ensure that the system meets the current intent of its stakeholders. Developer testing is often done in a test-first manner, where a single test is written and then sufficient production code is written to fulfill that test (see www.agiledata.org/essays/tdd.html for details). Interestingly, this test-first approach is considered a detailed design activity first and a testing activity second.
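A minimal sketch of that test-first rhythm, with the class and test invented for illustration: the single test below is written first (and initially fails), and then just enough production code is written to make it pass.

# Step 1: write a single test that captures the next bit of intended behaviour.
def test_overdraft_is_rejected():
    account = Account(balance=50)
    assert account.withdraw(80) is False
    assert account.balance == 50

# Step 2: write just enough production code to make that test pass.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

test_overdraft_is_rejected()
print("test passed")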
Automation is an important aspect of construction testing due to the increased need for regression testing on evolutionary projects.

Investigative Testing:

A separate test team? Preposterous you say! Actually, there is significant value to be gained by submitting your system to an independent test team at intervals throughout the lifecycle so that they can verify the quality of your work. Agile teams produce working software at the end of each construction iteration; therefore, you have something new to test at that point. A common practice is to provide a new version of the system at least once a week, regardless of your iteration length, a particularly good strategy the closer you get to the End Game.
The investigative test team's goal should be to ask, "What could go wrong," and to explore potential scenarios that neither the development team nor business stakeholders may have considered. They're attempting to address the question, "Is this system any good?" and not, "Does this system fulfill the written specification?" The confirmatory testing efforts verify whether the system fulfills the intent, so simply repeating that work isn't going to add much value. Kaner promotes the idea that good testers look for defects that programmers missed, exploring the unique blind spots of the individual developers.
Investigative testers describe potential problems in the form of defect stories—the agile equivalent of a defect report. A defect story is treated as a form of requirement—it is estimated and prioritized and put on your requirements stack. The need to fix a defect is a type of requirement, so it makes perfect sense to address it just like any other requirement. As you would expect, during the End Game, the only requirement type that you're working on is defect stories.
Your investigative testing will address common issues such as load/stress testing, integration testing, and security testing. Scenario testing, against both the system itself and the supporting documentation, is also common. You may also do some form of usability testing—the user interface is the system to most end users; therefore, usability is critical to success. The UI includes both the screens that people interact with and the documentation that they read, implying that you need to test both.
Good investigative testing efforts reveal any problems that developers missed long before they become too expensive to address. It also provides feedback to management that the team is successfully delivering high-quality working software on a regular basis. Kaner pointed out that there's no one right way to go about investigative testing, nor is there one correct list of techniques to employ. Your efforts must reflect the goals of the project team that you're supporting. For example, is the goal to determine whether the system is ready to be shipped? Is it to ensure that the system interoperates with other existing systems? Is it to help the developers to identify problems in their own testing efforts by pointing out the causes of defects that they missed? Is it to minimize the chance of a lawsuit against your organization or its managers? The context in which you are testing will determine what and how you test—not only will the context be different for each project; the context also changes over the life of the project.
The type of confirmatory testing performed by agile teams is only one part of the testing picture—it is the agile equivalent of traditional smoke testing. This is a great start, and having automated regression testing provides the safety net required by evolutionary development techniques. Investigative testing enables you to explore the critical "big picture" issues, as well as the "little picture" issues that nobody thought of until now, which confirmatory testing typically does not.