Q- What's the purpose of acceptance testing? Can we use the same test cases of system testing for acceptance testing?
A- As with most questions folks ask about items related to software testing, the answer starts with "It depends…" In this case, it depends on what the client or team means when they refer to acceptance testing (and possibly how the team defines and implements test cases). Rather than dig too far into all of the variability in the use of these terms I've seen over the course of my career, I'm going to state some assumptions.
Let's assume that we're talking about "user acceptance testing." Acceptance testing could relate to anyone whose approval is required prior to launching an application, but user acceptance testing is by far the most common. Further, let's assume Cem Kaner's definition of a test case -- "a test case is a question that you ask of the program. The point of running the test is to gain information, for example whether the program will pass or fail the test." This allows us to focus on the point of having test cases rather than focusing on how they are documented.
With that out of the way, let's take a look at the first part of the question – "What is the purpose of acceptance testing?" Simply speaking, the purpose is to give the end user a chance to give the development team feedback as to whether or not the software meets their needs. Ultimately, it's the user who needs to be satisfied with the application, not the testers, managers or contract writers. Personally, I think user acceptance testing is one of the most important types of testing we can conduct on a product. The way I see it, I care a whole lot more about whether users are happy with the way a program works than whether the program passes a bunch of tests that were created by testers in an attempt to validate requirements that an analyst did their best to capture and a programmer interpreted based on their own understanding of those requirements.
Which leads us to the second part of the question – "Can we use the same test cases for system testing and acceptance testing?" I've seen a lot of projects where a tester is put in charge of developing user acceptance tests. They start with test cases. They then write detailed scripts, including test data, for users to follow through the application, and ask those users to mark each test case the tester decided to include as pass or fail. Sometimes the tester leaves a section at the end of the script for free responses from the end users, but in my experience, not very often.
Now, that process has never made any sense to me. If the point of user acceptance testing is to find out whether the user is happy with the software, what sense does it make for some tester to tell them what to look for? Why not just ask the users to try the software and tell the team what they think? The only answer to that question I've ever gotten is "It's too hard to figure out if they are actually happy, so we're just trying to figure out if we gave them what they asked for according to the requirements document that they signed off on." Which boils down to "we're just getting the users to agree that we should get paid." So, if that is your goal, go ahead and use system test cases. But if your goal is to determine user satisfaction, just let the users work with the system and tell you what they like and what they don't like about it. I'm willing to bet you'll end up with a better application that way.
By Scott Barber
Monday, October 20, 2008
Six functional tests to ensure software quality
Six types of functional testing can be used to ensure the quality of the end product. Understand these testing types and scale the execution to match the risk to the project.
1. Ensure every line of code executes properly with Unit Testing.
Unit testing is the process of testing each unit of code in a single component. This form of testing is carried out by the developer as the component is being developed. The developer is responsible for ensuring that each detail of the implementation is logically correct. Unit tests are normally discussed in terms of the type of coverage they provide (a short code sketch follows the list below):
Function coverage: each function/method is executed by at least one test case.
Statement coverage: each line of code is covered by at least one test case (requires more test cases than function coverage).
Path coverage: every possible path through the code is covered by at least one test case (typically requires far more test cases still).
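As a rough illustration of how these three coverage levels differ, here is a minimal Python sketch; the `classify` function and its tests are invented for the example:

```python
import unittest

def classify(amount, is_member):
    """Hypothetical discount rule with two independent branches."""
    discount = 0
    if amount > 100:
        discount += 10
    if is_member:
        discount += 5
    return discount

class ClassifyTests(unittest.TestCase):
    # Function coverage: any single call exercises classify() at all.
    def test_function_coverage(self):
        self.assertEqual(classify(50, False), 0)

    # Statement coverage: one well-chosen case executes every line,
    # including both branch bodies.
    def test_statement_coverage(self):
        self.assertEqual(classify(150, True), 15)

    # Path coverage: all four branch combinations, not just every line.
    def test_path_coverage(self):
        self.assertEqual(classify(50, False), 0)    # neither branch
        self.assertEqual(classify(150, False), 10)  # amount only
        self.assertEqual(classify(50, True), 5)     # membership only
        self.assertEqual(classify(150, True), 15)   # both branches

if __name__ == "__main__":
    unittest.main()
```

With only two independent branches the counts stay small, but the gap grows quickly: statement coverage here needs one good test, while path coverage already needs four.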
2. Ensure every function produces its expected outcome with Functional Testing.
Functional testing addresses concerns surrounding the correct implementation of functional requirements. Commonly referred to as black box testing, this type of testing requires no knowledge of the underlying implementation.
Functional test suites are created from requirement use cases, with each scenario becoming a functional test. As a component is implemented, the respective functional test is applied to it after it has been unit tested.
For many projects, it is unreasonable to test every functional aspect of the software. Instead, define functional testing goals that are appropriate for the project. Prioritize critical and widely used functions and include other functions as time and resources permit.
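To make the scenario-to-test mapping concrete, here is a hedged sketch in Python; the use-case wording and the `transfer` interface are invented, and in real black-box testing the operation under test would be the component's public interface rather than code in the test file:

```python
import unittest

# Stand-in for the system under test: a black-box transfer operation.
def transfer(accounts, src, dst, amount):
    if amount <= 0 or accounts.get(src, 0) < amount:
        return False
    accounts[src] -= amount
    accounts[dst] = accounts.get(dst, 0) + amount
    return True

class TransferScenarioTest(unittest.TestCase):
    """Use case: 'Customer moves money between own accounts.'
    Each scenario (main flow, insufficient funds) becomes one test."""

    def test_main_flow(self):
        accounts = {"checking": 200, "savings": 0}
        self.assertTrue(transfer(accounts, "checking", "savings", 50))
        self.assertEqual(accounts, {"checking": 150, "savings": 50})

    def test_insufficient_funds_alternate_flow(self):
        accounts = {"checking": 20, "savings": 0}
        self.assertFalse(transfer(accounts, "checking", "savings", 50))
        self.assertEqual(accounts, {"checking": 20, "savings": 0})

if __name__ == "__main__":
    unittest.main()
```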
For detailed information on how to correctly develop use cases to support functional testing, refer to the Info-Tech Advisor research note, "Use Cases: Steer Clear of the Pitfalls."
3. Ensure all functions combine to deliver the desired business result with System Testing.
System testing executes end-to-end functional tests that cross software units, helping to realize the goal of ensuring that components combine to deliver the desired business result. In defining the project's system testing goals, focus on those scenarios that require critical units to integrate.
Also, consider whether all subsystems should be tested first or if all layers of a single subsystem should be tested before being combined with another subsystem.
Combining all of the components in one big-bang step should be avoided; the issue with that approach is the difficulty of localizing errors. Instead, components should be integrated incrementally after each has been tested in isolation.
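Here is one minimal sketch of that incremental approach, using an invented pricing example: the engine is first exercised against a stub collaborator, then the real collaborator is swapped in, so a failure at the second step points at the seam between the units rather than at either unit alone.

```python
import unittest

# Invented components for illustration.
class TaxService:
    def rate_for(self, region):
        return {"EU": 0.20, "US": 0.07}.get(region, 0.0)

class StubTaxService:
    def rate_for(self, region):
        return 0.10  # fixed rate so PricingEngine is tested in isolation

class PricingEngine:
    def __init__(self, tax_service):
        self.tax_service = tax_service
    def total(self, net, region):
        return round(net * (1 + self.tax_service.rate_for(region)), 2)

class IncrementalIntegrationTests(unittest.TestCase):
    def test_pricing_engine_in_isolation(self):
        # Step 1: PricingEngine alone, with TaxService stubbed out.
        engine = PricingEngine(StubTaxService())
        self.assertEqual(engine.total(100, "EU"), 110.0)

    def test_pricing_engine_with_real_tax_service(self):
        # Step 2: the real collaborator is integrated; a failure here
        # implicates the seam, since each unit already passed alone.
        engine = PricingEngine(TaxService())
        self.assertEqual(engine.total(100, "EU"), 120.0)

if __name__ == "__main__":
    unittest.main()
```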
4. Ensure new changes did not adversely affect other parts of the system with Regression Testing.
Regression testing ensures code modifications have not inadvertently introduced bugs into the system or changed existing functionality. Goals for regression testing should include rerunning tests from the original unit, functional, and system test phases to demonstrate that existing functionality still behaves as intended.
Determining when regression testing is sufficient can be difficult. Although it is not desirable to test the entire system again, critical functionality should be tested regardless of where the modification occurred. Regression testing should be done frequently to ensure a baseline software quality is maintained.
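One way to act on that advice is to package the earlier suites, plus the critical-path tests that run regardless of where a change landed, into a single regression suite that can be executed frequently. A minimal Python sketch, with invented module names:

```python
import unittest

# Hypothetical module names; in practice these would be the suites
# written during the original unit, functional, and system test phases.

def regression_suite():
    """Reuse earlier suites as the regression baseline, plus the
    critical-path tests that always run."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for module_name in ("tests.unit_tests",
                        "tests.functional_tests",
                        "tests.critical_path_tests"):
        suite.addTests(loader.loadTestsFromName(module_name))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(regression_suite())
```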
5. Ensure the system integrates with and does not adversely affect other enterprise systems with System Integration Testing.
System integration testing is a process that assesses the software's interoperability and cooperation with other applications. Define testing goals that will exercise required communication. (It is fruitless to test interaction between systems that will not collaborate once the developed system is installed.) This is done using process flows that encapsulate the entire system.
The need for a developed system to coexist with existing enterprise applications necessitates developing testing goals that can uncover faults in their integration. In the case that the new system is standalone software and there is no requirement for compatibility with any other enterprise system, system integration testing can be ignored.
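As a sketch of exercising only the required communication, the example below checks an invented notification contract between the new system and an existing ERP application; the endpoint URL and the `notify_erp` function are made up for illustration, and the ERP side is replaced with a mock so the test focuses on the interaction itself:

```python
import unittest
from unittest import mock

# Invented integration point: the new system notifies an existing
# ERP application whenever an order ships.
def notify_erp(http_post, order_id):
    response = http_post("https://erp.example.internal/api/shipments",
                         json={"order_id": order_id})
    return response.status_code == 201

class ErpIntegrationTest(unittest.TestCase):
    def test_shipment_notification_contract(self):
        # Exercise only the communication the systems will actually
        # have in production, per the process flow.
        http_post = mock.Mock()
        http_post.return_value.status_code = 201
        self.assertTrue(notify_erp(http_post, order_id=42))
        http_post.assert_called_once_with(
            "https://erp.example.internal/api/shipments",
            json={"order_id": 42})

if __name__ == "__main__":
    unittest.main()
```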
6. Ensure the customer is satisfied with the system with Acceptance Testing.
Acceptance testing evaluates how well users interact with the system: whether it does what they expect and whether it is easy to use. Although it is the final phase of testing before software deployment, the tests themselves should be defined as early as possible in the SDLC. Early definition ensures customer expectations are set appropriately and confirms for designers that what they are building will satisfy the end user's requirements. To that end, acceptance test cases are developed from user requirements and are validated in conjunction with actual end users of the system. The process results in acceptance or rejection of the final product.
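As an illustration of deriving an acceptance test from a user requirement, here is a small sketch; the requirement wording and the `search_customers` function are invented for the example:

```python
import unittest

# Hypothetical user requirement: "A clerk can find a customer by
# partial name in the search screen." The acceptance test restates
# the requirement in the user's terms, not the implementation's.
def search_customers(customers, query):
    return [c for c in customers if query.lower() in c.lower()]

class ClerkSearchAcceptanceTest(unittest.TestCase):
    def test_partial_name_search_finds_customer(self):
        customers = ["Ada Lovelace", "Grace Hopper", "Alan Turing"]
        # Given the clerk knows only part of a name...
        results = search_customers(customers, "hopp")
        # ...the system must surface the right customer record.
        self.assertEqual(results, ["Grace Hopper"])

if __name__ == "__main__":
    unittest.main()
```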
By Sunil Tadwalkar (PMP, GLG Educator)
Monday, October 13, 2008
Managing a software testing team while cultivating talent
Q- How can I manage a test team effectively and enhance my team's testing skills at the same time?
A- The size of your team and the experience level of each person on your team are two considerable influences on how you manage your team. Without knowing either of these factors or the environment you're working in, let me offer several ideas.
I'd begin with each person individually. I often build a custom learning plan for people I hire (or inherit). I ask each person to help clarify what they know in several areas, such as database models, SQL, test automation, types of testing (functional, performance, installation), and the subject domain we're working in, whether that's banking software, contact management or another field. I work with each person individually as much as I feasibly can and help each person grow their knowledge in these areas -- or other areas that may be more applicable based on their background and the environment we're working in. Together, we'll look for project work where they can apply knowledge as soon as possible. Let me back up and add that ongoing knowledge and the pursuit of learning isn't a surprise for anyone, since it's a factor in my hiring and a spirit I look for in people.
In terms of building a team's testing skills, there are more options. If more than one person is trying to acquire the same or similar knowledge, you can establish a buddy system between the two people. An effective pairing will often be two experienced people trying to expand into a new area, as opposed to two entry-level people who might both be struggling in many areas. Paired testing sessions are another option, with a more senior tester working alongside a less experienced tester.
We can learn from every person we meet. If you build a learning list together with the team, you should look to different people on your team to lead. The point is that the team is part of building the list. Perhaps your lead automation tester can lead brown bag lunches or offer learning sessions where manual testers listen in on the automation planning sessions. Unless you have a team of completely inexperienced testers, you should not have to lead all the knowledge exchange sessions, but you will have to start the exchange and provide an environment (time, space and attitude) where knowledge and skills are shared. I've hosted internal book clubs where we read testing books together and then talk about them, but I've found more immediate project work, paired with small, relevant readings, to be more effective. Experiment with your team, since each team has its own unique dynamic.
In terms of management, your knowledge exchange program provides leadership opportunities for people. Beyond project work, you'll be able to see how your team members work together or perhaps, don't work together. I'd be looking for energy levels, willingness and commitment to learning. The sessions you host will give you another opportunity to observe the group and each individual.
Since I don't think any of us are ever done learning, you can also demonstrate to the team what you're learning and how you go about pursuing new skills or background. Someone on your team might have more experience in an area, and this could be a great way for you to learn and someone else to teach. Knowledge exchange is about exchanging, and I think if you hold the title of manager or lead but demonstrate that you're still learning and are open to someone else teaching you, then you're fostering a true exchange.
By Karen N. Johnson
Wednesday, October 8, 2008
The benefits of user acceptance testing
Q- So we've been involved in system testing of different applications and have acquired good knowledge of each of the applications. Now we would like to move into the user acceptance testing area. UAT has traditionally been done by BSAs -- a short test cycle once system testing is over, with UAT test cases most often derived from the system test cases themselves. One question put to us is what difference or value we will bring on board in terms of test cases or coverage in UAT. Can you help me with this?
Answer- If I understand your situation clearly, you and your team know several applications well and have been testing the applications. Now you'll be directing user acceptance testing as well and need to explain what benefit you and your team can provide to UAT.
Let me share details of one of my experiences and then answer your questions more directly. I was in a similar situation once and worked directly with users through UAT. I was able to teach the users more about the application. Once the users were able to see more intricacies in the application, they became more skilled testers themselves and appreciated the testing team (as well as the developers) even more than they previously did. I gave them ideas and in turn I learned what quirks of the application irritated the users. I learned more about their perspective too.
I think there are multiple benefits, such as the ones I highlighted, as well as a few more. You'll get to know the users; they'll get to know you and your team. The users might be more inclined to share ideas that can add to your testing. You may also better understand what the users need to accomplish and become a better advocate for the users. I believe spending time with users of the products is beneficial.
Tuesday, October 7, 2008
User acceptance testing
Q- I have now been put on acceptance testing for a project. Before this I worked in integration testing. Because of some conflicts between us, that team is now challenging me to find the maximum number of bugs to prove myself. I really agree with your view of acceptance testing, but we have been asked to design use cases and tests. I would like to meet this challenge, so could you offer your valuable tips to make it successful?
A- This is a sadly frequent situation. In my opinion, a good test script for user acceptance testing is similar to the following:
"I'm going to give you a brief demonstration of how this application works. Then I will provide you with a user's manual and some sample data (such as a list of products that have been previously entered into the system) and I'd like you go use the application to complete the tasks that you would use an application such as this for. As you work, please provide your feedback on this form."
The problem, however, is that this kind of feedback is not what most managers and stakeholders are looking for when they ask for user acceptance testing to be conducted. What they tend to be looking for is the answer to the question:
"Do the users of the system agree that we have met the requirements we were given?"
In an ideal world, high user satisfaction would map directly to successfully implemented requirements. Unfortunately, this is not often the case. Which leaves us with the dilemma of trying to balance the needs of the managers and stakeholders with one of the core ethical principles related to software testing, as spelled out in the ACM Code of Ethics, Section 2.5, quoted below (you can see the entire code of ethics reprinted at the Association for Software Testing Web site):
"Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks. Computer professionals must strive to be perceptive, thorough, and objective when evaluating, recommending, and presenting system descriptions and alternatives. Computer professionals are in a position of special trust, and therefore have a special responsibility to provide objective, credible evaluations to employers, clients, users, and the public."
So what the question really boils down to is:
"How do I design user acceptance tests that both satisfy the needs of management to determine if the end users agree that the requirements have been met while also satisfying my obligation to capture the information I need to provide a comprehensive and thorough evaluation of the user's overall satisfaction with the product?"
Luckily, I believe the answer is easier than the question. In my experience, if you simply complement highly structured, step-by-step user acceptance scripts -- containing specific pass/fail criteria derived from the system requirements -- with both the time and a mechanism for providing non-requirement-specific feedback, users will provide you with answers to both questions of interest. All this requires on your part is encouraging the users, in addition to executing the user acceptance tests you provide, to use the system as they normally would and to give freeform feedback, in the space you provide in the script, about their satisfaction with the application as it stands today. In this way, you will collect the pass/fail information that it sounds like your managers and stakeholders are asking you for, as well as the information you need to be the user's advocate for changes or enhancements to the system that have resulted from unknown, overlooked, or poorly implemented requirements.
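To make the shape of such a hybrid script concrete, here is a minimal sketch of one way it could be represented; all class and field names are invented for illustration, not taken from any particular tool:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ScriptedStep:
    """One requirement-derived step with an explicit pass/fail criterion."""
    requirement_id: str     # traces back to the signed-off requirement
    instruction: str        # what the user is asked to do
    expected_result: str    # the pass/fail criterion
    passed: Optional[bool] = None  # filled in by the user during UAT

@dataclass
class UatScript:
    """Structured steps plus room for non-requirement-specific feedback."""
    title: str
    steps: List[ScriptedStep] = field(default_factory=list)
    freeform_feedback: str = ""  # satisfaction notes, in the user's own words

script = UatScript(
    title="Order entry - release 2.1",
    steps=[ScriptedStep(
        requirement_id="REQ-017",
        instruction="Enter a new order using the sample product list.",
        expected_result="Order is saved and a confirmation number appears.",
    )],
)
# After the scripted steps, the user works as they normally would
# and records what they like and dislike:
script.steps[0].passed = True
script.freeform_feedback = "Saving works, but the confirmation is easy to miss."
```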